
# Flux.jl tags

## Flux v0.11.6

[Diff since v0.11.5](FluxML/Flux.jl@v0.11.5...v0.11.6)



**Merged pull requests:**
- Release 0.11.5 (FluxML#1477) (@DhairyaLGandhi)

## Flux v0.11.5

[Diff since v0.11.4](FluxML/Flux.jl@v0.11.4...v0.11.5)


**Closed issues:**
- Huge performance difference between sparse and dense representation on GPU (FluxML#189)
- onecold is very slow (FluxML#556)
- onecold does not work on CuMatrix (FluxML#864)
- Multi-dimensional onehot (FluxML#1229)

**Merged pull requests:**
- Add CTC loss to new Losses module (FluxML#1287) (@maetshju)
- remove implicit conversions (FluxML#1393) (@CarloLucibello)
- Add Parallel layer (FluxML#1462) (@darsnack) (see the sketch below this list)
- Improve docs for `crossentropy` & friends (FluxML#1463) (@mcabbott)
- One-arg unsqueeze method (FluxML#1469) (@mcabbott)
- remove dataset tests (FluxML#1470) (@CarloLucibello)
- Fix RNN tests on GPU (FluxML#1473) (@jeremiedb)
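
FluxML#1462 above adds a `Parallel` layer. A minimal usage sketch; the branch sizes and the `vcat` connection are illustrative choices, not taken from the release notes:

```julia
using Flux

# Hypothetical example of the Parallel combinator: each branch receives the
# same input, and the connection function (here vcat) combines the outputs.
m = Parallel(vcat, Dense(10, 4, relu), Dense(10, 2, relu))

x = rand(Float32, 10, 8)   # 10 features, batch of 8
y = m(x)                   # 6×8: 4 rows from the first branch stacked on 2 from the second
```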

## Flux v0.11.4

[Diff since v0.11.3](FluxML/Flux.jl@v0.11.3...v0.11.4)


**Closed issues:**
- add soft deprecation path for removed datasets (FluxML#1426)
- Issues about OneHotVector/OneHotMatrix (FluxML#1445)
- Zygote version (FluxML#1455)

**Merged pull requests:**
- Soft deprecation for Datasets (FluxML#1442) (@CarloLucibello)
- Arbitrary dimension one-hot arrays (FluxML#1448) (@darsnack) (see the sketch below this list)
- release 0.11.4 (FluxML#1451) (@CarloLucibello)
- make PackageCompiler happy (FluxML#1453) (@DhairyaLGandhi)
- Update Zygote version to 0.6 (FluxML#1456) (@CarloLucibello)
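
FluxML#1448 above generalizes one-hot encoding to arrays of any dimension. A small sketch under that assumption; the label set and the 2×2 shape are made up:

```julia
using Flux: onehotbatch, onecold

# Encode a 2×2 matrix of labels; the result gains a leading class dimension.
labels = ['a', 'b', 'c']
x = ['a' 'b'; 'c' 'a']

y = onehotbatch(x, labels)   # one-hot array of size 3×2×2
onecold(y, labels)           # recovers the original 2×2 matrix of labels
```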

## Flux v0.11.3

[Diff since v0.11.2](FluxML/Flux.jl@v0.11.2...v0.11.3)


**Closed issues:**
- Better support for scalar model parameters (#214)
- MethodError with Complex inputs (#217)
- MethodError: Cannot `convert` an object of type TrackedArray{…,Array{Float64,0}} to an object of type Float64  (#230)
- BatchNorm changes the data type. (FluxML#260)
- Tracker.gradient should take Tuple for parameter arguments  (FluxML#281)
- Is testmode broken? (FluxML#333)
- Leaky abstraction: loss returning Dual instead of TrackedReal (FluxML#366)
- Batchnorm fails on GPU (FluxML#385)
- Document how to get from TrackedFoo to Foo (FluxML#398)
- Adding the derivative of fft (FluxML#410)
- Non boolean in boolean context (FluxML#431)
- Saving-loading-saving model will cause error using BSON.jl (FluxML#432)
- Gaussian process (FluxML#433)
- Error with Conv (FluxML#438)
- transposed convolution layer (FluxML#440)
- Parallel training: reset back! function (FluxML#443)
- Export `data` as well? (FluxML#457)
- Model worked with 0.5.4, fails with 0.6.8:  (FluxML#470)
- Differentiation of matrix-matrix product with CuArrays unexpectedly slow. (FluxML#486)
- Documentation Pitfall: Saving Model weights does not preserve untracked values (FluxML#492)
- Throw error for nested tracked type? (#495)
- MaxPool not working on latest master branch (FluxML#501)
- Error with negative tracked parameters when using .^ (#515)
- Type confusion in broadcast (#523)
- Norm of tracked CuArray throws an LLVM compiler error (#537)
- No method for computing determinant of TrackedMatrix (#542)
- Can't install Flux with Julia-1.0.3 under Ubuntu18 (FluxML#551)
- Intermittent test failure in tracker (FluxML#594)
- User defined model does not work (FluxML#597)
- Documentation: activations example not working (FluxML#604)
- Regression: Ambiguous *  between Transpose and TrackedArray (#605)
- Error while running sample from docs. (FluxML#611)
- Getting "Loss is Inf" for two linear layers using Momentum() (FluxML#623)
- Support for GNN - Graph Neural Networks (FluxML#625)
- params not working with LinearAlgebra.mul! (#627)
- train! doesn't work with Trackedreal (#630)
- `MethodError: no method matching setindex_shape_check(::Int64, ::Int64)` on custom gradient example (FluxML#642)
- Train/test mode (FluxML#643)
- DepthwiseConv: no method for non Float64 (FluxML#654)
- `collect` drops gradients (#657)
- Silently dropped tracking when broadcasting on TaylorSeries (#659)
- Error during test in tracker.jl (FluxML#665)
- Backprop of sum of product takes very long time (#674)
- Basic "taking gradients" example not working (FluxML#683)
- LSTM not compatible with GPU (FluxML#686)
- backprop fails for adjoint in tuple (FluxML#688)
- Iterate over tuples fails for custom functions (#690)
- `gpu(param(randn(10,10))) - I` segfaults (#692)
- Flux params() does not retrieve parameters for composed layers (FluxML#713)
- similar(x, (1,2,3)) and similar(x,1,2,3) differ for TrackedArray on GPU (#734)
- `Tracked` doesn't work with `mapslices` (#741)
- repeat on a TrackedArray gives an error with implied dimensions (#770)
- Train model with GPU, got InvalidIRError (FluxML#784)
- Diagonal (tracked) x Matrix on GPU gives ReadOnlyMemoryError (#785)
- Adding regularization to the loss results in LLVM error when using CuArrays (FluxML#787)
- Adding a TrackedReal to a vector of TrackedReal produces a double tracked variable. (#794)
- TrackedReal error (FluxML#800)
- missing values in features matrix (FluxML#804)
- MNIST conv network example errors out (FluxML#806)
- BoundsError when using Batchnorm layer inside Maxout layer (FluxML#810)
- Flux#zygote slower than Tracker (FluxML#815)
- Strange failure when using OneHotMatrix (FluxML#824)
- 3D CNN not training (FluxML#834)
- RFC: overload `eltype` for models to get current type for precision? (FluxML#843)
- Problems using binarycrossentropy() (FluxML#850)
- Functor differentiability (FluxML#878)
- Add maxpool syntax back (FluxML#880)
- normalise is not GPU compatible (FluxML#887)
- Float32 for performance improvements (code samples) (FluxML#971)
- crossentropy should have label smoothing  (FluxML#1016)
- Documentation of `pad` keyword (FluxML#1077)
- outdims function doesn't work properly for chained layers (FluxML#1086)
- RNNCell, LSTMCell and GRUCell are implemented as mutable structs, but never do mutation (FluxML#1089)
- RNN on GPU fails on first backward call (FluxML#1114)
- remove `@jit` macro (FluxML#1124)
- BatchNorm prevents 1-dimensional arrays as input (FluxML#1125)
- Broken example with onehotbatch in docs (FluxML#1214)
- cuarray gradient for RNN has too many wrappers (FluxML#1259)
- Flux.Zeros conflicts with Flux.loadparams! (FluxML#1277)
- factor out datasets (FluxML#1278)
- Flux.jl precompile takes 20-40 minutes (FluxML#1283)
- NNPACK not available for your platform: Windows(x86_64-w64-mingw32-libgfortran5-cxx11) (FluxML#1286)
- Failed to load Flux on windows machine  (FluxML#1306)
- Out-dated documentation for dataloader? (FluxML#1310)
- Loading/Saving weights from GPU (FluxML#1318)
- ConvTranspose same padding and outdims errors (FluxML#1319)
- Importing Custom Datasets (FluxML#1326)
- Various CuArray errors when trying to run examples from model zoo on GPU (FluxML#1330)
- GPU error when using Zeros() as bias in Conv layer (FluxML#1332)
- Can't differentiate foreigncall expression when trying to compute gradient (FluxML#1338)
- deprecate treelike  (FluxML#1339)
- Controlling the parameters W in Chain(Dense(),Dense()) neural network (FluxML#1342)
- conflicts with many packages (FluxML#1343)
- Inconsistency of ADAGrad for matrices (FluxML#1346)
- ERROR: Unknown instruction kind LLVMFNeg (FluxML#1349)
- CUDNNError: CUDNN_STATUS_BAD_PARAM (code 3) while training lstm neural network on GPU (FluxML#1360)
- Unnecessary allocations when using LayerNorm (FluxML#1361)
- Docs Basic typo/code-error after 'Stacking It Up' (FluxML#1363)
- Dense on GPU causes LLVM error: Cannot cast between two non-generic address spaces (FluxML#1364)
- Models with dropout affect GLOBAL_RNG differently when run on GPU (FluxML#1372)
- Mutating Arrays not Allowed (FluxML#1375)
- Destructure structs (FluxML#1380)
- Cannot differentiate GRUCell with CuArray (FluxML#1381)
- Gradient is NaN under certain conditions, when using CUDA.jl (FluxML#1382)
- cannot update v0.11.1 to v0.11.2 (FluxML#1387)
- Documentation should show proper use of throttle  (FluxML#1399)
- Non-reproducible example in documentation (FluxML#1403)
-  MethodError: no method matching getindex(::Pair{Symbol,Array{Float64,1}}, ::Symbol) (FluxML#1404)
- Flux.LSTM() returns a sequence of states (FluxML#1406)
- delete!() function doesn't freeze the parameters in weight matrix (FluxML#1416)
- normalise function doesn't normalise to standard deviation of 1.0 (FluxML#1417)
- LSTM fails on GPU only (FluxML#1418)
- Iris Dataset is out of date (FluxML#1433)
- Problem sending custom layers to gpu (FluxML#1437)
- Missing docstring error in dev docs (FluxML#1439)

**Merged pull requests:**
- Implementation of label smoothing with crossentropy  (FluxML#1025) (@sambitdash)
- Updates to outdims (FluxML#1305) (@darsnack)
- RNN update to drop CUDNN, fix LSTM bug and output type stability (FluxML#1367) (@jeremiedb)
- remove Datasets + additional deprecations (FluxML#1377) (@CarloLucibello)
- Fix some issues with Zeros option 2 (FluxML#1379) (@DrChainsaw)
- eliminate most allocations from get! in optimisers (FluxML#1388) (@simeonschaub)
- RNN deprecations and naming fixes (FluxML#1390) (@jeremiedb)
- Improve docs for `Conv` etc. (FluxML#1391) (@mcabbott)
- remove some unused Conv constructors (FluxML#1394) (@CarloLucibello)
- fix bias transpose conv (FluxML#1395) (@CarloLucibello)
- take gradient of a function not its arguments (FluxML#1398) (@anderson15)
- minor documentation change regarding throttle  (FluxML#1400) (@ArbitRandomUser)
- Update TagBot.yml (FluxML#1401) (@CarloLucibello)
- support multiple batch dimensions in Dense layer (FluxML#1405) (@CarloLucibello)
- add GA ci (FluxML#1411) (@DhairyaLGandhi)
- update ci (FluxML#1412) (@CarloLucibello)
- Add buildkite pipeline (FluxML#1413) (@DhairyaLGandhi)
- remove `@jit` macro (FluxML#1419) (@gxyd)
- Update functions.jl (FluxML#1420) (@Sleort)
- fix Dense's docstring (FluxML#1423) (@CarloLucibello)
- (Complete) Implementation of label smoothing with crossentropy (FluxML#1427) (@gxyd) (see the sketch below this list)
- RNN docs (FluxML#1428) (@jeremiedb)
- Generalize train/testmode! to all Functors (FluxML#1432) (@ToucheSir)
- transition some docs to doctests (FluxML#1436) (@gxyd)
- CompatHelper: bump compat for "Reexport" to "1.0" (FluxML#1438) (@github-actions[bot])
- fix docs (FluxML#1443) (@CarloLucibello)
- Add inference hints to SkipConnection (FluxML#1446) (@DhairyaLGandhi)
- Update GPU CI. (FluxML#1449) (@maleadt)
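
FluxML#1025 and FluxML#1427 above add label smoothing for use with `crossentropy`. A sketch assuming a `label_smoothing(y, α)` helper as described in those PRs; the smoothing factor and toy targets are illustrative:

```julia
using Flux
using Flux.Losses: crossentropy, label_smoothing

# Smooth hard one-hot targets, then feed them to the usual crossentropy.
y = Flux.onehotbatch([1, 3, 2, 2], 1:3)
y_smooth = label_smoothing(y, 0.1f0)    # soft targets instead of exact 0/1 labels

ŷ = softmax(randn(Float32, 3, 4))
crossentropy(ŷ, y_smooth)
```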

## Flux v0.11.2

[Diff since v0.11.1](FluxML/Flux.jl@v0.11.1...v0.11.2)


**Closed issues:**
- Error with Flux.crossentropy (FluxML#435)
- Unnecessary typeasserts in Flux.Optimise.apply! cause training to fail (FluxML#816)
- OneHotMatrix causes a 'scalar getindex disallowed' error on GPU (FluxML#1006)
- Higher order derivative products? (FluxML#1102)
- Gradient of Chain with respect to input on gpu (FluxML#1132)
- Backprop through time is truncated to only 1 time step (FluxML#1209)
- Failed to load Flux 0.11.0 and 0.11.1 with Julia 1.4.2 and 1.5.0 on a Windows machine (FluxML#1313)
- ADAMW Optimise has no field eta (FluxML#1316)
- LayerNorm only operates on 2D tensors (also Diagonal) (FluxML#1321)
- NNlib not defined error when loading model saved with BSON (FluxML#1322)
- Map and broadcast on LSTM layers give different gradients (FluxML#1324)
- zygote (FluxML#1327)
- Error while pre-compiling Flux in Julia v1.4.2 on Windows 10 (FluxML#1328)
- DepthwiseConv gives incorrect channel sizes when initialized from array (FluxML#1331)
- Flux.params return extra parameter (FluxML#1348)
- XOR Error not converging to 0 (FluxML#1352)
- Broken methods(Base.show) (FluxML#1354)
- Applying Dense layer on OneHotMatrix is very slow and can be optimized. (FluxML#1356)
- Unable to obtain gradient after flattened pooling layer. (FluxML#1359)
- "incremental compilation may be fatally broken for this module" when using Flux (FluxML#1370)

**Merged pull requests:**
- add Flux.skip() (FluxML#1232) (@Moelf)
- Add ColPrac badge (FluxML#1317) (@oxinabox)
- Change ConvTranspose with SamePad to have outsize = stride * insize (FluxML#1320) (@DrChainsaw)
- change nadam cite (FluxML#1333) (@JeffFessler)
- params([W, b]) to params(W, b) (FluxML#1334) (@paulxshen)
- export OADAM (FluxML#1336) (@cossio)
- update for Cuda 2 (FluxML#1345) (@CarloLucibello)
- Fix BPTT by overriding stateful broadcast adjoint (FluxML#1358) (@DhairyaLGandhi)
- Implement AdaBelief (FluxML#1362) (@willtebbutt) (see the sketch below this list)
- Update functions.jl (FluxML#1366) (@okaerin)
- Fixes FluxML#1354 (FluxML#1368) (@racinmat)
- Trailing spaces (FluxML#1369) (@racinmat)
- Update Slack URL (FluxML#1373) (@logankilpatrick)
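
FluxML#1362 above implements the AdaBelief optimiser. A sketch of plugging it into the implicit-parameter training loop of this Flux generation; the model, data, and the 1e-3 learning rate are placeholders:

```julia
using Flux

model = Chain(Dense(4, 8, relu), Dense(8, 1))
opt = AdaBelief(1e-3)    # assumed constructor form, matching other Flux optimisers

x, y = rand(Float32, 4, 16), rand(Float32, 1, 16)
loss(x, y) = Flux.Losses.mse(model(x), y)

# One training step over a single batch.
Flux.train!(loss, Flux.params(model), [(x, y)], opt)
```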

## Flux v0.11.1

[Diff since v0.11.0](FluxML/Flux.jl@v0.11.0...v0.11.1)


**Closed issues:**
- ADADelta not training parameters (FluxML#1158)
- Improve repository's tags (FluxML#1181)
- CONTRIBUTING.md missing (FluxML#1182)
- Matrix times OneHotVector product does not check dimensions (FluxML#1223)
- Performance issue when calculating loss (FluxML#1255)
- Expose the RNGs used in initialization to the user (FluxML#1274)
- DataLoader fails on tuple input  (FluxML#1285)
- Unnecessarily slow normalisation, twice calculating mean (FluxML#1295)
- Basic example in docs fails (FluxML#1311)

**Merged pull requests:**
- Fixed Dimension Mismatch - AbstractMatrix and OneHotVector (FluxML#1242) (@maerory)
- Updated onehot.jl (FluxML#1256) (@Dsantra92)
- Update links and use main page of papers instead of their PDFs (FluxML#1276) (@hieronimo)
- Corrections in the Optimisers section of documents (FluxML#1290) (@coldinjection)
- Expose RNG in initializers (FluxML#1292) (@findmyway) (see the sketch below this list)
- Change CuArrays to CUDA on docs homepage (FluxML#1297) (@scimas)
- Fix ADADelta calculations and broken tests not catching the problems (FluxML#1299) (@scimas)
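
FluxML#1292 above exposes the RNG used by the weight initializers. A sketch of reproducible initialisation under that assumption; the seed, the sizes, and building a `Dense` directly from the arrays are illustrative:

```julia
using Flux, Random

rng = MersenneTwister(42)

W = Flux.glorot_uniform(rng, 5, 10)   # same 5×10 matrix every run for a fixed seed
b = zeros(Float32, 5)
layer = Dense(W, b, relu)
```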

## Flux v0.11.0

[Diff since v0.10.4](FluxML/Flux.jl@v0.10.4...v0.11.0)


**Closed issues:**
- Support for asymmetric padding (FluxML#258)
- Support for Kaiming Initialization (FluxML#424)
- trained recurrent model can't be saved in BSON (FluxML#531)
- saving ADAM optimizer is broken [@save] [BSON] (FluxML#737)
- BatchNorm gradients return Float64 instead of Float32 (FluxML#757)
- ERROR: UndefVarError: derivative not defined (FluxML#768)
- "Same" padding for conv layers? (FluxML#813)
- Strange bug with Adjoint (FluxML#866)
- Convolution without bias (FluxML#868)
- REST API for real-time prediction (FluxML#911)
- Zygote errors building bidirectional RNN (FluxML#962)
- Batch aware binarycrossentropy and logitbinarycrossentropy (FluxML#1024)
- Ways to freeze some part of a functor during training (FluxML#1034)
- dropout function is implemented as just an identity (FluxML#1084)
- revisit DataLoader api (FluxML#1088)
- Dead link in documentation (FluxML#1097)
- Orthogonal Initialization for RNN (FluxML#1107)
- no method matching apply! (FluxML#1111)
- DOC. typo in section of DataLoader (FluxML#1112)
- InitError: could not load library "cudnn64_7.dll" (FluxML#1116)
- How to downloading only one artifact of CUDA (FluxML#1117)
- gpu function does not fully work on structs within structs (FluxML#1118)
- SGD exported but not defined (FluxML#1121)
- outdim not defined & don't know how to update Flux from 0.9.0 to 0.10 (FluxML#1154)
- Simple regularisation fails for Flux 0.10.4 (FluxML#1157)
- DataLoader type instability (FluxML#1159)
- Remove Manifest from master (FluxML#1164)
- LSTM cannot be trained successfully with the latest release version (FluxML#1168)
- BatchNorm failed on GPU (FluxML#1172)
- ExpDecay does not decay according to the description (FluxML#1176)
- Repeating crashes of NVIDIA GPU/CUDA drivers while training on basic model zoo (FluxML#1183)
- Can't use Flux (FluxML#1193)
- Gradient Does not work on parameterized Variable  (FluxML#1196)
- Wrong MaxPool gradient? (FluxML#1197)
- Apply boolean mask in loss function (FluxML#1198)
- Passing Number of hidden units as a float has unexpected behaviour (FluxML#1199)
- Error in displaying example for Flux.Dense (FluxML#1203)
- Error running Flux on Jupyter (FluxML#1205)
- MethodError: no method matching apply! in custom loss function (FluxML#1210)
- Setting input or output layer size to a float in the Dense constructor should error (FluxML#1217)
- MethodError: no method matching apply!(::Type{ADAM}, ::Array{Float64,2}, ::Array{Float64,2}) for simple example (FluxML#1219)
- Incorrect gradients LSTM (FluxML#1222)
- Create additional pooling layers (FluxML#1224)
- ANN Forecasting with Flux (FluxML#1225)
- Neural Networks for Image Segmentation  (FluxML#1228)
- Got an error while training on GPU  with Mish activation function (FluxML#1235)
- Gradient for BatchNorm no longer works (FluxML#1244)
- how to restrain each element of weights to be nonnegative? (FluxML#1250)
- Retrieving weights (FluxML#1251)
- Adding regularisation causes NaNs on first Epoch (FluxML#1254)
- ERROR: Can't differentiate foreigncall expression (FluxML#1257)
- Get wrong third order derivative of Morse potential (FluxML#1267)
- ERROR: LoadError: Need an adjoint for constructor EnsembleSolution (FluxML#1270)

**Merged pull requests:**
- Fix for onecold broadcast bug (FluxML#764) (@DhairyaLGandhi)
- Make bias optional (FluxML#873) (@DhairyaLGandhi)
- Add option for "Same" padding to conv and pooling layers (FluxML#901) (@DrChainsaw) (see the sketch below this list)
- Add some gradient checking tests on GPUs (FluxML#957) (@DhairyaLGandhi)
- docstring for pad, stride, dilation (FluxML#1093) (@saswatpp)
- Explicitly import `Flux.Optimiser.apply!` in optimiser docs (FluxML#1113) (@SebastianCallh)
- Fix doc indent (FluxML#1123) (@matsueushi)
- Removed deprecated SGD exports (FluxML#1127) (@bhvieira)
- Added dropgrad in huber_loss (FluxML#1129) (@HenriDeh)
- Update glorot_normal doc (FluxML#1131) (@AdarshKumar712)
- add ClipValue and ClipNorm (FluxML#1133) (@AStupidBear)
- Add functor Cholesky. (FluxML#1138) (@aterenin)
- Speedup matmul of CuMatrix and OneHotMatrix (FluxML#1141) (@AStupidBear)
- Cleaner training loop (FluxML#1149) (@DhairyaLGandhi)
- generalize and homogenize losses (FluxML#1150) (@CarloLucibello)
- extend dataloader (FluxML#1152) (@CarloLucibello)
- Add correct overload for apply! in docs (FluxML#1156) (@DhairyaLGandhi)
- Build docs on Julia 1.3 (FluxML#1160) (@DhairyaLGandhi)
- Update CompatHelper.yml (FluxML#1162) (@aminya)
- Fix docstring of logitcrossentropy (FluxML#1165) (@cossio)
- Fix crossentropy when some probabilities are zero (FluxML#1166) (@cossio)
- Update basics.md (FluxML#1167) (@mipals)
- Functors (FluxML#1174) (@MikeInnes)
- xlogy broadcast adjoint (FluxML#1175) (@MikeInnes)
- Align ExpDecay implementation with documentation (FluxML#1177) (@DrChainsaw)
- CompatHelper: add new compat entry for "Functors" at version "0.1" (FluxML#1179) (@github-actions[bot])
- Add some functions to docs (FluxML#1184) (@DhairyaLGandhi)
- Add some news (FluxML#1185) (@DhairyaLGandhi)
- LayerNorm regularization (FluxML#1187) (@sdobber)
- Correcting advanced.md (FluxML#1190) (@Sleort)
- Pull Request Template (FluxML#1191) (@MikeInnes)
- Improve `restructure` performance (FluxML#1192) (@MikeInnes)
- Fixing ambiguous remark in Preserve inputs' types (FluxML#1206) (@natema)
- Fixing typo in docs (FluxML#1207) (@natema)
- Fixing output format for `onehot` (FluxML#1208) (@natema)
- Fixing syntax in onehot docstring (FluxML#1211) (@natema)
- Fixing indentation in train! docstring (FluxML#1213) (@natema)
- Require weight and bias to be AbstractArrays (FluxML#1218) (@oxinabox)
- CompatHelper: bump compat for "Adapt" to "2.0" (FluxML#1220) (@github-actions[bot])
- DataLoader with NamedTuple (FluxML#1221) (@cossio)
- use `ntuple` in conv (FluxML#1231) (@MikeInnes)
- Fix jldoctest for Flux.Dense (FluxML#1236) (@lassepe)
- Fix inline code block (FluxML#1238) (@harryscholes)
- add adaptive pool (FluxML#1239) (@dnabanita7)
- Documentation: Move logging example outside gradient block (FluxML#1240) (@contradict)
- add kaiming initialization and relevant docstrings (FluxML#1243) (@johnnychen94)
- Optimistic ADAM (FluxML#1246) (@cossio)
- outdims: revise implementation for Chain, dimension check for Dense (FluxML#1252) (@hhaensel)
- move to CUDA.jl (FluxML#1258) (@CarloLucibello)
- improve regularisation docs (FluxML#1260) (@CarloLucibello)
- dropout function always active (FluxML#1263) (@CarloLucibello)
- create Losses module (FluxML#1264) (@CarloLucibello)
- fix a link typo in NEWS (FluxML#1265) (@johnnychen94)
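
Two of the additions above are easy to show together: "same" padding for conv layers (FluxML#901) and gradient clipping composed into an optimiser chain (FluxML#1133). The 28×28 input, channel counts, and clipping threshold below are illustrative:

```julia
using Flux

m = Chain(
    Conv((3, 3), 1 => 8, relu; pad = SamePad()),   # spatial size stays 28×28
    Flux.flatten,
    Dense(28 * 28 * 8, 10),
)

# Clip each gradient entry, then apply ADAM.
opt = Flux.Optimise.Optimiser(ClipValue(1f-2), ADAM(1e-3))

x = rand(Float32, 28, 28, 1, 4)
size(m(x))   # (10, 4)
```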

## Flux v0.10.4

[Diff since v0.10.3](FluxML/Flux.jl@v0.10.3...v0.10.4)


**Closed issues:**
- Binary cross entropy does not work on GPUs (FluxML#464)
- Cost functions don't show up in documentation (FluxML#1003)
- freeze parameters (FluxML#1022)
- a Tracked Array mention (FluxML#1071)
- Setup BlackBoxOptim.jl and Evolutionary.jl with sciml_train (FluxML#1075)
- Using Flux.train! with train and test DataLoaders? (FluxML#1081)
- Function "DataLoader()" does not exist!  (FluxML#1109)

**Merged pull requests:**
- added GlobalMaxPool, GlobalMeanPool, and flatten layers (FluxML#950) (@gartangh) (see the sketch below this list)
- Adapt to CuArrays ArrayStyle changes. (FluxML#1050) (@maleadt)
- update freeze docs (FluxML#1072) (@CarloLucibello)
- fix typo in the Dropout docs (FluxML#1076) (@AzamatB)
- CompatHelper: bump compat for "CodecZlib" to "0.7" (FluxML#1078) (@github-actions[bot])
- CompatHelper: bump compat for "Colors" to "0.12" (FluxML#1080) (@github-actions[bot])
- Fix typo in the docstrings of AlphaDropout (FluxML#1083) (@AzamatB)
- fix doc typos (FluxML#1096) (@wenjie-p)
- Allow CuArrays v2.x (FluxML#1098) (@ararslan)
- fix tests and new version (FluxML#1110) (@CarloLucibello)
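
FluxML#950 above adds GlobalMaxPool, GlobalMeanPool, and flatten layers. A sketch of a small chain using two of them; the image size and channel counts are made up:

```julia
using Flux

m = Chain(
    Conv((3, 3), 3 => 16, relu),
    GlobalMeanPool(),    # averages each feature map down to 1×1
    Flux.flatten,        # 1×1×16×N -> 16×N
    Dense(16, 5),
)

x = rand(Float32, 32, 32, 3, 2)
size(m(x))   # (5, 2)
```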

## Flux v0.10.3

Commit message:
Merge FluxML#1072

1072: update freeze docs r=CarloLucibello a=CarloLucibello



Co-authored-by: CarloLucibello <carlo.lucibello@gmail.com>

## Flux v0.10.2

Commit message:
Merge FluxML#1065

1065: update documenter r=CarloLucibello a=CarloLucibello



Co-authored-by: CarloLucibello <carlo.lucibello@gmail.com>