
Commit

Fix typos (tracel-ai#949)
dengelt authored Nov 12, 2023
1 parent 322480b commit e255472
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion burn-book/src/basic-workflow/training.md
@@ -59,7 +59,7 @@ impl<B: Backend> ValidStep<MNISTBatch<B>, ClassificationOutput<B>> for Model<B>
 Here we define the input and output types as generic arguments in the `TrainStep` and `ValidStep`.
 We will call them `MNISTBatch` and `ClassificationOutput`. In the training step, the computation of
 gradients is straightforward, necessitating a simple invocation of `backward()` on the loss. Note
-that contrary to PyTorch, gradients are not store alongside each tensor parameter, but are rather
+that contrary to PyTorch, gradients are not stored alongside each tensor parameter, but are rather
 returned by the backward pass, as such: `let gradients = loss.backward();`. The gradient of a
 parameter can be obtained with the grad function: `let grad = tensor.grad(&gradients);`. Although it
 is not necessary when using the learner struct and the optimizers, it can prove to be quite useful
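
The passage above describes Burn's functional take on gradients. As a hedged illustration (not part of the commit): a minimal sketch of reading a gradient back out of the container returned by `backward()`. The `loss` and `weights` bindings are hypothetical stand-ins; only `backward()` and `grad(&gradients)` come from the excerpt itself.

```rust, ignore
// Hypothetical bindings: `loss` is the scalar loss tensor and `weights` a
// parameter tensor, both living on an autodiff backend.
// The backward pass returns a gradients container rather than storing a
// gradient on each tensor, as the text above explains.
let gradients = loss.backward();

// Look the gradient up explicitly; it is an `Option`, since a tensor may
// not have participated in the computation graph.
if let Some(grad) = weights.grad(&gradients) {
    println!("mean gradient: {}", grad.mean().into_scalar());
}
```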
2 changes: 1 addition & 1 deletion burn-book/src/building-blocks/autodiff.md
@@ -65,7 +65,7 @@ torch.no_grad():
 ```
 
 With Burn, you don't need to wrap the backend with the `Autodiff` for inference, and you
-can call `inner()` to obtain the inner tensor, which is useful for validation.ß
+can call `inner()` to obtain the inner tensor, which is useful for validation.
 
 ```rust, ignore
 /// Use `B: AutodiffBackend`
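
For context on the `inner()` call this hunk touches, a minimal sketch (the function and its tensor rank are assumptions for illustration, not taken from the book):

```rust, ignore
use burn::tensor::{backend::AutodiffBackend, Tensor};

// `inner()` strips the autodiff decoration from a tensor, returning the
// equivalent tensor on the plain inner backend, so validation code runs
// without recording a computation graph.
fn to_validation_tensor<B: AutodiffBackend>(tensor: Tensor<B, 2>) -> Tensor<B::InnerBackend, 2> {
    tensor.inner()
}
```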
