Doc/burn (tracel-ai#54)
nathanielsimard authored Oct 5, 2022
1 parent 5618e65 commit 7389ef2
Showing 5 changed files with 88 additions and 18 deletions.
90 changes: 80 additions & 10 deletions README.md
@@ -4,21 +4,91 @@
[![Current Crates.io Version](https://img.shields.io/crates/v/burn.svg)](https://crates.io/crates/burn)
[![Test Status](https://github.com/burn-rs/burn/actions/workflows/test-burn.yml/badge.svg)](https://github.com/burn-rs/burn/actions/workflows/test-burn.yml)
[![Documentation](https://docs.rs/burn/badge.svg)](https://docs.rs/burn)
[![Rust Version](https://img.shields.io/badge/Rust-1.65.0-blue)](https://releases.rs/docs/unreleased/1.65.0)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn/blob/master/LICENSE)

<div align="left">

> This library aims to be a complete deep learning framework with extreme flexibility, written in Rust.
> The goal is to serve researchers as well as practitioners, making it easier to experiment, train and deploy your solution.

## Features

* Flexible and intuitive custom neural network module 🤖
* Stateless and thread-safe forward pass 🚀
* Fast training with full support for `metric`, `logging` and `checkpointing` 🌟
* [Burn-Tensor](https://github.com/burn-rs/burn/burn-tensor): Tensor library with autodiff, CPU and GPU support 🔥
* [Burn-Dataset](https://github.com/burn-rs/burn/burn-dataset): Dataset library with multiple utilities and sources 📚

## Details

### Example

A full example showing most of the features of `burn` is available: [MNIST](https://github.com/burn-rs/burn/blob/main/burn/examples/mnist.rs).

### Components

Knowing the main components will be of great help when starting to play with `burn`.

#### Backend

Almost everything is based on the `Backend` trait, which allows you to run tensor operations with different implementations without having to change your code.
A backend does not necessarily have autodiff capabilities, so you can use `ADBackend` when you need them.
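The idea can be sketched in plain Rust; `Backend`, `CpuBackend`, and `describe` below are illustrative stand-ins for this explanation, not burn's actual API.

```rust
// Illustrative sketch of the backend pattern, not burn's real trait:
// code written against the trait runs unchanged on any implementation.
trait Backend {
    /// Element type the backend computes with.
    type Elem;
    /// Human-readable backend name.
    fn name() -> &'static str;
}

struct CpuBackend;

impl Backend for CpuBackend {
    type Elem = f32;
    fn name() -> &'static str {
        "cpu"
    }
}

// Generic over the backend: swapping implementations needs no code change.
fn describe<B: Backend>() -> String {
    format!("running on {}", B::name())
}
```

Calling `describe::<CpuBackend>()` yields `"running on cpu"`; a hypothetical GPU backend would only need its own `impl Backend`.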

#### Tensor

The `Tensor` struct is at the core of the `burn` framework.
It takes two generic parameters: the `Backend` and the number of dimensions `D`.

```rust
use burn::tensor::{Tensor, Shape, Data};
use burn::tensor::backend::{NdArrayBackend, TchBackend};

let my_ndarray_matrix = Tensor::<NdArrayBackend<f32>, 2>::ones(Shape::new([3, 3]));
let my_tch_matrix = Tensor::<TchBackend<f32>, 2>::from_data(
    Data::from([[1.0, 7.0], [13.0, -3.0]])
);
```
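How the rank `D` can live in the type can be sketched with const generics (a simplified stand-in, not burn's implementation): mismatched ranks then fail at compile time.

```rust
// Simplified stand-in for Tensor<B, D>: the number of dimensions D
// is a const generic, so the shape array length is checked statically.
struct SimpleTensor<const D: usize> {
    shape: [usize; D],
}

impl<const D: usize> SimpleTensor<D> {
    // Mirrors Tensor::ones in shape only; this sketch stores no data.
    fn ones(shape: [usize; D]) -> Self {
        SimpleTensor { shape }
    }

    fn num_elems(&self) -> usize {
        self.shape.iter().product()
    }
}
```

For example, `SimpleTensor::<2>::ones([3, 3])` has 9 elements, while `SimpleTensor::<2>::ones([3, 3, 3])` would not compile.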

Note that `Data` is not specific to any backend.

#### Module

The `Module` derive lets you create your own neural network modules, similar to PyTorch.

```rust
use burn::nn;
use burn::module::{Param, Module};
use burn::tensor::backend::Backend;

#[derive(Module, Debug)]
struct MyModule<B: Backend> {
    my_param: Param<nn::Linear<B>>,
    repeat: usize,
}
```

Note that only the fields wrapped inside `Param` are updated during training; the other fields should implement `Clone`.
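This split between trainable and plain fields can be sketched without the derive; `ParamSketch` and `ModuleSketch` below are hypothetical stand-ins, not burn types.

```rust
// Hypothetical stand-in for burn's Param wrapper: marks a field
// whose contents an optimizer would update during training.
#[derive(Clone, Debug)]
struct ParamSketch<T>(T);

#[derive(Clone, Debug)]
struct ModuleSketch {
    weights: ParamSketch<Vec<f32>>, // trainable
    repeat: usize,                  // plain state: only needs Clone
}

impl ModuleSketch {
    // Only the Param-wrapped field contributes trainable values,
    // mirroring what the Module derive generates.
    fn num_params(&self) -> usize {
        self.weights.0.len()
    }
}
```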

#### Forward

The `Forward` trait can also be implemented by your module.

```rust
use burn::module::Forward;
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

impl<B: Backend> Forward<Tensor<B, 2>, Tensor<B, 2>> for MyModule<B> {
    fn forward(&self, input: Tensor<B, 2>) -> Tensor<B, 2> {
        let mut x = input;

        // Apply the linear layer `repeat` times.
        for _ in 0..self.repeat {
            x = self.my_param.forward(x);
        }

        x
    }
}
```

Note that the `Forward` trait can be implemented multiple times with different input and output types.
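The multiple-implementation pattern can be sketched with a stand-in trait; `ForwardSketch` and `Doubler` are illustrative, not part of burn.

```rust
// Stand-in for the Forward trait: generic over input and output types,
// so one struct can implement it several times.
trait ForwardSketch<In, Out> {
    fn forward(&self, input: In) -> Out;
}

struct Doubler;

// Implementation for a scalar input and output.
impl ForwardSketch<i32, i32> for Doubler {
    fn forward(&self, input: i32) -> i32 {
        input * 2
    }
}

// A second implementation, with a different input/output pair.
impl ForwardSketch<Vec<f32>, Vec<f32>> for Doubler {
    fn forward(&self, input: Vec<f32>) -> Vec<f32> {
        input.into_iter().map(|x| x * 2.0).collect()
    }
}
```

The compiler picks the implementation from the argument type: `Doubler.forward(21)` uses the scalar one.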
2 changes: 1 addition & 1 deletion burn-dataset/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "burn-dataset"
-version = "0.1.0"
+version = "0.2.3"
authors = ["nathanielsimard <nathaniel.simard.42@gmail.com>"]

description = """
2 changes: 1 addition & 1 deletion burn-derive/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "burn-derive"
-version = "0.1.0"
+version = "0.2.3"
authors = ["nathanielsimard <nathaniel.simard.42@gmail.com>"]

description = """
2 changes: 1 addition & 1 deletion burn-tensor/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "burn-tensor"
-version = "0.2.2"
+version = "0.2.3"
authors = ["nathanielsimard <nathaniel.simard.42@gmail.com>"]

description = """
10 changes: 5 additions & 5 deletions burn/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "burn"
-version = "0.2.2"
+version = "0.2.3"
authors = ["nathanielsimard <nathaniel.simard.42@gmail.com>"]
description = "BURN: Burn Unstoppable Rusty Neurons"
repository = "https://github.com/nathanielsimard/burn"
@@ -22,9 +22,9 @@ all-features = false
no-default-features = true

[dependencies]
-burn-tensor = { path = "../burn-tensor", version = "0.2.1", default-features = false }
-burn-dataset = { path = "../burn-dataset", version = "0.1.0", default-features = false }
-burn-derive = { path = "../burn-derive", version = "0.1.0" }
+burn-tensor = { path = "../burn-tensor", version = "0.2.3", default-features = false }
+burn-dataset = { path = "../burn-dataset", version = "0.2.3", default-features = false }
+burn-derive = { path = "../burn-derive", version = "0.2.3" }

thiserror = "1.0"
num-traits = "0.2"
@@ -49,4 +49,4 @@ flate2 = "1.0"
nanoid = "0.4"

[dev-dependencies]
-burn-dataset = { path = "../burn-dataset", version = "0.1.0", features = ["fake"] }
+burn-dataset = { path = "../burn-dataset", version = "0.2.3", features = ["fake"] }
