A new flavour of deep learning ops for numpy, pytorch, tensorflow, chainer, gluon, and others.

`einops` introduces a new way to manipulate tensors, providing safer, more readable and semantically richer code.
## Tutorials

Tutorials are the most convenient way to see `einops` in action:

- part 1: einops fundamentals
- part 2: einops for deep learning
- part 3: TBD

(The tutorials also serve as documentation.)
## Installation

Plain and simple:

```bash
pip install einops
```

`einops` has no mandatory dependencies.

To obtain the latest version from GitHub:

```bash
pip install https://github.com/arogozhnikov/einops/archive/master.zip
```
## API

A micro-reference on the public API. The `einops` API is minimalistic and powerful.

Two operations are provided (see the guide to `einops` fundamentals):
```python
from einops import rearrange, reduce

# rearrange elements according to the pattern
output_tensor = rearrange(input_tensor, pattern, **axes_lengths)
# combine rearrangement and reduction according to the pattern
output_tensor = reduce(input_tensor, pattern, reduction, **axes_lengths)
```
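As a quick illustration, here is what the two operations look like on a numpy array (the axis names in the patterns are arbitrary; the shapes in the comments follow from the input):

```python
import numpy as np
from einops import rearrange, reduce

images = np.random.rand(32, 30, 40, 3)             # batch of 32 images, 30x40, 3 channels
# reorder axes: 'channels-last' to 'channels-first'
chw = rearrange(images, 'b h w c -> b c h w')      # shape (32, 3, 30, 40)
# global average pooling over the spatial axes
pooled = reduce(images, 'b h w c -> b c', 'mean')  # shape (32, 3)
```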
Two auxiliary functions:

```python
from einops import asnumpy, parse_shape

# einops.asnumpy converts tensors of imperative frameworks to numpy
numpy_tensor = asnumpy(input_tensor)
# einops.parse_shape returns the shape as a dictionary mapping axis names to lengths
parse_shape(input_tensor, pattern)
```
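For example, a small sketch of `parse_shape` on a numpy array (the underscore in the pattern skips an axis you don't care about):

```python
import numpy as np
from einops import parse_shape

x = np.zeros([2, 3, 32, 32])
parse_shape(x, 'batch c h w')  # {'batch': 2, 'c': 3, 'h': 32, 'w': 32}
parse_shape(x, 'batch _ h w')  # {'batch': 2, 'h': 32, 'w': 32}
```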
And two layers (a separate version for each framework) with the same API:

```python
from einops.layers.chainer import Rearrange, Reduce
from einops.layers.gluon import Rearrange, Reduce
from einops.layers.keras import Rearrange, Reduce
from einops.layers.torch import Rearrange, Reduce
```
`einops` layers behave the same way as the operations and take the same parameters
(except for the first argument, the tensor, which is passed during the call):

```python
layer = Rearrange(pattern, **axes_lengths)
# applying to a tensor
x = layer(x)

layer = Reduce(pattern, reduction, **axes_lengths)
# applying to a tensor
x = layer(x)
```
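A minimal sketch checking this equivalence in pytorch (any supported framework works the same way):

```python
import torch
from einops import rearrange
from einops.layers.torch import Rearrange

x = torch.zeros(10, 3, 32, 32)
layer = Rearrange('b c h w -> b (c h w)')
# the layer produces exactly what the operation does
assert torch.equal(layer(x), rearrange(x, 'b c h w -> b (c h w)'))
```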
Usually it is more convenient to use layers, not operations, to build models:

```python
# example given for pytorch, but code in other frameworks is almost identical
from torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU
from einops.layers.torch import Reduce

model = Sequential(
    Conv2d(3, 6, kernel_size=5),
    MaxPool2d(kernel_size=2),
    Conv2d(6, 16, kernel_size=5),
    # combined pooling and flattening in a single step
    Reduce('b c (h h2) (w w2) -> b (c h w)', 'max', h2=2, w2=2),
    Linear(16*5*5, 120),
    ReLU(),
    Linear(120, 10),
)
```
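A quick sanity check, assuming 32x32 RGB inputs (which the `16*5*5` in the first `Linear` layer implies):

```python
import torch

batch = torch.zeros(4, 3, 32, 32)
model(batch).shape  # torch.Size([4, 10])
```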
Layers are available for chainer, gluon, keras and torch.
## Naming

`einops` stands for Einstein-Inspired Notation for operations
(though "Einstein operations" sounds simpler and more attractive).
The notation was loosely inspired by Einstein summation (in particular by the numpy.einsum operation).
- The terms `tensor` and `ndarray` are used interchangeably and refer to a multidimensional array
- The terms `axis` and `dimension` are also equivalent
## Why use einops notation

```python
y = x.view(x.shape[0], -1)
y = rearrange(x, 'b c h w -> b (c h w)')
```

While these two lines do the same job in some contexts,
the second one provides information about the input and output.
In other words, `einops` focuses on the interface: what the input and output are, not how the output is computed.
The next operation looks similar to the previous two:

```python
y = rearrange(x, 'time c h w -> time (c h w)')
```

It gives the reader a hint: we are processing not an independent batch of images, but a sequence (video).
Semantic information makes the code easier to read and maintain.
Back to the same example:

```python
y = x.view(x.shape[0], -1)                # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')
```

The second line checks that the input has four dimensions, but you can also specify particular dimensions.
That's opposed to just writing comments about shapes, since comments, as we know, don't work and don't prevent mistakes:

```python
y = x.view(x.shape[0], -1)                # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```
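A small sketch of what this buys you: if the input doesn't match the stated axis lengths, `einops` raises an error instead of silently reshaping (the shapes here are illustrative):

```python
import numpy as np
from einops import rearrange

x = np.zeros([10, 256, 19, 19])
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)  # fine
# rearrange(x, 'b c h w -> b (c h w)', c=128)  # raises: axis c has length 256, not 128
```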
Below are at least two ways to define a depth-to-space operation:

```python
# depth-to-space
rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)
```

There are at least four more ways to do it. Which one does your framework use?
These details are usually ignored, since most of the time they make no difference. But they can make a big difference (e.g. if you use grouped convolutions in the next stage), and you'd like to specify this in your code.
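A tiny sketch making the difference visible (values chosen so the channel order can be read off directly):

```python
import numpy as np
from einops import rearrange

x = np.arange(8).reshape(1, 2, 2, 2)  # b=1, c=2, a single 2x2 spatial block
a = rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
b = rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)
a[0, :, 0, 0]  # array([0, 1, 2, 3, 4, 5, 6, 7]) - channels grouped by c first
b[0, :, 0, 0]  # array([0, 4, 1, 5, 2, 6, 3, 7]) - channels interleaved
```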
```python
reduce(x, 'b c (x dx) -> b c x', 'max', dx=2)
reduce(x, 'b c (x dx) (y dy) -> b c x y', 'max', dx=2, dy=3)
reduce(x, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4)
```

These examples demonstrate that we don't need separate operations for 1d/2d/3d pooling; they are all defined in a uniform way.
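For instance, here is a minimal numpy check of the 2d case (the expected output is easy to verify by hand):

```python
import numpy as np
from einops import reduce

x = np.arange(16).reshape(1, 1, 4, 4)
reduce(x, 'b c (h h2) (w w2) -> b c h w', 'max', h2=2, w2=2)
# array([[[[ 5,  7],
#          [13, 15]]]])
```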
Space-to-depth and depth-to-space are defined in many frameworks. But how about width-to-height?

```python
rearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2)
```
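A short shape-only sketch of what this does (the input shape is illustrative):

```python
import numpy as np
from einops import rearrange

x = np.zeros([1, 3, 8, 10])
rearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2).shape  # (1, 3, 16, 5)
```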
Even simple functions are defined differently by different frameworks:

```python
y = x.flatten() # or flatten(x)
```

Suppose `x`'s shape was `(3, 4, 5)`. Then `y` has shape ...

- numpy, cupy, chainer: `(60,)`
- keras, tensorflow.layers, mxnet and gluon: `(3, 20)`
- pytorch: no such function
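With `einops` notation, the intended flattening is explicit and behaves the same in every framework. A small numpy sketch:

```python
import numpy as np
from einops import rearrange

x = np.zeros([3, 4, 5])
rearrange(x, 'a b c -> (a b c)').shape  # (60,)
rearrange(x, 'a b c -> a (b c)').shape  # (3, 20)
```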
## Supported frameworks

Einops works with ...

- numpy
- pytorch
- tensorflow eager
- cupy
- chainer
- gluon
- tensorflow
- mxnet (experimental)
- keras (experimental)
## Contributing

The best ways to contribute are:

- spread the word about `einops`
- prepare a guide/post/tutorial for your favorite deep learning framework
- translate examples into languages other than English
- use `einops` notation in your papers to strictly define the operations you're using
## Supported python versions

`einops` works with python 3.5 or later.

There is nothing specific to python 3 in the code; we simply need to move forward, and I decided not to support python 2.