This page walks you through the initial steps to becoming productive with Norse. We will cover how to:

- Work with neuron state
- Work with Norse without time
- Work with Norse with recurrence
- Work with Norse with time
2.1. Running off-the-shelf code
If you just want to get started, we recommend our collection of Jupyter Notebook Tutorials. They can be run online on Google Colab.
Additionally, we provide a set of tasks that you can run right after installing Norse. One of the most common experiments is the MNIST classification task, where Norse achieves on-par performance with modern non-spiking networks:
python -m norse.task.mnist
Please refer to Running Tasks for more tasks and detailed information on how to run them.
2.2. Building neural networks with state
If you would like to build your own models with Norse, you need to know that neurons contain state. In practice, this means that neurons in Norse output two things: a spike tensor and the neuron state. Norse initialises all the necessary state for you in the beginning, but you need to carry the state onwards yourself. If you do not, the state will always be zero, the neuron will never spike, and your neurons will be forever dead!
import torch
import norse

cell = norse.torch.LIFCell()
data = torch.ones(1)
spikes, state = cell(data)
The next time you call the cell, you need to pass in that state. Otherwise, you will get the exact same output:
spikes, state = cell(data, state)
Note: This is similar to PyTorch’s RNN module if you are looking for inspiration.
2.3. Using Norse neurons with time
Similar to PyTorch's Sequential, Norse's neuron models can be chained together in a network. Unfortunately, Sequential itself does not work with neurons, for the same reason that it does not work with PyTorch's own RNNs: state. Instead, Norse offers a SequentialState module that ties stateful modules together:
import torch
import norse

model = norse.torch.SequentialState(
    torch.nn.Linear(10, 5),
    norse.torch.LIFCell(),
    torch.nn.Linear(5, 1),
)
data = torch.ones(8, 10)  # (batch, input)
out, state = model(data)  # (8, 1) output shape
2.4. Using recurrence in Norse
All neuron modules have recurrent versions. The Cell class applied above simply works as a feed-forward activation of the neuron, while the RecurrentCell also contains linear and recurrent weights (we are weighing both the input and the recurrent spikes). For that reason, we need to inform the module what shape it takes, since we have to initialise weights to match the desired input/output shape. RecurrentCell classes work out of the box and can be plugged directly into the above code. Note that we use the same input/output shape here, but it could easily be different:
import torch
import norse

model = norse.torch.SequentialState(
    torch.nn.Linear(10, 5),
    norse.torch.LIFRecurrentCell(5, 5),
    torch.nn.Linear(5, 1),
)
data = torch.ones(8, 10)  # (batch, input)
out, state = model(data)  # (8, 1) output shape
2.5. Using Norse in time
The above Cell modules follow the abstraction from PyTorch where the cells are "simple" activation functions that are applied once. However, neurons exist in time and need to be given at least a few timesteps of input before something interesting happens (like a spike).
The network above (the one without time) works perfectly well with time: you can easily wrap it in a for loop. However, it is also possible to run each module directly in time.
In Norse, we model this time aspect by removing the Cell suffix from the model name. So a LIFCell run in time is simply called LIF, and a LIFRecurrentCell run in time is simply called LIFRecurrent.
The regular Torch modules also need to run in time. For that, Norse provides a Lift module that lifts PyTorch modules into the time domain (that is, simply runs them once for every timestep).
Taken together, we get the following:
import torch
import norse

model = norse.torch.SequentialState(
    norse.torch.Lift(torch.nn.Linear(10, 5)),
    norse.torch.LSNNRecurrent(5, 5),
    norse.torch.Lift(torch.nn.Linear(5, 1)),
)
data = torch.ones(100, 8, 10)  # (time, batch, input)
out, state = model(data)       # (100, 8, 1) output shape