norse.torch.functional#
Encoding#
Stateless spiking neural network components.
Logical#
- Computes a logical and, provided x and y are bitvectors.
- Computes a logical xor, provided x and y are bitvectors.
- Computes a logical or, provided x and y are bitvectors.
- Computes the Muller C-element next state, provided x_1 and x_2 are bitvectors and y_prev is the previous state.
- Determines whether a transition from 0 to 1 has occurred, provided that z and z_prev are bitvectors.
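A minimal sketch of this logic on 0/1 spike tensors ("bitvectors"); the variable names below are illustrative and the listed functions are assumed to implement the same truth tables.

```python
import torch

x      = torch.tensor([0.0, 1.0, 1.0, 0.0])
y      = torch.tensor([0.0, 0.0, 1.0, 1.0])
y_prev = torch.tensor([0.0, 1.0, 0.0, 1.0])  # previous state / previous bitvector

logical_and = x * y                          # 1 only where both inputs are 1
logical_or  = torch.clamp(x + y, max=1.0)    # 1 where at least one input is 1
logical_xor = (x + y) % 2                    # 1 where exactly one input is 1
muller_c    = x * y + y_prev * logical_xor   # set on (1, 1), clear on (0, 0), hold otherwise
posedge     = (1 - y_prev) * y               # 1 where the signal transitioned from 0 to 1
```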
Regularization#
This module contains functional components for regularization operations on spiking layers, where it can be desirable to regularize spikes, membrane parameters, or other properties over time.
In this functional module, the aim is to collect some state \(s\) for each forward step. The collection depends on the output of the layer which, by default, simply counts spikes. It is the job of the user to include the regularization in an error term later.
Read more on Wikipedia.
- Takes one step for a regularizer that aggregates some information (based on the spike_accumulator function), which is pushed forward and returned for future inclusion in an error term.
- A spike accumulator that aggregates spikes and returns the total sum as an integer.
- A spike accumulator that aggregates membrane potentials over time.
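A minimal sketch of this pattern in plain PyTorch, assuming the default behavior of simply counting spikes; the helpers listed above are assumed to wrap the same per-step accumulation.

```python
import torch

T, batch, features = 100, 32, 10
# Stand-in for the output of a spiking layer over T forward steps.
z_sequence = (torch.rand(T, batch, features) < 0.1).float()

s = torch.tensor(0.0)             # regularization state, collected per step
for t in range(T):
    s = s + z_sequence[t].sum()   # default accumulator: total spike count

task_loss = torch.tensor(1.0)     # placeholder for the actual task loss
loss = task_loss + 1e-3 * s       # the user includes the accumulated term in the error
```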
Threshold functions#
Stateless spiking neural network components.
- A Heaviside step function that maps numbers \(\leq 0\) to 0 and everything else to 1.
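A minimal sketch of the step behavior; Norse's threshold functions typically also define surrogate gradients so the non-differentiable step can be trained with backpropagation, which is omitted here.

```python
import torch

def heaviside(x: torch.Tensor) -> torch.Tensor:
    # 1 for x > 0, 0 for x <= 0, as described above
    return (x > 0).to(x.dtype)

print(heaviside(torch.tensor([-1.0, 0.0, 0.5])))  # tensor([0., 0., 1.])
```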
Temporal operations#
Stateless spiking neural network components.
- A module for lifting neuron activation functions in time. Similar to the norse.torch.functional.lift module.
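A sketch of what lifting means here, assuming the simplest case of a stateless function applied independently to every time step; the actual lift utility is assumed to follow this idea and may additionally thread neuron state through the loop.

```python
import torch
from typing import Callable

def lift(fn: Callable[[torch.Tensor], torch.Tensor], xs: torch.Tensor) -> torch.Tensor:
    # Apply a single-step function to every entry along the leading (time) dimension.
    return torch.stack([fn(x) for x in xs])

xs = torch.randn(100, 32, 10)  # 100 time steps, batch of 32, 10 features
ys = lift(torch.relu, xs)      # relu applied independently at every step
```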
Neuron models#
Integrate-and-fire (IAF)#
- Parametrization of an integrate-and-fire neuron
- State of a feed forward integrate-and-fire neuron
- Feedforward step of an integrate-and-fire neuron, computing a single step
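A minimal sketch of such a feedforward step, assuming the usual integrate-and-fire semantics; parameter and state names below are illustrative, not the library's.

```python
import torch

def iaf_feed_forward_step(input_current, v, v_th=1.0, v_reset=0.0):
    v = v + input_current           # pure integration, no leak
    z = (v > v_th).to(v.dtype)      # spike where the threshold is crossed
    v = (1 - z) * v + z * v_reset   # reset spiking neurons
    return z, v

z, v = iaf_feed_forward_step(torch.rand(32, 10), torch.zeros(32, 10))
```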
Izhikevich#
- Parametrization of an Izhikevich neuron
- Spiking behavior of an Izhikevich neuron (this description is repeated for each of the module's predefined firing behaviors)
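For reference, the behaviors listed above follow the Izhikevich (2003) model, whose standard form is

\[
\begin{aligned}
\dot{v} &= 0.04 v^2 + 5 v + 140 - u + I \\
\dot{u} &= a (b v - u)
\end{aligned}
\]

with the reset rule: if \(v \geq 30\) mV, then \(v \leftarrow c\) and \(u \leftarrow u + d\). The individual firing patterns correspond to different choices of the parameters \(a\), \(b\), \(c\), \(d\) and the input current \(I\); the exact parameterization used in Norse is assumed to follow this form.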
Leaky integrator#
Leaky integrators describe a leaky neuron membrane that integrates incoming currents over time, but never spikes. In other words, the neuron adds up incoming input current, while leaking out some of it in every timestep.
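The two equations referred to below have the standard form (Norse is assumed to integrate them with the Euler method):

\[
\begin{aligned}
\dot{v} &= \frac{1}{\tau_{\text{mem}}} \left( v_{\text{leak}} - v + i \right) \\
\dot{i} &= -\frac{1}{\tau_{\text{syn}}}\, i
\end{aligned}
\]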
The first equation describes how the membrane voltage (\(v\), measured across the membrane) changes over time: in every timestep the voltage decays toward the leak potential (\(v_{\text{leak}}\)), while the incoming current (\(i\)) is added.
The second equation describes how the current flowing into the neuron changes in every timestep.
Notice that both equations are parameterized by the time constant \(\tau\). This constant controls how fast the changes in voltage and current occur. A large time constant means a small change per timestep. In Norse, we store this parameter as its inverse to avoid having to recalculate the inverse (\(\tau_{\text{mem\_inv}}\) and \(\tau_{\text{syn\_inv}}\) respectively). So, for Norse, a large inverse time constant means rapid changes, while a small inverse time constant means slow changes.
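A minimal sketch of one Euler step written with the inverse time constants; the parameter names mirror Norse's convention but are assumptions here.

```python
import torch

dt = 0.001                                   # integration timestep in seconds
tau_mem_inv = 1.0 / 5e-3                     # inverse membrane time constant (1/5 ms)
tau_syn_inv = 1.0 / 5e-3                     # inverse synaptic time constant (1/5 ms)
v_leak = 0.0

v = torch.zeros(32, 10)                      # membrane voltage
i = torch.rand(32, 10)                       # synaptic current

dv = dt * tau_mem_inv * (v_leak - v + i)     # larger inverse constant -> larger change
di = -dt * tau_syn_inv * i
v, i = v + dv, i + di
```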
Recall that voltage is the difference in electric potential between two points (here, across the neuron membrane), while current describes the flow of charge, that is, the amount being added or subtracted at each timestep.
More information can be found on Wikipedia or in the book *Neuronal Dynamics* by W. Gerstner et al., freely available online.
- Parameters of a leaky integrator
- State of a leaky integrator
Leaky integrate-and-fire (LIF)#
Stateless spiking neural network components.
LIF, box model#
A simplified version of the popular leaky integrate-and-fire neuron model that combines a norse.torch.functional.leaky_integrator with spike thresholds to produce events (spikes). Compared to the norse.torch.functional.lif modules, this model leaves out the current term, making it computationally simpler but impossible to implement in physical systems, because currents cannot “jump” in nature. It is these sudden current jumps that give the model its name: the shift in current is instantaneous and can be drawn as “current boxes”.
- Computes a single Euler-integration step for a LIF neuron model without current terms.
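A minimal sketch of this step, assuming the usual box-model semantics of driving the membrane voltage directly from the input; names and defaults below are illustrative, not the library's.

```python
import torch

def lif_box_step(input_tensor, v, dt=0.001, tau_mem_inv=200.0,
                 v_leak=0.0, v_th=1.0, v_reset=0.0):
    # The input enters the voltage update directly; there is no separate current state.
    v = v + dt * tau_mem_inv * (v_leak - v + input_tensor)
    z = (v > v_th).to(v.dtype)      # spike where the threshold is crossed
    v = (1 - z) * v + z * v_reset   # reset spiking neurons
    return z, v

z, v = lif_box_step(torch.rand(32, 10), torch.zeros(32, 10))
```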
LIF, conductance based#
- Parameters of a conductance-based LIF neuron.
- State of a conductance-based feed forward LIF neuron.
- Euler integration step for a conductance-based LIF neuron.
LIF, adaptive exponential#
- Parametrization of an Adaptive Exponential Leaky Integrate-and-Fire neuron
- State of a feed forward LIFAdEx neuron
- Computes a single Euler-integration step of an adaptive exponential LIF neuron model, adapted from http://www.scholarpedia.org/article/Adaptive_exponential_integrate-and-fire_model.
- Computes a single Euler-integration step of an adaptive exponential LIF neuron model, adapted from http://www.scholarpedia.org/article/Adaptive_exponential_integrate-and-fire_model.
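For reference, the adaptive exponential model referenced in these entries is commonly written as (Norse's exact parameterization is assumed to follow this form):

\[
\begin{aligned}
\tau_{\text{mem}}\, \dot{v} &= -(v - v_{\text{leak}}) + \Delta_T \exp\!\left(\frac{v - v_T}{\Delta_T}\right) + R\,(i - w) \\
\tau_w\, \dot{w} &= a\,(v - v_{\text{leak}}) - w
\end{aligned}
\]

with the reset \(v \leftarrow v_{\text{reset}}\) and \(w \leftarrow w + b\) whenever a spike is emitted.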
LIF, exponential#
- Parametrization of an Exponential Leaky Integrate-and-Fire neuron
- State of a feed forward LIFEx neuron
- Computes a single Euler-integration step of an exponential LIF neuron model, adapted from https://neuronaldynamics.epfl.ch/online/Ch5.S2.html.
- Computes a single Euler-integration step of a leaky integrator, adapted from https://neuronaldynamics.epfl.ch/online/Ch5.S2.html.
LIF, multicompartmental (MC)#
Stateless spiking neural network components.
LIF, refractory#
Stateless spiking neural network components.
Long short-term memory (LSNN)#
Stateless spiking neural network components.
Receptive fields#
A module for creating receptive fields.
- Efficiently creates a differentiable 2D Gaussian kernel.
- Creates a (size x size) receptive field kernel at a given scale, angle and ratio with respect to x and y derivatives.
- Creates a number of receptive fields based on the spatial parameters and size of the receptive field.
- Provides temporal scales according to [Lindeberg2016].
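A minimal sketch of the first entry above, a differentiable 2D Gaussian kernel; the angle and ratio parameters of the actual helpers are assumed to generalize this isotropic case.

```python
import torch

def gaussian_kernel_2d(size: int, scale: torch.Tensor) -> torch.Tensor:
    # Centered pixel coordinates for a (size x size) grid.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    yy, xx = torch.meshgrid(coords, coords, indexing="ij")
    kernel = torch.exp(-(xx**2 + yy**2) / (2 * scale**2))
    return kernel / kernel.sum()   # normalize so the kernel sums to one

scale = torch.tensor(1.5, requires_grad=True)
kernel = gaussian_kernel_2d(9, scale)   # (9, 9) kernel, differentiable w.r.t. scale
```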
Plasticity models#
Spike-time dependent plasticity (STDP)#
Stateless spiking neural network components.
Tsodyks-Markram timing-dependent plasticity (TDP)#
Stateless spiking neural network components.