norse.torch.module.lif module

A popular neuron model that combines a norse.torch.module.leaky_integrator with a spike threshold to produce events (spikes).

See norse.torch.functional.lif for more information.

class norse.torch.module.lif.LIF(p=LIFParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), method='super', alpha=tensor(100.)), **kwargs)[source]

Bases: norse.torch.module.snn.SNN

A neuron layer that wraps a LIFCell in time such that the layer keeps track of temporal sequences of spikes. After application, the layer returns a tuple containing

(spikes from all timesteps, state from the last timestep).

Example

>>> data = torch.zeros(10, 5, 2) # 10 timesteps, 5 batches, 2 neurons
>>> l = LIF()
>>> l(data) # Returns tuple of (Tensor(10, 5, 2), LIFFeedForwardState)
Parameters
  • p (LIFParameters) – The neuron parameters as a torch Module, which allows the module to configure neuron parameters as optimizable.

  • sparse (bool) – Whether to apply sparse activation functions (True) or not (False). Defaults to False.

  • dt (float) – Time step to use in integration. Defaults to 0.001.
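As a minimal usage sketch of the parameters above (assuming LIFParameters can be imported from norse.torch.functional.lif, as referenced at the top of this page, and that all of its fields carry the defaults shown in the signature):

>>> import torch
>>> from norse.torch.module.lif import LIF
>>> from norse.torch.functional.lif import LIFParameters
>>> p = LIFParameters(v_th=torch.tensor(0.6))  # lower firing threshold
>>> layer = LIF(p=p, dt=0.002)                 # coarser integration time step
>>> data = torch.randn(100, 8, 2)              # 100 timesteps, 8 batches, 2 neurons
>>> spikes, state = layer(data)                # spikes: (100, 8, 2); state from the last timestep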

initial_state(input_tensor)[source]
Return type

LIFFeedForwardState

training: bool
class norse.torch.module.lif.LIFCell(p=LIFParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), method='super', alpha=tensor(100.)), **kwargs)[source]

Bases: norse.torch.module.snn.SNNCell

Module that computes a single Euler integration step of a leaky integrate-and-fire (LIF) neuron model without recurrence and without time.

More specifically, it implements one integration step of the following ODE

\[\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} (v_{\text{leak}} - v + i) \\ \dot{i} &= -1/\tau_{\text{syn}} i \end{align*}\]

together with the jump condition

\[z = \Theta(v - v_{\text{th}})\]

and transition equations

\[\begin{align*} v &= (1-z) v + z v_{\text{reset}} \end{align*}\]
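For intuition only, a forward-Euler step of the equations above could be sketched in plain PyTorch as follows. This is an illustrative approximation, not Norse's implementation; see norse.torch.functional.lif for the actual update order and state handling.

import torch

def lif_feedforward_step_sketch(input_current, v, i, dt=0.001,
                                tau_mem_inv=100.0, tau_syn_inv=200.0,
                                v_leak=0.0, v_th=1.0, v_reset=0.0):
    # dv/dt = 1/tau_mem * (v_leak - v + i)
    v = v + dt * tau_mem_inv * (v_leak - v + i)
    # di/dt = -1/tau_syn * i, driven by the feed-forward input current
    i = i + dt * (-tau_syn_inv * i) + input_current
    # Jump condition: z = Theta(v - v_th)
    z = (v - v_th >= 0).to(v.dtype)
    # Transition equation: reset the membrane potential where a spike occurred
    v = (1 - z) * v + z * v_reset
    return z, v, i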

Example

>>> data = torch.zeros(5, 2) # 5 batches, 2 neurons
>>> l = LIFCell()
>>> l(data) # Returns tuple of (Tensor(5, 2), LIFFeedForwardState)
Parameters
  • p (LIFParameters) – Parameters of the LIF neuron model.

  • sparse (bool) – Whether to apply sparse activation functions (True) or not (False). Defaults to False.

  • dt (float) – Time step to use. Defaults to 0.001.
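Since the cell is defined without time, stepping through a temporal sequence is done by looping over timesteps and carrying the state between calls. A minimal sketch, assuming the cell's forward pass accepts an optional state argument and returns (spikes, new state):

>>> import torch
>>> from norse.torch.module.lif import LIFCell
>>> cell = LIFCell()
>>> data = torch.zeros(10, 5, 2)          # 10 timesteps, 5 batches, 2 neurons
>>> state = None                          # None lets the cell create its initial state
>>> outputs = []
>>> for ts in range(data.shape[0]):
...     z, state = cell(data[ts], state)
...     outputs.append(z)
>>> spikes = torch.stack(outputs)         # (10, 5, 2), equivalent to what the LIF layer returns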

initial_state(input_tensor)[source]
Return type

LIFFeedForwardState

training: bool
class norse.torch.module.lif.LIFRecurrent(input_size, hidden_size, p=LIFParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), method='super', alpha=tensor(100.)), **kwargs)[source]

Bases: norse.torch.module.snn.SNNRecurrent

A neuron layer that wraps a LIFRecurrentCell in time such that the layer keeps track of temporal sequences of spikes. After application, the module returns a tuple containing

(spikes from all timesteps, state from the last timestep).

Example

>>> data = torch.zeros(10, 5, 2) # 10 timesteps, 5 batches, 2 neurons
>>> l = LIFRecurrent(2, 4)
>>> l(data) # Returns tuple of (Tensor(10, 5, 4), LIFState)
Parameters
  • input_size (int) – The number of input neurons

  • hidden_size (int) – The number of hidden neurons

  • p (LIFParameters) – The neuron parameters as a torch Module, which allows the module to configure neuron parameters as optimizable.

  • sparse (bool) – Whether to apply sparse activation functions (True) or not (False). Defaults to False.

  • input_weights (torch.Tensor) – Weights used for input tensors. Defaults to a random matrix normalized to the number of hidden neurons.

  • recurrent_weights (torch.Tensor) – Weights used for the recurrent connections. Defaults to a random matrix normalized to the number of hidden neurons.

  • autapses (bool) – Whether to allow self-connections in the recurrence. Defaults to False. If False, autapses are also removed from custom recurrent weights given above.

  • dt (float) – Time step to use in integration. Defaults to 0.001.
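A hedged usage sketch, assuming the layer's forward pass also accepts an optional state so that a simulation can be resumed across calls:

>>> import torch
>>> from norse.torch.module.lif import LIFRecurrent
>>> layer = LIFRecurrent(input_size=2, hidden_size=4)
>>> spikes, state = layer(torch.randn(10, 5, 2))              # first 10 timesteps; spikes: (10, 5, 4)
>>> more_spikes, state = layer(torch.randn(10, 5, 2), state)  # continue from the previous state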

initial_state(input_tensor)[source]
Return type

LIFState

training: bool
class norse.torch.module.lif.LIFRecurrentCell(input_size, hidden_size, p=LIFParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), method='super', alpha=tensor(100.)), **kwargs)[source]

Bases: norse.torch.module.snn.SNNRecurrentCell

Module that computes a single Euler integration step of a leaky integrate-and-fire (LIF) neuron model with recurrence but without time. More specifically, it implements one integration step of the following ODE

\[\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} (v_{\text{leak}} - v + i) \\ \dot{i} &= -1/\tau_{\text{syn}} i \end{align*}\]

together with the jump condition

\[z = \Theta(v - v_{\text{th}})\]

and transition equations

\[\begin{align*} v &= (1-z) v + z v_{\text{reset}} \\ i &= i + w_{\text{input}} z_{\text{in}} \\ i &= i + w_{\text{rec}} z_{\text{rec}} \end{align*}\]

where \(z_{\text{rec}}\) and \(z_{\text{in}}\) are the recurrent and input spikes respectively.
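Analogous to the feed-forward cell, the recurrent update can be sketched in plain PyTorch as below. The weight shapes and the ordering of the updates are assumptions made for illustration, not Norse's exact implementation.

import torch

def lif_recurrent_step_sketch(z_in, z_rec, v, i, w_input, w_rec, dt=0.001,
                              tau_mem_inv=100.0, tau_syn_inv=200.0,
                              v_leak=0.0, v_th=1.0, v_reset=0.0):
    # dv/dt = 1/tau_mem * (v_leak - v + i)
    v = v + dt * tau_mem_inv * (v_leak - v + i)
    # di/dt = -1/tau_syn * i
    i = i + dt * (-tau_syn_inv * i)
    # Jump condition and membrane reset
    z = (v - v_th >= 0).to(v.dtype)
    v = (1 - z) * v + z * v_reset
    # Transition equations: add weighted input and recurrent spikes to the synaptic current
    i = i + torch.nn.functional.linear(z_in, w_input)  # i += w_input z_in
    i = i + torch.nn.functional.linear(z_rec, w_rec)   # i += w_rec z_rec
    return z, v, i

Here z_rec would typically be the spikes emitted by the layer at the previous timestep.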

Example

>>> data = torch.zeros(5, 2) # 5 batches, 2 neurons
>>> l = LIFRecurrentCell(2, 4)
>>> l(data) # Returns tuple of (Tensor(5, 4), LIFState)
Parameters
  • input_size (int) – Size of the input. Also known as the number of input features.

  • hidden_size (int) – Size of the hidden state. Also known as the number of hidden features.

  • p (LIFParameters) – Parameters of the LIF neuron model.

  • sparse (bool) – Whether to apply sparse activation functions (True) or not (False). Defaults to False.

  • input_weights (torch.Tensor) – Weights used for input tensors. Defaults to a random matrix normalized to the number of hidden neurons.

  • recurrent_weights (torch.Tensor) – Weights used for the recurrent connections. Defaults to a random matrix normalized to the number of hidden neurons.

  • autapses (bool) – Whether to allow self-connections in the recurrence. Defaults to False. If False, autapses are also removed from custom recurrent weights given above.

  • dt (float) – Time step to use in integration. Defaults to 0.001.
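A hedged sketch of passing explicit weights; the shapes (hidden_size, input_size) for input_weights and (hidden_size, hidden_size) for recurrent_weights are assumed here and should be checked against norse.torch.module.snn.SNNRecurrentCell:

>>> import torch
>>> from norse.torch.module.lif import LIFRecurrentCell
>>> w_in = torch.randn(4, 2) * (4 ** -0.5)    # assumed shape: (hidden_size, input_size)
>>> w_rec = torch.randn(4, 4) * (4 ** -0.5)   # assumed shape: (hidden_size, hidden_size)
>>> cell = LIFRecurrentCell(2, 4, input_weights=w_in, recurrent_weights=w_rec)
>>> z, state = cell(torch.zeros(5, 2))        # (Tensor(5, 4), LIFState)

With the default autapses=False, the diagonal of the supplied recurrent weights is removed, as described above.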

initial_state(input_tensor)[source]
Return type

LIFState

training: bool