norse.torch.module.lif_refrac module

class norse.torch.module.lif_refrac.LIFRefracCell(p=LIFRefracParameters(lif=LIFParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), method='super', alpha=tensor(100.)), rho_reset=tensor(5.)), **kwargs)[source]

Bases: norse.torch.module.snn.SNNCell

Module that computes a single Euler integration step of a feed-forward LIF neuron model with absolute refractory period (no recurrence). More specifically, it implements one integration step of the following ODE:

\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} (1-\Theta(\rho)) (v_{\text{leak}} - v + i) \\ \dot{i} &= -1/\tau_{\text{syn}} i \\ \dot{\rho} &= -1/\tau_{\text{refrac}} \Theta(\rho) \end{align*}\end{split}\]

together with the jump condition

\[\begin{split}\begin{align*} z &= \Theta(v - v_{\text{th}}) \\ z_r &= \Theta(-\rho) \end{align*}\end{split}\]

and transition equations

\[\begin{split}\begin{align*} v &= (1-z) v + z v_{\text{reset}} \\ \rho &= \rho + z_r \rho_{\text{reset}} \end{align*}\end{split}\]
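The three equation groups above amount to one explicit Euler update per time step. The following plain-PyTorch sketch spells that update out. It is illustrative only, not norse's implementation: it uses a hard Heaviside for \(\Theta\) (norse's `method='super'` uses a surrogate gradient so the step stays differentiable), it lets the refractory counter tick down one unit per step (i.e. \(\tau_{\text{refrac}}\) is taken as one step, so `rho_reset` counts steps), and the function name and argument order are made up here.

```python
import torch

def lif_refrac_step(v, i, rho, x, dt=0.001,
                    tau_syn_inv=200.0, tau_mem_inv=100.0,
                    v_leak=0.0, v_th=1.0, v_reset=0.0, rho_reset=5.0):
    """One explicit Euler step of the refractory LIF dynamics (sketch)."""
    refrac = (rho > 0).to(v.dtype)          # Theta(rho): 1 while refractory
    # ODE step: the membrane is frozen while the neuron is refractory
    v = v + dt * tau_mem_inv * (1.0 - refrac) * (v_leak - v + i)
    i = i + dt * (-tau_syn_inv) * i + x     # current decay plus injected input
    rho = torch.relu(rho - refrac)          # counter ticks down one step
    # jump condition: spike only when above threshold and not refractory
    z = ((v - v_th) > 0).to(v.dtype) * (1.0 - refrac)
    # transition equations: reset the membrane, reload the refractory counter
    v = (1.0 - z) * v + z * v_reset
    rho = rho + z * rho_reset
    return z, v, i, rho
```

A neuron driven above threshold spikes once, its membrane is reset, and it then sits out `rho_reset` steps before integrating again.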
Parameters
  • p (LIFRefracParameters) – parameters of the LIF neuron with absolute refractory period

  • dt (float) – Integration timestep to use

Examples

>>> batch_size = 16
>>> lif = LIFRefracCell()
>>> input = torch.randn(batch_size, 20, 30)
>>> output, s0 = lif(input)
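The practical effect of the refractory term is to cap the firing rate: however strong the drive, a neuron can fire at most once per `rho_reset + 1` steps. The loop below demonstrates this with a self-contained plain-torch toy (constant suprathreshold current, hard threshold, per-step refractory countdown as above); it is a sketch of the documented dynamics, not norse's actual integrator, and the function name `run` is illustrative.

```python
import torch

def run(T, rho_reset, i_const=12.0, dt=0.001,
        tau_mem_inv=100.0, v_th=1.0, v_reset=0.0):
    """Count spikes over T steps under a constant input current (sketch)."""
    v = torch.zeros(1)
    rho = torch.zeros(1)
    spikes = 0
    for _ in range(T):
        refrac = (rho > 0).to(v.dtype)             # Theta(rho)
        # membrane integrates only outside the refractory period
        v = v + dt * tau_mem_inv * (1.0 - refrac) * (i_const - v)
        rho = torch.relu(rho - refrac)             # counter ticks down
        z = ((v - v_th) > 0).to(v.dtype) * (1.0 - refrac)
        v = (1.0 - z) * v + z * v_reset            # reset on spike
        rho = rho + z * rho_reset                  # reload counter on spike
        spikes += int(z.item())
    return spikes
```

With `rho_reset=5` the neuron fires every sixth step; with `rho_reset=0` it fires on every step.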

Initializes internal Module state, shared by both nn.Module and ScriptModule.

initial_state(input_tensor)[source]
Return type

LIFRefracFeedForwardState

training: bool
class norse.torch.module.lif_refrac.LIFRefracRecurrentCell(input_size, hidden_size, p=LIFRefracParameters(lif=LIFParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), method='super', alpha=tensor(100.)), rho_reset=tensor(5.)), **kwargs)[source]

Bases: norse.torch.module.snn.SNNRecurrentCell

Module that computes a single Euler integration step of a recurrent LIF neuron model with absolute refractory period. More specifically, it implements one integration step of the following ODE:

\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} (1-\Theta(\rho)) (v_{\text{leak}} - v + i) \\ \dot{i} &= -1/\tau_{\text{syn}} i \\ \dot{\rho} &= -1/\tau_{\text{refrac}} \Theta(\rho) \end{align*}\end{split}\]

together with the jump condition

\[\begin{split}\begin{align*} z &= \Theta(v - v_{\text{th}}) \\ z_r &= \Theta(-\rho) \end{align*}\end{split}\]

and transition equations

\[\begin{split}\begin{align*} v &= (1-z) v + z v_{\text{reset}} \\ i &= i + w_{\text{input}} z_{\text{in}} \\ i &= i + w_{\text{rec}} z_{\text{rec}} \\ \rho &= \rho + z_r \rho_{\text{reset}} \end{align*}\end{split}\]

where \(z_{\text{rec}}\) and \(z_{\text{in}}\) are the recurrent and input spikes respectively.

Parameters
  • input_size (int) – Size of the input. Also known as the number of input features.

  • hidden_size (int) – Size of the hidden state. Also known as the number of hidden features.

  • p (LIFRefracParameters) – parameters of the LIF neuron with absolute refractory period

  • dt (float) – Integration timestep to use

  • autapses (bool) – Whether to allow self-connections in the recurrence. Defaults to False.

Examples

>>> batch_size = 16
>>> lif = LIFRefracRecurrentCell(10, 20)
>>> input = torch.randn(batch_size, 10)
>>> output, s0 = lif(input)
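The recurrent cell differs from the feed-forward one only in how current enters: both the external spikes \(z_{\text{in}}\) and the previous step's own spikes \(z_{\text{rec}}\) are projected through weight matrices into \(i\), and with `autapses=False` the diagonal of the recurrent matrix is masked so a neuron's own spike cannot drive itself. The sketch below makes that concrete in plain PyTorch; it is illustrative (hard Heaviside, one-step refractory countdown, made-up function name and argument order), not norse's implementation.

```python
import torch

def lif_refrac_recurrent_step(z_in, z_rec, v, i, rho, w_input, w_rec,
                              dt=0.001, tau_syn_inv=200.0, tau_mem_inv=100.0,
                              v_leak=0.0, v_th=1.0, v_reset=0.0,
                              rho_reset=5.0, autapses=False):
    """One Euler step of the recurrent refractory LIF dynamics (sketch)."""
    if not autapses:
        # zero the diagonal so a neuron's own spike does not feed back
        w_rec = w_rec * (1.0 - torch.eye(w_rec.shape[0]))
    refrac = (rho > 0).to(v.dtype)                  # Theta(rho)
    v = v + dt * tau_mem_inv * (1.0 - refrac) * (v_leak - v + i)
    i = i + dt * (-tau_syn_inv) * i
    # transition equations: input and recurrent spikes both add current
    i = i + z_in @ w_input.t() + z_rec @ w_rec.t()
    rho = torch.relu(rho - refrac)
    z = ((v - v_th) > 0).to(v.dtype) * (1.0 - refrac)
    v = (1.0 - z) * v + z * v_reset
    rho = rho + z * rho_reset
    return z, v, i, rho
```

With identity recurrent weights, masking autapses removes the recurrent contribution entirely, which is exactly the self-connection the `autapses` flag controls.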


initial_state(input_tensor)[source]
Return type

LIFRefracState

training: bool