norse.torch.module.leaky_integrator module

Leaky integrators describe a leaky neuron membrane that integrates incoming currents over time, but never spikes. In other words, the neuron adds up incoming input current while leaking some of it away at every timestep.

See norse.torch.functional.leaky_integrator for more information.
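The dynamics above can be sketched with a plain explicit-Euler discretization. This is an illustrative, framework-free sketch (the function name `li_euler_step` is made up here, not norse's API), using the same default constants shown in the class signatures below: tau_syn_inv = 200, tau_mem_inv = 100, v_leak = 0, dt = 0.001.

```python
def li_euler_step(v, i, i_in, tau_syn_inv=200.0, tau_mem_inv=100.0,
                  v_leak=0.0, dt=0.001):
    """One explicit-Euler step of the leaky-integrator ODE:

        dv/dt = tau_mem_inv * (v_leak - v + i)
        di/dt = -tau_syn_inv * i

    followed by injecting the input current into i.
    """
    v_new = v + dt * tau_mem_inv * (v_leak - v + i)
    i_new = i + dt * (-tau_syn_inv) * i
    i_new = i_new + i_in  # transition: add the incoming current
    return v_new, i_new

# Drive the neuron with a constant input current: the synaptic current i
# builds up, and the membrane voltage v integrates it while leaking
# back toward v_leak.
v, i = 0.0, 0.0
voltages = []
for _ in range(100):
    v, i = li_euler_step(v, i, i_in=0.1)
    voltages.append(v)
```

With the input removed, both i and v decay exponentially back toward their resting values, which is the "leaky" part of the name.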

class norse.torch.module.leaky_integrator.LICell(p=LIParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.)), **kwargs)[source]

Bases: norse.torch.module.snn.SNNCell

Cell for a leaky integrator without recurrence. More specifically, it implements a discretized version of the ODE

\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} (v_{\text{leak}} - v + i) \\ \dot{i} &= -1/\tau_{\text{syn}} i \end{align*}\end{split}\]

and transition equations

\[i = i + i_{\text{in}}\]
Parameters
  • p (LIParameters) – parameters of the leaky integrator

  • dt (float) – integration timestep to use

Initializes internal Module state, shared by both nn.Module and ScriptModule.

initial_state(input_tensor)[source]
Return type

LIState

training: bool
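The cell follows the usual (input, state) → (output, state) calling convention: passing state=None starts from the initial state, and the returned state is threaded through successive timesteps. Below is a minimal, framework-free sketch of that convention; the class `TinyLICell` is illustrative only and is not norse's implementation.

```python
class TinyLICell:
    """Toy leaky-integrator cell mirroring LICell's call pattern."""

    def __init__(self, tau_syn_inv=200.0, tau_mem_inv=100.0,
                 v_leak=0.0, dt=0.001):
        self.tau_syn_inv = tau_syn_inv
        self.tau_mem_inv = tau_mem_inv
        self.v_leak = v_leak
        self.dt = dt

    def initial_state(self, input_value):
        # Membrane starts at the leak potential, with zero synaptic current.
        return (self.v_leak, 0.0)

    def __call__(self, input_value, state=None):
        if state is None:
            state = self.initial_state(input_value)
        v, i = state
        v = v + self.dt * self.tau_mem_inv * (self.v_leak - v + i)
        i = i + self.dt * (-self.tau_syn_inv) * i
        i = i + input_value  # transition equation: i = i + i_in
        return v, (v, i)

# Thread the state through a sequence of inputs, one timestep at a time.
cell = TinyLICell()
state = None
out = 0.0
for x in [0.1] * 50:
    out, state = cell(x, state)
```

The output at each step is the membrane voltage, and the full (v, i) state is returned alongside it so the caller controls how state flows across time.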
class norse.torch.module.leaky_integrator.LILinearCell(input_size, hidden_size, p=LIParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.)), dt=0.001)[source]

Bases: torch.nn.modules.module.Module

Cell for a leaky integrator with an additional linear weighting. More specifically, it implements a discretized version of the ODE

\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} (v_{\text{leak}} - v + i) \\ \dot{i} &= -1/\tau_{\text{syn}} i \end{align*}\end{split}\]

and transition equations

\[i = i + w i_{\text{in}}\]
Parameters
  • input_size (int) – Size of the input. Also known as the number of input features.

  • hidden_size (int) – Size of the hidden state. Also known as the number of output features.

  • p (LIParameters) – parameters of the leaky integrator

  • dt (float) – integration timestep to use

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(input_tensor, state=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this method, since calling the instance takes care of running the registered hooks while this method silently ignores them.

Return type

Tuple[Tensor, LIState]

training: bool
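The difference from LICell is the linear weighting in the transition equation: the input vector is first projected through a (hidden_size × input_size) weight matrix, so i = i + W x, and the leaky dynamics then apply independently to each hidden unit. A framework-free sketch of one such step (the name `li_linear_step` and the fixed matrix W are hypothetical; the real module learns W as a parameter):

```python
def li_linear_step(x, state, weights, tau_syn_inv=200.0,
                   tau_mem_inv=100.0, v_leak=0.0, dt=0.001):
    """One Euler step of the linearly weighted leaky integrator.

    x:       input vector of length input_size
    state:   (v, i), each a list of length hidden_size
    weights: hidden_size x input_size matrix (list of rows)
    """
    v, i = state
    v = [vj + dt * tau_mem_inv * (v_leak - vj + ij)
         for vj, ij in zip(v, i)]
    i = [ij + dt * (-tau_syn_inv) * ij for ij in i]
    # Transition with linear weighting: i = i + W x
    wx = [sum(w_jk * x_k for w_jk, x_k in zip(row, x)) for row in weights]
    i = [ij + wj for ij, wj in zip(i, wx)]
    return v, (v, i)

# input_size = 3, hidden_size = 2; each hidden unit integrates its own
# weighted combination of the three input channels.
W = [[0.5, -0.2, 0.1],
     [0.0, 0.3, 0.4]]
state = ([0.0, 0.0], [0.0, 0.0])
for _ in range(20):
    out, state = li_linear_step([1.0, 1.0, 1.0], state, W)
```

Because the dynamics are linear and identical per unit, a hidden unit whose weighted input is larger simply settles at a proportionally larger voltage.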