norse.torch.functional.lif_adex module

class norse.torch.functional.lif_adex.LIFAdExFeedForwardState(v: torch.Tensor, i: torch.Tensor, a: torch.Tensor)[source]

Bases: tuple

State of a feed-forward LIFAdEx neuron

Parameters
  • v (torch.Tensor) – membrane potential

  • i (torch.Tensor) – synaptic input current

  • a (torch.Tensor) – membrane adaptation variable

Create new instance of LIFAdExFeedForwardState(v, i, a)

a: torch.Tensor

Alias for field number 2

i: torch.Tensor

Alias for field number 1

v: torch.Tensor

Alias for field number 0

class norse.torch.functional.lif_adex.LIFAdExParameters(adaptation_current: torch.Tensor = tensor(4), adaptation_spike: torch.Tensor = tensor(0.0200), delta_T: torch.Tensor = tensor(0.5000), tau_ada_inv: torch.Tensor = tensor(2.), tau_syn_inv: torch.Tensor = tensor(200.), tau_mem_inv: torch.Tensor = tensor(100.), v_leak: torch.Tensor = tensor(0.), v_th: torch.Tensor = tensor(1.), v_reset: torch.Tensor = tensor(0.), method: str = 'super', alpha: float = 100.0)[source]

Bases: tuple

Parametrization of an adaptive exponential leaky integrate-and-fire (AdEx) neuron

Default values from https://github.com/NeuralEnsemble/PyNN/blob/d8056fa956998b031a1c3689a528473ed2bc0265/pyNN/standardmodels/cells.py#L416

Parameters
  • adaptation_current (torch.Tensor) – adaptation coupling parameter in nS

  • adaptation_spike (torch.Tensor) – spike triggered adaptation parameter in nA

  • delta_T (torch.Tensor) – sharpness or speed of the exponential growth in mV

  • tau_ada_inv (torch.Tensor) – inverse adaptation time constant (\(1/\tau_\text{ada}\)) in 1/ms

  • tau_syn_inv (torch.Tensor) – inverse synaptic time constant (\(1/\tau_\text{syn}\)) in 1/ms

  • tau_mem_inv (torch.Tensor) – inverse membrane time constant (\(1/\tau_\text{mem}\)) in 1/ms

  • v_leak (torch.Tensor) – leak potential in mV

  • v_th (torch.Tensor) – threshold potential in mV

  • v_reset (torch.Tensor) – reset potential in mV

  • method (str) – method to determine the spike threshold (relevant for surrogate gradients)

  • alpha (float) – hyperparameter to use in surrogate gradient computation

Create new instance of LIFAdExParameters(adaptation_current, adaptation_spike, delta_T, tau_ada_inv, tau_syn_inv, tau_mem_inv, v_leak, v_th, v_reset, method, alpha)

adaptation_current: torch.Tensor

Alias for field number 0

adaptation_spike: torch.Tensor

Alias for field number 1

alpha: float

Alias for field number 10

delta_T: torch.Tensor

Alias for field number 2

method: str

Alias for field number 9

tau_ada_inv: torch.Tensor

Alias for field number 3

tau_mem_inv: torch.Tensor

Alias for field number 5

tau_syn_inv: torch.Tensor

Alias for field number 4

v_leak: torch.Tensor

Alias for field number 6

v_reset: torch.Tensor

Alias for field number 8

v_th: torch.Tensor

Alias for field number 7
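Since LIFAdExParameters is a NamedTuple, every field can be read either by name or by its field index (the "Alias for field number N" lines above), and individual defaults can be overridden at construction. This can be illustrated with a plain-Python mirror of the tuple; `SketchParams` is a hypothetical stand-in for illustration only, not norse's class:

```python
from collections import namedtuple

# Pure-Python mirror of the LIFAdExParameters field layout, shown only to
# illustrate the "Alias for field number N" semantics documented above.
SketchParams = namedtuple(
    "SketchParams",
    [
        "adaptation_current",  # field 0
        "adaptation_spike",    # field 1
        "delta_T",             # field 2
        "tau_ada_inv",         # field 3
        "tau_syn_inv",         # field 4
        "tau_mem_inv",         # field 5
        "v_leak",              # field 6
        "v_th",                # field 7
        "v_reset",             # field 8
        "method",              # field 9
        "alpha",               # field 10
    ],
)

# Defaults matching the signature above (plain floats instead of tensors).
p = SketchParams(4.0, 0.02, 0.5, 2.0, 200.0, 100.0, 0.0, 1.0, 0.0, "super", 100.0)

# Named access and positional access refer to the same field.
assert p.tau_syn_inv == p[4]

# Override a single default, analogous to LIFAdExParameters(v_th=0.5):
p2 = p._replace(v_th=0.5)
```

The same named/positional access applies to the real LIFAdExParameters, with `torch.Tensor` values in place of the floats used here.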

class norse.torch.functional.lif_adex.LIFAdExState(z: torch.Tensor, v: torch.Tensor, i: torch.Tensor, a: torch.Tensor)[source]

Bases: tuple

State of a LIFAdEx neuron

Parameters
  • z (torch.Tensor) – recurrent spikes

  • v (torch.Tensor) – membrane potential

  • i (torch.Tensor) – synaptic input current

  • a (torch.Tensor) – membrane adaptation variable

Create new instance of LIFAdExState(z, v, i, a)

a: torch.Tensor

Alias for field number 3

i: torch.Tensor

Alias for field number 2

v: torch.Tensor

Alias for field number 1

z: torch.Tensor

Alias for field number 0

norse.torch.functional.lif_adex.lif_adex_current_encoder(input_current, voltage, adaptation, p=LIFAdExParameters(adaptation_current=tensor(4), adaptation_spike=tensor(0.0200), delta_T=tensor(0.5000), tau_ada_inv=tensor(2.), tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), method='super', alpha=100.0), dt=0.001)[source]

Computes a single Euler integration step of an adaptive exponential LIF neuron model, adapted from http://www.scholarpedia.org/article/Adaptive_exponential_integrate-and-fire_model. More specifically, it implements one integration step of the following ODE

\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} \left(v_{\text{leak}} - v + i + \Delta_T \exp\left({{v - v_{\text{th}}} \over {\Delta_T}}\right)\right) \\ \dot{i} &= -1/\tau_{\text{syn}} i \\ \dot{a} &= 1/\tau_{\text{ada}} \left( a_{\text{current}} (v - v_{\text{leak}}) - a \right) \end{align*}\end{split}\]

together with the jump condition

\[z = \Theta(v - v_{\text{th}})\]

and transition equations

\[\begin{split}\begin{align*} v &= (1-z) v + z v_{\text{reset}} \\ i &= i + i_{\text{in}} \\ a &= a + a_{\text{spike}} z \end{align*}\end{split}\]
Parameters
  • input_current (torch.Tensor) – the input current at the current time step

  • voltage (torch.Tensor) – current membrane potential of the LIFAdEx neuron

  • adaptation (torch.Tensor) – current membrane adaptation state in nS

  • p (LIFAdExParameters) – parameters of the adaptive exponential LIF neuron

  • dt (float) – Integration timestep to use

Return type

Tuple[Tensor, Tensor, Tensor]
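Concretely, one such Euler step can be sketched in plain Python, with scalars in place of tensors and the default parameter values hard-coded. The function `adex_current_step` below is an illustrative stand-in following the equations above, not norse's implementation:

```python
import math

def adex_current_step(i_in, v, i, a, dt=0.001):
    """One Euler step of the AdEx dynamics above, with the default
    parameter values hard-coded (illustrative sketch, not norse's API)."""
    tau_mem_inv, tau_syn_inv, tau_ada_inv = 100.0, 200.0, 2.0
    v_leak, v_th, v_reset = 0.0, 1.0, 0.0
    delta_T, a_current, a_spike = 0.5, 4.0, 0.02

    # Euler integration of the three ODEs.
    dv = dt * tau_mem_inv * (v_leak - v + i + delta_T * math.exp((v - v_th) / delta_T))
    di = -dt * tau_syn_inv * i
    da = dt * tau_ada_inv * (a_current * (v - v_leak) - a)
    v, i, a = v + dv, i + di, a + da

    # Jump condition: Heaviside threshold crossing.
    z = 1.0 if v > v_th else 0.0

    # Transition equations: reset, input current, spike-triggered adaptation.
    v = (1.0 - z) * v + z * v_reset
    i = i + i_in
    a = a + a_spike * z
    return z, v, i, a

# Driving the neuron with a constant input current eventually produces a spike.
z, v, i, a = 0.0, 0.0, 0.0, 0.0
fired = False
for _ in range(200):
    z, v, i, a = adex_current_step(1.0, v, i, a)
    fired = fired or z > 0.0
```

Note how, per the transition equations, the membrane potential is reset on the same step that emits a spike, and the adaptation variable is only incremented on spiking steps.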

norse.torch.functional.lif_adex.lif_adex_feed_forward_step(input_tensor, state=LIFAdExFeedForwardState(v=0, i=0, a=0), p=LIFAdExParameters(adaptation_current=tensor(4), adaptation_spike=tensor(0.0200), delta_T=tensor(0.5000), tau_ada_inv=tensor(2.), tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), method='super', alpha=100.0), dt=0.001)[source]

Computes a single Euler integration step of an adaptive exponential LIF neuron model, adapted from http://www.scholarpedia.org/article/Adaptive_exponential_integrate-and-fire_model. It takes as input the current generated by an arbitrary torch module or function. More specifically, it implements one integration step of the following ODE

\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} \left(v_{\text{leak}} - v + i + \Delta_T \exp\left({{v - v_{\text{th}}} \over {\Delta_T}}\right)\right) \\ \dot{i} &= -1/\tau_{\text{syn}} i \\ \dot{a} &= 1/\tau_{\text{ada}} \left( a_{\text{current}} (v - v_{\text{leak}}) - a \right) \end{align*}\end{split}\]

together with the jump condition

\[z = \Theta(v - v_{\text{th}})\]

and transition equations

\[\begin{split}\begin{align*} v &= (1-z) v + z v_{\text{reset}} \\ i &= i + i_{\text{in}} \\ a &= a + a_{\text{spike}} z \end{align*}\end{split}\]

where \(i_{\text{in}}\) is the result of applying an arbitrary PyTorch module (such as a convolution) to input spikes.

Parameters
  • input_tensor (torch.Tensor) – the input current at the current time step

  • state (LIFAdExFeedForwardState) – current state of the LIFAdEx neuron

  • p (LIFAdExParameters) – parameters of the adaptive exponential LIF neuron

  • dt (float) – Integration timestep to use

Return type

Tuple[Tensor, LIFAdExFeedForwardState]
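The step function is pure: it consumes a state and returns the output spikes together with a new state, so a simulation loop threads the state through time explicitly. That pattern can be sketched with a simplified stand-in step in plain Python (a leaky-integrator toy, not norse's implementation; all names illustrative):

```python
def step(x, state, v_th=1.0, tau_mem_inv=100.0, dt=0.001):
    """Stand-in for a feed-forward step: (input, state) -> (spikes, new state)."""
    v, i = state
    v = v + dt * tau_mem_inv * (i - v)   # leaky membrane integration
    i = i * (1.0 - dt * 200.0) + x       # synaptic current decay + new input
    z = 1.0 if v > v_th else 0.0         # jump condition
    v = 0.0 if z else v                  # reset on spike
    return z, (v, i)

state = (0.0, 0.0)            # analogous to LIFAdExFeedForwardState(v=0, i=0)
spikes = []
for x in [1.0] * 50:          # input current sequence, one value per time step
    z, state = step(x, state) # thread the returned state into the next call
    spikes.append(z)
```

With the real `lif_adex_feed_forward_step`, the loop body is the same: pass the returned LIFAdExFeedForwardState back in on the next time step.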

norse.torch.functional.lif_adex.lif_adex_step(input_tensor, state, input_weights, recurrent_weights, p=LIFAdExParameters(adaptation_current=tensor(4), adaptation_spike=tensor(0.0200), delta_T=tensor(0.5000), tau_ada_inv=tensor(2.), tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), method='super', alpha=100.0), dt=0.001)[source]

Computes a single Euler integration step of an adaptive exponential LIF neuron model, adapted from http://www.scholarpedia.org/article/Adaptive_exponential_integrate-and-fire_model. More specifically, it implements one integration step of the following ODE

\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} \left(v_{\text{leak}} - v + i + \Delta_T \exp\left({{v - v_{\text{th}}} \over {\Delta_T}}\right)\right) \\ \dot{i} &= -1/\tau_{\text{syn}} i \\ \dot{a} &= 1/\tau_{\text{ada}} \left( a_{\text{current}} (v - v_{\text{leak}}) - a \right) \end{align*}\end{split}\]

together with the jump condition

\[z = \Theta(v - v_{\text{th}})\]

and transition equations

\[\begin{split}\begin{align*} v &= (1-z) v + z v_{\text{reset}} \\ i &= i + w_{\text{input}} z_{\text{in}} \\ i &= i + w_{\text{rec}} z_{\text{rec}} \\ a &= a + a_{\text{spike}} z_{\text{rec}} \end{align*}\end{split}\]

where \(z_{\text{rec}}\) and \(z_{\text{in}}\) are the recurrent and input spikes respectively.

Parameters
  • input_tensor (torch.Tensor) – the input spikes at the current time step

  • state (LIFAdExState) – current state of the LIFAdEx neuron

  • input_weights (torch.Tensor) – synaptic weights for incoming spikes

  • recurrent_weights (torch.Tensor) – synaptic weights for recurrent spikes

  • p (LIFAdExParameters) – parameters of the adaptive exponential LIF neuron

  • dt (float) – Integration timestep to use

Return type

Tuple[Tensor, LIFAdExState]
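The input and recurrent contributions to the synaptic current in the transition equations can be sketched in plain Python with small weight matrices (lists in place of tensors; the shapes and values are illustrative only):

```python
def matvec(w, z):
    """Matrix-vector product over nested lists, standing in for w @ z."""
    return [sum(wij * zj for wij, zj in zip(row, z)) for row in w]

input_weights = [[0.5, 0.0], [0.0, 0.5]]      # 2 neurons, 2 input channels
recurrent_weights = [[0.0, 0.2], [0.2, 0.0]]  # off-diagonal recurrence

z_in = [1.0, 0.0]   # input spikes at this time step
z_rec = [0.0, 1.0]  # spikes this layer emitted at the previous time step
i = [0.0, 0.0]      # synaptic currents

# i = i + W_input z_in + W_rec z_rec, per the transition equations above.
i = [ii + wi + wr for ii, wi, wr in
     zip(i, matvec(input_weights, z_in), matvec(recurrent_weights, z_rec))]
# i is now approximately [0.7, 0.0]: neuron 0 receives 0.5 from the input
# spike plus 0.2 from neuron 1's recurrent spike.
```

In the real `lif_adex_step`, the same two weighted sums are computed with dense tensor products, and the layer's own output spikes become `z_rec` on the next call.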