norse.torch.functional.lsnn module¶
Long short-term memory spiking neural network (LSNN) module, building on the work by [G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass](https://github.com/IGITUGraz/LSNN-official).
The LSNN dynamics are similar to the LIF
equations, but they
add an adaptive term \(b\):
\[\dot{b} = -1/\tau_{b} b\]
This adaptation is applied in the jump condition when the neuron spikes:
\[z = \Theta(v - v_{\text{th}} + b)\]
Contrast this with the regular LIF jump condition:
\[z = \Theta(v - v_{\text{th}})\]
In practice, this means that LSNN neurons adapt to fire more or less readily given the same input. The rate of adaptation is determined by the \(\tau_b\) time constant.
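The jump condition and the adaptation update can be illustrated with a small, self-contained PyTorch sketch. The constants mirror the defaults documented in LSNNParameters below; this is an illustration, not the norse implementation:

```python
import torch

# Constants mirroring the LSNNParameters defaults documented below
v_th, beta, tau_adapt_inv, dt = 1.0, 1.8, 1.2e-3, 1e-3

v = torch.tensor(1.2)  # membrane potential, just above threshold
b = torch.tensor(0.0)  # threshold adaptation, initially zero

# LSNN jump condition (cf. lsnn_step below): z = Theta(v - v_th + b)
z = (v - v_th + b > 0).float()

# Transition: the adaptation variable jumps by beta on every spike ...
b = b + beta * z
# ... and relaxes back towards zero with time constant tau_b
b = b + dt * (-tau_adapt_inv * b)
```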
- class norse.torch.functional.lsnn.LSNNFeedForwardState(v: torch.Tensor, i: torch.Tensor, b: torch.Tensor)[source]¶
Bases:
tuple
Integration state kept for an LSNN module
- Parameters
v (torch.Tensor) – membrane potential
i (torch.Tensor) – synaptic input current
b (torch.Tensor) – threshold adaptation
Create new instance of LSNNFeedForwardState(v, i, b)
- b: torch.Tensor¶
Alias for field number 2
- i: torch.Tensor¶
Alias for field number 1
- v: torch.Tensor¶
Alias for field number 0
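Since LSNNFeedForwardState is a named tuple, a zeroed initial state can be built directly. The sketch below defines a stand-in with the documented fields so it is self-contained; in practice you would import LSNNFeedForwardState from norse.torch.functional.lsnn, and the batch and hidden sizes are purely illustrative:

```python
import torch
from typing import NamedTuple

# Stand-in with the documented fields; in practice, import
# LSNNFeedForwardState from norse.torch.functional.lsnn instead.
class LSNNFeedForwardState(NamedTuple):
    v: torch.Tensor  # membrane potential
    i: torch.Tensor  # synaptic input current
    b: torch.Tensor  # threshold adaptation

# A zeroed initial state for a (hypothetical) batch of 4 x 10 neurons
batch_size, hidden = 4, 10
state = LSNNFeedForwardState(
    v=torch.zeros(batch_size, hidden),
    i=torch.zeros(batch_size, hidden),
    b=torch.zeros(batch_size, hidden),
)
```

The field names alias the tuple positions listed above, so `state.b` and `state[2]` refer to the same tensor.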
- class norse.torch.functional.lsnn.LSNNParameters(tau_syn_inv: torch.Tensor = tensor(200.), tau_mem_inv: torch.Tensor = tensor(100.), tau_adapt_inv: torch.Tensor = tensor(0.0012), v_leak: torch.Tensor = tensor(0.), v_th: torch.Tensor = tensor(1.), v_reset: torch.Tensor = tensor(0.), beta: torch.Tensor = tensor(1.8000), method: str = 'super', alpha: float = 100.0)[source]¶
Bases:
tuple
Parameters of an LSNN neuron
- Parameters
tau_syn_inv (torch.Tensor) – inverse synaptic time constant (\(1/\tau_\text{syn}\))
tau_mem_inv (torch.Tensor) – inverse membrane time constant (\(1/\tau_\text{mem}\))
tau_adapt_inv (torch.Tensor) – inverse adaptation time constant (\(1/\tau_b\))
v_leak (torch.Tensor) – leak potential
v_th (torch.Tensor) – threshold potential
v_reset (torch.Tensor) – reset potential
beta (torch.Tensor) – adaptation constant
method (str) – method to determine the spike threshold (relevant for surrogate gradients)
alpha (float) – hyper parameter to use in surrogate gradient computation
Create new instance of LSNNParameters(tau_syn_inv, tau_mem_inv, tau_adapt_inv, v_leak, v_th, v_reset, beta, method, alpha)
- beta: torch.Tensor¶
Alias for field number 6
- tau_adapt_inv: torch.Tensor¶
Alias for field number 2
- tau_mem_inv: torch.Tensor¶
Alias for field number 1
- tau_syn_inv: torch.Tensor¶
Alias for field number 0
- v_leak: torch.Tensor¶
Alias for field number 3
- v_reset: torch.Tensor¶
Alias for field number 5
- v_th: torch.Tensor¶
Alias for field number 4
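Note that the `tau_*_inv` fields store *inverse* time constants. To express ordinary time constants, invert them first; for example, assuming synaptic and membrane time constants of 5 ms and 10 ms (which reproduce the defaults above):

```python
import torch

# tau_syn = 5 ms and tau_mem = 10 ms reproduce the documented defaults
tau_syn = 5e-3   # synaptic time constant, in seconds
tau_mem = 10e-3  # membrane time constant, in seconds

tau_syn_inv = torch.as_tensor(1.0 / tau_syn)  # tensor(200.)
tau_mem_inv = torch.as_tensor(1.0 / tau_mem)  # tensor(100.)
```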
- class norse.torch.functional.lsnn.LSNNState(z: torch.Tensor, v: torch.Tensor, i: torch.Tensor, b: torch.Tensor)[source]¶
Bases:
tuple
State of an LSNN neuron
- Parameters
z (torch.Tensor) – recurrent spikes
v (torch.Tensor) – membrane potential
i (torch.Tensor) – synaptic input current
b (torch.Tensor) – threshold adaptation
Create new instance of LSNNState(z, v, i, b)
- b: torch.Tensor¶
Alias for field number 3
- i: torch.Tensor¶
Alias for field number 2
- v: torch.Tensor¶
Alias for field number 1
- z: torch.Tensor¶
Alias for field number 0
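The post-spike transition equations acting on these fields (see lsnn_step below) can be sketched elementwise on plain tensors; this is an illustration with the documented default constants, not the norse implementation:

```python
import torch

v_reset, beta = 0.0, 1.8  # documented defaults

z = torch.tensor([1.0, 0.0])  # the first neuron spiked, the second did not
v = torch.tensor([1.2, 0.5])  # membrane potentials
b = torch.tensor([0.0, 0.3])  # adaptation values

v_new = (1 - z) * v + z * v_reset  # the spiking neuron resets to v_reset
b_new = b + beta * z               # its adaptation jumps by beta
```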
- norse.torch.functional.lsnn.ada_lif_step(input_tensor, state, input_weights, recurrent_weights, p=LSNNParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), tau_adapt_inv=tensor(0.0012), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), beta=tensor(1.8000), method='super', alpha=100.0), dt=0.001)[source]¶
Euler integration step for a LIF neuron with adaptation. More specifically, it implements one integration step of the following ODE
\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} (v_{\text{leak}} - v + b + i) \\ \dot{i} &= -1/\tau_{\text{syn}} i \\ \dot{b} &= -1/\tau_{b} b \end{align*}\end{split}\]together with the jump condition
\[z = \Theta(v - v_{\text{th}})\]and transition equations
\[\begin{split}\begin{align*} v &= (1-z) v + z v_{\text{reset}} \\ i &= i + w_{\text{input}} z_{\text{in}} \\ i &= i + w_{\text{rec}} z_{\text{rec}} \\ b &= b + \beta z \end{align*}\end{split}\]where \(z_{\text{rec}}\) and \(z_{\text{in}}\) are the recurrent and input spikes respectively.
- Parameters
input_tensor (torch.Tensor) – the input spikes at the current time step
state (LSNNState) – current state of the LSNN unit
input_weights (torch.Tensor) – synaptic weights for input spikes
recurrent_weights (torch.Tensor) – synaptic weights for recurrent spikes
p (LSNNParameters) – parameters of the LSNN unit
dt (float) – integration time step to use
- Return type
Tuple[torch.Tensor, LSNNState]
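A self-contained sketch of one such Euler step, written directly from the ODE, jump condition, and transition equations above. This is an illustration, not the norse source: a plain Heaviside step stands in for the surrogate-gradient threshold selected by `method`/`alpha`, and the toy call at the end is hypothetical:

```python
import torch

def ada_lif_step_sketch(z_in, z_rec, v, i, b, w_in, w_rec,
                        tau_syn_inv=200.0, tau_mem_inv=100.0,
                        tau_adapt_inv=1.2e-3, v_leak=0.0, v_th=1.0,
                        v_reset=0.0, beta=1.8, dt=1e-3):
    # Euler integration of the ODE; note that b enters the membrane equation
    v = v + dt * tau_mem_inv * (v_leak - v + b + i)
    i = i + dt * -tau_syn_inv * i
    b = b + dt * -tau_adapt_inv * b
    # Jump condition (plain Heaviside; norse uses a surrogate gradient)
    z = (v - v_th > 0).to(v.dtype)
    # Transition equations
    v = (1 - z) * v + z * v_reset
    i = i + z_in @ w_in.t() + z_rec @ w_rec.t()
    b = b + beta * z
    return z, (v, i, b)

# One step on a toy population of three neurons, one above threshold
z, (v, i, b) = ada_lif_step_sketch(
    z_in=torch.tensor([[1.0, 0.0]]), z_rec=torch.zeros(1, 3),
    v=torch.tensor([[2.0, 0.0, 0.0]]),
    i=torch.zeros(1, 3), b=torch.zeros(1, 3),
    w_in=torch.zeros(3, 2), w_rec=torch.zeros(3, 3),
)
```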
- norse.torch.functional.lsnn.lsnn_feed_forward_step(input_tensor, state, p=LSNNParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), tau_adapt_inv=tensor(0.0012), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), beta=tensor(1.8000), method='super', alpha=100.0), dt=0.001)[source]¶
Euler integration step for a LIF neuron with threshold adaptation. More specifically, it implements one integration step of the following ODE
\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} (v_{\text{leak}} - v + i) \\ \dot{i} &= -1/\tau_{\text{syn}} i \\ \dot{b} &= -1/\tau_{b} b \end{align*}\end{split}\]together with the jump condition
\[z = \Theta(v - v_{\text{th}} + b)\]and transition equations
\[\begin{split}\begin{align*} v &= (1-z) v + z v_{\text{reset}} \\ i &= i + \text{input} \\ b &= b + \beta z \end{align*}\end{split}\]- Parameters
input_tensor (torch.Tensor) – the input spikes at the current time step
state (LSNNFeedForwardState) – current state of the LSNN unit
p (LSNNParameters) – parameters of the LSNN unit
dt (float) – integration time step to use
- Return type
Tuple[torch.Tensor, LSNNFeedForwardState]
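Compared to ada_lif_step, the adaptation \(b\) here enters the jump condition rather than the membrane ODE, and the input is added to the current directly instead of through weight matrices. A minimal sketch of the documented equations (illustrative only; a plain Heaviside stands in for the surrogate gradient, and the single-neuron call is hypothetical):

```python
import torch

def lsnn_ff_step_sketch(input_current, v, i, b,
                        tau_syn_inv=200.0, tau_mem_inv=100.0,
                        tau_adapt_inv=1.2e-3, v_leak=0.0, v_th=1.0,
                        v_reset=0.0, beta=1.8, dt=1e-3):
    # Euler integration of the ODE; b does not appear in the membrane equation
    v = v + dt * tau_mem_inv * (v_leak - v + i)
    i = i + dt * -tau_syn_inv * i
    b = b + dt * -tau_adapt_inv * b
    # Jump condition with threshold adaptation: z = Theta(v - v_th + b)
    z = (v - v_th + b > 0).to(v.dtype)
    # Transition equations: the input is added to the current directly
    v = (1 - z) * v + z * v_reset
    i = i + input_current
    b = b + beta * z
    return z, (v, i, b)

# One step for a single neuron above threshold
z, (v, i, b) = lsnn_ff_step_sketch(
    input_current=torch.tensor([0.5]),
    v=torch.tensor([2.0]), i=torch.tensor([0.0]), b=torch.tensor([0.0]),
)
```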
- norse.torch.functional.lsnn.lsnn_step(input_tensor, state, input_weights, recurrent_weights, p=LSNNParameters(tau_syn_inv=tensor(200.), tau_mem_inv=tensor(100.), tau_adapt_inv=tensor(0.0012), v_leak=tensor(0.), v_th=tensor(1.), v_reset=tensor(0.), beta=tensor(1.8000), method='super', alpha=100.0), dt=0.001)[source]¶
Euler integration step for a LIF neuron with threshold adaptation. More specifically, it implements one integration step of the following ODE
\[\begin{split}\begin{align*} \dot{v} &= 1/\tau_{\text{mem}} (v_{\text{leak}} - v + i) \\ \dot{i} &= -1/\tau_{\text{syn}} i \\ \dot{b} &= -1/\tau_{b} b \end{align*}\end{split}\]together with the jump condition
\[z = \Theta(v - v_{\text{th}} + b)\]and transition equations
\[\begin{split}\begin{align*} v &= (1-z) v + z v_{\text{reset}} \\ i &= i + w_{\text{input}} z_{\text{in}} \\ i &= i + w_{\text{rec}} z_{\text{rec}} \\ b &= b + \beta z \end{align*}\end{split}\]where \(z_{\text{rec}}\) and \(z_{\text{in}}\) are the recurrent and input spikes respectively.
- Parameters
input_tensor (torch.Tensor) – the input spikes at the current time step
state (LSNNState) – current state of the LSNN unit
input_weights (torch.Tensor) – synaptic weights for input spikes
recurrent_weights (torch.Tensor) – synaptic weights for recurrent spikes
p (LSNNParameters) – parameters of the LSNN unit
dt (float) – integration time step to use
- Return type
Tuple[torch.Tensor, LSNNState]