norse.torch.functional.spike_latency_encode
- norse.torch.functional.spike_latency_encode(input_spikes: Tensor) → Tensor
For all neurons, remove all but the first spike. This encoding measures the time it takes each neuron to spike for the first time. Assuming constant inputs, this is meaningful because stronger inputs produce their first spike sooner.
Each retained spike is thus identified by its unique position (latency) within the sequence.
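The behavior can be reproduced with a cumulative sum along the time dimension, as in the sketch below. This is an illustrative re-implementation, not necessarily how norse implements the function internally; the helper name `first_spike_only` is hypothetical, and binary (0/1) spikes with the sequence/time dimension first are assumed.

```python
import torch

def first_spike_only(input_spikes: torch.Tensor) -> torch.Tensor:
    # Illustrative sketch only; not necessarily norse's internal implementation.
    # Assumes binary (0/1) spikes with the sequence/time dimension first.
    first_spike_mask = torch.cumsum(input_spikes, dim=0) == 1
    # Keep a spike only where it is the first one seen for that neuron.
    return input_spikes * first_spike_mask

data = torch.as_tensor([[0, 1, 1], [1, 1, 1]])
print(first_spike_only(data))
# tensor([[0, 1, 1],
#         [1, 0, 0]])
```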
- Example:
>>> data = torch.as_tensor([[0, 1, 1], [1, 1, 1]])
>>> spike_latency_encode(data)
tensor([[0, 1, 1],
        [1, 0, 0]])
- Parameters:
input_spikes (torch.Tensor): A tensor of input spikes, assumed to be at least 2D (sequences, …)
- Returns:
A tensor of the same shape in which, for each neuron, only the first spike (1) in the sequence is retained
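A typical use is to post-process a longer binary spike train so that each neuron contributes at most one spike. The snippet below is a usage sketch; the random spike train is made up for illustration.

```python
import torch
from norse.torch.functional import spike_latency_encode

# A made-up binary spike train: 100 time steps for 10 neurons.
torch.manual_seed(0)
spikes = (torch.rand(100, 10) < 0.1).float()

latency_coded = spike_latency_encode(spikes)

# After encoding, every neuron fires at most once across the sequence.
assert (latency_coded.sum(dim=0) <= 1).all()
```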