norse.torch.functional.spike_latency_encode(input_spikes: torch.Tensor) → torch.Tensor

For each neuron, remove all spikes except the first. This encoding effectively measures the time it takes a neuron to spike for the first time. Assuming constant inputs, this is a meaningful code: stronger inputs cause earlier spikes.

See R. Van Rullen & S. J. Thorpe (2001): Rate Coding Versus Temporal Order Coding: What the Retinal Ganglion Cells Tell the Visual Cortex.

Spikes are identified by their unique position within each sequence.

>>> data = torch.as_tensor([[0, 1, 1], [1, 1, 1]])
>>> spike_latency_encode(data)
tensor([[0, 1, 1],
        [1, 0, 0]])

Parameters
    input_spikes (torch.Tensor) – A tensor of input spikes, assumed to be at least 2D (sequences, …)

Returns
    A tensor where only the first spike (1) per neuron is retained in the sequence