norse.torch.module.conv module

class norse.torch.module.conv.LConv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, device=None, dtype=None)[source]

Bases: torch.nn.modules.conv.Conv3d

Implements a 2d convolution applied pointwise in time. See the torch.nn.Conv2d documentation for a full description of the arguments, which we reproduce in part here.

This module expects an additional temporal dimension in the input tensor. In the notation of the documentation referenced above, it maps, in the simplest case, a tensor of input shape \((T, N, C_{\text{in}}, H, W)\) to an output tensor of shape \((T, N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})\) by applying the same 2d convolution pointwise along the time dimension, with T denoting the number of time steps.
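The pointwise-in-time semantics can be illustrated with plain torch: applying one shared Conv2d to every time step and restacking reproduces the shape transformation described above. This is a sketch of the semantics only, not of norse's actual implementation (which is based on Conv3d).

```python
import torch

T, N, C_in, H, W = 5, 2, 3, 16, 16
conv = torch.nn.Conv2d(C_in, 8, kernel_size=3, padding=1)
x = torch.randn(T, N, C_in, H, W)  # input shape (T, N, C_in, H, W)

# Apply the same 2d convolution independently to each time step and restack.
y = torch.stack([conv(x[t]) for t in range(T)])
print(y.shape)  # torch.Size([5, 2, 8, 16, 16])
```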


The parameters kernel_size, stride, padding, dilation can either be:
  • a single int – in which case the same value is used for the height and width dimension

  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension

Parameters:

  • in_channels (int) – Number of channels in the input image

  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (int or tuple) – Size of the convolving kernel

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int, tuple or str, optional) – Padding added to all four sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

bias: Optional[torch.Tensor]
dilation: Tuple[int, ...]

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.


Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of this function, since the instance call takes care of running the registered hooks while calling forward() directly silently ignores them.
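For instance, calling the module instance runs any registered forward hooks, while invoking forward() directly bypasses them (a small torch-only illustration):

```python
import torch

conv = torch.nn.Conv2d(1, 1, kernel_size=3)
calls = []
conv.register_forward_hook(lambda mod, inp, out: calls.append("hook"))

x = torch.randn(1, 1, 8, 8)
conv(x)          # instance call: runs the registered hook
conv.forward(x)  # direct call: silently skips registered hooks
print(calls)     # ['hook'] — only the instance call triggered it
```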

groups: int
kernel_size: Tuple[int, ...]
out_channels: int
output_padding: Tuple[int, ...]
padding: Union[str, Tuple[int, ...]]
padding_mode: str
stride: Tuple[int, ...]
training: bool
transposed: bool
weight: torch.Tensor