norse.torch.module.receptive_field.SpatialReceptiveField2d
- class norse.torch.module.receptive_field.SpatialReceptiveField2d(in_channels: int, size: int, rf_parameters: Tensor, aggregate: bool = True, domain: float = 8, optimize_fields: bool = True, optimize_log: bool = True, **kwargs)
Creates a spatial receptive field as 2-dimensional convolutions. The rf_parameters are a tensor of shape (n, 7), where n is the number of receptive fields and each row gives (scale, angle, ratio, x, y, dx, dy). If the optimize_fields flag is set to True, the rf_parameters will be optimized during training.
- Example:
>>> import torch
>>> from norse.torch import SpatialReceptiveField2d
>>> parameters = torch.tensor([[1., 1., 1., 0., 0., 0., 0.]])
>>> m = SpatialReceptiveField2d(1, 9, parameters)
>>> m.weights.shape
torch.Size([1, 1, 9, 9])
>>> y = m(torch.empty(1, 1, 9, 9))
>>> y.shape
torch.Size([1, 1, 1, 1])
- Arguments:
  - in_channels (int): Number of input channels
  - size (int): Size of the receptive field
  - rf_parameters (torch.Tensor): Parameters for the receptive fields in the order (scale, angle, ratio, x, y, dx, dy)
  - aggregate (bool): If True, the receptive fields will be aggregated across channels. Defaults to True.
  - domain (float): The domain of the receptive field. Defaults to 8.
  - optimize_fields (bool): If True, the rf_parameters will be optimized during training. Defaults to True.
  - **kwargs: Additional arguments for the torch.nn.functional.conv2d function.
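Because optimize_fields defaults to True, the field parameters should be picked up by the standard nn.Module parameter machinery and can be updated by any optimizer. The following is a minimal training-step sketch, assuming the fields are reachable through m.parameters(); the two parameter rows are illustrative values only, in the documented (scale, angle, ratio, x, y, dx, dy) order:

>>> import torch
>>> from norse.torch import SpatialReceptiveField2d
>>> rf = torch.tensor([[1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
...                    [2.0, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0]])
>>> m = SpatialReceptiveField2d(1, 9, rf)
>>> opt = torch.optim.SGD(m.parameters(), lr=1e-2)  # fields assumed trainable
>>> x = torch.rand(1, 1, 32, 32)  # (batch, channels, height, width)
>>> loss = m(x).pow(2).mean()  # any scalar loss on the filtered output
>>> loss.backward()
>>> opt.step()  # updates the receptive-field parameters in place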
- __init__(in_channels: int, size: int, rf_parameters: Tensor, aggregate: bool = True, domain: float = 8, optimize_fields: bool = True, optimize_log: bool = True, **kwargs) → None
Initialize internal Module state, shared by both nn.Module and ScriptModule.
Methods
- __init__(in_channels, size, rf_parameters[, ...]): Initialize internal Module state, shared by both nn.Module and ScriptModule.
- add_module(name, module): Add a child module to the current module.
- apply(fn): Apply fn recursively to every submodule (as returned by .children()) as well as self.
- bfloat16(): Casts all floating point parameters and buffers to bfloat16 datatype.
- buffers([recurse]): Return an iterator over module buffers.
- children(): Return an iterator over immediate children modules.
- compile(*args, **kwargs): Compile this Module's forward using torch.compile().
- cpu(): Move all model parameters and buffers to the CPU.
- cuda([device]): Move all model parameters and buffers to the GPU.
- double(): Casts all floating point parameters and buffers to double datatype.
- eval(): Set the module in evaluation mode.
- extra_repr(): Set the extra representation of the module.
- float(): Casts all floating point parameters and buffers to float datatype.
- forward(x): Define the computation performed at every call.
- get_buffer(target): Return the buffer given by target if it exists, otherwise throw an error.
- get_extra_state(): Return any extra state to include in the module's state_dict.
- get_parameter(target): Return the parameter given by target if it exists, otherwise throw an error.
- get_submodule(target): Return the submodule given by target if it exists, otherwise throw an error.
- half(): Casts all floating point parameters and buffers to half datatype.
- ipu([device]): Move all model parameters and buffers to the IPU.
- load_state_dict(state_dict[, strict, assign]): Copy parameters and buffers from state_dict into this module and its descendants.
- modules(): Return an iterator over all modules in the network.
- mtia([device]): Move all model parameters and buffers to the MTIA.
- named_buffers([prefix, recurse, ...]): Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- named_children(): Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- named_modules([memo, prefix, remove_duplicate]): Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- named_parameters([prefix, recurse, ...]): Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- parameters([recurse]): Return an iterator over module parameters.
- register_backward_hook(hook): Register a backward hook on the module.
- register_buffer(name, tensor[, persistent]): Add a buffer to the module.
- register_forward_hook(hook, *[, prepend, ...]): Register a forward hook on the module.
- register_forward_pre_hook(hook, *[, ...]): Register a forward pre-hook on the module.
- register_full_backward_hook(hook[, prepend]): Register a backward hook on the module.
- register_full_backward_pre_hook(hook[, prepend]): Register a backward pre-hook on the module.
- register_load_state_dict_post_hook(hook): Register a post-hook to be run after module's load_state_dict() is called.
- register_load_state_dict_pre_hook(hook): Register a pre-hook to be run before module's load_state_dict() is called.
- register_module(name, module): Alias for add_module().
- register_parameter(name, param): Add a parameter to the module.
- register_state_dict_post_hook(hook): Register a post-hook for the state_dict() method.
- register_state_dict_pre_hook(hook): Register a pre-hook for the state_dict() method.
- requires_grad_([requires_grad]): Change if autograd should record operations on parameters in this module.
- set_extra_state(state): Set extra state contained in the loaded state_dict.
- set_submodule(target, module): Set the submodule given by target if it exists, otherwise throw an error.
- share_memory()
- state_dict(*args[, destination, prefix, ...]): Return a dictionary containing references to the whole state of the module.
- to(*args, **kwargs): Move and/or cast the parameters and buffers.
- to_empty(*, device[, recurse]): Move the parameters and buffers to the specified device without copying storage.
- train([mode]): Set the module in training mode.
- type(dst_type): Casts all parameters and buffers to dst_type.
- xpu([device]): Move all model parameters and buffers to the XPU.
- zero_grad([set_to_none]): Reset gradients of all model parameters.
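Since all of the methods above are inherited from torch.nn.Module, the usual checkpointing and device-placement workflows apply to this module unchanged. A brief sketch using only inherited calls (the exact contents of the state dict depend on how the field parameters are registered internally):

>>> import torch
>>> from norse.torch import SpatialReceptiveField2d
>>> m = SpatialReceptiveField2d(1, 9, torch.tensor([[1., 1., 1., 0., 0., 0., 0.]]))
>>> sd = m.state_dict()  # capture the module state for checkpointing
>>> m.load_state_dict(sd)  # restore it later
<All keys matched successfully>
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> m = m.to(device).eval()  # move to the target device, switch to eval mode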
Attributes
- T_destination
- call_super_init
- dump_patches
- training