claf.modules.layer package

Submodules

class claf.modules.layer.highway.Highway(input_size, num_layers=2, activation='relu')[source]

Bases: torch.nn.modules.module.Module

Highway Networks (https://arxiv.org/abs/1505.00387)
Reference implementation: https://github.com/allenai/allennlp/blob/master/allennlp/modules/highway.py

  • Args:

    input_size: The number of expected features in the input x.
    num_layers: The number of Highway layers.
    activation: Activation function (ReLU is the default).

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
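Example (a minimal usage sketch; the (4, 20, 128) input shape is illustrative, not required by the class):

    >>> import torch
    >>> from claf.modules.layer.highway import Highway
    >>> highway = Highway(input_size=128, num_layers=2, activation="relu")
    >>> x = torch.randn(4, 20, 128)  # (batch, seq_len, input_size)
    >>> out = highway(x)             # highway gating preserves the input shape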

class claf.modules.layer.normalization.LayerNorm(normalized_shape, eps=1e-05)[source]

Bases: torch.nn.modules.module.Module

Layer Normalization (https://arxiv.org/abs/1607.06450)

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
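Example (a minimal usage sketch; the input shape is illustrative):

    >>> import torch
    >>> from claf.modules.layer.normalization import LayerNorm
    >>> layer_norm = LayerNorm(normalized_shape=256, eps=1e-05)
    >>> x = torch.randn(8, 30, 256)  # (batch, seq_len, dim)
    >>> out = layer_norm(x)          # normalized over the last dimension, same shape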

class claf.modules.layer.positionwise.PositionwiseFeedForward(input_size, hidden_size, dropout=0.1)[source]

Bases: torch.nn.modules.module.Module

Position-wise Feed-Forward Layer

  • Args:

    input_size: the size of the input features
    hidden_size: the size of the hidden layer

  • Kwargs:

    dropout: the probability of dropout

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
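Example (a minimal usage sketch; Transformer-style feed-forward layers of this kind project back to input_size, which is assumed here):

    >>> import torch
    >>> from claf.modules.layer.positionwise import PositionwiseFeedForward
    >>> ffn = PositionwiseFeedForward(input_size=256, hidden_size=1024, dropout=0.1)
    >>> x = torch.randn(8, 30, 256)  # (batch, seq_len, input_size)
    >>> out = ffn(x)                 # applied independently at each position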

class claf.modules.layer.residual.ResidualConnection(dim, layer_dropout=None, layernorm=False)[source]

Bases: torch.nn.modules.module.Module

Residual connection, as introduced in Deep Residual Learning for Image Recognition (https://arxiv.org/abs/1512.03385)

=> f(x) + x

  • Args:

    dim: the size of the feature dimension

  • Kwargs:

    layer_dropout: layer dropout probability (stochastic depth)
    layernorm: whether to apply layer normalization (default: False)

forward(x, sub_layer_fn)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
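Example (a minimal usage sketch; assumes the wrapped sub-layer preserves the feature dimension so that f(x) + x is well defined):

    >>> import torch
    >>> from claf.modules.layer.residual import ResidualConnection
    >>> from claf.modules.layer.positionwise import PositionwiseFeedForward
    >>> residual = ResidualConnection(dim=256, layer_dropout=None, layernorm=True)
    >>> ffn = PositionwiseFeedForward(input_size=256, hidden_size=1024)
    >>> x = torch.randn(8, 30, 256)
    >>> out = residual(x, ffn)       # f(x) + x, same shape as x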

This code is from allenai/allennlp (https://github.com/allenai/allennlp/blob/master/allennlp/modules/scalar_mix.py)

class claf.modules.layer.scalar_mix.ScalarMix(mixture_size: int, do_layer_norm: bool = False, initial_scalar_parameters: List[float] = None, trainable: bool = True)[source]

Bases: torch.nn.modules.module.Module

Computes a parameterised scalar mixture of N tensors, mixture = gamma * sum(s_k * tensor_k) where s = softmax(w), with w and gamma scalar parameters. In addition, if do_layer_norm=True then apply layer normalization to each tensor before weighting.

forward(tensors: List[torch.Tensor], mask: torch.Tensor = None) → torch.Tensor[source]

Compute a weighted average of the tensors. The input tensors can be any shape with at least two dimensions, but must all be the same shape. When do_layer_norm=True, the mask is a required input. If the tensors are dimensioned (dim_0, ..., dim_{n-1}, dim_n), then the mask is dimensioned (dim_0, ..., dim_{n-1}), as in the typical case with tensors of shape (batch_size, timesteps, dim) and mask of shape (batch_size, timesteps). When do_layer_norm=False the mask is ignored.
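Example (a minimal usage sketch: mixing three same-shaped encoder layer outputs with learned softmax weights and gamma):

    >>> import torch
    >>> from claf.modules.layer.scalar_mix import ScalarMix
    >>> mixer = ScalarMix(mixture_size=3, do_layer_norm=False, trainable=True)
    >>> layer_outputs = [torch.randn(8, 30, 256) for _ in range(3)]
    >>> mixed = mixer(layer_outputs)  # weighted average, shape (8, 30, 256)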

Module contents

class claf.modules.layer.Highway(input_size, num_layers=2, activation='relu')[source]

Bases: torch.nn.modules.module.Module

Highway Networks (https://arxiv.org/abs/1505.00387)
Reference implementation: https://github.com/allenai/allennlp/blob/master/allennlp/modules/highway.py

  • Args:

    input_size: The number of expected features in the input x.
    num_layers: The number of Highway layers.
    activation: Activation function (ReLU is the default).

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class claf.modules.layer.PositionwiseFeedForward(input_size, hidden_size, dropout=0.1)[source]

Bases: torch.nn.modules.module.Module

Position-wise Feed-Forward Layer

  • Args:

    input_size: the size of the input features
    hidden_size: the size of the hidden layer

  • Kwargs:

    dropout: the probability of dropout

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class claf.modules.layer.ResidualConnection(dim, layer_dropout=None, layernorm=False)[source]

Bases: torch.nn.modules.module.Module

Residual connection, as introduced in Deep Residual Learning for Image Recognition (https://arxiv.org/abs/1512.03385)

=> f(x) + x

  • Args:

    dim: the size of the feature dimension

  • Kwargs:

    layer_dropout: layer dropout probability (stochastic depth)
    layernorm: whether to apply layer normalization (default: False)

forward(x, sub_layer_fn)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class claf.modules.layer.ScalarMix(mixture_size: int, do_layer_norm: bool = False, initial_scalar_parameters: List[float] = None, trainable: bool = True)[source]

Bases: torch.nn.modules.module.Module

Computes a parameterised scalar mixture of N tensors, mixture = gamma * sum(s_k * tensor_k) where s = softmax(w), with w and gamma scalar parameters. In addition, if do_layer_norm=True then apply layer normalization to each tensor before weighting.

forward(tensors: List[torch.Tensor], mask: torch.Tensor = None) → torch.Tensor[source]

Compute a weighted average of the tensors. The input tensors can be any shape with at least two dimensions, but must all be the same shape. When do_layer_norm=True, the mask is a required input. If the tensors are dimensioned (dim_0, ..., dim_{n-1}, dim_n), then the mask is dimensioned (dim_0, ..., dim_{n-1}), as in the typical case with tensors of shape (batch_size, timesteps, dim) and mask of shape (batch_size, timesteps). When do_layer_norm=False the mask is ignored.