claf.tokens.token_embedder package

Submodules

class claf.tokens.token_embedder.base.TokenEmbedder(token_makers)[source]

Bases: torch.nn.modules.module.Module

Token Embedder

Takes a tensor of indexed tokens and looks it up in the corresponding Embedding modules (see the sketch below).

  • Args:

    token_makers: dictionary of TokenMaker (claf.tokens.token_maker)

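The snippet below is a minimal, illustrative sketch of the behaviour described above: one embedding module is registered per token name, and forward looks each indexed-token tensor up in its matching module. The class and token names are hypothetical stand-ins, not claf's actual implementation:

    import torch
    import torch.nn as nn

    class SketchTokenEmbedder(nn.Module):
        """Illustrative stand-in for TokenEmbedder: one embedding per token name."""

        def __init__(self, embed_modules):
            super().__init__()
            # ModuleDict registers each embedding so its parameters are tracked.
            self.embed_modules = nn.ModuleDict(embed_modules)

        def forward(self, inputs):
            # inputs: {"token_name": LongTensor of token indices, ...}
            return {name: self.embed_modules[name](tensor)
                    for name, tensor in inputs.items()}

    # Hypothetical usage with two token kinds.
    embedder = SketchTokenEmbedder({
        "word": nn.Embedding(num_embeddings=100, embedding_dim=50),
        "pos": nn.Embedding(num_embeddings=20, embedding_dim=10),
    })
    embedded = embedder({"word": torch.randint(0, 100, (2, 7)),
                         "pos": torch.randint(0, 20, (2, 7))})
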
add_embedding_modules(token_makers)[source]

Add embedding modules to the TokenEmbedder.

forward(inputs, params={})[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_embed_dim()[source]

class claf.tokens.token_embedder.basic_embedder.BasicTokenEmbedder(token_makers)[source]

Bases: claf.tokens.token_embedder.base.TokenEmbedder

Basic Token Embedder

Takes a tensor of indexed tokens and looks it up in the corresponding Embedding modules. The output is the concatenation of all embedded tensors (see the sketch below).

  • Args:

    token_makers: dictionary of TokenMaker (claf.tokens.token_maker)

forward(inputs, except_keys=[], params={})[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_embed_dim(except_keys=[])[source]
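
The sketch below illustrates the concatenation behaviour described above, under assumed names that are not the library's code: every token name is embedded separately, keys listed in except_keys are skipped, the remaining tensors are concatenated along the last dimension, and get_embed_dim is the sum of the per-token embedding dims:

    import torch
    import torch.nn as nn

    class SketchBasicEmbedder(nn.Module):
        """Illustrative stand-in for BasicTokenEmbedder's concat-all output."""

        def __init__(self, embed_modules):
            super().__init__()
            self.embed_modules = nn.ModuleDict(embed_modules)

        def forward(self, inputs, except_keys=()):
            embedded = [module(inputs[name])
                        for name, module in self.embed_modules.items()
                        if name not in except_keys]
            # (batch, seq_len, sum of embedding dims)
            return torch.cat(embedded, dim=-1)

        def get_embed_dim(self, except_keys=()):
            return sum(module.embedding_dim
                       for name, module in self.embed_modules.items()
                       if name not in except_keys)

    embedder = SketchBasicEmbedder({"word": nn.Embedding(100, 50),
                                    "pos": nn.Embedding(20, 10)})
    inputs = {"word": torch.randint(0, 100, (2, 7)),
              "pos": torch.randint(0, 20, (2, 7))}
    print(embedder(inputs).shape)    # torch.Size([2, 7, 60])
    print(embedder.get_embed_dim())  # 60
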
class claf.tokens.token_embedder.reading_comprehension_embedder.RCTokenEmbedder(token_makers)[source]

Bases: claf.tokens.token_embedder.base.TokenEmbedder

Reading Comprehension Token Embedder

Takes a tensor of indexed tokens and looks it up in the corresponding Embedding modules. Context and query inputs are kept separate so that each can use its own token settings.

  • Args:

    token_makers: dictionary of TokenMaker (claf.tokens.token_maker)
    vocabs: dictionary of Vocab (claf.tokens.vocabulary), e.g. {“token_name”: Vocab, …}

EXCLUSIVE_TOKENS = ['exact_match']

forward(context, query, context_params={}, query_params={}, query_align=False)[source]

  • Args:

    context: context inputs (e.g. {“token_name1”: tensor, “token_name2”: tensor, …})
    query: query inputs (e.g. {“token_name1”: tensor, “token_name2”: tensor, …})

  • Kwargs:

    context_params: custom context parameters
    query_params: custom query parameters
    query_align: whether to add aligned question embeddings,
        f_align(p_i) = sum_j(a_ij * E(q_j)), where the attention score a_ij
        captures the similarity between p_i and each question word q_j.
        These features add soft alignments between similar but non-identical
        words (e.g., car and vehicle). They are applied only to ‘context_embed’
        (see the sketch below).

get_embed_dim()[source]
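
For the query_align feature, the aligned question embedding f_align(p_i) = sum_j(a_ij * E(q_j)) can be sketched as below. The dot-product scoring and the shapes are assumptions for illustration only; claf may compute the attention scores differently:

    import torch
    import torch.nn.functional as F

    def aligned_question_embedding(context_embed, query_embed):
        """f_align(p_i) = sum_j a_ij * E(q_j), with a_ij from dot-product attention.

        context_embed: (batch, c_len, dim) embedded context tokens E(p_i)
        query_embed:   (batch, q_len, dim) embedded question tokens E(q_j)
        """
        # Similarity between every context token and every question token.
        scores = torch.bmm(context_embed, query_embed.transpose(1, 2))  # (batch, c_len, q_len)
        attn = F.softmax(scores, dim=-1)  # a_ij, normalized over question words j
        # Weighted sum of question embeddings for each context position.
        return torch.bmm(attn, query_embed)  # (batch, c_len, dim)

    context = torch.randn(2, 30, 50)
    query = torch.randn(2, 8, 50)
    f_align = aligned_question_embedding(context, query)  # concatenated onto the context embedding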

Module contents

class claf.tokens.token_embedder.BasicTokenEmbedder(token_makers)[source]

Bases: claf.tokens.token_embedder.base.TokenEmbedder

Basic Token Embedder

Takes a tensor of indexed tokens and looks it up in the corresponding Embedding modules. The output is the concatenation of all embedded tensors.

  • Args:

    token_makers: dictionary of TokenMaker (claf.tokens.token_maker)

forward(inputs, except_keys=[], params={})[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_embed_dim(except_keys=[])[source]

class claf.tokens.token_embedder.RCTokenEmbedder(token_makers)[source]

Bases: claf.tokens.token_embedder.base.TokenEmbedder

Reading Comprehension Token Embedder

Takes a tensor of indexed tokens and looks it up in the corresponding Embedding modules. Context and query inputs are kept separate so that each can use its own token settings.

  • Args:

    token_makers: dictionary of TokenMaker (claf.tokens.token_maker)
    vocabs: dictionary of Vocab (claf.tokens.vocabulary), e.g. {“token_name”: Vocab, …}

EXCLUSIVE_TOKENS = ['exact_match']

forward(context, query, context_params={}, query_params={}, query_align=False)[source]

  • Args:

    context: context inputs (e.g. {“token_name1”: tensor, “token_name2”: tensor, …})
    query: query inputs (e.g. {“token_name1”: tensor, “token_name2”: tensor, …})

  • Kwargs:

    context_params: custom context parameters
    query_params: custom query parameters
    query_align: whether to add aligned question embeddings,
        f_align(p_i) = sum_j(a_ij * E(q_j)), where the attention score a_ij
        captures the similarity between p_i and each question word q_j.
        These features add soft alignments between similar but non-identical
        words (e.g., car and vehicle). They are applied only to ‘context_embed’.

get_embed_dim()[source]