claf.learn package

Submodules

class claf.learn.experiment.Experiment(mode, config)[source]

Bases: object

Experiment settings with config.

  • Args:

    mode: Mode flag (e.g. TRAIN, EVAL, INFER_EVAL, PREDICT)
    config: (NestedNamespace) argument config according to mode
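The mode flag determines which setup routine the experiment runs. A minimal stand-in sketching that dispatch (the real Experiment also wires up the DataReader, vocabs, and model; `ExperimentSketch` and `setup_method_name` are illustrative names, not claf API):

```python
# Illustrative stand-in: map each mode flag to the set_*_mode method
# documented below. Not the real claf Experiment.
class Mode:
    TRAIN = "train"
    EVAL = "eval"
    INFER_EVAL = "infer_eval"
    PREDICT = "predict"


class ExperimentSketch:
    def __init__(self, mode, config):
        self.mode = mode
        self.config = config

    def setup_method_name(self):
        # One setup method per mode, mirroring the API on this page.
        return {
            Mode.TRAIN: "set_train_mode",
            Mode.EVAL: "set_eval_mode",
            Mode.INFER_EVAL: "set_eval_inference_latency_mode",
            Mode.PREDICT: "set_predict_mode",
        }[self.mode]


print(ExperimentSketch(Mode.EVAL, config={}).setup_method_name())
# set_eval_mode
```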

common_setting(mode, config)[source]

Common setting: experiment config, use_gpu, and cuda_device_ids

load_setting()[source]

Load setting for cases that need a loaded checkpoint (e.g. evaluate and predict)

predict(raw_features)[source]
set_eval_inference_latency_mode()[source]

Evaluate Inference Latency Mode

  • Pipeline
    1. read raw_data (DataReader)
    2. load vocabs from checkpoint (DataReader, Token)
    3. define raw_to_tensor_fn (DataReader, Token)
    4. define and load model
    5. run!
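Step 3 produces a function that turns one raw feature into model-ready tensors. A hypothetical sketch of that shape (the real function is built from DataReader/Token state; the `"text"`, `"token_ids"`, and `"<unk>"` names are illustrative assumptions):

```python
# Hypothetical raw_to_tensor_fn: index raw tokens with a vocab loaded
# from a checkpoint. Pure-Python stand-in, not the claf implementation.
def make_raw_to_tensor_fn(vocab):
    def raw_to_tensor_fn(raw_feature):
        tokens = raw_feature["text"].split()
        # Unknown tokens fall back to the <unk> id.
        return {"token_ids": [vocab.get(t, vocab["<unk>"]) for t in tokens]}
    return raw_to_tensor_fn


vocab = {"<unk>": 0, "hello": 1, "world": 2}
to_tensor = make_raw_to_tensor_fn(vocab)
print(to_tensor({"text": "hello new world"}))
# {'token_ids': [1, 0, 2]}
```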

set_eval_mode()[source]

Evaluate Mode

  • Pipeline
    1. read raw_data (DataReader)
    2. load vocabs from checkpoint (DataReader, Token)
    3. indexing tokens (DataReader, Token)
    4. convert to DataSet (DataReader)
    5. create DataLoader (DataLoader)
    6. define and load model
    7. run!
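Step 5 groups indexed examples into batches. A pure-Python stand-in for what a DataLoader provides (the real pipeline uses PyTorch's DataLoader; this sketch only shows the batching idea):

```python
# Sketch of the "create DataLoader" step: yield fixed-size batches from a
# list of indexed examples. Illustrative only.
def batches(dataset, batch_size):
    for i in range(0, len(dataset), batch_size):
        yield dataset[i:i + batch_size]


data = [{"id": n} for n in range(5)]
print([len(b) for b in batches(data, 2)])
# [2, 2, 1]
```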

set_predict_mode(preload=False)[source]

Predict Mode

  • Pipeline
    1. read raw_data (Argument)
    2. load vocabs from checkpoint (DataReader, Token)
    3. define raw_to_tensor_fn (DataReader, Token)
    4. define and load model
    5. run!

set_train_mode()[source]

Training Mode

  • Pipeline
    1. read raw_data (DataReader)
    2. build vocabs (DataReader, Token)
    3. indexing tokens (DataReader, Token)
    4. convert to DataSet (DataReader)
    5. create DataLoader (DataLoader)
    6. define model and optimizer
    7. run!
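In training mode, step 2 builds the vocabs from the data itself instead of loading them. A sketch of that step under illustrative assumptions (token counting with special `<pad>`/`<unk>` entries; the real DataReader/Token classes own this logic):

```python
# Sketch of the "build vocabs" step: count tokens over the training texts
# and assign ids by frequency. Illustrative stand-in, not claf code.
from collections import Counter


def build_vocab(texts, min_count=1):
    counts = Counter(tok for text in texts for tok in text.split())
    vocab = {"<pad>": 0, "<unk>": 1}
    for token, count in counts.most_common():
        if count >= min_count:
            vocab[token] = len(vocab)
    return vocab


print(build_vocab(["a b a", "b c"]))
# {'<pad>': 0, '<unk>': 1, 'a': 2, 'b': 3, 'c': 4}
```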

set_trainer(model, op_dict={}, save_params={})[source]
class claf.learn.mode.Mode[source]

Bases: object

Experiment Flag class

EVAL = 'eval'
INFER_EVAL = 'infer_eval'
MACHINE = 'machine'
PREDICT = 'predict'
TRAIN = 'train'
class claf.learn.tensorboard.TensorBoard(log_dir)[source]

Bases: object

TensorBoard wrapper for PyTorch

embedding_summary(features, metadata=None, label_img=None)[source]
graph_summary(model, input_to_model=None)[source]
histogram_summary(tag, values, step, bins=1000)[source]

Log a histogram of the tensor of values.

image_summary(tag, images, step)[source]

Log a list of images.

scalar_summaries(step, summary)[source]
scalar_summary(step, tag, value)[source]

Log a scalar variable.
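A pure-Python stand-in showing the shape of `scalar_summary` / `scalar_summaries`: one tagged value, or a dict of them, logged at a step. The real class forwards these calls to a TensorBoard writer; the `ScalarBoard` class here is illustrative only:

```python
# Illustrative stand-in for the scalar-logging half of the TensorBoard
# wrapper: record (step, value) pairs per tag instead of writing event files.
class ScalarBoard:
    def __init__(self):
        self.logs = {}

    def scalar_summary(self, step, tag, value):
        self.logs.setdefault(tag, []).append((step, value))

    def scalar_summaries(self, step, summary):
        # A dict of tag -> value, all logged at the same step.
        for tag, value in summary.items():
            self.scalar_summary(step, tag, value)


board = ScalarBoard()
board.scalar_summaries(100, {"train/loss": 0.42, "train/acc": 0.81})
print(board.logs["train/loss"])
# [(100, 0.42)]
```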

class claf.learn.trainer.Trainer(model, config={}, log_dir='logs/experiment', grad_max_norm=None, gradient_accumulation_steps=1, learning_rate_scheduler=None, exponential_moving_average=None, num_epochs=20, early_stopping_threshold=10, max_eval_examples=5, metric_key=None, verbose_step_count=100, eval_and_save_step_count='epoch', save_checkpoint=True)[source]

Bases: object

Run experiment

  • train

  • train_and_evaluate

  • evaluate

  • evaluate_inference_latency

  • predict

  • Args:

    config: overall experiment config
    model: Model based on torch.nn.Module

  • Kwargs:

    log_dir: path to the directory for saving the model and other options
    grad_max_norm: clips the gradient norm of an iterable of parameters
    learning_rate_scheduler: PyTorch's learning rate scheduler
    exponential_moving_average: maintain moving averages of all model weights, with an exponential decay rate of {ema}
    num_epochs: the maximum number of epochs (Default is 20)
    early_stopping_threshold: the number of evaluations without improvement before early stopping (Default is 10)
    max_eval_examples: the number of evaluation examples to print
    metric_key: the metric score used as the control point
    verbose_step_count: print verbose output every n steps (Default is 100)
    eval_and_save_step_count: evaluate valid_dataset then save every n steps (Default is 'epoch')
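The exponential_moving_average option keeps shadow copies of the weights that blend in each update at decay rate {ema}. A minimal sketch of the update rule with plain floats (the real Trainer applies it to torch parameters):

```python
# Sketch of an exponential moving average of weights: shadow copies are
# blended toward the current values at the given decay rate. Illustrative.
def ema_update(shadow, params, ema=0.999):
    for name, value in params.items():
        shadow[name] = ema * shadow[name] + (1.0 - ema) * value
    return shadow


shadow = {"w": 0.0}
for step_value in [1.0, 1.0, 1.0]:
    ema_update(shadow, {"w": step_value}, ema=0.5)
print(round(shadow["w"], 3))
# 0.875
```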

evaluate(data_loader)[source]

Evaluate

evaluate_inference_latency(raw_examples, raw_to_tensor_fn, token_key=None, max_latency=1000)[source]

Evaluate with a focus on inference latency (Note: must use sorted synthetic data)

  • inference_latency: raw_data -> pre-processing (elapsed_time) -> model (elapsed_time) -> predict_value
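Latency splits into pre-processing time and model time, each measured separately. A sketch of that measurement with a generic timing helper (the stage functions here are dummies standing in for raw_to_tensor_fn and the model):

```python
# Sketch: time each inference stage separately, as the latency breakdown
# above suggests. The lambdas are placeholders, not claf components.
import time


def timed(fn, *args):
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start


tensor, preprocess_s = timed(lambda raw: {"ids": [1, 2]}, {"text": "a b"})
pred, model_s = timed(lambda t: "answer", tensor)
print(pred, preprocess_s >= 0.0, model_s >= 0.0)
```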

predict(raw_feature, raw_to_tensor_fn, arguments, interactive=False)[source]

Inference / Predict

save(optimizer)[source]
set_model_base_properties(config, log_dir)[source]
train(data_loader, optimizer)[source]

Train
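The gradient_accumulation_steps option in the constructor means the train loop sums scaled gradients over k micro-batches before taking one optimizer step. A sketch with plain numbers standing in for tensors (the real Trainer does this with torch autograd):

```python
# Sketch of gradient accumulation: scale each micro-batch gradient by 1/k,
# sum, and apply one update every k batches. Illustrative, no torch.
def train_steps(grads, accumulation_steps, lr=0.1):
    weight, accumulated = 1.0, 0.0
    for i, g in enumerate(grads, start=1):
        accumulated += g / accumulation_steps  # scale each micro-batch
        if i % accumulation_steps == 0:
            weight -= lr * accumulated  # one optimizer step
            accumulated = 0.0
    return weight


print(round(train_steps([0.2, 0.4, 0.1, 0.3], accumulation_steps=2), 4))
# 0.95
```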

train_and_evaluate(train_loader, valid_loader, optimizer)[source]

Train and Evaluate
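This loop is where early_stopping_threshold applies: training stops once the monitored metric (metric_key) has not improved for that many evaluations. A minimal sketch of the stopping test (illustrative; the real bookkeeping lives inside the Trainer):

```python
# Sketch of the early-stopping check: count evaluations since the best
# score and compare against the threshold. Illustrative stand-in.
def should_stop(scores, threshold):
    best_index = scores.index(max(scores))
    return (len(scores) - 1 - best_index) >= threshold


print(should_stop([0.70, 0.75, 0.74, 0.73, 0.72], threshold=3))
# True
```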

class claf.learn.utils.TrainCounter(display_unit='epoch')[source]

Bases: object

epoch = 0
get_display()[source]
global_step = 0
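TrainCounter tracks both epoch and global_step and displays whichever unit was requested. An illustrative stand-in; the real get_display output format may differ:

```python
# Stand-in for TrainCounter: display either the epoch or the global step
# depending on display_unit. The exact format string is an assumption.
class TrainCounterSketch:
    def __init__(self, display_unit="epoch"):
        self.display_unit = display_unit
        self.epoch = 0
        self.global_step = 0

    def get_display(self):
        if self.display_unit == "epoch":
            return f"epoch: {self.epoch}"
        return f"global_step: {self.global_step}"


counter = TrainCounterSketch(display_unit="step")
counter.global_step = 250
print(counter.get_display())
# global_step: 250
```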
claf.learn.utils.bind_nsml(model, **kwargs)[source]
claf.learn.utils.get_session_name()[source]
claf.learn.utils.get_sorted_path(checkpoint_dir, both_exist=False)[source]
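A hypothetical sketch of what sorting checkpoint paths could involve: ordering files by a step number embedded in the filename. The actual naming convention and return shape of get_sorted_path may differ:

```python
# Hypothetical checkpoint ordering by the number in the filename.
# The real get_sorted_path's convention is not documented here.
import re


def sorted_checkpoints(paths):
    return sorted(paths, key=lambda p: int(re.search(r"\d+", p).group()))


print(sorted_checkpoints(["model_30.pkl", "model_2.pkl", "model_10.pkl"]))
# ['model_2.pkl', 'model_10.pkl', 'model_30.pkl']
```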
claf.learn.utils.load_model_checkpoint(model, checkpoint)[source]
claf.learn.utils.load_optimizer_checkpoint(optimizer, checkpoint)[source]
claf.learn.utils.load_vocabs(model_checkpoint)[source]
claf.learn.utils.logger = <Logger claf.learn.utils (WARNING)>

claf.learn.utils.save_checkpoint(path, model, optimizer, max_to_keep=10)[source]
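The max_to_keep parameter bounds how many checkpoints survive on disk. A sketch of that pruning idea on a list of paths, oldest first (illustrative; no files are touched and the real deletion logic may differ):

```python
# Sketch of max_to_keep: after saving, keep only the newest N checkpoints.
def prune_checkpoints(paths_oldest_first, max_to_keep=10):
    return paths_oldest_first[-max_to_keep:]


saved = [f"checkpoint_{s}" for s in (10, 20, 30, 40)]
print(prune_checkpoints(saved, max_to_keep=2))
# ['checkpoint_30', 'checkpoint_40']
```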
claf.learn.utils.send_message_to_slack(webhook_url, title=None, message=None)[source]

Module contents