claf.learn package

Submodules
class claf.learn.experiment.Experiment(mode, config)

    Bases: object

    Experiment settings with config.

    Args:
        mode: Mode (e.g. TRAIN, EVAL, INFER_EVAL, PREDICT)
        config: (NestedNamespace) argument config according to mode
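A minimal sketch of how a class like this might dispatch on its mode flag. This is a plain-Python illustration, not claf's actual implementation; `ExperimentSketch` and the string constants are hypothetical stand-ins for the real `Experiment` and `Mode` classes.

```python
# Hypothetical sketch: an experiment class that selects a pipeline
# based on the (mode, config) pair described above.
TRAIN, EVAL, PREDICT = "train", "eval", "predict"

class ExperimentSketch:
    def __init__(self, mode, config):
        self.mode = mode
        self.config = config

    def setup(self):
        # Pick the setup pipeline matching the requested mode.
        if self.mode == TRAIN:
            return "train pipeline"
        elif self.mode == EVAL:
            return "eval pipeline"
        elif self.mode == PREDICT:
            return "predict pipeline"
        raise ValueError(f"unknown mode: {self.mode}")

exp = ExperimentSketch(EVAL, config=None)
print(exp.setup())  # eval pipeline
```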
    common_setting(mode, config)

        Common setting: experiment config, use_gpu and cuda_device_ids.
    set_eval_inference_latency_mode()

        Evaluate Inference Latency Mode

        Pipeline:
            1. read raw_data (DataReader)
            2. load vocabs from checkpoint (DataReader, Token)
            3. load raw_to_tensor_fn (DataReader, Token)
            4. define and load model
            5. run!
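The pipeline above ends by timing model inference on raw examples. A minimal sketch of that measurement, where `raw_to_tensor_fn` and `model` are hypothetical stand-ins rather than claf's real objects:

```python
import time

# Hypothetical stand-in: in claf, raw_to_tensor_fn would index tokens
# using vocabs loaded from a checkpoint.
def raw_to_tensor_fn(raw_example):
    return [ord(c) for c in raw_example]

# Hypothetical stand-in forward pass.
def model(tensor):
    return sum(tensor)

def measure_latency(raw_data):
    # Time the raw -> tensor -> model path per example, then average.
    latencies = []
    for example in raw_data:
        start = time.perf_counter()
        model(raw_to_tensor_fn(example))
        latencies.append(time.perf_counter() - start)
    return sum(latencies) / len(latencies)

avg = measure_latency(["hello", "world"])
print(f"average latency: {avg:.6f}s")
```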
    set_eval_mode()

        Evaluate Mode

        Pipeline:
            1. read raw_data (DataReader)
            2. load vocabs from checkpoint (DataReader, Token)
            3. indexing tokens (DataReader, Token)
            4. convert to DataSet (DataReader)
            5. create DataLoader (DataLoader)
            6. define and load model
            7. run!
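The token-indexing and batched-loading steps of this pipeline can be sketched in plain Python. These helpers are illustrative substitutes for claf's DataReader/DataLoader, not its real API:

```python
# Hypothetical sketch of the indexing -> dataset -> batched loader steps.
def index_tokens(raw_data, vocab):
    # Map each whitespace token to an integer id (0 for unknown tokens).
    return [[vocab.get(tok, 0) for tok in ex.split()] for ex in raw_data]

def data_loader(dataset, batch_size):
    # Yield fixed-size batches from the indexed dataset.
    for i in range(0, len(dataset), batch_size):
        yield dataset[i:i + batch_size]

vocab = {"the": 1, "cat": 2, "sat": 3}
dataset = index_tokens(["the cat sat", "the dog sat"], vocab)
batches = list(data_loader(dataset, batch_size=1))
print(batches)  # [[[1, 2, 3]], [[1, 0, 3]]]
```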
    set_predict_mode(preload=False)

        Predict Mode

        Pipeline:
            1. read raw_data (Argument)
            2. load vocabs from checkpoint (DataReader, Token)
            3. define raw_to_tensor_fn (DataReader, Token)
            4. define and load model
            5. run!
class claf.learn.mode.Mode

    Bases: object

    Experiment flag class.

    EVAL = 'eval'

    INFER_EVAL = 'infer_eval'

    MACHINE = 'machine'

    PREDICT = 'predict'

    TRAIN = 'train'
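A flag class of this shape is just a namespace of string constants used to select a pipeline. A small sketch, where `ModeSketch` and the `is_evaluation` helper are hypothetical, not part of claf:

```python
# Minimal sketch of a flag class like claf.learn.mode.Mode:
# plain string constants selecting an experiment pipeline.
class ModeSketch:
    TRAIN = "train"
    EVAL = "eval"
    INFER_EVAL = "infer_eval"
    PREDICT = "predict"
    MACHINE = "machine"

def is_evaluation(mode):
    # Hypothetical helper: both EVAL and INFER_EVAL run evaluation.
    return mode in (ModeSketch.EVAL, ModeSketch.INFER_EVAL)

print(is_evaluation(ModeSketch.EVAL))   # True
print(is_evaluation(ModeSketch.TRAIN))  # False
```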
class claf.learn.tensorboard.TensorBoard(log_dir)

    Bases: object

    TensorBoard wrapper for PyTorch.
class claf.learn.trainer.Trainer(model, config={}, log_dir='logs/experiment', grad_max_norm=None, gradient_accumulation_steps=1, learning_rate_scheduler=None, exponential_moving_average=None, num_epochs=20, early_stopping_threshold=10, max_eval_examples=5, metric_key=None, verbose_step_count=100, eval_and_save_step_count='epoch', save_checkpoint=True)

    Bases: object

    Runs an experiment:

    - train
    - train_and_evaluate
    - evaluate
    - evaluate_inference_latency
    - predict

    Args:
        config: overall experiment config
        model: model based on torch.nn.Module

    Kwargs:
        log_dir: directory path for saving the model and other outputs
        grad_max_norm: clips the gradient norm of the model's parameters
        learning_rate_scheduler: PyTorch learning rate scheduler
        exponential_moving_average: maintain moving averages of all model weights with the exponential decay rate {ema}
        num_epochs: maximum number of epochs (default is 20)
        early_stopping_threshold: number of evaluations without improvement before stopping early (default is 10)
        max_eval_examples: number of evaluation examples to print
        metric_key: metric score used as the control point for model selection
        verbose_step_count: print progress every n steps (default is 100)
        eval_and_save_step_count: evaluate on valid_dataset and then save every n steps (default is 'epoch')
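The exponential_moving_average option above maintains a shadow copy of each weight, updated as shadow = decay * shadow + (1 - decay) * weight. A minimal sketch with plain floats (claf operates on torch parameters; `EMASketch` is hypothetical):

```python
# Hypothetical sketch of exponential moving averages over model weights.
class EMASketch:
    def __init__(self, weights, decay):
        self.decay = decay
        # Shadow copies start at the initial weight values.
        self.shadow = dict(weights)

    def update(self, weights):
        # shadow <- decay * shadow + (1 - decay) * weight
        for name, value in weights.items():
            self.shadow[name] = (
                self.decay * self.shadow[name] + (1 - self.decay) * value
            )

ema = EMASketch({"w": 0.0}, decay=0.9)
ema.update({"w": 1.0})
ema.update({"w": 1.0})
print(round(ema.shadow["w"], 2))  # 0.19
```

A higher decay makes the shadow weights change more slowly, smoothing out step-to-step noise in the trained weights.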
class claf.learn.utils.TrainCounter(display_unit='epoch')

    Bases: object

    epoch = 0

    global_step = 0
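The counter's two attributes track progress at different granularities: epoch counts full passes over the data, while global_step counts optimizer steps across all epochs. A small hypothetical sketch (`TrainCounterSketch` is a stand-in, not claf's class):

```python
# Hypothetical sketch of a train counter like claf.learn.utils.TrainCounter.
class TrainCounterSketch:
    def __init__(self, display_unit="epoch"):
        self.display_unit = display_unit
        self.epoch = 0
        self.global_step = 0

counter = TrainCounterSketch()
for _ in range(2):        # two epochs
    for _ in range(3):    # three steps per epoch
        counter.global_step += 1
    counter.epoch += 1
print(counter.epoch, counter.global_step)  # 2 6
```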
claf.learn.utils.logger = <Logger claf.learn.utils (WARNING)>

    Train Counter