Some model types, such as DRF, GBM, and Deep Learning, support checkpointing. A checkpoint resumes training of an existing model so that you can iterate on it. The training dataset must be the same as the one used to build the checkpointed model. The following model parameters must be the same when restarting a model from a checkpoint:
| Must be the same as in the checkpoint model | | |
|---|---|---|
| drop_na20_cols | response_column | activation | 
| use_all_factor_levels | adaptive_rate | autoencoder | 
| rho | epsilon | sparse | 
| sparsity_beta | col_major | rate | 
| rate_annealing | rate_decay | momentum_start | 
| momentum_ramp | momentum_stable | nesterov_accelerated_gradient | 
| ignore_const_cols | max_categorical_features | nfolds | 
| distribution | tweedie_power | |
The following parameters can be modified when restarting a model from a checkpoint:
| Can be modified | | |
|---|---|---|
| seed | checkpoint | epochs | 
| score_interval | train_samples_per_iteration | target_ratio_comm_to_comp | 
| score_duty_cycle | score_training_samples | score_validation_samples | 
| score_validation_sampling | classification_stop | regression_stop | 
| quiet_mode | max_confusion_matrix_size | mini_batch_size | 
| diagnostics | variable_importances | initial_weight_distribution | 
| initial_weight_scale | force_load_balance | replicate_training_data | 
| shuffle_training_data | single_node_mode | fast_mode | 
| l1 | l2 | max_w2 | 
| input_dropout_ratio | hidden_dropout_ratios | loss | 
| overwrite_with_best_model | missing_values_handling | average_activation | 
| reproducible | export_weights_and_biases | elastic_averaging | 
| elastic_averaging_moving_rate | elastic_averaging_regularization | |
To restart a model from a checkpoint, enter the checkpoint model's model_id in the checkpoint entry field. To view the model_id, click the Model menu, then click List All Models. Note: The model type must be the same as the checkpointed model.
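The same checkpoint parameter is also available from the H2O Python API. The following is a minimal sketch, not taken from this document: the file path, frame name, and response column are placeholders, and the example uses GBM, where the resumed model must request more trees (ntrees) than the checkpointed model already has.

```python
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()

# Placeholder dataset and column names; substitute your own frame and response.
train = h2o.import_file("train.csv")
predictors = [c for c in train.columns if c != "response"]

# Train an initial model with a known model_id.
gbm = H2OGradientBoostingEstimator(model_id="gbm_checkpoint_base", ntrees=20)
gbm.train(x=predictors, y="response", training_frame=train)

# Resume training from the checkpoint: same model type, same training frame,
# and ntrees increased beyond the checkpointed model's tree count.
gbm_more = H2OGradientBoostingEstimator(checkpoint="gbm_checkpoint_base", ntrees=50)
gbm_more.train(x=predictors, y="response", training_frame=train)
```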