Darkopt API Reference

Integration
class darkopt.ChainerTrigger(score_key, known_best_score, stop_trigger, maximize=False, test_trigger=(5, 'epoch'), pruning_prob_thresh=0.05, learning_curve_predictor=None)

    The trigger class for Chainer to prune with learning curve prediction.

    info()

        Returns trial information (e.g., the number of iterations before pruning).

        Returns: A dict that contains the information.
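A minimal sketch of how this trigger might be wired into a Chainer training loop. The keyword values below are illustrative only, and passing the trigger as the trainer's `stop_trigger` is an assumption based on Chainer's trigger protocol, not something this reference confirms.

```python
# Illustrative sketch only: the values below are hypothetical, and using the
# trigger as Trainer's stop_trigger is an assumption based on Chainer's
# trigger protocol.
trigger_kwargs = {
    'score_key': 'validation/main/loss',  # metric watched for pruning (hypothetical key)
    'known_best_score': 0.05,             # best score observed in earlier trials
    'stop_trigger': (20, 'epoch'),        # normal stopping point if the trial is not pruned
    'maximize': False,                    # lower loss is better
    'test_trigger': (5, 'epoch'),         # how often the pruning test runs
    'pruning_prob_thresh': 0.05,          # prune when P(beating the best) drops below this
}

try:
    import darkopt
    import chainer

    trigger = darkopt.ChainerTrigger(**trigger_kwargs)
    # trainer = chainer.training.Trainer(updater, stop_trigger=trigger)
    # trainer.run()
    # print(trigger.info())  # e.g. the number of iterations before pruning
except ImportError:
    # darkopt/chainer not installed; the kwargs above still document the call.
    pass
```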
class darkopt.XGBoostCallback(known_best_score, score_key=None, pruning_prob_thresh=0.05, maximize=False, learning_curve_predictor=None, min_iters_before_prune=10, test_interval=10)

    The callback class for XGBoost to prune with learning curve prediction.

    info()

        Returns trial information (e.g., the number of iterations before pruning).

        Returns: A dict that contains the information.
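A hedged sketch of using the callback with XGBoost. The parameter values are hypothetical, and passing the callback via `xgb.train(callbacks=...)` is an assumption based on XGBoost's callback mechanism.

```python
# Illustrative sketch only: parameter values are hypothetical, and passing the
# callback through xgb.train(callbacks=...) is an assumption based on
# XGBoost's callback mechanism.
callback_kwargs = {
    'known_best_score': 0.92,       # best validation score from earlier trials
    'maximize': True,               # e.g. AUC: higher is better
    'pruning_prob_thresh': 0.05,    # prune when P(beating the best) drops below this
    'min_iters_before_prune': 10,   # never prune before this many boosting rounds
    'test_interval': 10,            # check the pruning condition every 10 rounds
}

try:
    import darkopt
    import xgboost as xgb

    callback = darkopt.XGBoostCallback(**callback_kwargs)
    # booster = xgb.train(params, dtrain, num_boost_round=500,
    #                     evals=[(dvalid, 'valid')], callbacks=[callback])
    # print(callback.info())  # e.g. the number of iterations before pruning
except ImportError:
    # darkopt/xgboost not installed; the kwargs above still document the call.
    pass
```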
Optimization
class darkopt.Optimizer(target_func, param_space, engine='random_search', maximize=False, prune=True)

    Hyper-parameter optimizer based on the random search algorithm, with pruning by learning curve prediction.

    optimize(max_n_calls=None)

        Invokes the optimization.

        Parameters: max_n_calls – The maximum number of calls to target_func.

        Returns: A TrialResult object that describes the best trial.
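A minimal sketch of driving the optimizer with a toy target function. Both the `param_space` format and the way parameters are passed to `target_func` are assumptions here; consult darkopt's documentation for the actual search-space specification.

```python
# Illustrative sketch: the param_space format and the target_func calling
# convention below are assumptions, not confirmed by this reference.
def objective(x):
    # Toy target: a parabola minimized at x == 2.0.
    return (x - 2.0) ** 2

param_space = {'x': ('uniform', -10.0, 10.0)}  # hypothetical search-space format

try:
    import darkopt

    opt = darkopt.Optimizer(objective, param_space,
                            engine='random_search', maximize=False, prune=False)
    best = opt.optimize(max_n_calls=50)  # a TrialResult describing the best trial
except ImportError:
    # darkopt not installed; objective and param_space above still illustrate the inputs.
    pass
```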