Wrapper class for the OPT++ optimization library. More...
Public Member Functions | |
SNLLLeastSq (ProblemDescDB &problem_db, Model &model) | |
standard constructor | |
SNLLLeastSq (const String &method_name, Model &model) | |
alternate constructor for instantiations without ProblemDescDB support | |
~SNLLLeastSq () | |
destructor | |
void | core_run () |
compute the least squares solution | |
void | reset () |
restore initial state for repeated sub-iterator executions | |
Public Member Functions inherited from SNLLBase | |
SNLLBase () | |
default constructor | |
SNLLBase (ProblemDescDB &problem_db) | |
standard constructor | |
~SNLLBase () | |
destructor | |
Protected Member Functions | |
void | initialize_run () |
invokes LeastSq::initialize_run(), SNLLBase::snll_initialize_run(), and performs other set-up | |
void | finalize_run () |
restores instances | |
Protected Member Functions inherited from LeastSq | |
LeastSq (std::shared_ptr< TraitsBase > traits) | |
default constructor | |
LeastSq (ProblemDescDB &problem_db, Model &model, std::shared_ptr< TraitsBase > traits) | |
standard constructor More... | |
LeastSq (unsigned short method_name, Model &model, std::shared_ptr< TraitsBase > traits) | |
alternate "on the fly" constructor | |
~LeastSq () | |
destructor | |
void | post_run (std::ostream &s) |
void | print_results (std::ostream &s, short results_state=FINAL_RESULTS) |
void | get_confidence_intervals (const Variables &native_vars, const Response &iter_resp) |
Calculate confidence intervals on estimated parameters. More... | |
Protected Member Functions inherited from Minimizer | |
Minimizer (std::shared_ptr< TraitsBase > traits=std::shared_ptr< TraitsBase >(new TraitsBase())) | |
default constructor | |
Minimizer (ProblemDescDB &problem_db, Model &model, std::shared_ptr< TraitsBase > traits=std::shared_ptr< TraitsBase >(new TraitsBase())) | |
standard constructor More... | |
Minimizer (unsigned short method_name, Model &model, std::shared_ptr< TraitsBase > traits=std::shared_ptr< TraitsBase >(new TraitsBase())) | |
alternate constructor for "on the fly" instantiations | |
Minimizer (unsigned short method_name, size_t num_lin_ineq, size_t num_lin_eq, size_t num_nln_ineq, size_t num_nln_eq, std::shared_ptr< TraitsBase > traits=std::shared_ptr< TraitsBase >(new TraitsBase())) | |
alternate constructor for "on the fly" instantiations | |
~Minimizer () | |
destructor | |
void | update_from_model (const Model &model) |
set inherited data attributes based on extractions from incoming model | |
void | post_run (std::ostream &s) |
post-run portion of run (optional); verbose to print results; re-implemented by Iterators that can read all Variables/Responses and perform final analysis phase in a standalone way More... | |
const Model & | algorithm_space_model () const |
Model | original_model (unsigned short recasts_left=0) const |
Return a shallow copy of the model this Iterator was originally passed, optionally leaving recasts_left layers of recasting on top of it. | |
void | data_transform_model () |
Wrap iteratedModel in a RecastModel that subtracts provided observed data from the primary response functions (variables and secondary responses are unchanged) More... | |
void | scale_model () |
Wrap iteratedModel in a RecastModel that performs variable and/or response scaling. More... | |
Real | objective (const RealVector &fn_vals, const BoolDeque &max_sense, const RealVector &primary_wts) const |
compute a composite objective value from one or more primary functions More... | |
Real | objective (const RealVector &fn_vals, size_t num_fns, const BoolDeque &max_sense, const RealVector &primary_wts) const |
compute a composite objective with specified number of source primary functions, instead of userPrimaryFns More... | |
void | objective_gradient (const RealVector &fn_vals, const RealMatrix &fn_grads, const BoolDeque &max_sense, const RealVector &primary_wts, RealVector &obj_grad) const |
compute the gradient of the composite objective function | |
void | objective_gradient (const RealVector &fn_vals, size_t num_fns, const RealMatrix &fn_grads, const BoolDeque &max_sense, const RealVector &primary_wts, RealVector &obj_grad) const |
compute the gradient of the composite objective function More... | |
void | objective_hessian (const RealVector &fn_vals, const RealMatrix &fn_grads, const RealSymMatrixArray &fn_hessians, const BoolDeque &max_sense, const RealVector &primary_wts, RealSymMatrix &obj_hess) const |
compute the Hessian of the composite objective function | |
void | objective_hessian (const RealVector &fn_vals, size_t num_fns, const RealMatrix &fn_grads, const RealSymMatrixArray &fn_hessians, const BoolDeque &max_sense, const RealVector &primary_wts, RealSymMatrix &obj_hess) const |
compute the Hessian of the composite objective function More... | |
void | archive_best_variables (const bool active_only=false) const |
archive best variables for the index'th final solution | |
void | archive_best_objective_functions () const |
archive the index'th set of objective functions | |
void | archive_best_constraints () const |
archive the index'th set of constraints | |
void | archive_best_residuals () const |
Archive residuals when calibration terms are used. | |
void | resize_best_vars_array (size_t newsize) |
Safely resize the best variables array to newsize taking into account the envelope-letter design pattern and any recasting. More... | |
void | resize_best_resp_array (size_t newsize) |
Safely resize the best response array to newsize taking into account the envelope-letter design pattern and any recasting. More... | |
void | local_recast_retrieve (const Variables &vars, Response &response) const |
infers MOO/NLS solution from the solution of a single-objective optimizer More... | |
Protected Member Functions inherited from Iterator | |
Iterator (BaseConstructor, ProblemDescDB &problem_db, std::shared_ptr< TraitsBase > traits=std::shared_ptr< TraitsBase >(new TraitsBase())) | |
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion in the derived class constructors - Coplien, p. 139) More... | |
Iterator (NoDBBaseConstructor, unsigned short method_name, Model &model, std::shared_ptr< TraitsBase > traits=std::shared_ptr< TraitsBase >(new TraitsBase())) | |
alternate constructor for base iterator classes constructed on the fly More... | |
Iterator (NoDBBaseConstructor, unsigned short method_name, std::shared_ptr< TraitsBase > traits=std::shared_ptr< TraitsBase >(new TraitsBase())) | |
alternate constructor for base iterator classes constructed on the fly More... | |
virtual void | derived_init_communicators (ParLevLIter pl_iter) |
derived class contributions to initializing the communicators associated with this Iterator instance | |
virtual const VariablesArray & | initial_points () const |
gets the multiple initial points for this iterator. This will only be meaningful after a call to the initial_points mutator. | |
StrStrSizet | run_identifier () const |
get the unique run identifier based on method name, id, and number of executions | |
void | initialize_model_graphics (Model &model, int iterator_server_id) |
helper function that encapsulates initialization operations, modular on incoming Model instance More... | |
void | export_final_surrogates (Model &data_fit_surr_model) |
export final surrogates generated, e.g., GP in EGO and friends More... | |
Protected Member Functions inherited from SNLLBase | |
void | copy_con_vals_dak_to_optpp (const RealVector &local_fn_vals, RealVector &g, size_t offset) |
convenience function for copying local_fn_vals to g; used by constraint evaluator functions | |
void | copy_con_vals_optpp_to_dak (const RealVector &g, RealVector &local_fn_vals, size_t offset) |
convenience function for copying g to local_fn_vals; used in final solution logging | |
void | copy_con_grad (const RealMatrix &local_fn_grads, RealMatrix &grad_g, size_t offset) |
convenience function for copying local_fn_grads to grad_g; used by constraint evaluator functions | |
void | copy_con_hess (const RealSymMatrixArray &local_fn_hessians, OPTPP::OptppArray< RealSymMatrix > &hess_g, size_t offset) |
convenience function for copying local_fn_hessians to hess_g; used by constraint evaluator functions | |
void | snll_pre_instantiate (bool bound_constr_flag, int num_constr) |
convenience function for setting OPT++ options prior to the method instantiation | |
void | snll_post_instantiate (int num_cv, bool vendor_num_grad_flag, const String &finite_diff_type, const RealVector &fdss, size_t max_iter, size_t max_eval, Real conv_tol, Real grad_tol, Real max_step, bool bound_constr_flag, int num_constr, short output_lev, OPTPP::OptimizeClass *the_optimizer, OPTPP::NLP0 *nlf_objective, OPTPP::FDNLF1 *fd_nlf1, OPTPP::FDNLF1 *fd_nlf1_con) |
convenience function for setting OPT++ options after the method instantiation | |
void | snll_initialize_run (OPTPP::NLP0 *nlf_objective, OPTPP::NLP *nlp_constraint, const RealVector &init_pt, bool bound_constr_flag, const RealVector &lower_bnds, const RealVector &upper_bnds, const RealMatrix &lin_ineq_coeffs, const RealVector &lin_ineq_l_bnds, const RealVector &lin_ineq_u_bnds, const RealMatrix &lin_eq_coeffs, const RealVector &lin_eq_targets, const RealVector &nln_ineq_l_bnds, const RealVector &nln_ineq_u_bnds, const RealVector &nln_eq_targets) |
convenience function for OPT++ configuration prior to the method invocation | |
void | snll_post_run (OPTPP::NLP0 *nlf_objective) |
convenience function for managing OPT++ results after method execution | |
void | snll_finalize_run (OPTPP::NLP0 *nlf_objective) |
convenience function for clearing OPT++ data after method execution | |
void | reset_base () |
reset last{FnEvalLocn,EvalMode,EvalVars} | |
Static Private Member Functions | |
static void | nlf2_evaluator_gn (int mode, int n, const RealVector &x, double &f, RealVector &grad_f, RealSymMatrix &hess_f, int &result_mode) |
objective function evaluator which obtains values and gradients for the least squares terms and computes the objective function value, gradient, and Hessian using the Gauss-Newton approximation. More... | |
static void | constraint1_evaluator_gn (int mode, int n, const RealVector &x, RealVector &g, RealMatrix &grad_g, int &result_mode) |
constraint evaluator function which provides constraint values and gradients to OPT++ Gauss-Newton methods. More... | |
static void | constraint2_evaluator_gn (int mode, int n, const RealVector &x, RealVector &g, RealMatrix &grad_g, OPTPP::OptppArray< RealSymMatrix > &hess_g, int &result_mode) |
constraint evaluator function which provides constraint values, gradients, and Hessians to OPT++ Gauss-Newton methods. More... | |
Private Attributes | |
SNLLLeastSq * | prevSnllLSqInstance |
pointer to the previously active object instance used for restoration in the case of iterator/model recursion | |
OPTPP::NLP0 * | nlfObjective |
objective NLF base class pointer | |
OPTPP::NLP0 * | nlfConstraint |
constraint NLF base class pointer | |
OPTPP::NLP * | nlpConstraint |
constraint NLP pointer | |
OPTPP::NLF2 * | nlf2 |
pointer to objective NLF for full Newton optimizers | |
OPTPP::NLF2 * | nlf2Con |
pointer to constraint NLF for full Newton optimizers | |
OPTPP::NLF1 * | nlf1Con |
pointer to constraint NLF for Quasi Newton optimizers | |
OPTPP::OptimizeClass * | theOptimizer |
optimizer base class pointer | |
OPTPP::OptNewton * | optnewton |
Newton optimizer pointer. | |
OPTPP::OptBCNewton * | optbcnewton |
Bound constrained Newton optimizer ptr. | |
OPTPP::OptDHNIPS * | optdhnips |
Disaggregated Hessian NIPS optimizer ptr. | |
Static Private Attributes | |
static SNLLLeastSq * | snllLSqInstance |
pointer to the active object instance used within the static evaluator functions in order to avoid the need for static data | |
Additional Inherited Members | |
Static Member Functions inherited from LeastSq | |
static Real | sum_squared_residuals (size_t num_pri_fns, const RealVector &residuals, const RealVector &weights) |
return weighted sum of squared residuals | |
static void | print_residuals (size_t num_terms, const RealVector &best_terms, const RealVector &weights, size_t num_best, size_t best_index, std::ostream &s) |
print num_terms residuals and misfit for final results | |
static void | print_model_resp (size_t num_pri_fns, const RealVector &best_fns, size_t num_best, size_t best_index, std::ostream &s) |
print the original user model resp in the case of data transformations | |
Static Member Functions inherited from Minimizer | |
static void | gnewton_set_recast (const Variables &recast_vars, const ActiveSet &recast_set, ActiveSet &sub_model_set) |
conversion of request vector values for the Gauss-Newton Hessian approximation More... | |
Static Member Functions inherited from SNLLBase | |
static void | init_fn (int n, RealVector &x) |
An initialization mechanism provided by OPT++ (not currently used). | |
Protected Attributes inherited from LeastSq | |
size_t | numLeastSqTerms |
number of least squares terms | |
LeastSq * | prevLSqInstance |
pointer containing previous value of leastSqInstance | |
bool | weightFlag |
flag indicating whether weighted least squares is active | |
RealVector | confBoundsLower |
lower bounds for confidence intervals on calibration parameters | |
RealVector | confBoundsUpper |
upper bounds for confidence intervals on calibration parameters | |
RealVector | bestIterPriFns |
storage for iterator best primary functions (which shouldn't be stored in bestResponseArray when there are transformations) | |
bool | retrievedIterPriFns |
whether final primary iterator space functions have been retrieved (possibly by a derived class) | |
Protected Attributes inherited from SNLLBase | |
String | searchMethod |
value_based_line_search, gradient_based_line_search, trust_region, or tr_pds | |
OPTPP::SearchStrategy | searchStrat |
enum: LineSearch, TrustRegion, or TrustPDS | |
OPTPP::MeritFcn | meritFn |
enum: NormFmu, ArgaezTapia, or VanShanno | |
Real | maxStep |
value from max_step specification | |
Real | stepLenToBndry |
value from steplength_to_boundary specification | |
Real | centeringParam |
value from centering_parameter specification | |
bool | constantASVFlag |
flags a user selection of active_set_vector == constant. By mapping this into mode override, reliance on duplicate detection can be avoided. | |
Static Attributes inherited from LeastSq | |
static LeastSq * | leastSqInstance |
pointer to LeastSq instance used in static member functions | |
Static Attributes inherited from Minimizer | |
static Minimizer * | optLSqInstance |
pointer to the active base class object instance used within the static evaluator functions in order to avoid the need for static data | |
Static Attributes inherited from SNLLBase | |
static bool | modeOverrideFlag |
flags OPT++ mode override (for combining value, gradient, and Hessian requests) | |
static EvalType | lastFnEvalLocn |
an enum used to track whether an nlf evaluator or a constraint evaluator was the last location of a function evaluation | |
static int | lastEvalMode |
copy of mode from constraint evaluators | |
static RealVector | lastEvalVars |
copy of variables from constraint evaluators | |
Wrapper class for the OPT++ optimization library.
The SNLLLeastSq class provides a wrapper for OPT++, a C++ optimization library of nonlinear programming and pattern search techniques from the Computational Sciences and Mathematics Research (CSMR) department at Sandia's Livermore CA site. It uses a function pointer approach for which passed functions must be either global functions or static member functions. Any attribute used within static member functions must be either local to that function, a static member, or accessed by static pointer.
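To make the function-pointer constraint concrete, the following minimal sketch (illustrative only, not Dakota code) shows the pattern it forces: the evaluator handed to the library must be a free or static member function, so per-instance state is reached through a static "active instance" pointer, which is the role snllLSqInstance plays in this class. The class name, members, and simplified evaluator signature below are hypothetical.

```cpp
// Hypothetical sketch of the static-evaluator pattern described above.
class Wrapper {
public:
  explicit Wrapper(double scale) : scale_(scale) { activeInstance = this; }

  // The callback must be static (or free); OPT++-style evaluator signatures
  // are simplified here to keep the sketch self-contained.
  static void evaluator(int n, const double* x, double& f) {
    const Wrapper* self = activeInstance;   // static pointer -> instance state
    f = 0.0;
    for (int i = 0; i < n; ++i)
      f += self->scale_ * x[i] * x[i];      // uses per-instance data via the pointer
  }

private:
  double scale_;
  static Wrapper* activeInstance;           // analogous to snllLSqInstance
};

Wrapper* Wrapper::activeInstance = nullptr;
```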
The user input mappings are as follows: max_iterations, max_function_evaluations, convergence_tolerance, max_step, gradient_tolerance, search_method, and search_scheme_size are set using OPT++'s setMaxIter(), setMaxFeval(), setFcnTol(), setMaxStep(), setGradTol(), setSearchStrategy(), and setSSS() member functions, respectively; output verbosity is used to toggle OPT++'s debug mode using the setDebug() member function. Internal to OPT++, there are 3 search strategies, while the DAKOTA search_method specification supports 4 (value_based_line_search, gradient_based_line_search, trust_region, or tr_pds). The difference stems from the "is_expensive" flag in OPT++. If the search strategy is LineSearch and "is_expensive" is turned on, then the value_based_line_search is used. Otherwise (the "is_expensive" default is off), the algorithm will use the gradient_based_line_search. Refer to [Meza, J.C., 1994] and to the OPT++ source in the Dakota/packages/OPTPP directory for information on OPT++ class member functions.
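The search-strategy collapse described above can be pictured with a small sketch (not Dakota or OPT++ source; the local Strategy enum merely mirrors the LineSearch/TrustRegion/TrustPDS values documented for OPTPP::SearchStrategy): the four Dakota search_method strings map onto three OPT++ strategies plus an "is_expensive" hint.

```cpp
// Sketch only: a local enum stands in for OPTPP::SearchStrategy to keep the
// example self-contained; the mapping follows the description above.
#include <stdexcept>
#include <string>

enum class Strategy { LineSearch, TrustRegion, TrustPDS };

struct SearchChoice {
  Strategy strategy;
  bool     is_expensive;  // hint that selects the value-based line search
};

inline SearchChoice map_search_method(const std::string& m)
{
  if (m == "value_based_line_search")    return { Strategy::LineSearch,  true  };
  if (m == "gradient_based_line_search") return { Strategy::LineSearch,  false };
  if (m == "trust_region")               return { Strategy::TrustRegion, false };
  if (m == "tr_pds")                     return { Strategy::TrustPDS,    false };
  throw std::invalid_argument("unrecognized search_method: " + m);
}
```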
void SNLLLeastSq::nlf2_evaluator_gn (int mode, int n, const RealVector &x, double &f, RealVector &grad_f, RealSymMatrix &hess_f, int &result_mode) [static, private]
objective function evaluator which obtains values and gradients for the least squares terms and computes the objective function value, gradient, and Hessian using the Gauss-Newton approximation.
This nlf2 evaluator function is used for the Gauss-Newton method in order to exploit the special structure of the nonlinear least squares problem. Here, f(x) = sum_i (T_i - Tbar_i)^2, and the Response is made up of the residual functions and their gradients along with any nonlinear constraints. The objective function value, its gradient vector, and its Hessian matrix are computed directly from the residual functions and their derivatives (which are returned by the Response object).
References Dakota::abort_handler(), Iterator::activeSet, Model::continuous_variables(), Model::current_response(), Model::evaluate(), Response::function_gradients(), Response::function_values(), Iterator::iteratedModel, SNLLBase::lastEvalMode, SNLLBase::lastEvalVars, SNLLBase::lastFnEvalLocn, Minimizer::numFunctions, LeastSq::numLeastSqTerms, Minimizer::numNonlinearConstraints, Iterator::outputLevel, ActiveSet::request_vector(), SNLLLeastSq::snllLSqInstance, and Dakota::write_precision.
Referenced by SNLLLeastSq::SNLLLeastSq().
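In equation form, with residuals r_i = T_i - Tbar_i and Jacobian J_ij = d r_i / d x_j, the quantities described above are f = r^T r, grad f = 2 J^T r, and the Gauss-Newton Hessian approximation 2 J^T J (dropping the residual second-derivative terms). A self-contained sketch of that assembly (illustrative only, not the Dakota implementation):

```cpp
// Gauss-Newton assembly from residuals r and Jacobian J (J[i][j] = dr_i/dx_j):
//   f(x)   = sum_i r_i^2
//   grad f = 2 J^T r
//   hess f ~ 2 J^T J   (Gauss-Newton approximation)
#include <vector>

void gauss_newton_assemble(const std::vector<double>& r,
                           const std::vector<std::vector<double>>& J,
                           double& f,
                           std::vector<double>& grad,
                           std::vector<std::vector<double>>& hess)
{
  const size_t m = r.size();             // number of least squares terms
  const size_t n = m ? J[0].size() : 0;  // number of variables

  f = 0.0;
  grad.assign(n, 0.0);
  hess.assign(n, std::vector<double>(n, 0.0));

  for (size_t i = 0; i < m; ++i) {
    f += r[i] * r[i];
    for (size_t j = 0; j < n; ++j) {
      grad[j] += 2.0 * J[i][j] * r[i];
      for (size_t k = 0; k < n; ++k)
        hess[j][k] += 2.0 * J[i][j] * J[i][k];
    }
  }
}
```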
void SNLLLeastSq::constraint1_evaluator_gn (int mode, int n, const RealVector &x, RealVector &g, RealMatrix &grad_g, int &result_mode) [static, private]
constraint evaluator function which provides constraint values and gradients to OPT++ Gauss-Newton methods.
While it does not employ the Gauss-Newton approximation, it is distinct from constraint1_evaluator() due to its need to anticipate the required modes for the least squares terms. This constraint evaluator function is used with disaggregated Hessian NIPS and is currently active.
References Dakota::abort_handler(), Iterator::activeSet, Model::continuous_variables(), SNLLBase::copy_con_grad(), SNLLBase::copy_con_vals_dak_to_optpp(), Model::current_response(), Model::evaluate(), Response::function_gradients(), Response::function_values(), Iterator::iteratedModel, SNLLBase::lastEvalMode, SNLLBase::lastEvalVars, SNLLBase::lastFnEvalLocn, Minimizer::numFunctions, LeastSq::numLeastSqTerms, Iterator::outputLevel, ActiveSet::request_vector(), and SNLLLeastSq::snllLSqInstance.
Referenced by SNLLLeastSq::SNLLLeastSq().
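One way to read the "anticipate the required modes" remark is sketched below (an interpretation, not Dakota source): because the residual terms and the constraints come back from the same Model evaluation, the constraint evaluator requests values and gradients for the least squares terms regardless of the mode OPT++ asked for, while the constraint entries follow that mode. The helper name and the assumption that the OPT++ mode uses the same 1=value / 2=gradient bit coding as Dakota's active set vector are both hypothetical.

```cpp
// Hypothetical helper illustrating mode anticipation for the residual terms.
#include <vector>

std::vector<short> build_request_vector(int optpp_mode,        // mode OPT++ requested (assumed value/gradient bits)
                                        size_t num_lsq_terms,  // number of least squares terms
                                        size_t num_constraints)
{
  std::vector<short> asv(num_lsq_terms + num_constraints);

  // Residual terms: the Gauss-Newton objective needs values and gradients on
  // this same evaluation, regardless of what was requested for the constraints.
  for (size_t i = 0; i < num_lsq_terms; ++i)
    asv[i] = 3;                               // value + gradient

  // Constraint terms: honor the mode OPT++ asked for.
  for (size_t i = 0; i < num_constraints; ++i)
    asv[num_lsq_terms + i] = static_cast<short>(optpp_mode);

  return asv;
}
```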
void SNLLLeastSq::constraint2_evaluator_gn (int mode, int n, const RealVector &x, RealVector &g, RealMatrix &grad_g, OPTPP::OptppArray< RealSymMatrix > &hess_g, int &result_mode) [static, private]
constraint evaluator function which provides constraint values, gradients, and Hessians to OPT++ Gauss-Newton methods.
While it does not employ the Gauss-Newton approximation, it is distinct from constraint2_evaluator() due to its need to anticipate the required modes for the least squares terms. This constraint evaluator function is used with full Newton NIPS and is currently inactive.
References Dakota::abort_handler(), Iterator::activeSet, Model::continuous_variables(), SNLLBase::copy_con_grad(), SNLLBase::copy_con_hess(), SNLLBase::copy_con_vals_dak_to_optpp(), Model::current_response(), Model::evaluate(), Response::function_gradients(), Response::function_hessians(), Response::function_values(), Iterator::iteratedModel, SNLLBase::lastEvalMode, SNLLBase::lastEvalVars, SNLLBase::lastFnEvalLocn, SNLLBase::modeOverrideFlag, Minimizer::numFunctions, LeastSq::numLeastSqTerms, Iterator::outputLevel, ActiveSet::request_vector(), and SNLLLeastSq::snllLSqInstance.