Commit 114ee14

tutorial

1 parent e7e4bb7 commit 114ee14

File tree

2 files changed: +100 −38 lines changed

docs/src/tutorial.md

Lines changed: 91 additions & 0 deletions
@@ -62,6 +62,97 @@ Here is an example using the constrained problem solve:
stats.solver_specific[:internal_msg]
```

## Monitoring optimization with callbacks

You can monitor the optimization process with a callback function. At each iteration, the callback can access the current iterate and the constraint violations, which is useful for custom stopping criteria, logging, or real-time analysis.

### Callback signature

The callback function must have the following signature:

```julia
function my_callback(alg_mod, iter_count, problem_ptr, args...)
  # Your custom code here
  return true  # return false to stop the optimization
end
```
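A common pattern is to record iterates for later inspection instead of printing them. The following sketch is our own illustration, not part of the package API; the `history` array and `recording_callback` name are ours, and it uses only the query function documented below:

```julia
using ADNLPModels, NLPModelsIpopt

# Storage for the primal iterates; the callback closes over it
history = Vector{Vector{Float64}}()

function recording_callback(alg_mod, iter_count, problem_ptr, args...)
  x, z_L, z_U, g, lambda = Ipopt.GetIpoptCurrentIterate(problem_ptr)
  push!(history, copy(x))  # copy, in case the buffer is reused by the solver
  return true  # never stop early; let Ipopt's own criteria decide
end
```

After the solve, `history` holds one vector per iteration, ready for offline plotting or analysis.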
### Accessing current iterate information

The `problem_ptr` argument provides access to the current state of the optimization:

- `Ipopt.GetIpoptCurrentIterate(problem_ptr)` returns:
  - `x`: the current primal variables
  - `z_L`: the current multipliers for the lower bounds
  - `z_U`: the current multipliers for the upper bounds
  - `g`: the current constraint values
  - `lambda`: the current multipliers for the constraints
- `Ipopt.GetIpoptCurrentViolations(problem_ptr)` returns:
  - `constr_viol`: the constraint violation
  - `dual_inf`: the dual infeasibility
  - `compl`: the complementarity measure

### Example usage

Here is a complete example showing how to use a callback to monitor the optimization:

```@example ex4
using ADNLPModels, NLPModelsIpopt

# Define a callback function to monitor iterations
function my_callback(alg_mod, iter_count, problem_ptr, args...)
  # Get the current iterate (primal and dual variables)
  x, z_L, z_U, g, lambda = Ipopt.GetIpoptCurrentIterate(problem_ptr)
  # Get the current constraint violations
  constr_viol, dual_inf, compl = Ipopt.GetIpoptCurrentViolations(problem_ptr)

  # Log iteration information
  println("Iteration $iter_count:")
  println("  x = ", x)
  println("  Constraint violation = ", constr_viol)
  println("  Dual infeasibility = ", dual_inf)
  println("  Complementarity = ", compl)

  # Return true to continue, false to stop
  return iter_count < 5  # stop after 5 iterations for this example
end

# Create and solve a problem with the callback
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
stats = ipopt(nlp, callback = my_callback, print_level = 0)
```
You can also use callbacks with the advanced solver interface:

```@example ex4
# Advanced usage with IpoptSolver
solver = IpoptSolver(nlp)
stats = solve!(solver, nlp, callback = my_callback, print_level = 0)
```
### Custom stopping criteria

Callbacks are particularly useful for implementing custom stopping criteria:

```@example ex4
function custom_stopping_callback(alg_mod, iter_count, problem_ptr, args...)
  x, z_L, z_U, g, lambda = Ipopt.GetIpoptCurrentIterate(problem_ptr)
  constr_viol, dual_inf, compl = Ipopt.GetIpoptCurrentViolations(problem_ptr)

  # Custom stopping criterion: stop if x[1] gets close to 1
  if abs(x[1] - 1.0) < 0.1
    println("Custom stopping criterion met at iteration $iter_count")
    return false  # stop the optimization
  end

  return true  # continue the optimization
end

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
stats = ipopt(nlp, callback = custom_stopping_callback, print_level = 0)
```
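The violation measures can likewise drive a custom tolerance test. In this sketch, the predicate `converged`, its default tolerance, and the callback name are our own illustrative choices; the rule is kept in a pure function so it can be unit-tested without running Ipopt, and `nlp` is assumed to be defined as in the examples above:

```julia
# Pure predicate: both violation measures below an illustrative tolerance
converged(constr_viol, dual_inf; tol = 1e-6) = constr_viol < tol && dual_inf < tol

function tolerance_callback(alg_mod, iter_count, problem_ptr, args...)
  constr_viol, dual_inf, compl = Ipopt.GetIpoptCurrentViolations(problem_ptr)
  return !converged(constr_viol, dual_inf)  # returning false stops the solver
end

stats = ipopt(nlp, callback = tolerance_callback, print_level = 0)
```

Note that Ipopt already exposes built-in tolerance options; a callback like this is only needed when the stopping rule combines measures in a way the options do not cover.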
## Manual input

In this section, we work through an example where we specify the problem and its derivatives manually. For this, we need to implement the following `NLPModel` API methods:

src/NLPModelsIpopt.jl

Lines changed: 9 additions & 38 deletions
@@ -123,44 +123,15 @@ end

Return the set of functions needed to instantiate an `IpoptProblem`.

-You can use a callback to monitor the optimization process. The callback must have the signature:
-
-    function my_callback(alg_mod, iter_count, problem_ptr, args...)
-
-The `problem_ptr` argument is required to access the current iterate and constraint violations using `Ipopt.GetIpoptCurrentIterate` and `Ipopt.GetIpoptCurrentViolations`.
-
-- `Ipopt.GetIpoptCurrentIterate(problem_ptr)` returns:
-  - `x`: current primal variables
-  - `z_L`: current multipliers for lower bounds
-  - `z_U`: current multipliers for upper bounds
-  - `g`: current constraint values
-  - `lambda`: current multipliers for constraints
-- `Ipopt.GetIpoptCurrentViolations(problem_ptr)` returns:
-  - `constr_viol`: constraint violation
-  - `dual_inf`: dual infeasibility
-  - `compl`: complementarity
-
-Example:
-
-```julia
-function my_callback(alg_mod, iter_count, problem_ptr, args...)
-  # Get current iterate (primal and dual variables)
-  x, z_L, z_U, g, lambda = Ipopt.GetIpoptCurrentIterate(problem_ptr)
-  # Get current constraint violations
-  constr_viol, dual_inf, compl = Ipopt.GetIpoptCurrentViolations(problem_ptr)
-  @info "Iter \$iter_count: primal = \$x, dual = \$lambda, constr_viol = \$constr_viol, dual_inf = \$dual_inf, compl = \$compl"
-  return true  # return false to stop
-end
-
-# Pass the callback to ipopt using the `callback` keyword:
-stats = ipopt(nlp, callback = my_callback)
-
-# For advanced access to the underlying problem struct:
-nlp = ADNLPModel(...)
-solver = IpoptSolver(nlp)
-stats = solve!(solver, nlp, callback = my_callback)
-```
+This function creates the callback functions that Ipopt needs to evaluate:
+- the objective function
+- the constraint function (if any)
+- the objective gradient
+- the constraint Jacobian (if any)
+- the Hessian of the Lagrangian
+
+For information on using callbacks to monitor the optimization process,
+see the tutorial documentation.
"""
function set_callbacks(nlp::AbstractNLPModel)
  eval_f(x) = obj(nlp, x)