Commit a8957ea ("added three examples")
dnguyen227 committed Nov 20, 2024 (1 parent 40c4008)
Showing 3 changed files with 196 additions and 0 deletions.
58 changes: 58 additions & 0 deletions docs/src/examples/Optimal Control/Fishing.jl
# # Fishing Optimal Control
# We will solve an optimal control problem that maximizes fishing profit
# subject to the dynamics of the fish population.
# # Problem Statement and Model
# In this system our input is the rate of fishing ``u``, and the profit
# ``J`` is the quantity the objective maximizes.
# Our optimization problem is represented as:
# ```math
# \begin{aligned}
# &&\max_{u(t)} J(10) \\
# &&\text{s.t.} &&& \frac{dJ}{dt} = \left(E - \frac{c}{x(t)}\right) u(t) U_{max} \\
# &&&&&\frac{dx}{dt} = r x(t)\left(1 - \frac{x(t)}{k}\right) - u(t) U_{max} \\
# &&&&&x(0) = 70, \; J(0) = 0 \\
# &&&&&0 \leq u(t) \leq 1 \\
# &&&&&E = 1, \; c = 17.5, \; r = 0.71, \; k = 80.5, \; U_{max} = 20
# \end{aligned}
# ```
# # Model Definition
# First we must import ``InfiniteOpt`` and other packages.
using InfiniteOpt, Ipopt, Plots;
# Next we specify an array of initial conditions as well as problem variables.
x0 = 70
E, c, r, k, Umax = 1, 17.5, 0.71, 80.5, 20;
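# Note an implication of these constants before building the model (a quick
# aside, not part of the original formulation): at the initial stock the
# natural growth rate is
# ```math
# r x_0 \left(1 - \frac{x_0}{k}\right) = 0.71 \cdot 70 \left(1 - \frac{70}{80.5}\right) \approx 6.48,
# ```
# which is far below the maximum harvest rate ``U_{max} = 20``, so fishing at
# full effort would deplete the stock and the optimizer must balance harvest
# against regrowth.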
# We initialize the infinite model and opt to use the Ipopt solver.
m = InfiniteModel(Ipopt.Optimizer);
# Now let's specify the variables. ``u`` is our fishing rate, ``x`` models
# the fish population in response to ``u``, and the infinite parameter ``t``
# spans the 10-year horizon.
@infinite_parameter(m, t in [0, 10], num_supports = 100)
@variable(m, 1 <= x, Infinite(t))
@variable(m, 0 <= u <= 1, Infinite(t));
# ``J`` represents profit over time.
@variable(m, J, Infinite(t));
# Specifying the objective to maximize profit ``J``:
@objective(m, Max, J(10));
# Define the ODEs which serve as our system model.
@constraint(m, ∂(J, t) == (E - c/x) * u * Umax)
@constraint(m, ∂(x, t) == r * x * (1 - x/k) - u * Umax);
# Set our initial conditions.
@constraint(m, x(0) == x0)
@constraint(m, J(0) == 0);
# # Problem Solution
# Optimize the model:
optimize!(m)
# Extract the results.
ts = value(t)
u_opt = value(u)
x_opt = value(x)
J_opt = value(J);

# Plot the results.
p1 = plot(ts, [x_opt, J_opt],
     label = ["Fish Pop" "Profit"],
     title = "State Variables")
p2 = plot(ts, u_opt,
     label = "Rate",
     title = "Rate vs Time")
plot(p1, p2, layout = (2, 1), size = (800, 600));
69 changes: 69 additions & 0 deletions docs/src/examples/Optimal Control/Hanging Chain.jl
# # Hanging Chain Problem
# We will solve a reformulated version of the Hanging Chain Problem
# from the Constrained Optimization Problem Set.
# # Problem Statement and Model
# In this problem, we seek to minimize the potential energy of a chain of
# length ``L`` suspended between points ``a`` and ``b``.
# The potential energy to be minimized, subject to the length constraint, is:
# ```math
# \begin{gathered}
# \int_0^1 x(1 + (x')^2)^{1/2} \, dt
# \end{gathered}
# ```
# Our optimization problem is defined as follows:
# ```math
# \begin{aligned}
# &&\min_{x,u} x_2(t_f)\\
# &&\text{s.t.} &&& \quad \dot{x}_1= u\\
# &&&&&\dot{x}_2 = x_1(1+u^2)^{1/2}\\
# &&&&&\dot{x}_3 = (1+u^2)^{1/2}\\
# &&&&&x(t_0) = (a,0,0)^T\\
# &&&&&x_1(t_f) = b\\
# &&&&&x_3(t_f) = L\\
# &&&&&x(t) \in [0,10]\\
# &&&&&u(t) \in [-10,20]\\
# \end{aligned}
# ```
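# Here ``u = \dot{x}_1`` is the slope of the chain, and the auxiliary states
# accumulate the energy and length integrals:
# ```math
# \begin{aligned}
# x_2(t) &= \int_0^t x_1 (1 + u^2)^{1/2} \, ds, \\
# x_3(t) &= \int_0^t (1 + u^2)^{1/2} \, ds,
# \end{aligned}
# ```
# so minimizing ``x_2(t_f)`` recovers the potential energy objective and
# ``x_3(t_f) = L`` enforces the fixed chain length.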

# # Model Definition
# First we must import ``InfiniteOpt`` and other packages.
using InfiniteOpt, Ipopt, Plots;
# Next we specify an array of initial conditions as well as problem variables.
a, b, L = 1, 3, 4
x0 = [a, 0, 0]
xtf = [b, NaN, L];
# We initialize the infinite model and opt to use the Ipopt solver.
m = InfiniteModel(Ipopt.Optimizer);
# ``t`` is specified on the normalized interval ``t \in [0,1]``:
@infinite_parameter(m, t in [0, 1], num_supports = 100);
# Now let's specify variables. ``u`` is our controller variable.
@variable(m, 0 <= x[1:3] <= 10, Infinite(t))
@variable(m, -10 <= u <= 20, Infinite(t));
# Specifying the objective to minimize the potential energy ``x_2`` at the final time:
@objective(m, Min, x[2](1));
# Define the ODEs which serve as our system model.
@constraint(m, ∂(x[1], t) == u)
@constraint(m, ∂(x[2], t) == x[1] * (1 + u^2)^(1/2))
@constraint(m, ∂(x[3], t) == (1 + u^2)^(1/2));
# Set our initial and final conditions.
@constraint(m, [i in 1:3], x[i](0) == x0[i])
@constraint(m, x[1](1) == xtf[1])
@constraint(m, x[3](1) == xtf[3]);
# # Problem Solution
# Optimize the model:
optimize!(m)
# Extract the results.
ts = value(t)
u_opt = value(u)
x1_opt = value(x[1])
x2_opt = value(x[2])
x3_opt = value(x[3])
@show(objective_value(m))
p1 = plot(ts, [x1_opt, x2_opt, x3_opt],
label=["x1" "x2" "x3"],
title="State Variables")

p2 = plot(ts, u_opt,
label="u(t)",
title="Input")
plot(p1, p2, layout=(2,1), size=(800,600));
69 changes: 69 additions & 0 deletions docs/src/examples/Optimal Control/Jennings.jl
# # Minimizing Final Time (Jennings Problem)
# We will set up and solve the Jennings optimal control benchmark problem,
# which minimizes the final time ``t_f``.
# # Problem Statement and Model
# When solving differential equations over a variable time interval ``[0,t_f]``,
# we can apply a time-scaling transformation to normalize the interval to ``[0,1]``.
# This is achieved by introducing the final time ``t_f`` as a decision variable.
# Dividing the derivatives by ``t_f`` gives the scaled dynamics; in practice,
# ``t_f`` is moved to the right-hand side to avoid any division by zero.
# ```math
# \begin{gathered}
# \frac{\frac{dx}{dt}}{t_f} = f(x,u) \\
# \frac{dx}{dt} = t_f f(x,u) \\
# \end{gathered}
# ```
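# As a concrete instance, the first state equation below becomes
# ```math
# \frac{dx_1}{dt} = t_f \, u, \qquad t \in [0,1],
# ```
# so the dynamics are integrated over the unit interval and physical time is
# recovered as ``t_f t``, which is why the results are rescaled by ``t_f``
# at the end.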
# Our specific problem is defined as the following:
# ```math
# \begin{aligned}
# &&\min_{u(t),t_f} t_f \\
# &&\text{s.t.} &&& \frac{dx_1}{dt}= t_f u \\
# &&&&&\frac{dx_2}{dt} = t_f \cos(x_1(t)) \\
# &&&&&\frac{dx_3}{dt} = t_f \sin(x_1(t)) \\
# &&&&&x(0) = [\pi/2, 4, 0] \\
# &&&&&x_2(t_f) = 0 \\
# &&&&&x_3(t_f) = 0 \\
# &&&&&-2 \leq u(t) \leq 2
# \end{aligned}
# ```

# # Model Definition

# First we must import ``InfiniteOpt`` and other packages.
using InfiniteOpt, Ipopt, Plots;
# Next we specify an array of initial conditions.
x0 = [π/2, 4, 0]; # x(0) for x1, x2, x3
# We initialize the infinite model and opt to use the Ipopt solver.
m = InfiniteModel(Ipopt.Optimizer);
# Recall ``t`` is specified on the normalized interval ``t \in [0,1]``:
@infinite_parameter(m, t in [0, 1], num_supports = 100)
# Now let's specify the decision variables. Notice that ``t_f`` is
# not a function of time; it is a single scalar decision variable.
@variable(m, x[1:3], Infinite(t))
@variable(m, -2 <= u <= 2, Infinite(t))
@variable(m, 0.1 <= tf);
# Specifying the objective to minimize final time:
@objective(m, Min, tf);
# Define the ODEs which serve as our system model.
@constraint(m, ∂(x[1], t) == tf * u)
@constraint(m, ∂(x[2], t) == tf * cos(x[1]))
@constraint(m, ∂(x[3], t) == tf * sin(x[1]));
# Set our initial and final conditions. The terminal equalities are imposed
# as relaxed inequalities.
@constraint(m, [i in 1:3], x[i](0) == x0[i])
@constraint(m, x[2](1) <= 0)
@constraint(m, x[3](1) <= 1e-1);
# # Problem Solution
# Optimize the model:
optimize!(m)
# Extract the results. Notice that we multiply by ``t_f``
# to scale our time.
ts = value(t)*value(tf)
u_opt = value(u)
x1_opt = value(x[1])
x2_opt = value(x[2])
x3_opt = value(x[3]);
# Plot the results
plot(ts, u_opt, label = "u(t)", linecolor = :black, linestyle = :dash)
plot!(ts, x1_opt, linecolor = :blue, linealpha = 0.4, label = "x1")
plot!(ts, x2_opt, linecolor = :green, linealpha = 0.4, label = "x2")
plot!(ts, x3_opt, linecolor = :red, linealpha = 0.4, label = "x3");
