Mathematical way of attaining a desired output from a dynamic system
Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized.[1] It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure.[2] Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy.[3] A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.[4][5]
Optimal control is an extension of the calculus of variations and a mathematical optimization method for deriving control policies.[6] The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to the calculus of variations by Edward J. McShane.[7] Optimal control can be viewed as a control strategy within control theory.[1]
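As a minimal concrete illustration of "finding a control that optimizes an objective", the linear-quadratic regulator (LQR) is the classic optimal control problem with a closed-form solution: for linear dynamics x' = Ax + Bu and a quadratic cost ∫(xᵀQx + uᵀRu) dt, the optimal control is the state feedback u = −Kx obtained from the algebraic Riccati equation. The sketch below (not part of the article; the double-integrator matrices are hypothetical illustration values) uses SciPy's Riccati solver:

```python
# Hedged sketch: LQR for a double integrator (position + velocity),
# a standard textbook instance of an optimal control problem.
import numpy as np
from scipy.linalg import solve_continuous_are

# Dynamics x' = A x + B u: state = [position, velocity], control = acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost: integral of x^T Q x + u^T R u over an infinite horizon.
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation for P,
# then the optimal feedback gain is K = R^{-1} B^T P, giving u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
print(K)  # optimal state-feedback gain
```

For these particular matrices the Riccati equation can also be solved by hand, giving the gain K = [1, √3]; the closed-loop system x' = (A − BK)x is then asymptotically stable.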
^ a b Ross, Isaac (2015). A Primer on Pontryagin's Principle in Optimal Control. San Francisco: Collegiate Publishers. ISBN 978-0-9843571-0-9. OCLC 625106088.
^ Luenberger, David G. (1979). "Optimal Control". Introduction to Dynamic Systems. New York: John Wiley & Sons. pp. 393–435. ISBN 0-471-02594-1.
^ Kamien, Morton I. (2013). Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management. Dover Publications. ISBN 978-1-306-39299-0. OCLC 869522905.
^ Ross, I. M.; Proulx, R. J.; Karpenko, M. (6 May 2020). "An Optimal Control Theory for the Traveling Salesman Problem and Its Variants". arXiv:2005.03186 [math.OC].
^ Ross, Isaac M.; Karpenko, Mark; Proulx, Ronald J. (1 January 2016). "A Nonsmooth Calculus for Solving Some Graph-Theoretic Control Problems". IFAC-PapersOnLine. 10th IFAC Symposium on Nonlinear Control Systems NOLCOS 2016. 49 (18): 462–467. doi:10.1016/j.ifacol.2016.10.208. ISSN 2405-8963.
^ Sargent, R. W. H. (2000). "Optimal Control". Journal of Computational and Applied Mathematics. 124 (1–2): 361–371. Bibcode:2000JCoAM.124..361S. doi:10.1016/S0377-0427(00)00418-0.
^ Bryson, A. E. (1996). "Optimal Control—1950 to 1985". IEEE Control Systems Magazine. 16 (3): 26–33. doi:10.1109/37.506395.