Page 288
Todorov, E. (2005), ‘Stochastic optimal control and estimation methods adapted to the noise characteristics of the sensorimotor system’, Neural Computation 17(5), 1084.
Todorov, E. (2007), Linearly-solvable Markov decision problems, in B. Scholkopf, J. Platt & T. Hoffman, eds, ‘Advances in Neural Information Processing Systems 19 (NIPS 2007)’, Cambridge, MA: MIT Press, Vancouver, BC.
Todorov, E. (2008), General duality between optimal control and estimation, in ‘Decision and Control, 2008. CDC 2008. 47th IEEE Conference on’, pp. 4286–4292.
Todorov, E. & Jordan, M. I. (2002), ‘Optimal feedback control as a theory of motor coordination’, Nature Neuroscience 5(11), 1226–1235. URL: http://dx.doi.org/10.1038/nn963
Toussaint, M. & Storkey, A. (2006), ‘Probabilistic inference for solving discrete and continuous state Markov decision processes’.
Valero-Cuevas, F. J. (2009), ‘A mathematical approach to the mechanical capabilities of limbs and fingers’, 629, 619–633.
Valero-Cuevas, F. J., Johanson, M. E. & Towles, J. D. (2003), ‘Towards a realistic biomechanical model of the thumb: the choice of kinematic description may be more critical than the solution method or the variability/uncertainty of musculoskeletal parameters’, J Biomech 36(7), 1019–1030.
Valero-Cuevas, F. J., Towles, J. D. & Hentz, V. R. (2000), ‘Quantification of fingertip force reduction in the forefinger following simulated paralysis of extensor and intrinsic muscles’, Journal of Biomechanics 33(12), 1601–1609.
Valero-Cuevas, F. J., Zajac, F. E. & Burgar, C. G. (1998), ‘Large index-fingertip forces are produced by subject-independent patterns of muscle excitation’, Journal of Biomechanics 31(8), 693–703.
Venkadesan, M. & Valero-Cuevas, F. (2008a), ‘Effects of time delays on controlling contact transitions’, Royal Society.
Venkadesan, M. & Valero-Cuevas, F. (2008b), ‘Neural control of motion-to-force transitions with the fingertip’, The Journal of Neuroscience 28(6), 1366–1373.
Vlassis, N., Toussaint, M., Kontes, G. & S., P. (2009), ‘Learning model-free control by a Monte-Carlo EM algorithm’, Autonomous Robots 27(2), 123–130.
Whittle, P. (1990), Risk Sensitive Optimal Control, Wiley.
Whittle, P. (1991), ‘Risk sensitive optimal linear quadratic Gaussian control’, Adv. Appl. Probability 13, 746–777.
Williams, R. J. (1992), ‘Simple statistical gradient-following algorithms for connectionist reinforcement learning’, Machine Learning 8, 229–256.
Object Description
Title | Iterative path integral stochastic optimal control: theory and applications to motor control |
Author | Theodorou, Evangelos A. |
Author email | etheodor@usc.edu; theo0027@umn.edu |
Degree | Doctor of Philosophy |
Document type | Dissertation |
Degree program | Computer Science |
School | Viterbi School of Engineering |
Date defended/completed | 2011-01-11 |
Date submitted | 2011 |
Restricted until | Unrestricted |
Date published | 2011-04-29 |
Advisor (committee chair) | Schaal, Stefan |
Advisor (committee member) | Valero-Cuevas, Francisco; Sukhatme, Gaurav S.; Todorov, Emo; Schweighofer, Nicolas |
Abstract | Motivated by the limitations of current optimal control and reinforcement learning methods in terms of their efficiency and scalability, this thesis proposes an iterative stochastic optimal control approach based on the generalized path integral formalism. More precisely, we suggest the use of the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value-function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equation, the policy improvement step can be transformed into the approximation of a path integral, which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model-free, depending on how the learning problem is structured. The new algorithm, Policy Improvement with Path Integrals (PI2), demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition as to why the slightly heuristically motivated probability matching approach can actually perform well. Applications to high-dimensional robotic systems are presented for a variety of tasks that require optimal planning and gain scheduling. In addition to the work on generalized path integral stochastic optimal control, this thesis extends model-based iterative optimal control algorithms to the stochastic setting. More precisely, we derive the Differential Dynamic Programming algorithm for stochastic systems with state- and control-multiplicative noise. Finally, in the last part of this thesis, model-based iterative optimal control methods are applied to biomechanical models of the index finger with the goal of finding the underlying tendon forces applied during tapping and flexing movements. |
Keyword | stochastic optimal control; reinforcement learning; robotics |
Language | English |
Part of collection | University of Southern California dissertations and theses |
Publisher (of the original version) | University of Southern California |
Place of publication (of the original version) | Los Angeles, California |
Publisher (of the digital version) | University of Southern California. Libraries |
Provenance | Electronically uploaded by the author |
Type | texts |
Legacy record ID | usctheses-m3804 |
Contributing entity | University of Southern California |
Rights | Theodorou, Evangelos A. |
Repository name | Libraries, University of Southern California |
Repository address | Los Angeles, California |
Repository email | cisadmin@lib.usc.edu |
Filename | etd-Theodorou-4581 |
Archival file | uscthesesreloadpub_Volume14/etd-Theodorou-4581.pdf |
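The abstract above describes PI2 (Policy Improvement with Path Integrals), whose policy update amounts to a probability-weighted average of exploration noise, with weights obtained from exponentiated path costs and the exploration noise as the only open parameter. The following is a minimal sketch of such an update in a simplified episodic (black-box) setting; the function names, the cost normalization, and the toy quadratic cost are illustrative assumptions, not code from the dissertation.

```python
import numpy as np

def pi2_update(theta, rollout_cost, n_rollouts=20, noise_std=0.1, lam=1.0):
    """One PI2-style parameter update: cost-weighted averaging of exploration
    noise, with weights given by a softmax over normalized rollout costs."""
    # Sample exploration noise around the current policy parameters.
    eps = noise_std * np.random.randn(n_rollouts, theta.size)
    # Evaluate the (user-supplied) trajectory cost of each noisy rollout.
    costs = np.array([rollout_cost(theta + e) for e in eps])
    # Normalize costs to [0, 1] and exponentiate; lam acts as the temperature
    # that the path integral formalism ties to the exploration noise level.
    s = (costs - costs.min()) / max(costs.max() - costs.min(), 1e-12)
    weights = np.exp(-s / lam)
    weights /= weights.sum()
    # Parameter update: probability-weighted average of the exploration noise.
    return theta + weights @ eps

# Illustrative usage on a toy quadratic cost (hypothetical target parameters).
if __name__ == "__main__":
    target = np.array([1.0, -2.0, 0.5])
    cost = lambda th: float(np.sum((th - target) ** 2))
    theta = np.zeros(3)
    for _ in range(200):
        theta = pi2_update(theta, cost)
    print(theta)  # should approach the target
```

The full algorithm in the dissertation performs this weighting per time step along each trajectory and projects updates onto the policy's basis functions; the episodic form above only illustrates the core update rule.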