At this point, the basic shape of the gain scheduling has been determined. In the second phase, PI2 fine-tunes the gains and lowers them as much as the task permits.

6.7 Manipulation task

6.7.1 Task 2: Pushing open a door with the CBi humanoid

In this task, the simulated CBi humanoid robot (Cheng, Hyon, Morimoto, Ude, Hale, Colvin, Scroggin & Jacobsen 2007) is required to open a door. The robot is accurately simulated with the SL software (Schaal 2009). For this task, we not only learn the gain schedules, but also simultaneously improve the planned joint trajectories with PI2. For the initial trajectory, we fix the base of the robot and consider only the 7 degrees of freedom of the left arm. The initial trajectory before learning is a minimum-jerk trajectory in joint space. In the initial state, the upper arm is kept parallel to the body and the lower arm points forward. The target state is depicted in Figure 6.12. With this task, we demonstrate that our approach can be applied not only to imitation of observed behavior, but also to manually specified trajectories, which are fine-tuned along with the gain schedules. The gains of the 7 joints are initialized to 1/10th of their default values. This leads to extremely compliant behavior, whereby the robot cannot exert enough force to overcome the static friction of the door and thus cannot move it. The minimum gain for all joints was set to 5. Optimizing both the joint trajectories and the gains yields a 14-dimensional learning problem.
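As a minimal Python sketch (not the thesis's actual SL implementation) of the two ingredients just described: a minimum-jerk trajectory in joint space, and the compliant gain initialization. The default gain values and the start/target postures below are assumptions chosen for illustration; only the 1/10th scaling and the minimum gain of 5 come from the text.

    import numpy as np

    def min_jerk_trajectory(q0, qT, T, dt):
        """Minimum-jerk interpolation in joint space from q0 to qT over duration T."""
        t = np.linspace(0.0, 1.0, int(round(T / dt)) + 1)  # normalized time in [0, 1]
        s = 10 * t**3 - 15 * t**4 + 6 * t**5               # minimum-jerk position profile
        return q0[None, :] + s[:, None] * (qT - q0)[None, :]

    # Hypothetical start and target postures for the 7-DOF left arm (radians).
    q_start  = np.zeros(7)
    q_target = np.array([0.4, -0.3, 0.2, 1.1, 0.0, -0.5, 0.1])
    trajectory = min_jerk_trajectory(q_start, q_target, T=2.0, dt=0.01)

    # Gain initialization as described in the text: 1/10th of (assumed) default
    # PD gains; the floor of 5 is the minimum gain the joints may not go below
    # as PI2 lowers the gains during learning.
    default_gains = np.array([400.0, 400.0, 300.0, 300.0, 100.0, 100.0, 100.0])  # assumed
    initial_gains = np.maximum(default_gains / 10.0, 5.0)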
Object Description
Title | Iterative path integral stochastic optimal control: theory and applications to motor control |
Author | Theodorou, Evangelos A. |
Author email | etheodor@usc.edu; theo0027@umn.edu |
Degree | Doctor of Philosophy |
Document type | Dissertation |
Degree program | Computer Science |
School | Viterbi School of Engineering |
Date defended/completed | 2011-01-11 |
Date submitted | 2011 |
Restricted until | Unrestricted |
Date published | 2011-04-29 |
Advisor (committee chair) | Schaal, Stefan |
Advisor (committee member) | Valero-Cuevas, Francisco; Sukhatme, Gaurav S.; Todorov, Emo; Schweighofer, Nicolas |
Abstract | Motivated by the limitations of current optimal control and reinforcement learning methods in terms of their efficiency and scalability, this thesis proposes an iterative stochastic optimal control approach based on the generalized path integral formalism. More precisely, we suggest the use of the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equation, policy improvements can be transformed into an approximation problem of a path integral which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model-free, depending on how the learning problem is structured. The new algorithm, Policy Improvement with Path Integrals (PI2), demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition as to why the slightly heuristically motivated probability matching approach can actually perform well. Applications to high-dimensional robotic systems are presented for a variety of tasks that require optimal planning and gain scheduling.; In addition to the work on generalized path integral stochastic optimal control, in this thesis we extend model-based iterative optimal control algorithms to the stochastic setting. More precisely, we derive the Differential Dynamic Programming algorithm for stochastic systems with state- and control-multiplicative noise. Finally, in the last part of this thesis, model-based iterative optimal control methods are applied to biomechanical models of the index finger with the goal of finding the underlying tendon forces applied for the movements of tapping and flexing. |
Keyword | stochastic optimal control; reinforcement learning; robotics |
Language | English |
Part of collection | University of Southern California dissertations and theses |
Publisher (of the original version) | University of Southern California |
Place of publication (of the original version) | Los Angeles, California |
Publisher (of the digital version) | University of Southern California. Libraries |
Provenance | Electronically uploaded by the author |
Type | texts |
Legacy record ID | usctheses-m3804 |
Contributing entity | University of Southern California |
Rights | Theodorou, Evangelos A. |
Repository name | Libraries, University of Southern California |
Repository address | Los Angeles, California |
Repository email | cisadmin@lib.usc.edu |
Filename | etd-Theodorou-4581 |
Archival file | uscthesesreloadpub_Volume14/etd-Theodorou-4581.pdf |
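The abstract above states that PI2 reduces policy improvement to a path-integral approximation with no open algorithmic parameters other than the exploration noise. As a hedged illustration of that core idea, the sketch below shows the cost-weighted averaging step over one batch of rollouts. It is deliberately simplified: the published PI2 applies this weighting per time step and projects updates through the policy's basis functions, details omitted here, and the temperature lam stands in for the exploration-noise scale.

    import numpy as np

    def pi2_update(theta, epsilons, costs, lam=1.0):
        """Simplified PI2-style update: average the exploration noise of K
        rollouts, weighted by exponentiated (negated, shifted) rollout costs.

        theta    : (d,)   current policy parameters
        epsilons : (K, d) exploration noise added to theta in each rollout
        costs    : (K,)   total trajectory cost of each rollout
        lam      : temperature tied to the exploration-noise magnitude
        """
        S = costs - costs.min()          # shift costs for numerical stability
        w = np.exp(-S / lam)             # lower cost -> higher weight
        P = w / w.sum()                  # normalized probabilities over rollouts
        return theta + P @ epsilons      # probability-weighted noise average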