A synthesis of reinforcement learning and robust control theory
dc.contributor.author | Kretchmar, R. Matthew, author | |
dc.contributor.author | Anderson, Charles, advisor | |
dc.contributor.author | Howe, Adele E., committee member | |
dc.contributor.author | Whitley, L. Darrell, committee member | |
dc.contributor.author | Young, Peter M., committee member | |
dc.contributor.author | Hittle, Douglas C., committee member | |
dc.date.accessioned | 2007-01-03T04:43:07Z | |
dc.date.available | 2007-01-03T04:43:07Z | |
dc.date.issued | 2000 | |
dc.description | Department Head: Stephen B. Seidman. | |
dc.description.abstract | The pursuit of control algorithms with improved performance drives the entire control research community as well as large parts of the mathematics, engineering, and artificial intelligence research communities. A fundamental limitation on achieving control performance is the conflicting requirement of maintaining system stability. In general, the more aggressive the controller, the better the control performance, but also the closer the system is to instability. Robust control is a collection of theories, techniques, and tools that forms one of the leading-edge approaches to control. Most controllers are designed not on the physical plant to be controlled, but on a mathematical model of the plant; hence, these controllers often do not perform well on the physical plant and are sometimes unstable. Robust control overcomes this problem by adding uncertainty to the mathematical model. The result is a more general, less aggressive controller which performs well on both the model and the physical plant. However, the robust control method also sacrifices some control performance in order to achieve its guarantees of stability. Reinforcement-learning-based neural networks offer some distinct advantages for improving control performance. Their nonlinearity enables the neural network to implement a wider range of control functions, and their adaptability permits them to improve control performance via on-line, trial-and-error learning. However, neuro-control is typically plagued by a lack of stability guarantees. Even momentary instability cannot be tolerated in most physical plants, and thus the threat of instability prohibits the application of neuro-control in many situations. In this dissertation, we develop a stable neuro-control scheme by synthesizing the two fields of reinforcement learning and robust control theory. We provide a learning system with many of the advantages of neuro-control. Using functional uncertainty to represent the nonlinear and time-varying components of the neural networks, we apply robust control techniques to guarantee the stability of our neuro-controller. Our scheme provides stable control not only for a specific fixed-weight neural network, but also for a neuro-controller in which the weights are changing during learning. Furthermore, we apply our stable neuro-controller to several control tasks to demonstrate that the theoretical stability guarantee is readily applicable to real-life control situations. We also discuss several problems we encountered and identify potential avenues for future research. | |
dc.format.medium | doctoral dissertations | |
dc.identifier | 2000_summer_Kretchmar_COMS.pdf | |
dc.identifier | ETDF2000100002COMS | |
dc.identifier.uri | http://hdl.handle.net/10217/26305 | |
dc.language | English | |
dc.language.iso | eng | |
dc.publisher | Colorado State University. Libraries | |
dc.relation | Catalog record number (MMS ID): 991009402799703361 | |
dc.relation | Q325.6.K74 2000 | |
dc.relation.ispartof | 2000-2019 | |
dc.rights | Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright. | |
dc.subject | robust control theory | |
dc.subject | neuro-control scheme | |
dc.subject | neuro-controller | |
dc.subject | neural networks | |
dc.subject | Reinforcement learning | |
dc.subject | Control theory | |
dc.title | A synthesis of reinforcement learning and robust control theory | |
dc.type | Text | |
dcterms.rights.dpla | This Item is protected by copyright and/or related rights (https://rightsstatements.org/vocab/InC/1.0/). You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s). | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | Colorado State University | |
thesis.degree.level | Doctoral | |
thesis.degree.name | Doctor of Philosophy (Ph.D.) |
Files
Original bundle
- Name: 2000_summer_Kretchmar_COMS.pdf
- Size: 1.12 MB
- Format: Adobe Portable Document Format