Creators: Pieter Abbeel, Adam Coates, Andrew Y. Ng, Stanford University.
Autonomous helicopter flight is a challenging control problem with high-dimensional, asymmetric, noisy, nonlinear, non-minimum-phase dynamics. Although helicopters are significantly harder to control than fixed-wing aircraft, they are uniquely suited to many applications requiring either low-speed flight or stable hovering. The control of autonomous helicopters thus provides an important and challenging testbed for learning and control algorithms.
Based on a simulator created by Andrew Ng's group at Stanford, the competition environment simulates an XCell Tempest helicopter in the flight regime close to hover. The agent's objective is to hover the helicopter by manipulating four continuous control inputs based on a 12-dimensional state space. A few pictures of the helicopter have been included with the simulator.
In the last few years, considerable progress has been made in finding good controllers for helicopters [Abbeel et al., 2006]. Other recent accounts of successful autonomous helicopter flight are given in [Bagnell and Schneider, 2001], [Gavrilets et al., 2004], [La Civita et al., 2006], [Ng et al., 2004a], [Ng et al., 2004b], [Roberts et al., 2003], [Saripalli et al., 2003], and [Abbeel et al., 2008].
The helicopter's full state is given by its velocity, position, angular rate and orientation.
Observation Space: 12-dimensional, continuous-valued
- forward velocity
- sideways velocity (to the right)
- downward velocity
- x-position error (helicopter x-coordinate minus desired x-coordinate); the helicopter's x-axis points forward
- y-position error (helicopter y-coordinate minus desired y-coordinate); the helicopter's y-axis points to the right
- z-position error (helicopter z-coordinate minus desired z-coordinate); the helicopter's z-axis points down
- angular rate around helicopter's x axis
- angular rate around helicopter's y axis
- angular rate around helicopter's z axis
- quaternion x entry
- quaternion y entry
- quaternion z entry
Action Space: 4-dimensional, continuous-valued
- longitudinal (front-back) cyclic pitch
- latitudinal (left-right) cyclic pitch
- main rotor collective pitch
- tail rotor collective pitch
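As a concrete illustration of the observation and action layouts above, here is a minimal sketch of an agent-side policy. The index constants, function name, and feedback gains are all hypothetical (the actual interface is defined by the competition software); only the ordering and meaning of the 12 observation entries and 4 action entries come from the lists above.

```python
import numpy as np

# Hypothetical index constants for the 12-dimensional observation,
# in the order listed above.
U, V, W = 0, 1, 2               # velocities: forward, rightward, downward
X_ERR, Y_ERR, Z_ERR = 3, 4, 5   # position minus desired position
P, Q, R = 6, 7, 8               # angular rates about body x, y, z axes
QX, QY, QZ = 9, 10, 11          # quaternion x, y, z entries

def hover_action(obs):
    """Toy proportional hover policy; the gains are made up for illustration.

    Returns the 4-dimensional action in the order listed above:
    [longitudinal cyclic, latitudinal cyclic, main collective, tail collective].
    """
    obs = np.asarray(obs, dtype=float)
    longitudinal = -0.1 * obs[X_ERR] - 0.05 * obs[U] - 0.05 * obs[Q]
    latitudinal  = -0.1 * obs[Y_ERR] - 0.05 * obs[V] - 0.05 * obs[P]
    collective   = -0.1 * obs[Z_ERR] - 0.05 * obs[W]
    tail         = -0.1 * obs[R]
    return np.array([longitudinal, latitudinal, collective, tail])
```

At the hover target with zero velocity and zero angular rate, this sketch outputs all-zero controls; a real controller would of course need a trim term and tuned gains.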
Rewards: a function of the 12-dimensional observation
The simulator is set up to run for 6000 timesteps, with each simulation step lasting 0.1 seconds, giving runs of 10 minutes (although the simulator runs faster than real time). If the simulator enters a terminal state before timestep 6000, a large negative reward is given, equal to the most negative reward achievable over the remaining time.
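The episode arithmetic above can be checked directly. The function and its `min_step_reward` argument are placeholders for illustration; the actual per-step reward bound is determined by the competition's reward function.

```python
EPISODE_STEPS = 6000            # simulator horizon, from the description above
DT = 0.1                        # seconds per simulation step

# 6000 steps * 0.1 s/step = 600 s = 10 minutes of simulated flight.
episode_seconds = EPISODE_STEPS * DT

def terminal_penalty(crash_step, min_step_reward):
    """Penalty on entering a terminal state at `crash_step`: the most
    negative reward achievable for every remaining timestep.
    `min_step_reward` is a placeholder for the reward's lower bound."""
    remaining = EPISODE_STEPS - crash_step
    return remaining * min_step_reward
```

For example, crashing at timestep 5000 with a per-step reward floor of -1.0 would incur an additional penalty of -1000.0 for the 1000 unflown steps.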
Note: the competition software will provide your agent with a task specification string that describes the basic inputs and outputs of the particular problem instance your agent is facing. For the competition, the ranges provided in the task specification may not be tight; they give only a rough approximation of the actual observation and action ranges.
- [Abbeel et al., 2008] Pieter Abbeel, Adam Coates, Timothy Hunter and Andrew Y. Ng. Autonomous Autorotation of an RC Helicopter. In 11th International Symposium on Experimental Robotics (ISER), 2008.
- [Abbeel et al., 2006] Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y. Ng. An Application of Reinforcement Learning to Aerobatic Helicopter Flight. In NIPS 2006.
- [Bagnell and Schneider, 2001] J. Bagnell and J. Schneider. Autonomous helicopter control using reinforcement learning policy search methods. In IEEE International Conference on Robotics and Automation, 2001.
- [Gavrilets et al., 2004] V. Gavrilets, B. Mettler, and E. Feron. Human-inspired control logic for automated maneuvering of miniature helicopter. Journal of Guidance, Control, and Dynamics, 27(5):752-759, 2004.
- [La Civita et al., 2006] M. La Civita, G. Papageorgiou, W. C. Messner, and T. Kanade. Design and flight testing of a high-bandwidth H-infinity loop shaping controller for a robotic helicopter. Journal of Guidance, Control, and Dynamics, 29(2):485-494, 2006.
- [Ng et al., 2004a] A. Y. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang. Autonomous inverted helicopter flight via reinforcement learning. In 11th International Symposium on Experimental Robotics (ISER), 2004.
- [Ng et al., 2004b] Andrew Y. Ng, H. Jin Kim, Michael Jordan, and Shankar Sastry. Autonomous helicopter flight via reinforcement learning. In NIPS 2004.
- [Roberts et al., 2003] Jonathan M. Roberts, Peter I. Corke, and Gregg Buskey. Low-cost flight control system for a small autonomous helicopter. In IEEE Intl Conf. on Robotics and Automation, 2003.
- [Saripalli et al., 2003] S. Saripalli, J. F. Montgomery, and G. S. Sukhatme. Visually-guided landing of an unmanned aerial vehicle. IEEE Transactions on Robotics and Autonomous Systems, 2003.