Helicopter Hovering

Creators: Pieter Abbeel, Adam Coates, Andrew Y. Ng, Stanford University.

Autonomous helicopter flight represents a challenging control problem with high-dimensional, asymmetric, noisy, nonlinear, non-minimum-phase dynamics. Though helicopters are significantly harder to control than fixed-wing aircraft, they are uniquely suited to many applications requiring either low-speed flight or stable hovering. The control of autonomous helicopters thus provides an important and challenging testbed for learning and control algorithms.

The competition environment is based on a simulator created by Andrew Ng's group at Stanford and models an XCell Tempest helicopter in the flight regime close to hover. The agent's objective is to keep the helicopter hovering by manipulating four continuous control inputs, given a 12-dimensional state. A few pictures of the helicopter are included with the simulator.

In the last few years, considerable progress has been made in finding good controllers for helicopters [Abbeel et al., 2006]. Other recent accounts of successful autonomous helicopter flight are given in [Bagnell and Schneider, 2001], [Gavrilets et al., 2004], [La Civita et al., 2006], [Ng et al., 2004a], [Ng et al., 2004b], [Roberts et al., 2003], [Saripalli et al., 2003], and [Abbeel et al., 2008].

 

Technical Details


The helicopter's full state is given by its velocity, position, angular rate and orientation.

Observation Space: 12-dimensional, continuous-valued

  1. forward velocity
  2. sideways velocity (to the right)
  3. downward velocity
  4. helicopter x-coordinate minus desired x-coordinate (the helicopter's x-axis points forward)
  5. helicopter y-coordinate minus desired y-coordinate (the helicopter's y-axis points to the right)
  6. helicopter z-coordinate minus desired z-coordinate (the helicopter's z-axis points down)
  7. angular rate around helicopter's x axis
  8. angular rate around helicopter's y axis
  9. angular rate around helicopter's z axis
  10. quaternion x entry
  11. quaternion y entry
  12. quaternion z entry
Action Space: 4-dimensional, continuous-valued
  1. longitudinal (front-back) cyclic pitch
  2. lateral (left-right) cyclic pitch
  3. main rotor collective pitch
  4. tail rotor collective pitch
Rewards: a function of the 12-dimensional observation; a sketch of how an agent might use the observation and action indices above follows.
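
For concreteness, the following minimal sketch (in Python, using NumPy) names the observation indices listed above and plugs them into a naive proportional hover controller. The index names, gains, signs, and the assumption that zero action is near trim are illustrative assumptions, not values taken from the competition software.

    import numpy as np

    # Indices into the 12-dimensional observation, in the order listed above.
    U, V, W = 0, 1, 2               # forward, sideways, downward velocity
    X_ERR, Y_ERR, Z_ERR = 3, 4, 5   # position minus desired position (body axes)
    P, Q, R = 6, 7, 8               # angular rates about the body x, y, z axes
    QX, QY, QZ = 9, 10, 11          # x, y, z entries of the orientation quaternion

    def naive_hover_controller(obs, k_pos=0.1, k_vel=0.2, k_rate=0.5):
        """Toy proportional controller: push back against position error,
        velocity, and angular rate. Gains and signs are illustrative only."""
        longitudinal_cyclic = -k_pos * obs[X_ERR] - k_vel * obs[U] - k_rate * obs[Q]
        lateral_cyclic      = -k_pos * obs[Y_ERR] - k_vel * obs[V] - k_rate * obs[P]
        main_collective     = -k_pos * obs[Z_ERR] - k_vel * obs[W]
        tail_collective     = -k_rate * obs[R]
        return np.array([longitudinal_cyclic, lateral_cyclic,
                         main_collective, tail_collective])

A controller of this form would at best be a weak baseline; the point is only to show how the 12 observation entries and the 4 action entries line up.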
End conditions

The simulator runs for 6000 timesteps, and each simulation step is 0.1 seconds, giving runs of 10 minutes of simulated time (although the simulator runs faster than real time). If the simulator enters a terminal state before 6000 timesteps, a large negative reward is given, corresponding to the most negative reward achievable for the remaining time.
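
As a back-of-the-envelope check of the timing and the early-termination penalty, a minimal sketch; the worst per-step reward is a placeholder, since the exact reward function is not specified on this page:

    STEPS_PER_EPISODE = 6000   # from the description above
    DT = 0.1                   # seconds of simulated time per step

    episode_seconds = STEPS_PER_EPISODE * DT   # 600 s, i.e. 10 minutes of simulated flight

    def early_termination_penalty(crash_step, worst_step_reward):
        """Illustrative: the most negative reward achievable for every timestep
        remaining after entering a terminal state at crash_step."""
        remaining_steps = STEPS_PER_EPISODE - crash_step
        return remaining_steps * worst_step_reward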

Note: the competition software will provide your agent with a task specification string that describes the basic inputs and outputs of the particular problem instance your agent is facing. For the competition, the ranges provided in the task specification may not be tight; they provide a rough approximation of the actual observation and action ranges.
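
Because the reported ranges are only approximate, a defensive agent might still clip its outputs to them rather than assume they are exact. The helper below is a hedged sketch: the (4, 2) array of (low, high) pairs is an assumed layout for illustration, not the format produced by the competition software's task specification string.

    import numpy as np

    def clip_to_reported_ranges(action, action_ranges):
        """Clip a 4-dimensional action into the approximate ranges reported by
        the task specification. `action_ranges` is assumed to be a (4, 2) array
        of (low, high) pairs obtained from whatever parser the agent uses."""
        action_ranges = np.asarray(action_ranges, dtype=float)
        low, high = action_ranges[:, 0], action_ranges[:, 1]
        return np.clip(action, low, high)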

References

  • [Abbeel et al., 2008] Pieter Abbeel, Adam Coates, Timothy Hunter and Andrew Y. Ng. Autonomous Autorotation of an RC Helicopter. In 11th International Symposium on Experimental Robotics (ISER), 2008.
  • [Abbeel et al., 2006] Pieter Abbeel, Adam Coates, Morgan Quigley, Andrew Ng. An Application of Reinforcement Learning to Aerobatic Helicopter Flight. In NIPS 2006.
  • [Bagnell and Schneider, 2001] J. Bagnell and J. Schneider. Autonomous helicopter control using reinforcement learning policy search methods. In IEEE International Conference on Robotics and Automation, 2001.
  • [La Civita et al., 2006] M. La Civita, G. Papageorgiou, W. C. Messner, and T. Kanade. Design and flight testing of a high-bandwidth H∞ loop-shaping controller for a robotic helicopter. Journal of Guidance, Control, and Dynamics, 29(2):485-494, 2006.
  • [Gavrilets et al., 2004] V. Gavrilets, B. Mettler, and E. Feron. Human-inspired control logic for automated maneuvering of miniature helicopter. Journal of Guidance, Control, and Dynamics, 27(5):752-759, 2004.
  • [Ng et al., 2004a] A. Y. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang. Autonomous inverted helicopter flight via reinforcement learning. In 11th International Symposium on Experimental Robotics (ISER), 2004.
  • [Ng et al., 2004b] Andrew Y. Ng, H. Jin Kim, Michael Jordan, and Shankar Sastry. Autonomous helicopter flight via reinforcement learning. In NIPS 2004.
  • [Roberts et al., 2003] Jonathan M. Roberts, Peter I. Corke, and Gregg Buskey. Low-cost flight control system for a small autonomous helicopter. In IEEE Intl Conf. on Robotics and Automation, 2003.
  • [Saripalli et al., 2003] S. Saripalli, J. F. Montgomery, and G. S. Sukhatme. Visually-guided landing of an unmanned aerial vehicle. IEEE Transactions on Robotics and Automation, 2003.