Duration: 30 mins
Stall flutter suppression with active camber morphing based on reinforcement learning
Jinying Li, Yuting Dai, Chao Yang
Session: Data-driven methods 2
Session starts: Tuesday 18 June, 13:30
Presentation starts: 13:30
Room: Room 1.3
Abstract:
Stall flutter has long posed difficulties of high dimensionality, nonlinearity, and unsteadiness, which make it hard to predict and control. In recent years, the rise of data-driven methods has brought an inspiring perspective on this topic. Among the thriving data-driven methods, reinforcement learning (RL) shows outstanding capability in complex model prediction, directness, and generalization. This study investigates the application of RL to stall flutter suppression. The geometric model is an NACA0012 airfoil with active trailing-edge morphing. First, an offline, rapidly responsive stall flutter environment is constructed from differential equations, where the aerodynamic force is predicted with reduced-order models. A double Q-network (DQN) algorithm is adapted to train the control agent in the proposed offline environment. The agent has five optional actions: large downward morph, small downward morph, stay still, small upward morph, and large upward morph. The reward function is designed as a linearly combined penalty on pitching angle and angular velocity, a large bonus for complete suppression, and a large penalty for over-limit morphing. The trained agent achieves rapid and complete stall flutter suppression in the offline environment simulation. Tests are further conducted in online, high-fidelity, fluid-structure interaction computations, where the trained agent also achieves significant suppression. Additionally, the trained agent exhibits broad generalization in the high-fidelity tests, effectively suppressing stall flutter over various ranges of inflow airspeed and computational timestep size.
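As a rough illustration of the control setup described above, the sketch below encodes the five-action set and the reward structure (linear penalty on pitching angle and angular velocity, bonus for complete suppression, penalty for over-limit morphing). All numerical values, such as morph increments, weights, limits, and tolerances, are assumptions for illustration only and are not given in the abstract.

```python
import numpy as np

# Hypothetical discrete action set: trailing-edge camber increments (deg).
# The abstract only specifies "large/small downward, stay still,
# small/large upward"; the magnitudes here are illustrative.
ACTIONS = np.array([-2.0, -0.5, 0.0, +0.5, +2.0])

def reward(theta, theta_dot, morph_angle,
           w_theta=1.0, w_rate=0.1,
           morph_limit=10.0, suppress_tol=1e-2,
           bonus=100.0, penalty=100.0):
    """Sketch of the reward described in the abstract.

    theta       : pitching angle (rad)
    theta_dot   : pitching angular velocity (rad/s)
    morph_angle : current trailing-edge morph (deg)
    Weights, limits, and tolerances are assumed values.
    """
    # Linearly combined penalty on pitching angle and angular velocity.
    r = -(w_theta * abs(theta) + w_rate * abs(theta_dot))
    # Large bonus when the oscillation is (nearly) completely suppressed.
    if abs(theta) < suppress_tol and abs(theta_dot) < suppress_tol:
        r += bonus
    # Large penalty on over-limit morphing.
    if abs(morph_angle) > morph_limit:
        r -= penalty
    return r
```

In a DQN setting, the agent would observe the pitching state at each timestep, pick one of the five camber increments, and be trained against this reward in the offline reduced-order environment before being evaluated in the high-fidelity fluid-structure interaction computation.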