Automatica 45 (2009) 1668–1678
On-line robust trajectory generation on approach and landing for reusable launch vehicles

Zhesheng Jiang, Raúl Ordóñez

Department of Electrical and Computer Engineering, University of Dayton, Dayton, OH 45469-0232, USA
Article history: Received 10 September 2007; received in revised form 9 November 2008; accepted 19 March 2009; available online 29 April 2009.

Keywords: Reusable launch vehicle; Motion primitive; Neighboring optimal control; Adaptive techniques; Robustification; Neighboring feasible trajectory existence theorem; Trajectory robustness theorem
Abstract

A major objective of next generation reusable launch vehicle (RLV) programs includes significant improvements in vehicle safety, reliability, and operational costs. In this paper, novel approaches that can deliver an RLV to its landing site safely and reliably are proposed. Trajectory generation on approach/landing (A&L) for RLVs using motion primitives (MPs) and neighboring optimal control (NOC) is first discussed. In this stage, the proposed trajectory generation approach is based on an MP scheme that consists of trims and maneuvers. From an initial point to a given touchdown point, all feasible trajectories that satisfy certain constraints are generated and saved into a trajectory database. An optimal trajectory can then be found off-line by using Dijkstra's algorithm. If a vehicle failure occurs, perturbations are imposed on the initial states of the off-line optimal trajectory, and it is reshaped into a neighboring feasible trajectory on-line by using an NOC approach. For the case where the perturbations are small enough, a neighboring feasible trajectory existence theorem (NFTET) is investigated and its proof is provided as well. The approach given in the NFTET shows that a vehicle with stuck effectors can be recovered from failures in real time. However, when the perturbations become large, for example, in severe failure scenarios, the NFTET is no longer applicable and often the vehicle cannot be recovered from such failures. A new method is then used to deal with this situation. The NFTET is extended to the trajectory robustness theorem (TRT). According to the TRT and its proof, a robustifying term is introduced to compensate for the effects of the linear approximation in the NFTET. The upper bounds with respect to input deviation are adaptively adjusted to eliminate their uncertainty. In order to obtain the best performance, σ-modification is employed. The simulation results verify the excellent robust performance of this method.
This work was supported by AFRL/AFOSR grant No. F33615-01-2-3154; in addition, the work was supported by DAGSI and the University of Dayton Graduate School Dean's Summer Fellowship. The material in this paper was partially presented at the 31st Annual Dayton–Cincinnati Aerospace Science Symposium, the 2007 American Control Conference, and the 46th IEEE Conference on Decision and Control. This paper was recommended for publication in revised form by Associate Editor Shuzhi Sam Ge under the direction of Editor Miroslav Krstic. Corresponding author: Z. Jiang ([email protected], [email protected]); R. Ordóñez ([email protected]).

1. Introduction

The increased demand for commercial and military utilization of space is a substantial driver for the development of new technologies to improve space vehicle economics. Reusable launch vehicles (RLVs) have the potential to increase space launch efficiencies far beyond those achieved by current systems. Second generation (and future generation) RLVs may eventually take the place of the space shuttles, but not before scientists perfect
the technologies that make RLVs safer, more reliable, and less expensive than the shuttle fleet. To achieve this goal, a variety of RLV trajectory design approaches have recently been proposed. Generally, an RLV mission is composed of four major flight phases: ascent, re-entry, terminal area energy management (TAEM), and approach and landing (A&L). Some results on trajectory generation in ascent and re-entry phases were presented in Doman (2004). Some methods of trajectory planning for TAEM were presented in Burchett (2004), Grantham (2003) and Hull, Gandhi, and Schierman (2005). A few trajectory design approaches at the A&L phase were discussed in Barton and Tragesser (1999), Cox, Stepniewski, Jorgensen, Saeks, and Lewis (1999), Hull et al. (2005), Kluever (2004), Schierman, Hull, and Ward (2002), Schierman, Hull, and Ward (2003) and Schierman et al. (2004). A&L is a critical flight phase that brings the unpowered vehicle from the end of the TAEM phase to runway touchdown. Hull et al. (2005) addressed the on-line trajectory reshaping problem for RLVs during the TAEM and A&L phases of re-entry flight. In Kluever (2004) a guidance scheme that employs a trajectoryplanning algorithm was developed for the A&L phase of an unpowered RLV. The trajectory planning scheme computed a
reference flight profile by piecing together several flight segments that were defined by a small set of geometric parameters. A feasible reference profile that brings the vehicle from its current state to a desired landing condition was obtained by iterating on a single geometric parameter. The flight-path angle at the start of the flare was selected as the iteration variable. Open- and closed-loop guidance commands were readily available once the reference trajectory was obtained. In Schierman et al. (2002) the authors proposed an optimum-path-to-go (OPTG) algorithm, which was a general framework to perform the on-line trajectory-command generation task. The methodology was applied to the X-33 RLV for the A&L, and a Monte Carlo simulation analysis was used to demonstrate the benefits of the approach. However, this approach did not effectively adjust to drastically altered vehicle dynamics caused by more serious problems, such as locked control surfaces or vehicle damage. In Schierman et al. (2003), an improvement on the OPTG algorithm was presented. The new OPTG approach eliminated these problems by referencing a database of precomputed, verified solutions. During flight, the OPTG routine chose a feasible trajectory for the failed vehicle while an adaptive guidance system made corrections for errors or disturbances. Schierman et al. (2004) presented a fault-tolerant autonomous landing system for a re-entry RLV with flight-test results of the new OPTG methodology. The results indicated that for more severe, multiple control failures, control reconfiguration, guidance adaptation, and trajectory reshaping were all needed to recover the mission. An autolanding trajectory design for the X-34 Mach 8 vehicle was presented in Barton and Tragesser (1999). The techniques facilitate rapid design of reference trajectories. The X-34 trajectory, based on the shuttle approach and landing design, proceeded from a steep glideslope through a circular flare and an exponential flare to a shallow glideslope. In Cox et al. (1999), a neural network autolander for the X-33 prototype RLV was developed. The autolander was based on a new linear quadratic adaptive critic algorithm. It was implemented by an array of Functional Link Neural Networks and was trained by a modified Levenberg–Marquardt method.

The goal of this paper is to develop new approaches that can deliver an RLV to its landing site safely and reliably, recover the vehicle from some failures, and avoid mission abort as much as possible. A motion primitive (MP) scheme is proposed to generate A&L trajectories off-line under the nominal condition. A feasible trajectory database under nominal conditions for the RLV at the A&L phase is constructed. When the vehicle experiences a failure, neighboring optimal control (NOC) is then used to generate a neighboring feasible trajectory in real time to recover it from the failure. Failures considered in this paper correspond to the case of a stuck effector. When the perturbations applied during NOC implementation are small enough, and hence a linear approximation is applicable, a neighboring feasible trajectory existence theorem (NFTET) is investigated and its proof is provided as well. When the perturbations become large, for example, in severe failure scenarios, the NFTET is no longer applicable and often the vehicle cannot be recovered from such failures. A new method is used to deal with this situation and the robustness of the RLV system is then enhanced. The NFTET is extended to the trajectory robustness theorem (TRT); its proof will also be provided later.

This approach has at least three advantages over existing trajectory planning methods. First, it uses MPs to generate an off-line feasible trajectory database, which makes it easy to create a new database for a new initial point. Second, it finds an off-line optimal trajectory using Dijkstra's algorithm instead of the traditional optimal control method, so that NOC can then find an on-line neighboring feasible trajectory. Third, a novel robustification approach is introduced to enhance the robustness of the system, and hence an RLV with a severe failure might be recovered in real time.
Section 2 introduces a motion primitive (MP) scheme and shows how to generate feasible trajectories in the A&L phase based on the MP scheme. Section 3 shows that trajectory retargeting can recover the vehicle from some failures in real time by using an NOC approach. Section 4 discusses how to enhance the robustness of NOC using adaptive bounds with respect to input deviation. Some results and discussions are given in Section 5. The conclusions are drawn in Section 6.

2. Trajectory generation on approach and landing using motion primitives

This section will first introduce the point-mass equations of motion for the A&L problem of an RLV and then briefly describe the MP scheme. After that, how to generate feasible trajectories using an MP scheme is discussed (Jiang & Ordóñez, 2007; Jiang, Ordóñez, Bolender, & Doman, 2006).

2.1. Point-mass equations of motion

For an unpowered RLV during A&L, the discussion is restricted to flight in the longitudinal plane. The gliding flight in a vertical plane of symmetry is then defined by the following point-mass equations (Jiang et al., 2006; Kluever, 2004; Miele, 1962):
V̇ = (−D/W − sin γ) g,   (1)
γ̇ = (L/W − cos γ) g/V,   (2)
ḣ = V sin γ,   (3)
ẋ = V cos γ,   (4)
where V is the vehicle velocity, γ is the flight-path angle, h is the altitude, x is the downrange, g is the gravitational acceleration, W is the vehicle weight, L = q̄SCL is the lift, and D = q̄SCD is the drag, where q̄ is the dynamic pressure, S is the aerodynamic reference area of the vehicle, and CL and CD are the lift and drag coefficients, respectively. The dynamic pressure is q̄ = ρV²/2, where the air density ρ at altitude h is approximated using an exponential model ρ = ρ0 e^(−βh), where ρ0 is the air density at sea level and β is the atmospheric density scale. Generally, the lift coefficient CL is a linear function of the angle of attack α, and the drag coefficient CD is a quadratic function of CL, namely, CL = CL0 + CLα α and CD = CD0 + K CL², where CL0 is the lift coefficient at zero angle of attack, CLα is the lift slope coefficient, CD0 is the drag coefficient at zero lift, and K is a coefficient related to induced drag. Substituting the CL expression into CD gives the drag coefficient as a function of α, namely, CD = kD0 + kD1 α + kD2 α², where kD0, kD1, and kD2 are the resulting coefficients with respect to α.
The constraints at touchdown are

ḣTDmin ≤ ḣTD ≤ ḣTDmax,   (5)
VTDmin ≤ VTD ≤ VTDmax,   (6)

where ḣTD is the sink rate at touchdown, ḣTDmin and ḣTDmax are its minimum and maximum values, respectively, VTD is the touchdown velocity, and VTDmin and VTDmax are its minimum and maximum values, respectively. In the state equations (1)–(4), V, γ, h, and x are the four state variables and α is the control variable.

2.2. Motion primitives

A motion plan consists of two classes of MPs (Frazzoli, 2002). The trajectory generation in this paper involves transitioning from one class of MPs to the other. The first class of MPs is a special class of trajectories, known as trims. A trim is a steady-state or quasi-
Fig. 1. A&L geometry and phases.
steady flight trajectory; on A&L it includes a set of steep glideslopes and a set of shallow glideslopes. The second class of MPs consists of transitions between trims. These trajectory segments are known as maneuvers. A maneuver is an ''unsteady'' trajectory; on A&L it includes the pull-up and flare maneuvers. A pull-up maneuver transitions a steep glide to a shallow glide, and a flare maneuver is added after the shallow glide to dissipate energy so that the vehicle touches down with an acceptable sink rate (Walyus & Dalton, 1991). In this paper, a trajectory basically means a downrange–altitude trajectory that consists of these four segments (see Fig. 1). From an initial point to a given touchdown point, all feasible trajectories that satisfy certain constraints are generated and saved into a trajectory database (Jiang et al., 2006).
2.3. Trajectory planning under nominal conditions

2.3.1. Trims
A steady or quasi-steady approach is applied during trims. Since the flight-path angle γ is a constant, dh/dx = tan γ can be easily calculated. In this stage, however, the velocity (and downrange) still require numerical integration of their differential equations. If the altitude h is chosen to be the independent variable during trims, and noting that the glide is at a constant flight-path angle, (1)–(4) become (Jiang et al., 2006)

dV/dh = (−D/W − sin γ) g / (V sin γ),   (7)
dx/dh = 1/tan γ.   (8)

From γ̇ = 0, (2) becomes L = W cos γ. Noting that L = q̄SCL, the lift coefficient is then given by CL = W cos γ / (q̄S) and the control α is finally obtained by

α = (CL − CL0) / CLα.   (9)

2.3.2. Maneuvers
For the maneuvering flights, a nonsteady approach has to be used. For convenience, time is not chosen to be the independent variable. Instead, γ can be used as the independent variable during pull-up. For the pull-up maneuver, (1)–(4) become (Jiang et al., 2006)

dV/dγ = V (−D/W − sin γ) / (L/W − cos γ),   (10)
dh/dγ = V² sin γ / [g (L/W − cos γ)],   (11)
dx/dγ = V² cos γ / [g (L/W − cos γ)].   (12)

Since the load factor n = L/W is fixed during the pull-up, the lift coefficient CL can be calculated by CL = nW/(q̄S). The control α is then obtained by (9).

The flare maneuver is to dissipate the energy to achieve an acceptable sink rate ḣTD. Therefore, it becomes important to obtain an altitude function in this stage (see Fig. 1). A cubic polynomial model gives such a function with downrange as the independent variable (Jiang et al., 2006), h(x) = ax³ + bx² + cx + d, where a, b, c, and d are constants to be determined from the desired initial altitude of the flare and the amount of downrange that is desired for the maneuver. Without loss of generality, it is assumed that the maneuver starts at x = 0. Letting x3 be the downrange at which the flare starts (recall that x3 = 0), it turns out that h3 = h(x3) and dh3/dx3 = tan γ2, where γ2 is the flight-path angle for the shallow glide. Given h(xTD) = 0 and dhTD/dxTD = tan γTD, where γTD is the flight-path angle at touchdown, the coefficients of the cubic polynomial are a = [2h3 + (tan γ2 + tan γTD)xTD]/xTD³, b = −[3h3 + (2 tan γ2 + tan γTD)xTD]/xTD², c = tan γ2, and d = h3. Similarly, x is chosen to be the independent variable during the flare maneuver. Based on the above equations, (1)–(4) become (Jiang et al., 2006)

dV/dx = g (−D/W − sin γ) / (V cos γ),   (13)
dγ/dx = cos²γ d²h/dx² = cos²γ (6ax + 2b),   (14)
dh/dx = tan γ = 3ax² + 2bx + c.   (15)

Eqs. (2) and (4) give dγ/dx = g (L/W − cos γ) / (V² cos γ); hence L = W cos γ [1 + (V²/g) dγ/dx], where dγ/dx is already known from (14). Noting that L = q̄SCL, the control α is again obtained by (9).
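To make the flare geometry concrete, a minimal sketch of the cubic-coefficient computation and of dγ/dx from (14) follows; the numerical inputs in the last line are hypothetical placeholder values, not data from the paper.

```python
import numpy as np

def flare_cubic(h3, gamma2, gammaTD, xTD):
    """Coefficients of h(x) = a x^3 + b x^2 + c x + d for the flare (Section 2.3.2)."""
    a = (2.0 * h3 + (np.tan(gamma2) + np.tan(gammaTD)) * xTD) / xTD**3
    b = -(3.0 * h3 + (2.0 * np.tan(gamma2) + np.tan(gammaTD)) * xTD) / xTD**2
    c = np.tan(gamma2)
    d = h3
    return a, b, c, d

def dgamma_dx(x, gamma, a, b):
    """Eq. (14): dgamma/dx = cos^2(gamma) (6 a x + 2 b)."""
    return np.cos(gamma)**2 * (6.0 * a * x + 2.0 * b)

# Hypothetical inputs: flare starting 150 ft above the runway after a -1.5 deg glide,
# touching down at -1.0 deg flight-path angle, with 3000 ft of downrange for the flare.
a, b, c, d = flare_cubic(150.0, np.deg2rad(-1.5), np.deg2rad(-1.0), 3000.0)
```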
At each of the four trajectory segments, the corresponding equations are used to compute that segment with its own independent variable. Once the entire A&L trajectory meets the constraints (5) and (6), that trajectory is considered to be a feasible one for the specified initial point. Since an entire trajectory has four segments, there exist several feasible trajectories for that initial point which satisfy constraints (5) and (6) at touchdown, and the transitions between two adjacent segments have to be chosen carefully since the equations at different segments have different independent variables. All feasible trajectories for that initial point are then stored in an off-line feasible trajectory database. The feasible trajectories for other initial points are generated in the same way and saved into different trajectory databases, respectively.

3. On-line trajectory retargeting using neighboring optimal control

When a failure significantly affects the forces on an RLV, or modifies the flight envelope constraint boundaries, trajectory retargeting may be used to recover it. Failures considered here correspond to the case of a stuck effector, so that the nominal states and controls can be used to reshape the trajectory (Fahroo & Doman, 2004). An off-line optimal trajectory is always assumed to be found from the feasible trajectory database. When a failure occurs, NOC is used to determine its neighboring feasible trajectory on-line. If such a neighboring feasible trajectory exists, the RLV can then be recovered from that failure in real time.

3.1. Off-line trajectory optimization

A shortest-path problem in computer graph and algorithm theories is used for the off-line optimal trajectory search. To find the shortest path, several algorithms can be used (Bellingham,
Kuwata, & How, 2003). Dijkstra's algorithm (Dijkstra, 1959) is a reasonable choice since it is a best-first search that uses a greedy strategy. It is guaranteed to find an optimal path by repeatedly selecting the vertex with the minimum shortest-path estimate or cost. The cost function or weight is defined by the mean value of α − α∗, where α∗ is a nominal value of the angle of attack α. For example, α∗ = 5° corresponds to the base pitching moment coefficient Cm0 = 0 due to the wing body.
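A minimal sketch of such a shortest-path search over the trajectory database is shown below; representing the database as a small dictionary-based graph whose edge weights are the segment costs (deviation of α from α∗) is an assumption made purely for illustration, not the paper's data structure.

```python
import heapq

def dijkstra(graph, source):
    """Best-first shortest-path search (Dijkstra, 1959) over a weighted graph.

    graph: dict mapping node -> list of (neighbor, weight) pairs; each weight is the
    cost of flying that trajectory segment (here, the mean deviation of alpha from
    the nominal alpha*).
    """
    dist, prev = {source: 0.0}, {}
    heap, done = [(0.0, source)], set()
    while heap:
        d, u = heapq.heappop(heap)               # vertex with minimum cost estimate
        if u in done:
            continue
        done.add(u)
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

# Hypothetical toy database: nodes are segment junctions, "TD" is touchdown.
database = {"start": [("pullup_A", 1.2), ("pullup_B", 0.8)],
            "pullup_A": [("flare_A", 0.5)],
            "pullup_B": [("flare_A", 1.1)],
            "flare_A": [("TD", 0.3)]}
costs, parents = dijkstra(database, "start")     # cost to "TD" and the path tree
```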
3.2. On-line trajectory optimization using NOC

Based on the off-line optimal trajectory, a small perturbation is imposed on each state variable and the new control can be obtained by NOC. The RLV can be recovered from the aforementioned failures with stuck effectors.

3.2.1. NOC approach description
The NOC approach is discussed in Bryson and Ho (1975), Speyer and Bryson (1968) and Seywald and Cliff (1994). However, not only the states but also the system parameters are under consideration in this paper, and it is necessary to make some changes. Consider the following general nonlinear system:

ẋ(t) = f(t, x(t), u(t), λ(t)),   x(t0) = x0,   (16)

where x(t) ∈ C are state variables, u(t) ∈ Ω are control inputs, λ(t) ∈ P contains system parameters, and C ⊂ R^n, Ω ⊂ R^m, and P ⊂ R^p. With the convex cost function J(u) = φ(tf, xf, λf), where the subscript ''f'' denotes the final time, it is assumed that the optimal solution (x∗(t), u∗(t)) can be found, where the optimal control is u∗(t) = g(t, x∗(t), λ(t)) under the state constraints and control constraints

xmin ≤ x∗(t) ≤ xmax,   umin ≤ u∗(t) ≤ umax.   (17)

When small perturbations δx(t0) and small variations δλ are imposed on the initial states x0 and the system parameters λ, respectively, the neighboring feasible trajectory z(t) and the new optimal control v(t) can then be determined by δx(t0), δλ, and (16). Since system parameters are considered in the NOC approach of this paper, the following theorem is then investigated.

3.2.2. Neighboring feasible trajectory existence theorem (NFTET)
It requires the following assumptions.

Assumption 1. For the nonlinear system (16), the following must be satisfied. (a) f is continuous in (t, x, λ); (b) f is locally Lipschitz in x and uniformly Lipschitz in t and λ; (c) C ⊂ R^n, Ω ⊂ R^m, and P ⊂ R^p are convex, connected sets.

Assumption 2. The open-loop optimal solution x∗(t) exists and the optimal control u∗(t) is well defined for system (16).

Theorem 1 (NFTET). (1) For an initial condition z(t0), which can be equal to x0, there exist neighboring feasible state trajectories z(t) and a neighboring optimal control v(t), satisfying

sup_t ‖z(t) − x∗(t)‖ ≤ K,   (18)

where K is an upper bound. (2) When the system parameters vary from λ1 to λ2 (λ2 = λ1 + δλ), the neighboring feasible state trajectories z(t) and the neighboring optimal control v(t) still exist, where v(t) is determined by v(t) = u∗(t) + G(t)(Z(t) − X∗(t)) or

v(t) = u∗(t) + G(t)∆X(t),   (19)

where Z(t) = [z(t)^T, λ2^T]^T, X∗(t) = [x∗(t)^T, λ1^T]^T, and ∆X(t) is a deviation vector of the new augmented state vector Z(t) from the open-loop optimal (nominal) state trajectory X∗(t). G(t) is the feedback gain matrix, which is given by

G(t) = δU(t) δX⁻¹(t),   (20)

where δU(t) are the control errors caused by the perturbations imposed on the augmented state vector X^i(t) and δX(t) is a perturbation matrix due to perturbations in the initial states x^i(t0) and initial system parameters λ1^i, where i = 1, 2, . . . , n + p.

Proof. According to Assumptions 1 and 2, the optimal solution of the initial value problem (IVP) (16) does exist, and Part (1) has already been proved when no system parameters are considered (Rampazzo & Vinter, 1999). Based on the consequence of Part (1), Part (2) can be proved as follows. For convenience, the independent variable t in parentheses will be omitted in the subsequent notation. Assume that the control u is of the form u = g(t, x, λ), where the system parameters λ are included. The system (16) then becomes an unforced system ẋ = f̄(t, x, λ), x(t0) = x0, where x ∈ C, λ ∈ P, and f̄ ≠ f in general. Under the above conditions, a controller must exist satisfying the constraints of (17). First, we prove the existence of the optimal feasible solution of the IVP (16) with the new control input v of (19). By Theorem 3.5 in Khalil (2002), all the conditions of the NFTET are satisfied; therefore, there exists a unique solution of (16) defined on [t0, tf]. Now that the solution of the IVP (16) exists and its optimal solution (x∗, u∗) can be found according to Assumption 2, the perturbations can be imposed on (x∗, u∗). In the neighborhood of x∗, the new state vector z can be defined as z = x∗ + δx, where δx is the resulting perturbation vector from the optimal state variables due to perturbations in the initial states x0. Recall that the system parameters change from λ1 to λ2, i.e., λ2 = λ1 + δλ. By Taylor series expansion, the real input control v∗ at time t is now

v∗ = g(t, z, λ2) = g(t, x∗ + δx, λ1 + δλ)
   = g(t, x∗, λ1) + (∂g/∂x) δx + (∂g/∂λ) δλ + H.O.T.
   = u∗ + (∂g/∂x)(z − x∗) + (∂g/∂λ)(λ2 − λ1) + H.O.T.
   = u∗ + [∂g/∂x  ∂g/∂λ] ([z; λ2] − [x∗; λ1]) + H.O.T.,   (21)
where H.O.T. denotes the higher order terms. If the perturbation is small enough, i.e., δ x is small enough or z is in the feasible neighborhood of x∗ , in other words, (18) is satisfied, then the higher order products of δ x and δλ must be small enough as well. Moreover, δλ is basically small and then its higher order products are small enough. Therefore, the H.O.T. in (21) can be neglected at this point. After the linearization, the new controller is of the form v = u∗ + G(Z − X ∗ ), where Z ∈ Rn+p and X ∗ ∈ Rn+p are defined in the theorem. For the closed-loop system, the perturbations δ x and δλ are actually the measured deviations. Therefore, v = u∗ + G(Z − X ∗ ) is equivalent to (19), where ∆X denotes a deviation vector of the closed-loop state vector from the open-loop optimal (nominal) state trajectory and the gain matrix G is defined by
G = ∂g/∂X = [∂g/∂x  ∂g/∂λ] ∈ R^{m×(n+p)}.   (22)
Second, a numerical method is used to evaluate the gain matrix G at every time t ∈ [t0, tf] so that no partial derivatives are evaluated. From u = g(t, x, λ) and X := [x^T, λ^T]^T, the total differential of u is du = (∂g/∂X) dX + (∂g/∂t) dt. For a known perturbation instance, its perturbation vectors are of the form δX = Xperturbed − X∗ ∈ R^{n+p} and δu = uperturbed − u∗ ∈ R^m. Using a linear
approximation and noting that δt = 0 at every time t ∈ [t0, tf], δu can be obtained by

δu = (∂g/∂X) δX + (∂g/∂t) δt = G δX.   (23)

In order to evaluate G, an (n + p) × (n + p) square matrix of δX is required. From n + p known perturbation instances, two matrices are constructed by

δX := [δX^1 · · · δX^{n+p}] ∈ R^{(n+p)×(n+p)},   (24)
δU := [δu^1 · · · δu^{n+p}] ∈ R^{m×(n+p)},   (25)

where δX^i, i = 1, 2, . . . , n + p and δu^i, i = 1, 2, . . . , n + p are the ith known perturbation instances of δX and δu, respectively. The matrix expression of (23) is then

δU = G δX.   (26)
When the known perturbation instances are chosen to be linearly independent yet still small enough, the invertibility of δ X can be guaranteed at almost every time point. In case δ X is singular at a time ts ∈ [t0 , tf ], that point is simply excluded in order that the remaining δ X (t ) is nonsingular where t ∈ [t0 , tf ] and t 6= ts . From (26), the gain matrix G can then be evaluated by (20).
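As an illustration of this numerical evaluation, the sketch below assembles δX and δU from n + p perturbation instances and applies (20) and (19); the matrix shapes follow the text, but the code itself is only a sketch, not the authors' implementation.

```python
import numpy as np

def gain_matrix(dX_instances, dU_instances):
    """Eqs. (20)/(26): G = dU * dX^{-1} at one value of the independent variable.

    dX_instances: n+p perturbation vectors, each of length n+p (X_perturbed^i - X*);
    dU_instances: n+p control deviations, each of length m (u_perturbed^i - u*).
    The instances must be linearly independent so that dX is invertible.
    """
    dX = np.column_stack(dX_instances)          # (n+p) x (n+p)
    dU = np.column_stack(dU_instances)          # m x (n+p)
    return dU @ np.linalg.inv(dX)

def neighboring_control(u_star, G, Z, X_star):
    """Eq. (19): v = u* + G (Z - X*)."""
    return u_star + G @ (Z - X_star)
```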
3.2.3. Application of the NFTET to the RLV trajectory reshaping
According to the NFTET, it can be easily shown that the neighboring feasible trajectory for an RLV system exists when applying the NOC method to the RLV system at the A&L phase. In this system, V, γ, h, and x are the four state variables and α is the control variable; CL and CD are two system parameters. If CL and CD are treated as two special state variables and the variations of CL and CD between the nominal and failed conditions are viewed as perturbations as well, then the neighboring feasible trajectory does exist if all the perturbations on the six state variables are small enough, according to the NFTET. Since the off-line optimal (nominal) controller α∗ has already been obtained by Dijkstra's algorithm from the feasible trajectory database using the MP scheme, the new controller αperturbed is then obtained by αperturbed = α∗ + G(Z − X∗), where G, Z and X∗ have the same meanings as described in the NFTET. To evaluate G, some perturbations are imposed on X∗. Note that the altitude h is chosen to be the independent variable (instead of time t) so that the problem becomes a fixed ''final time'' problem (because hTD ≡ 0). Since the dimension of Z is now five, five perturbation instances of V, γ, x, CL, and CD are used. Similarly, there are five corresponding instances of α. Let δX := [[V^1 − V∗, γ^1 − γ∗, x^1 − x∗, C_L1^1 − C_L1, C_D1^1 − C_D1]^T · · · [V^5 − V∗, γ^5 − γ∗, x^5 − x∗, C_L1^5 − C_L1, C_D1^5 − C_D1]^T] and δα := [α^1 − α∗, . . . , α^5 − α∗], where the superscript i denotes the ith perturbation instance, i = 1, . . . , 5. The gain matrix G can then be evaluated by G(h) = δα δX⁻¹. Letting Z = [V, γ, x, C_L2, C_D2]^T and X∗ = [V∗, γ∗, x∗, C_L1, C_D1]^T, where the subscript ''1'' of CL and CD denotes the nominal condition and ''2'' denotes the failure condition, the new controller αperturbed under a certain failure can then be determined easily. Finally, the neighboring feasible trajectories are determined by the same Eqs. (1)–(4) with the independent variable h.

4. Robustness enhancement on NOC using adaptive bounds with respect to input deviation

In the proof of the NFTET, the real control input v∗(t) for the general nonlinear system is obtained by (21), i.e., v∗(t) = u∗(t) + G(t)∆X(t) + H.O.T. If the perturbations are small enough, the H.O.T. can be neglected and the NFTET is applicable. The neighboring feasible controller with linear approximation v(t) can then be evaluated by (21). When the RLV experiences a failure that causes small changes of CL and CD, NOC can recover it from the failure and so demonstrates good robust performance under small perturbations (Jiang & Ordóñez, 2007). When the perturbations become large, such as in a severe failure case, the H.O.T. cannot be neglected. In order to compensate for the effects of the linear approximation, a robustifying or stabilizing term vs(t) is introduced to approximate the H.O.T. (Jiang & Ordóñez, 2007), and v(t) is now re-defined as

v(t) = u∗(t) + G(t)∆X(t) + vs(t),   (27)

where vs(t) is to be determined. For simplicity, only single input systems (i.e., m = 1) are discussed. In this case, H.O.T. will denote the value of the higher order terms and hence becomes a scalar in all the subsequent notation. Extension to multiple input systems is left to future research. Since the H.O.T. must include a ∆X(t) term, it is reasonable to assume that they satisfy

|H.O.T.| ≤ B∗^T |∆X(t)|,   (28)

where B∗ ∈ R^{n+p} is a constant column vector with B∗_{i,1} ≥ 0, i = 1, . . . , n + p, and the |·| operation is performed elementwise. The bound B∗ is not explicitly known. Note that B∗ is not unique, since any B̄∗_{i,1} > B∗_{i,1} for i = 1, . . . , n + p satisfies inequality (28). To avoid confusion, for analytical purposes, B∗ is defined to be the constant vector with smallest non-negative elements such that (28) is satisfied (Polycarpou & Ioannou, 1996). The estimate of B∗ can then be found adaptively. To robustify NOC, a trajectory robustness theorem (TRT) is now investigated.

4.1. Trajectory robustness theorem (TRT)

Theorem 2 (TRT). For the system (16) satisfying (28) with uncertain upper bound B∗, there exists a robustifying term vs(t) that can enhance the robustness of NOC. The robustifier vs(t) is of the form
vs(t) = −B̂^T(t) |∆X(t)| sgn(B̂^T(t) ∆X(t)).   (29)

The adaptation law is defined by

dB̂(t)/dt = γb [|∆X(t)| − σ (B̂(t) − B̂(0))],   (30)

where γb > 0 and σ > 0 are constant parameters, ∆X(t) is the same as in the NFTET, B̂(t) is the estimate of B∗, and B̂(0) is an initial guess of B∗ which is set to B̂(0) = 0. By defining B̃(t) = B̂(t) − B∗ and using the adaptation law (30), B̃(t) converges to a neighborhood of zero and its upper bound can be chosen by the designer.

Proof. For convenience, the independent variable t will again be omitted in the subsequent notation. Whether the error dynamics goes to zero does not matter at this point (we are only concerned with the controller), so the Lyapunov candidate VL is simply chosen to be

VL = (1/(2γb)) B̃^T B̃,   (31)
where γb > 0 and B˜ = Bˆ − B∗ , where Bˆ is the estimate of B∗ with non-negative elements. Recalling that B∗ is a constant column vector and from (28), the derivative of VL is then
V̇L = (1/γb) B̃^T dB̃/dt
   ≤ { vs − H.O.T. + (1/γb) B̃^T dB̂/dt,   if vs > H.O.T.
     { H.O.T. − vs + (1/γb) B̃^T dB̂/dt,   if vs ≤ H.O.T.
   ≤ { B∗^T |∆X| + (1/γb) B̃^T dB̂/dt + vs,   if vs > H.O.T.
     { B∗^T |∆X| + (1/γb) B̃^T dB̂/dt − vs,   if vs ≤ H.O.T.
   = B∗^T |∆X| + (1/γb) B̃^T dB̂/dt + vs sgn(vs − H.O.T.),   (32)

where the signum function can be defined in many different ways. Considering the RLV application later, one such definition is given by

sgn(y) = 1 if y > 0,   −1 if y ≤ 0.   (33)

Choosing the adaptation law

dB̂/dt = γb |∆X|,   (34)

(32) becomes

V̇L ≤ B̂^T |∆X| + vs sgn(vs − H.O.T.).   (35)

If the robustifier or stabilizing term vs is of the form

vs = −B̂^T |∆X| sgn(vs − H.O.T.),   (36)

(35) becomes

V̇L ≤ 0,   (37)

which means that V̇L is negative semidefinite. This guarantees that |B̃| is bounded. If vs > H.O.T., then sgn(vs − H.O.T.) = 1. Since B̂(0) is set to B̂(0) = 0 and all elements of B̂(t) are monotonically nondecreasing due to the choice (34), B̂^T |∆X| ≥ 0 and hence vs ≤ 0 from (36). Similarly, vs ≥ 0 if vs ≤ H.O.T. Therefore, there are only two cases under consideration if vs is chosen to be (36), namely, (a) 0 ≥ vs > H.O.T. and (b) H.O.T. ≥ vs ≥ 0. The two cases can be expressed in the same form: |vs| ≤ |H.O.T.| (this is just for the convenience of later discussion; more accurately, |vs| < |H.O.T.| for case (a)). However, it can be noticed that vs is included in the sgn(·) function in (36), so it has to be re-defined. From (28) and (36), both vs and the H.O.T. include a ∆X term, and therefore the following equality must hold:

vs − H.O.T. = E^T ∆X,   (38)

where E is a deviation column vector with respect to ∆X. Since both B̂ and E are small, it is possible to replace E with B̂ in the sgn(·) function on the right side of (36). Hence (36) becomes (29). Next, it needs to be verified whether (37) still holds when vs is of the form (29) instead of (36). As discussed above, there are only cases (a) and (b) based on the choice of (34) and (36). Therefore, only these two cases need to be verified. (i) If B̂^T ∆X > 0, then vs = −B̂^T |∆X| ≤ 0 from (29). This is Case (a), i.e., H.O.T. < vs ≤ 0. When vs > H.O.T., vs sgn(vs − H.O.T.) = −B̂^T |∆X| in (35) and hence (37) still holds. (ii) If B̂^T ∆X ≤ 0, then vs = B̂^T |∆X| ≥ 0 from (29). This is Case (b), i.e., H.O.T. ≥ vs ≥ 0. When vs ≤ H.O.T., vs sgn(vs − H.O.T.) = −B̂^T |∆X| in (35) and hence (37) still holds.

By (i) and (ii), (29) and (36) are equivalent when |vs| ≤ |H.O.T.|. If H.O.T. is positive (H.O.T. is a scalar for a single input system and will never be zero by its definition), vs will reach H.O.T. from zero (since B̂(0) = 0, vs(0) = 0 by (29)) after a finite time. At that time, the effect of H.O.T. has been compensated completely (note that vs might become larger than H.O.T. later if there is no further modification). If it is negative, vs will not reach H.O.T. from zero and will never be beyond H.O.T., i.e., |vs| < |H.O.T.|, or vs > H.O.T. (but the effect of H.O.T. has not been compensated completely; if it is required to do so, simply changing the definition of the signum function will work). In order to ensure that |vs| ≤ |H.O.T.|, it is necessary to limit the increase of B̂, and a further modification on the adaptation law is then made. Now B̂ is further chosen by using a σ-modification such that V̇L < 0 (Ioannou & Sun, 1996; Polycarpou & Ioannou, 1996). If dB̂/dt is defined by (30), where σ > 0 and B̂(0) is an initial guess of B∗ which is set to B̂(0) = 0, it will result in

V̇L ≤ −σ B̃^T (B̂ − B̂(0))
   = −σ B̃^T (B̃ + B∗ − B̂(0))
   ≤ −(σ/2) B̃^T B̃ + (σ/2) (B∗ − B̂(0))^T (B∗ − B̂(0))
   = −σγb (1/(2γb)) B̃^T B̃ + (σ/2) (B∗ − B̂(0))^T (B∗ − B̂(0))
   = −k1 VL + k2,   (39)

where k1 = σγb > 0 and k2 = (σ/2)(B∗ − B̂(0))^T (B∗ − B̂(0)) ≥ 0 are bounded constants. From Lemma 2.1 in Spooner, Maggiore, and Ordóñez (2002),

VL = (1/(2γb)) B̃^T B̃ ≤ k2/k1 + (VL(0) − k2/k1) e^(−k1 t).   (40)

From the above equation, it turns out that

lim_{t→∞} ‖B̃‖ ≤ sqrt(2γb k2/k1) = ε̄.   (41)

Therefore, B̃ converges to a neighborhood of zero whose upper bound is ε̄. Now, |vs| ≤ |H.O.T.| is guaranteed by (36). Application of vs to the NOC scheme will result in increased tolerance to severe deviations from the off-line optimal trajectory. This is because the effects of the H.O.T. are no longer ignored, thereby rendering the robustified control v a closer approximation of (21).
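As an illustration of the TRT machinery, the sketch below implements the robustifier (29), the σ-modified adaptation law (30), and the robustified control αrobustified = α∗ + G(Z − X∗) + αs used later in Section 4.2; stepping the adaptation in altitude with a simple Euler rule and the chosen variable names are assumptions made for illustration only, not the authors' implementation.

```python
import numpy as np

def sgn(y):
    """Signum convention of Eq. (33): 1 if y > 0, otherwise -1."""
    return 1.0 if y > 0.0 else -1.0

def robustifier(B_hat, dX):
    """Eq. (29): vs = -B_hat^T |dX| sgn(B_hat^T dX)."""
    return -float(B_hat @ np.abs(dX)) * sgn(float(B_hat @ dX))

def adapt_B(B_hat, B_hat0, dX, gamma_b, sigma, dh, h_dot):
    """Eq. (30) with sigma-modification, stepped in altitude: dB/dh = (dB/dt)/h_dot."""
    B_dot = gamma_b * (np.abs(dX) - sigma * (B_hat - B_hat0))
    # Euler step in the independent variable h (dh and h_dot are both negative in descent)
    return B_hat + (B_dot / h_dot) * dh

def robustified_control(alpha_star, G, Z, X_star, B_hat):
    """Single-input case: alpha_robustified = alpha* + G (Z - X*) + alpha_s."""
    dX = Z - X_star
    return float(alpha_star + G @ dX + robustifier(B_hat, dX)), dX
```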
Fig. 2. Trajectory reshaping using NOC without robustifier under the Failure Case of a speedbrake fixed at 50° and a body flap at −10°.

Table 1
Some constant parameters.
Parameter                                   Value
Reference area S (ft²)                      41.45
Vehicle mass m (slug)                       89
Gravitational acceleration g (ft/s²)        32.2
Density coefficient β (1/ft)                4.348e−5
Air density at sea level ρ0 (slug/ft³)      0.002377

Table 2
Constraints.
Constrained variable                        Value
Minimum touchdown velocity VTDmin (ft/s)    260
Maximum touchdown velocity VTDmax (ft/s)    300
Minimum sink rate ḣTDmin (ft/s)             −10
Maximum sink rate ḣTDmax (ft/s)             0

Table 3
CL and CD.
                 Nominal case    Failure case    Severe failure case 1    Severe failure case 2
CL0              0.04798         0.05975         0.1761                   −0.1575
CLα (1/deg)      0.07487         0.06683         0.05804                  0.06159
kD0              0.1446          0.2103          0.2851                   0.2075
kD1 (1/deg)      −5.051e−005     −0.0003217      −0.001623                −0.004345
kD2 (1/deg²)     8.157e−004      0.0007856       0.0007283                0.0007568
4.2. Application of the TRT to the on-line RLV trajectory generation on A&L

According to the TRT, the new robustified controller αrobustified is then obtained by αrobustified = α∗ + G(Z − X∗) + αs, where α∗, G, Z and X∗ have the same meanings as described earlier (see also Jiang and Ordóñez (2007)), while αs is the robustifying term. Since the altitude h is chosen to be the independent variable instead of time t, as described in Jiang et al. (2006) and Jiang and Ordóñez (2007), the total number of state variables for the RLV system reduces from six to five. The dimension of B∗ and B̂ reduces from six to five accordingly. The gain matrix G is obtained in the same way as before, while the robustifying term αs is determined by (29) except that the independent variable is h. Unlike the MP scheme, the new control variable αrobustified is evaluated by the above single equation. Also note that dB̂/dh (instead of dB̂/dt) is used in the adaptation law (30) since the independent variable is h, where dB̂/dh = (dB̂/dt)/ḣ.

5. Results and discussions

Given initial and touchdown conditions for an RLV, a feasible trajectory database is built under the nominal condition. The initial point for the trajectory generation in all cases has the following values: V0 = 465.7 ft/s, γ0 = −21 deg, h0 = 10066 ft, and x0 = −25000 ft. Tables 1 and 2 list the parameters and constraints used in the MATLAB simulation when constructing the
feasible trajectory database. Under all nominal conditions for the RLV system (1)–(4), the lift and drag contributions only depend on angle of attack α . Its lift and drag coefficients in the nominal condition are given in Table 3. When a failure occurs for the vehicle, trajectory reshaping can be used to recover it. To observe the results of trajectory reshaping using NOC and adaptive bounds with respect to input deviation, one failure case and two severe failure cases for the RLV with stuck effectors are illustrated based on the lift and drag characteristics of the vehicle. The lift and drag coefficients CL and CD for these failure cases after a curve fit are also listed in Table 3. The first failure case is the speedbrake fixed at 50◦ and the body flap at −10◦ . The two severe failure cases are: (1) both right rudder and left rudder at −4◦ and speedbrake at 70◦ ; (2) left flap at −16◦ and speedbrake at 55◦ . Fig. 2 indicates the trajectory reshaping without the robustifier. In the plot of flight-path angle, it can be seen that the first segment is a steep glide with γ = −21◦ , the second segment is a pull-up with n = 1.1, the third segment is a shallow glide with γ = −15◦ , and the fourth segment is a flare with VTD = 298.7 ft/s and h˙ TD = −9.890 ft/s. It can also be observed that the RLV is recovered from the first failure case in real time because the touchdown constraints (5) and (6) are met. Fig. 3 shows that the trajectory reshaping with the robustifier recovers the RLV in real time from the same failure. The touchdown velocity VTD and sink rate h˙ TD are now 282.5 ft/s and −5.370 ft/s, respectively. In the MATLAB simulation, γb = 0.0000148, σ = 5, and Bˆ (0) = [0, 0, 0, 0, 0]T .
Fig. 3. Trajectory reshaping using NOC with robustifier under the Failure Case of a speedbrake fixed at 50◦ and body flap at −10◦ .
Fig. 4. Trajectory reshaping using NOC without robustifier under Severe Failure Case 1.
The control energy is calculated by ∫_{h0}^{hTD} α² dh / (hTD − h0), and the elapsed time is the system execution time from the initial point to touchdown in the same computer environment (using ''tic'' and ''toc'' functions in MATLAB, running on a COMPAQ Presario 2199US computer with Windows XP and a 2.12 GHz processor). The energies consumed and times for the trajectory reshaping without the robustifier and with the robustifier are 30.96, 1.662 s and 38.84,
2.394 s, respectively. The latter consumes more energy and needs more time, as expected.

Fig. 5. Trajectory reshaping using NOC with the robustifier under Severe Failure Case 1.

Figs. 4 and 5 show the trajectory reshaping without and with the robustifier under Severe Failure Case 1, respectively. The four trajectory segments are labeled on the plot of the flight-path angle in Fig. 4. The trajectory reshaping without the robustifier fails to recover the vehicle from Severe Failure Case 1 since the touchdown constraint of ḣTD = −39.41 ft/s is not met. After applying the robustifier, the trajectory reshaping recovers the vehicle in real time from that failure, as VTD = 282.5 ft/s and ḣTD = −6.688 ft/s now. The energies consumed and times for the trajectory reshaping without the robustifier and with the robustifier are 37.92, 1.662 s and 38.48, 2.163 s, respectively. Again, the latter consumes more energy and needs more time, as expected. Note that all the relevant parameters and hardware in the simulation are the same as in the first failure case, since all the parameters only need to be tuned once and are then applied to all cases. From Fig. 5 it can also be seen that the x–h trajectory with the robustifier is very close to the off-line optimal trajectory, which means that the robustifier αs does compensate for the effects of the linear approximation. By reducing the sensitivity of system performance with respect to system uncertainty, the method presented appears to greatly enhance the robustness of the original NOC approach.

Fig. 6. Robustifier (the independent variable is the altitude).

Fig. 6 shows the robustifier αs in Severe Failure Case 1. It smooths the instantaneous jumps of α by compensating for the effect of the linear approximation. Similar αs curves can be found for the other failure cases; the plots are not shown here due to limited space. Fig. 7 gives an example where the trajectory reshaping both with and without the robustifier fails to recover from Severe Failure Case 2. It can still be observed that the trajectory reshaping with the robustifying term demonstrates better performance. The touchdown conditions with the robustifier become closer to the desired range, since VTD and ḣTD change from 421.4 ft/s and −246.4 ft/s to 373.3 ft/s and −51.97 ft/s after applying robustification. In a case like this, the failure to find a feasible trajectory would indicate that the vehicle cannot be recovered and that the mission should be aborted. Crucially, this information would be available in a couple of seconds, thereby allowing time to take life-saving measures and helping the vehicle's crew initiate a bail-out procedure before complete disaster strikes.
6. Conclusions

Under the nominal condition for an RLV, a feasible trajectory database is built by using the MP scheme; the database is well defined under this condition. Under some failures, such as stuck effectors, trajectory reshaping can recover the RLV. When those failures are not severe, the NOC approach does help recover the vehicle in real time. The neighboring feasible trajectory existence theorem (NFTET) has been proved in this paper, and it guarantees that the RLV can be delivered to its landing site safely and reliably under those failures by reshaping the trajectory using the NOC approach. Therefore, it indicates good robust performance of the NOC approach. When a severe failure occurs, however, the robustness of NOC has to be enhanced in order to improve the chances of recovering the vehicle in real time. The trajectory robustness theorem (TRT) has also been proved in this paper, and a robustifier has been applied to compensate for the effects of the linear approximation employed in the NFTET. The robustifying term is adaptively determined by the adaptation law with respect to the input
deviation bound. The simulation results show that this approach greatly improves the robust performance of NOC and allows the RLV to be delivered to its landing site safely and reliably even under certain severe failures. Moreover, even when the method fails to generate a feasible trajectory for landing, it yields this negative result in real time, thereby allowing the vehicle's crew to bail out before complete disaster strikes. As mentioned in Section 4, the TRT is limited to single input systems; extension to multiple input systems is left to future research. As a further direction, the TRT can be extended to more failure scenarios when wind effects are considered.

Fig. 7. Trajectory reshaping fails using NOC both with and without the robustifier under Severe Failure Case 2.

Acknowledgements

The authors wish to acknowledge Dr. Michael A. Bolender and Dr. David B. Doman in the Air Force Research Laboratory at Wright–Patterson Air Force Base for providing helpful suggestions and enlightening ideas on the research. The authors would also like to acknowledge Dr. George Doyle in the Mechanical and Aerospace Engineering Department at the University of Dayton for his comments on this paper.
References

Barton, G. H., & Tragesser, S. G. (1999). Autolanding trajectory design for the X-34. In Proceedings of the AIAA atmospheric flight mechanics conference and exhibit.
Bellingham, J., Kuwata, Y., & How, J. (2003). Stable receding horizon trajectory control for complex environments. In Proceedings of the AIAA guidance, navigation, and control conference and exhibit.
Bryson, A. E., Jr., & Ho, Y. C. (1975). Applied optimal control: Optimization, estimation, and control. New York: Hemisphere.
Burchett, B. T. (2004). Fuzzy logic trajectory design and guidance for terminal area energy management. Journal of Spacecraft and Rockets, 41(3), 444–450.
Cox, C., Stepniewski, S., Jorgensen, C., Saeks, R., & Lewis, C. (1999). On the design of a neural network autolander. International Journal of Robust and Nonlinear Control, 9(14), 1071–1096.
Dijkstra, E. W. (1959). A note on two problems in connexion with graphs. Numerische Mathematik, 1, 269–271.
Doman, D. B. (2004). Introduction: Reusable launch vehicle guidance and control. Journal of Guidance, Control, and Dynamics, 27(6), 929.
Fahroo, F., & Doman, D. B. (2004). A direct method for approach and landing trajectory reshaping with failure effect estimation. In Proceedings of the AIAA guidance, navigation, and control conference and exhibit.
Frazzoli, E. (2002). Maneuver-based motion planning and coordination for single and multiple UAVs. In AIAA's 1st technical conference and workshop on unmanned aerospace vehicles.
Grantham, K. (2003). Sub-optimal analytic guidance laws for reusable launch vehicles. In Proceedings of the AIAA guidance, navigation, and control conference and exhibit.
Hull, J. R., Gandhi, N., & Schierman, J. D. (2005). In-flight TAEM/final approach trajectory generation for reusable launch vehicles. In Infotech@Aerospace.
Ioannou, P. A., & Sun, J. (1996). Robust adaptive control. Englewood Cliffs, NJ: Prentice-Hall.
Jiang, Z., & Ordóñez, R. (2007). Trajectory generation on approach and landing for RLVs using motion primitives and neighboring optimal control. In Proceedings of the American control conference.
Jiang, Z., & Ordóñez, R. (2007). On-line approach/landing trajectory generation with input deviation bound uncertainty for reusable launch vehicles. In Proceedings of the 46th IEEE conference on decision and control.
Jiang, Z., Ordóñez, R., Bolender, M. A., & Doman, D. B. (2006). Trajectory generation on approach and landing for RLVs using motion primitives. In 31st Annual Dayton–Cincinnati aerospace science symposium.
Khalil, H. K. (2002). Nonlinear systems (3rd ed., p. 97). Upper Saddle River, NJ: Prentice Hall.
Kluever, C. A. (2004). Unpowered approach and landing guidance using trajectory planning. Journal of Guidance, Control, and Dynamics, 27(6), 967–974.
Miele, A. (1962). Flight mechanics. Reading, MA: Addison-Wesley.
Polycarpou, M. M., & Ioannou, P. A. (1996). A robust adaptive nonlinear control design. Automatica, 32(3), 423–427.
Rampazzo, F., & Vinter, R. B. (1999). A theorem on existence of neighboring trajectories satisfying a state constraint, with applications to optimal control. IMA Journal of Mathematical Control & Information, 16(4), 335–351.
Schierman, J. D., Hull, J. R., & Ward, D. G. (2002). Adaptive guidance with trajectory reshaping for reusable launch vehicles. In Proceedings of the AIAA guidance, navigation, and control conference and exhibit.
Schierman, J. D., Hull, J. R., & Ward, D. G. (2003). On-line trajectory command reshaping for reusable launch vehicles. In Proceedings of the AIAA guidance, navigation, and control conference and exhibit.
Schierman, J. D., Ward, D. G., Hull, J. R., Gandhi, N., Oppenheimer, M. W., & Doman, D. B. (2004). Integrated adaptive guidance and control for re-entry vehicles with flight-test results. Journal of Guidance, Control, and Dynamics, 27(6), 975–988.
Seywald, H., & Cliff, E. M. (1994). Neighboring optimal control based feedback law for the advanced launch system. Journal of Guidance, Control, and Dynamics, 17(6), 1154–1162.
Speyer, J. L., & Bryson, A. E., Jr. (1968). A neighboring optimum feedback control scheme based on estimated time-to-go with application to reentry flight paths. AIAA Journal, 6(5), 769–776.
Spooner, J. T., Maggiore, M., Ordóñez, R., & Passino, K. M. (2002). Stable adaptive control and estimation for nonlinear systems: Neural and fuzzy approximator techniques. New York: Wiley-Interscience.
Walyus, K. D., & Dalton, C. (1991). Approach and landing simulator space shuttle orbiter touchdown conditions. Journal of Spacecraft and Rockets, 28(4), 478–485.
Zhesheng Jiang is a Ph.D. candidate in the Electrical and Computer Engineering Department at the University of Dayton. He received his B.S. and M.S. degrees from Zhejiang University and the Chinese Academy of Sciences, respectively. He is working on trajectory generation and planning for reusable launch vehicles. His research interests also include adaptive control, robust control, optimization, and algorithm design. Most of his research work was sponsored by an AFRL/AFOSR grant, a DAGSI scholarship, and the University of Dayton Graduate School Dean's Summer Fellowship.
Raúl Ordóñez received his M.S. and Ph.D. in electrical engineering from the Ohio State University in 1996 and 1999, respectively. He spent two years as an assistant professor in the department of electrical and computer engineering at Rowan University, and then joined the ECE department at the University of Dayton, where he has been since 2001 and is now an associate professor. He has worked with the IEEE Control Systems Society as a member of the Conference Editorial Board of the IEEE Control Systems Society since 1999; Publicity Chair for the 2001 International Symposium on Intelligent Control; member of the Program Committee and Program Chair for the 2001 Conference on Decision and Control; and Publications Chair for the 2008 IEEE Multi-conference on Systems and Control. Dr. Ordóñez is also serving as Associate Editor for the control journal Automatica. He is a coauthor of the text book ‘‘Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques’’ (2002). Between 2001 and 2007, he worked in the research team of the Collaborative Center for Control Science (CCCS), funded by AFRL, AFOSR and DAGSI at the Ohio State University. His research in the CCCS has focused on coordination and control of UAV teams, as well as on real-time trajectory generation for reusable hypersonic launch vehicles. Dr. Ordóñez received a Boeing Welliver Faculty Fellowship in 2008.