The Right Stuff 'v' the Right (Safe) Thing

Andy Quinn a,∗, Ivan Sikora b

a Saturn Safety Management Systems Ltd, Bath BA2 9BR, UK
b City University, London EC1V 0HB, UK
Article history: Received 12 May 2017; Revised 7 July 2017; Accepted 7 July 2017; Available online xxx.
Abstract

This is not the 1950s, when test pilots needed the 'right stuff', and certainly not the beginning of aviation, when the Wright brothers' early designs needed pilots with more than the right stuff. In those formative years of aviation and jet development, designers and pilots did not have the design understanding and knowledge that we have today, nor the same understanding of systems safety engineering and human factors expertise. Manned suborbital flights of today should be undertaken in vehicles that have been designed effectively, with appropriately derived safety requirements including fault-tolerance, safe-life and design-for-minimum-risk approaches – and all to an acceptable level of safety. Therefore, although initial suborbital pilots will originate from flight test schools and still possess similar traits to their earlier test pilot brethren, they should be protected by the right (safe) thing through design and analysis, rather than relying on the right stuff to compensate for ineffective design and operating procedures. The paper presents a review of the SpaceShip2 accident as a case study to highlight the right (safe) things that should be considered in the design, analysis and operations for suborbital operators. The authors contend that suborbital piloted vehicles should be designed with the knowledge, understanding and lessons learned from the early X-plane flights, from general space safety, and from pilot Human Factors/Crew Resource Management training, and with the understanding that safety management and safety engineering are essential disciplines that should be integrated with the design team from the concept phase.

© 2017 International Association for the Advancement of Space Safety. Published by Elsevier Ltd. All rights reserved.
1. Introduction

During the early days of airliner, military fast-jet and rocket development, aircraft were plagued by technical issues that resulted in incidents and accidents. In the case of US fast-jet and rocket development on the X-15 project, this meant relying on pilots with the 'right stuff'. Consequently, throughout the project,
the approach was one of reactive fixes to reliability and design issues, i.e. 'fly, fix, fly'. As technology and complexity have improved over time, the causal factors identified in incidents and accidents increasingly involve people and organisations, as shown in Fig. 1. To counter this, technical designers are now supported by other engineering disciplines, including human factors and systems safety engineering.

Fig. 1. Evolution of Safety Thinking [1].
Acronyms/Abbreviations: AAR, Air Accident Report; AC, Advisory Circular; AsMA, Aerospace Medical Association; ATP, Airline Transport Pilot; CRM, Crew Resource Management; DMR, Design for Minimum Risk; ECSS, European Cooperation for Space Standardization; FAA-AST, Federal Aviation Administration – Office of Commercial Space Transportation; FMECA, Failure Modes Effects & Criticality Analysis; FTA, Fault Tree Analysis; HF, Human Factors; HFACS, Human Factors Analysis and Classification System; HFE, Human Factors and Ergonomics; NASA, National Aeronautics and Space Administration; NTSB, National Transportation Safety Board; OSHA, Operating & Support Hazard Analysis; PRA, Probabilistic Risk Analysis; SMM, Safety Management Manual; SMS, Safety Management System; SOP, Standard Operating Procedure; SPP, Safety Program Plan; SQEP, Suitably Qualified and Experienced Personnel; SRA, Systems Readiness Assessment; SS2, SpaceShip2; STAMP, Systems-Theoretic Accident Model and Processes; VG, Virgin Galactic.
∗ Corresponding author. E-mail address: [email protected] (A. Quinn).

2. Review of accident crew causal factors
Based on National Transportation Safety Board (NTSB) sources covering significant crew causal factors in 93 major hull losses from 1977 to 1984, Lautman and Gallimore [2] found that pilot deviation from procedures and inadequate cross-checking were the leading causal factors:
• 33% – Pilot deviated from SOPs
• 26% – Inadequate cross-check by second crew member
• 9% – Crews not trained for correct response in emergency situations
• 6% – Pilot did not recognise need for go-around
• 4% – Pilot incapacitation
• 4% – Inadequate piloting skills
• 3% – Improper procedure during go-around
• 3% – Crew errors during training flight
• 3% – Pilot not conditioned to respond promptly to Ground Proximity Warning
• 3% – Inexperienced on aircraft type

These findings are backed up by a more recent Boeing study presented by NTSB senior expert Sumwalt [3], in which the analysis showed that 'Procedural errors, such as not making required callouts or failing to use appropriate checklists, were found in 29 of the 37 (78%) reviewed accidents'. Fig. 2 highlights the percentage of accidents with a primary factor of non-adherence to procedures by the flying pilot (48%) and the non-flying pilot (30%).

Fig. 2. Accident Causal Factors – Extract from Sumwalt Presentation.

3. Review of SpaceShip2 accident

During Powered Flight number four (PF04), SpaceShip2 (SS2) suffered an in-flight break-up shortly after rocket motor ignition. The cause was uncommanded deployment of the feathering device at Mach 0.8; the resulting aerodynamic loads tore the vehicle apart. The NTSB conclusions [4] left no doubt that the initiating (causal) factor was the co-pilot, as non-handling pilot, incorrectly unlocking the feathering device by operating the arming lever early (at Mach 0.8 instead of Mach 1.4). The NTSB concluded that the co-pilot was experiencing high workload as a result of recalling tasks from memory while performing under time pressure and with vibration and loads that he had not recently experienced, which increased the opportunity for errors. The NTSB also noted that Scaled Composites did not recognise and mitigate the possibility that a test pilot could unlock the feather early.

The accident co-pilot had flown as co-pilot of SS2 on seven glide flights occurring between October 10, 2010, and August 28, 2014, and had been the co-pilot of SS2 for PF01 (one powered flight) in April 2013. The co-pilot, age 39, held an airline transport pilot (ATP) certificate with a rating for airplane multi-engine land, and commercial pilot privileges for airplane single-engine land and sea, and glider. He held a second-class medical certificate, dated May 22, 2014, with the limitation that he must wear corrective lenses; the cockpit image recording showed that he was wearing glasses during the accident flight. The NTSB concluded that the pilot and co-pilot were properly certificated and qualified.

3.1. Analysis versus NTSB known (aviation) crew causal factors

As indicated previously, the NTSB research into crew causal factors in aircraft accidents found that the top two factors were (i) pilot deviation from SOPs and (ii) inadequate cross-check by the second crew member. These causal factors appear to be the same for the SS2 accident. The Scaled Composites SS2 co-pilot came from the aviation domain, and he would have been aware of the need for effective Crew Resource Management (CRM), as defined by Helmreich et al. [5]: 'CRM can broadly be defined as the utilization of all available human, informational, and equipment resources toward the effective performance of a safe and efficient flight.' As the definition implies, CRM is concerned not with the technical knowledge and flying skills of pilots but rather with interpersonal and cognitive skills in conjunction with the aircraft (equipment), i.e. the ability to communicate, maintain situational awareness, solve problems and make effective decisions.

Designs should not rely on pilots having the 'right stuff' all of the time. Indeed, the Federal Aviation Administration Office of Commercial Space Transportation (FAA-AST) §460.15 [6] details the following human factors requirements: An operator must take the precautions necessary to account for human factors that can affect a crew's ability to perform safety-critical roles, including in the following safety-critical areas—
(a) Design and layout of displays and controls;
(b) Mission planning, which includes analysing tasks and allocating functions between humans and equipment;
(c) Restraint or stowage of all individuals and objects in a vehicle; and
(d) Vehicle operation, so that the vehicle will be operated in a manner that flight crew can withstand any physical stress factors, such as acceleration, vibration, and noise.

Note item (d) above in relation to the NTSB findings detailed in Section 3. Additionally, in terms of being able to withstand physical stress factors, the FAA-AST launch licensing requirements detail the crew requirements, including pilot experience and medical requirements [6]. However, the IAASS [7] and the Aerospace Medical Association (AsMA) [8] recommend that suborbital spaceplane pilots hold a first-class medical certificate (not a second-class certificate, as recommended by the FAA-AST) and be selected from military fast-jet programs/astronauts, i.e. pilots who have experienced high g-forces and can cope with the environment; once mature suborbital operations are established, this could be relaxed to allow other (airline) pilots to be employed.
4. Safety by design

Suborbital spaceplanes (or Reusable Launch Vehicles, in US terminology) are designed and operated using a mix of aviation and space attributes. So which design standards should be followed: a mixture of both, or are forerunners designing on the edge of innovation using their own practices, whilst being cognisant of eventual FAA-AST launch licensing requirements? This is posed as a question because the FAA-AST are NOT certifying these vehicles.

Space safety standards dictate a Design for Minimum Risk (DMR) philosophy. This includes deriving fault-tolerance, safe-life and fail-safe criteria (as also used in aviation). The FAA-AST DMR philosophy within Advisory Circular AC 437.55-1 [9] details the following Safety Precedence Sequence (illustrated in the sketch after this list):
• Eliminate hazards (by design or operation)
• Incorporate safety devices
• Provide warning devices
• Develop and implement procedures and training.
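As a toy illustration of how this precedence might be applied mechanically, the following Python sketch walks a hazard through the sequence and selects the strongest available control. The hazard name and mitigation flags are hypothetical, not drawn from any FAA-AST tooling.

```python
from dataclasses import dataclass

# Safety Precedence Sequence per AC 437.55-1, strongest control first.
PRECEDENCE = ["eliminate", "safety_device", "warning_device", "procedures_training"]

@dataclass
class Hazard:
    name: str
    available: set  # mitigation types the design team can offer (assumed)

def select_mitigation(hazard: Hazard) -> str:
    """Return the highest-precedence mitigation available for a hazard."""
    for option in PRECEDENCE:
        if option in hazard.available:
            return option
    raise ValueError(f"No mitigation identified for hazard: {hazard.name}")

# Example: early feather unlock could not be eliminated outright, but a
# safety device (e.g. a flight-computer-controlled locking pin) outranks
# relying on procedures and training alone.
feather = Hazard("uncommanded feather unlock", {"safety_device", "procedures_training"})
print(select_mitigation(feather))  # -> "safety_device"
```

The point of the ordering is that a procedural control is accepted only when no higher-precedence control is achievable, which is precisely the argument the NTSB made about the SS2 feather lock.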
Notice that in space the key term is 'hazard'. Hazards are analysed and then, as part of quantitative analysis (see Section 5), a cumulative assessment is made in order to determine whether the Target Level of Safety has been met, i.e. a top-down analysis (a sketch of such a cumulative check follows the list below). In aviation, although the term hazard is recognised, the focus is on lower-level failure conditions associated with failure modes, in order to meet safety objectives such as 1 × 10⁻⁹ per flying hour for catastrophic events, i.e. a bottom-up analysis per AC 25.1309 [10]. As part of the FAA-AST requirements for an experimental permit (14 CFR §437.55), the safety design analysis must consider the following:
(1) Identify and describe hazards, including but not limited to each of those that result from—
(i) Component, subsystem, or system failures or faults;
(ii) Software errors;
(iii) Environmental conditions;
(iv) Human errors;
(v) Design inadequacies; or
(vi) Procedural deficiencies.
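To make the top-down idea concrete, here is a minimal sketch that aggregates per-mission probabilities of independent catastrophic hazard scenarios and compares the total against a target level of safety. The scenario probabilities and the 1-in-10,000 target are illustrative assumptions only (the latter echoing the loss-of-vehicle figure discussed in Section 4.1), not values from any real analysis.

```python
# Hypothetical per-mission probabilities for independent catastrophic
# scenarios, e.g. as rolled up from fault trees into a cumulative assessment.
catastrophic_scenarios = {
    "structural failure under load": 2e-6,
    "uncommanded feather deployment": 5e-6,
    "rocket motor explosion": 1e-5,
}

TARGET_LEVEL_OF_SAFETY = 1e-4  # assumed: 1 loss of vehicle in 10,000 missions

# P(at least one catastrophe) = 1 - product of survival probabilities,
# assuming the scenarios are independent.
p_loss = 1.0
for p in catastrophic_scenarios.values():
    p_loss *= (1.0 - p)
p_loss = 1.0 - p_loss

print(f"Cumulative P(loss of vehicle) per mission: {p_loss:.2e}")
print("Target met" if p_loss <= TARGET_LEVEL_OF_SAFETY else "Target NOT met")
```

A bottom-up, aviation-style analysis would instead start from component failure modes and show that each catastrophic failure condition individually meets its per-flying-hour objective.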
The NTSB report [4] found that the designers (Scaled Composites) had not undertaken human error analysis (as a cause) in relation to decision errors and skill-based errors. The FAA-AST inspectors noted this and for PF04 provided a waiver against §437.55 (for not including human error or software fault as a causal factor). We all want the nascent industry to succeed and we want innovative designs (we don't want to 'stifle' the industry), BUT we want safety driving success – not waivers for incomplete analysis.

4.1. Key safety requirements

By understanding space and aviation requirements, the authorities and/or designers could identify key safety requirements that should be achieved in the design. For instance, a space requirement is that any inadvertent failure mode that could result in a catastrophic outcome should have 3 INHIBITS, meaning 3 separate and independent inhibits, which could be hardware (physical switches/guards etc.), software (latches) or a combination thereof. The IAASS Space Safety Manual [11], which is based on the National Aeronautics and Space Administration (NASA) and European Cooperation for Space Standardization (ECSS) standards rationalised and consolidated into one manual, states the following requirement in relation to 'Functions Resulting in Catastrophic Hazards':

A system function whose inadvertent operation could result in a catastrophic hazard shall be controlled by a minimum of three independent inhibits, whenever the hazard potential exists. One of these inhibits shall preclude operation by a radio frequency (RF) command or the RF link shall be encrypted. In addition, the ground return for the function circuit must be interrupted by one of the independent inhibits. At least two of the three required inhibits shall be monitored.
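As a hedged sketch of what this requirement implies for a design, the following Python fragment models a catastrophic function (feather actuation, say) gated behind three independent inhibits, two of which are monitored. The class and inhibit names are illustrative inventions, not an interface from the IAASS standard.

```python
from dataclasses import dataclass, field

@dataclass
class Inhibit:
    name: str
    engaged: bool = True     # an engaged inhibit blocks the function
    monitored: bool = False  # monitored inhibits report state to crew/telemetry

@dataclass
class CatastrophicFunction:
    """A function whose inadvertent operation would be catastrophic."""
    name: str
    inhibits: list = field(default_factory=list)

    def design_compliant(self) -> bool:
        # Minimum three independent inhibits, at least two of them monitored.
        return (len(self.inhibits) >= 3 and
                sum(1 for i in self.inhibits if i.monitored) >= 2)

    def can_operate(self) -> bool:
        # The function may only operate once every inhibit is removed, so no
        # single failure or single erroneous action can release it.
        return all(not i.engaged for i in self.inhibits)

feather = CatastrophicFunction("feather deployment", [
    Inhibit("lock handle (mechanical)", monitored=True),
    Inhibit("flight-computer locking pin", monitored=True),
    Inhibit("software interlock below unlock Mach number"),
])

assert feather.design_compliant()
feather.inhibits[0].engaged = False   # a single early unlock action...
print(feather.can_operate())          # -> False: two inhibits still block it
```

Against this pattern, the accident configuration effectively left a single human action standing between a locked feather and an uncommanded deployment once aerodynamic loads took over.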
Unfortunately, the Commercial Space Transportation Advisory Committee (COMSTAC), which advises the FAA-AST, gave the IAASS Suborbital Safety Guidance Manual the 'cold shoulder' [12]; the Space News article stated that COMSTAC believed (in relation to the IAASS manual) that 'it was unwise to establish safety standards through theoretical analyses rather than flight experience' (in relation to setting an Acceptable Level of Safety for Loss of Vehicle at 1 in 10,000 per mission). At the time of the 8th IAASS Conference, COMSTAC stakeholders included suborbital companies such as XCOR and Virgin Galactic.

Since the SS2 accident and the NTSB findings, Virgin Galactic have undertaken additional systems safety analysis and have implemented modifications to SS2 [13], including:
1. DESIGN: a feather locking pin, controlled by the vehicle flight computer, which prohibits pilots from unlocking the tail section early
   a. Pilots will have a mechanical override if the locking pin fails
2. PROCEDURAL: VG have decided to keep the feather locked until after the rocket engine has shut down
   a. Pilots will have three to five minutes to troubleshoot in case of problems before the feather would be needed for re-entry

5. Safety tools & techniques

There are several standard system safety analysis techniques, such as Fault Tree Analysis (FTA), Event Tree Analysis (ETA) and Failure Modes Effects & Criticality Analysis (FMECA), to name a few [14]. The FAA-AST hazard analysis AC [9] also refers to the System Safety Process AC 431.35-2A [15], which includes an exemplar Safety Program Plan (SPP); this AC dictates these standard techniques, including the FMECA. Such analysis would have identified the inadvertent failure mode resulting in a catastrophic outcome, and hence demanded further mitigation (a hypothetical FMECA entry is sketched below).
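To illustrate what a FMECA row for this failure mode might look like, here is a minimal, assumed representation in Python. Real FMECA worksheets are richer (detection means, criticality rankings and so on), and these field names and entries are the authors'-style paraphrase of the accident narrative, not Scaled's actual worksheet.

```python
from dataclasses import dataclass

@dataclass
class FmecaRow:
    item: str
    failure_mode: str
    cause: str        # note: human error is a valid cause per §437.55
    effect: str
    severity: str     # e.g. catastrophic / critical / marginal / negligible
    mitigation: str

row = FmecaRow(
    item="feather (re-entry) system",
    failure_mode="inadvertent unlock/deployment during boost",
    cause="pilot action too early (decision/skill-based error)",
    effect="feather deploys under transonic aerodynamic loads; vehicle break-up",
    severity="catastrophic",
    mitigation="independent inhibit scheme + interlock below unlock Mach number",
)

# A catastrophic severity with a credible cause should never leave the
# worksheet without a demanded mitigation and a verification trail.
assert row.severity != "catastrophic" or row.mitigation
```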
Fig. 3. FAA-AST 3-Pronged Safety Approach.

Fig. 4. FAA-AST System Safety Engineering Guide.

The FAA-AST provide the following 3-pronged approach (Fig. 3) and additional detailed methodology (Fig. 4) within AC 431.35-2A. The 3-pronged approach includes:
• Expected Casualty Analysis; this relates to analysing the risks to the public in the event of a vehicle break-up, explosion or malfunction turn, for instance
• Operating Requirements; this relates to deriving operating procedures and limitations to minimise risk to the public
• System Safety Analysis; this provides detailed guidance for vehicle design organisations in order to derive safety requirements that need to be considered as part of the design (as detailed in Fig. 4)

Note, in the middle of Fig. 4, the requirement to undertake a FMECA. Another safety technique is the Operating and Support Hazard Analysis (OSHA) [16]; here the aim is to analyse the pilot (and maintainer) procedures, which the safety engineers then review to determine whether any hazards are affected by the procedure and to identify where the human can skip steps or do steps incorrectly (a sketch of such a check follows below). The FAA OSHA guidelines state that: 'This is performed by the Contractor primarily to identify and evaluate hazards associated with the interactions between humans and equipment/systems.'

Returning to the SS2 accident, it is clear that the co-pilot performed the procedure step too early, as indicated by the red circle in Fig. 5 [17]:

Fig. 5. SS2 PF04 Flight Crew Procedures.

Note the 'verbal call' at 0.8 Mach that the co-pilot had to make whilst in this busy, stressful phase, as well as the trim stabilizer call, followed by the actual unlocking of the feather. The NTSB report notes that the co-pilot had memorised these tasks (evident from cockpit video footage) rather than reading these critical steps from the procedures card on his knee-pad – which arguably could then have been 'checked' by the pilot.
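As a rough sketch of the kind of check an OSHA-style review of a procedure card might automate, the fragment below flags any safety-critical step that a pilot could physically perform before its gating condition is met. The step names and Mach gates are paraphrased from the accident narrative, not an actual Scaled procedure definition.

```python
from dataclasses import dataclass

@dataclass
class ProcedureStep:
    action: str
    gate_mach: float          # step must not be performed below this Mach number
    guarded: bool             # True if an interlock physically prevents early action
    catastrophic_if_early: bool

steps = [
    ProcedureStep("trim stabilizer call", 0.8, guarded=False, catastrophic_if_early=False),
    ProcedureStep("unlock feather",       1.4, guarded=False, catastrophic_if_early=True),
]

# OSHA-style question: which catastrophic steps rely on the human alone to
# respect the gate, i.e. can be skipped ahead or performed early?
for step in steps:
    if step.catastrophic_if_early and not step.guarded:
        print(f"HAZARD: '{step.action}' can be performed before Mach "
              f"{step.gate_mach} with catastrophic outcome – demand an "
              "interlock or a challenge/response check")
```

Run against the PF04 card, such a review would have flagged exactly the unlock step the co-pilot performed at Mach 0.8.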
6. System safety & human factors engineering disciplines

This paper has focused on the system safety analysis that is performed (normally) in accordance with standard safety tools and techniques that address human-machine interactions (both as causes and as procedural controls). By undertaking diverse safety analysis, appropriate coverage is provided in order to identify all hazards and provide derived safety requirements (see Fig. 4) that will then be verified and validated during development.

Human Factors experts also provide valuable analysis as part of a programme. Here the focus is on human factors integration (or human factors and ergonomics [HFE]), which includes analysing the cockpit displays, for instance – anthropometry, cognitive psychology, display layouts including colours for cautions and warnings etc. – and analysing test procedures. In relation to SS2 PF04, the NTSB Human Performance Presentation noted the following stressors contributing to the accident:
• Memorization of tasks – flight test data card not referenced
• Time pressure – complete tasks within 26 seconds; abort at 1.8 Mach if feather not unlocked
• Operational environment – no recent experience with SS2 vibration and load

Fig. 6. Human Factors that affect performance [18].

The stressors noted above relate to 'human capabilities' in Fig. 6, in that the co-pilot memorized his tasks, which needed to be completed by Mach 1.8, and arguably his 'mental state' may have been affected by the 'operational environment' in the rocket phase of flight (during the transonic bubble), with vibration and noise.
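The following toy calculation shows one way such stressors are sometimes treated quantitatively in human reliability analysis: a nominal error probability is scaled by performance shaping factors, in the spirit of methods such as SPAR-H. All numbers here are invented for illustration; they are not from the NTSB analysis or any validated dataset.

```python
# Nominal probability of a skilled pilot mis-ordering a well-practised step
# (assumed figure for illustration only).
nominal_error_prob = 1e-3

# Assumed multipliers for the stressors the NTSB identified.
performance_shaping_factors = {
    "tasks recalled from memory (card not referenced)": 5.0,
    "time pressure (26-second window)": 5.0,
    "unfamiliar vibration and loads": 2.0,
}

p = nominal_error_prob
for factor in performance_shaping_factors.values():
    p *= factor
p = min(p, 1.0)  # probabilities cap at 1

print(f"Stressed error probability: {p:.3f}")  # 0.050, i.e. 50x the nominal rate
```

Whatever the exact numbers, the compounding effect is the point: stacked stressors can move a 'never expected' error into the credible range, which is why the analysis should drive the stressors out by design.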
The NTSB report [19] provides further details, including details of the cockpit layout and instrumentation, i.e. human-machine interface (or ergonomics) aspects; here the NTSB found no major contributory factors.

On projects, a problem can exist when the separate disciplines do not work together effectively, i.e. design engineers, safety engineers, human factors engineers, software engineers etc. Previous IAASS conferences have included presentations from NASA, in their goal of continuous improvement, stating that design and safety engineers are now working more closely, and then, in a separate presentation, that designers and HF engineers are working better; well, how about all disciplines working together? And how about using safety and human factors specialists with appropriate qualifications, experience and training (from the beginning)?
Fig. 7. Human Factors 5-M model.
Fig. 8. SAE ARP 4761 System Safety Process – detailing typical lifecycle (hence requirement to have safety engineer from the beginning, undertaking diverse and formal safety analysis).
7. Focusing on the Right (SAFE) Thing

The SS2 accident has highlighted that although initial suborbital pilots should originate from flight test schools and still possess similar traits to their earlier (1950s) test pilot brethren, they should be protected by the right (safe) thing through design and analysis, rather than relying on the right stuff to compensate for ineffective design and operating procedures. This section details some aspects of the right (safe) thing.

Organisational factors. The co-pilot of SS2 PF04 did not mean to make a critical mistake, and arguably contributory factors lay at the organisational level. This relates to the pilot procedures (which should be a safety net to prevent errors) and training; here the procedures were practised in the simulator, but the simulator cannot realistically replicate the transonic phase with its vibration and noise. Professor Nancy Leveson's Systems-Theoretic Accident Model and Processes (STAMP) [20] takes a holistic organisational approach: instead of defining safety management in terms of preventing component failure events, it is defined as a continuous control task that imposes the constraints (control actions) necessary to limit system behaviour to safe changes and adaptations. So here we can learn that particular focus should be spent on analysing the procedures (controls) more effectively to prevent errors – and if this means adding design steps to prevent errors, all the better. The simple answer is to always have a check-response (a feedback loop, per STAMP) for procedural steps during critical stages of the flight (or, indeed, to design the system such that minimal pilot actions/verbal calls are required), as sketched below.
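Here is a minimal sketch of such a challenge-response feedback loop, assuming invented step and crew-role names: the acting pilot may not execute a critical step until the other crew member has confirmed it and the gating condition is verified against instruments rather than memory.

```python
class ChallengeResponseError(Exception):
    pass

def execute_critical_step(action: str, gate_ok: callable, confirmed_by: str | None):
    """Execute a critical procedure step only with a closed feedback loop.

    gate_ok: callable returning True when the gating condition (e.g. Mach >= 1.4)
             is actually met per instruments, not recalled from memory.
    confirmed_by: name of the cross-checking crew member, or None if unconfirmed.
    """
    if confirmed_by is None:
        raise ChallengeResponseError(f"'{action}' attempted without cross-check")
    if not gate_ok():
        raise ChallengeResponseError(f"'{action}' attempted before gate condition met")
    print(f"{action}: executed, confirmed by {confirmed_by}")

# Hypothetical usage: the unlock call is challenged against the actual Mach reading.
current_mach = 0.8
try:
    execute_critical_step("unlock feather", lambda: current_mach >= 1.4, "handling pilot")
except ChallengeResponseError as err:
    print(f"BLOCKED: {err}")
```

Whether the 'block' is implemented in crew procedure, in software or (best) in both is exactly the safety-precedence decision of Section 4.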
Human Factors. The suborbital flight profile involves rocket-powered flight to around Mach 3, with g-forces, vibration, noise and a mix of atmospheric and space loads. This extreme interaction of man, machine and media (environment) is depicted well within the military-based Operational Risk Management 5-M model (Fig. 7), as detailed in the FAA System Safety Handbook [21]. Management can have a major influence on whether a suborbital flight results in a successful mission or a mishap (accident): it is the management (organisation) who make the decision to fly at the Flight Readiness Reviews and who decide on the skillset of the pilots, the safety engineers and the human factors engineers. A HF specialist should be employed to analyse the HFE aspects, particularly for suborbital flight. This can build on the existing military and civil knowledge base, adapted for the unique suborbital profile. Indeed, a HF specialist could employ the Human Factors Analysis and Classification System (HFACS) technique in conjunction with systems design to better understand the probability of systems failures AND (in conjunction with) human error/human factors. These aspects have been modelled within a Systems Readiness Assessment (SRA) framework [22], and in particular the results corroborate that 60–80% of aviation accidents are attributed to human error. That paper highlights 'the significance of the HF contribution to the overall steady-state system failure rate. These findings enable quantitative analyses of HF implications related to the SRA process. The results indicate there are common failure modes across a range of aviation system domains as defined within the context of the HFACS. Further work can continue to evaluate the failure rates across the disparate system types represented in the NTSB data.'
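As a crude illustration of why folding HF into an SRA-style quantitative picture matters, the sketch below combines an assumed equipment failure probability with an assumed human error probability for a mission-critical task. The figures are invented, chosen only so that the human term dominates, as the 60–80% attribution suggests it typically does.

```python
# Assumed per-mission probabilities (illustrative only).
p_equipment_failure = 2e-4   # hardware/software causes
p_human_error = 8e-4         # HFACS-classified human causes

# Mission is lost if either source produces an unrecovered failure,
# assuming independence between the two sources.
p_mission_loss = 1 - (1 - p_equipment_failure) * (1 - p_human_error)

human_share = p_human_error / (p_equipment_failure + p_human_error)
print(f"P(mission loss) = {p_mission_loss:.2e}")
print(f"Human-error share of causation = {human_share:.0%}")  # 80%
```

An assessment that quantifies only the equipment term would, on these numbers, miss the majority of the risk.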
Safety by Design. It is of no use to have a 'safety officer' employed at some point in the programme to do some 'hazard analysis' to satisfy the FAA-AST requirements. System safety engineers should be employed from the concept phase so that they can follow best practice by undertaking detailed and diverse safety analysis. By doing this, derived safety requirements will then be able to influence the design, i.e. these will then require design decisions as to acceptance or rejection (with rationale). Such requirements would include levels of fault tolerance, fail-safe and safe-life design, and design and development assurance levels. Section 4 details the (space) Safety Precedence Sequence using the Design for Minimum Risk approach. The Aerospace Recommended Practices detail the 'typical' development lifecycle from concept phase through design validation & verification (including testing) and on into operations (Fig. 8). It is imperative to have a SQEP safety engineer from the start who understands the diverse safety techniques and tools.

Safety Tools and Techniques (used by SQEP). It is of no use to have just a fault tree and a hazard log as your safety artefacts. Aviation and space best practice details diverse safety techniques (and tools) in order to identify hazards (as well as functional failure modes). Indeed, the FAA-AST have reasonable guidance, as detailed in Section 5, and specifically state that human error analysis (as a cause) should be carried out. You only know about diverse safety techniques if you have learned about them, have been trained to use them and have used them in the appropriate context, and if your work has been independently checked by experts and/or authorities, i.e. you have been involved in programmes using airworthiness/spaceworthiness standards (meaning that you know about standards and regulations). So it is vitally important that suborbital operators/designers employ SQEP safety engineers.

8. Conclusions

This paper reviewed the SS2 accident in order to highlight some of the issues associated with suborbital flights. These flights involve a rocket-powered phase, g-forces and extreme environments, and so pilots must be provided with the design and the tools (procedures) to cope with the exacting profile. The pilots must also be suitably fit (medically) and be provided with realistic training. Why? So that they do not have to rely on the 'right stuff', i.e. flying by the seat of their pants and dealing with problems on their own. By undertaking effective systems safety analysis (per best practice/guidance), the vehicle will be designed with consideration for safety (from derived safety requirements). The safety (and human factors) analysis will have included human error as a cause (as well as considering the human as a control, i.e. pilot recovery from a malfunction). The safety analysis would also have covered inadvertent operation of a function or system (by fault or incorrect human selection) and, where the outcome is catastrophic, would have derived that 3 inhibits are required.
So suborbital operators/designers should ensure that the pilots are protected by the right (safe) thing, by design and analysis, rather than relying on the right stuff.

References

[1] Safety Management Manual (SMM), Doc 9859, International Civil Aviation Organization (ICAO), second ed., Montreal, 2009.
[2] L. Lautman, P.L. Gallimore, Control of the crew-caused accident: results of a 12-operator survey, Boeing Airliner Magazine, April–June 1987, pp. 1–6.
[3] R.L. Sumwalt, Obtaining Better Compliance with Standard Operating Procedures (SOPs), NTSB Presentation, 20 September 2012.
[4] NTSB/AAR-15/02 PB2015-10545, Aerospace Accident Report: In-Flight Breakup During Test Flight, Scaled Composites SpaceShipTwo, N339SS, Near Koehn Dry Lake, California, October 31, 2014.
[5] R.L. Helmreich, J.R. Klinect, J.A. Wilhelm, Models of threat, error, and CRM in flight operations, in: Proceedings of the Tenth International Symposium on Aviation Psychology, The Ohio State University, Columbus, OH, 1999, pp. 677–682.
[6] FAA Code of Federal Regulations, Title 14, Chapter III, Subchapter C, Part 460, Human Spaceflight Requirements. Source: Docket No. FAA-2005-23449, 71 FR 75632, Dec. 15, 2006, unless otherwise noted.
[7] A. Quinn, A.A. Yepez, M. Klicker, D. Howard, A. Del Bianco, C. Chavagnac, C. Campbell, Suborbital safety guidelines manual update, in: Space Safety is No Accident: The 7th IAASS Conference, Springer, June 2015, p. 243.
[8] J. Vanderploeg, M. Campbell, M. Antuñano, J. Bagian, E. Bopp, G. Carminati, J. Charles, R. Clague, J. Clark, J. Gedmark, R. Jennings, Suborbital commercial space flight crewmember medical issues, Aviat. Space Environ. Med. 82 (4) (2011) 475–484.
[9] FAA-AST, Advisory Circular AC 437.55-1, Hazard Analyses for the Launch or Reentry of a Reusable Suborbital Rocket Under an Experimental Permit, April 2007.
[10] FAA, Advisory Circular AC 25.1309-1A, System Design and Analysis, 1988.
[11] IAASS-ISSB-S-1700-Rev-B, Space Safety Standard, Commercial Human-Rated System, Requirement 201.2.2, March 2010.
[12] J. Foust, International suborbital safety proposal gets cold shoulder in US, Space News, May 19, 2014. http://spacenews.com/40615international-suborbital-safety-proposal-gets-cold-shoulder-in-us
[13] http://news.discovery.com/space/private-spaceflight/new-virgin-galactic-spaceshiptwo-to-debut-160219.htm
[14] SAE ARP 4761, Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment. Available from: www.sae.org/technical/standards/ARP4761
[15] FAA-AST, Advisory Circular AC 431.35-2A, Reusable Launch and Re-entry Vehicle System Safety Process, July 2005.
[16] FAA, FAA-DI-SAFT-105, Operating & Support Hazard Analysis.
[17] NTSB, Human and Organisational Issues, Human Performance Presentation, supporting [4], 2015.
[18] FAA, Aviation Maintenance Technician Handbook, Chapter 14, Figure 14-1, Feb 2012.
[19] NTSB, Human Factors and Organizational Issues, Human Performance Presentation, 2015.
[20] N. Leveson, A New Accident Model for Engineering Safer Systems, Massachusetts Institute of Technology paper, 2004.
[21] FAA System Safety Handbook, Chapter 15: Operational Risk Management, December 30, 2000.
[22] J.A. Erjavac, R. Iammartino, J. Fossaceca, An evaluation of human factors failure data in relation to system readiness assessment, in: Proceedings of the World Congress on Engineering and Computer Science 2016, Vol. II, WCECS 2016, October 19–21, San Francisco, USA, 2016.