Command and data management system (CDMS) of the Philae lander







A. Balázs, A. Baksa, H. Bitterlich, I. Hernyes, O. Küchemann, Z. Pálos, J. Rustenbach, W. Schmidt, P. Spányi, J. Sulyán, S. Szalai, L. Várhalmi

Wigner (former KFKI) Research Centre for Physics, Budapest, Hungary
Space and Ground Facilities Co. Ltd., Budapest, Hungary
Max-Planck Institute for Solar System Research (MPS), Göttingen, Germany
Finnish Meteorological Institute (FMI), Helsinki, Finland
Deutsches Zentrum für Luft- und Raumfahrt (DLR), Cologne, Germany

Corresponding author: A. Balázs, Tel.: +36 1 392 2747, E-mail: [email protected]


Abstract

Article history: Received 16 September 2015; Received in revised form 9 December 2015; Accepted 13 December 2015. DOI: http://dx.doi.org/10.1016/j.actaastro.2015.12.013

The paper covers the principal requirements, design concepts and implementation of the hardware and software for the central on-board computer (CDMS) of the Philae lander in the context of the ESA Rosetta space mission, including some technical details. The focus is on the implementation of fault tolerance, autonomous operation and operational flexibility by means of specific linked data structures and code execution mechanisms that can be interpreted as a kind of object oriented model for mission sequencing. © 2015 Published by Elsevier Ltd. on behalf of IAA.

Keywords: Critical embedded systems; Fault tolerance; Hybrid architectures; Object oriented model for mission sequencing; Patch-working technique; On-board autonomy and operational flexibility

1. CDMS in relation to Philae subsystems and instruments

1.1. Context of operation

After a ten-year journey across the Solar System and many complicated manoeuvres, the Rosetta spacecraft [1] with the Philae lander [2] attached to it smoothly approached a small (2–4 km in diameter) celestial body, comet CG/67P. Furthermore, the spacecraft executed additional fine manoeuvres to fly a multitude of low and high altitude orbits around the comet, mapping its shape and surface in detail never seen before, and has continued to observe it for more than a year


since then. The Rosetta spacecraft and the Philae lander are equipped with scientific instruments that delivered a wealth of new knowledge about the comet, in addition to spectacular pictures (Fig. 1). The Philae maintenance/calibration and comet science operations were partitioned into 1+3 mission phases:

Philae mission phase | Duration | Energy sources
Cruise | ~10 years | max. 53 W by Rosetta
Separation-Descent-Landing (SDL) | 7 h | solar generators and ~1300+100 Wh by primary and rechargeable battery
First Comet Science sequence (FSS) | ~60 h | solar generators and ~1300+100 Wh by primary and rechargeable battery
Long Term Science operations (LTS) | max. ~14 months | solar generators and ~100 Wh by rechargeable battery

The landing of Philae on the surface of the comet was initiated by the Rosetta spacecraft 500 million km away from Earth, at a distance of 22.5 km from the comet, on 12 November 2014. Upon touching down – after a ballistic descent phase of 7 h – the lander could not attach itself to the comet due to an unexpected systematic failure in the dual redundant anchoring subsystem and a malfunction of the non-redundant active descent subsystem. The lander, however, remained mechanically and functionally intact even during a triple bouncing period over the comet. It also kept radio contact alive with the Rosetta spacecraft, which served as a relay station between Philae and Earth. The direct measurements made by the instruments of the Philae lander on the surface of the comet provided significant scientific results. The probe's primary energy sources delivered energy for doing science on the surface for roughly 60 h in this first run. Thereafter Philae fell into a state of hibernation due to the disadvantageous environmental conditions at its final landing site, ~3 Astronomical Units (AU) away from the Sun. Philae woke up after about six months of hibernation at the end of April 2015, and its central on-board computer autonomously entered the Long Term Science mode, driven by solar power and the rechargeable battery.

Event | Date | Distance | Illumination period | Solar power | Compartment temperatures
Hibernation | 2014 Nov | ~3 AU | ~1.1 h | < 5 W | ≪ -50 °C @ sunrise
Wake-up | 2015 April | ~1.8 AU | 3 h | > 20 W | > -50 °C @ sunrise

The principal requirements, design concepts and implementation of the hardware and software for the central on-board computer of the Philae lander are described below.

1.2. Basic technical and operational requirements

No single failure at any point inside the central on-board computer (CDMS) should lead to functional degradation. Extending tolerance against single-point failures towards some critical parts of the entire Philae system was also a design requirement. This had to include control and redundancy management of the
– dual modular-redundant radio receiver and transmitter units,
– various touchdown detection methods,
– anchoring subsystems,
– power distribution subsystem,
– thermal control units,
– Philae internal command/data interfaces.

The hardware architecture and software support determined whether multiple failures would lead to partial degradation or to complete loss of the Philae lander. The Electrical Separation and Communication System (ESS) aboard the Rosetta spacecraft controlled Philae separation and was in charge of establishing and maintaining the bidirectional telecommunication link with the Philae lander. This link mostly used redundant, opto-coupled umbilical channels during the cruise phase, but exclusively radio communication after Philae separated. Rosetta, specifically its ESS, served as a bidirectional relay station between the Philae operations control centre on ground and the Philae lander. The structure of Philae telecommand (TC) and telemetry (TM) packets and their time-stamping rules had to be adapted to those of the Rosetta spacecraft. In order to simplify the requirements on the CDMS hardware design (e.g. mass-memory) and software support, and to facilitate establishment and maintenance of the bidirectional handshaking telecommunication link between the Philae lander and the Rosetta orbiter, both TC and TM packets were of constant length (TC = 41, TM = 141 words, including packet header and command/data field). In addition to TM packets carrying housekeeping parameters of all active Philae units or science data collected from the Philae instruments, so-called event TM packets were also to be generated. These were to enable the Philae operators to keep track of ongoing on-board processes and operational mode changes for a posteriori Philae system checkout and a priori controlling purposes.

For periods when there was no telecommunication link between Philae and Rosetta, formatted TM packets were to be stored in the FIFO-like mass-memories. After re-establishment of the link, the contents of the mass-memories were to be dumped onto the telemetry channel, and CDMS was then to enter the real-time telemetry mode. The starting times and durations of the radio visibility windows between the Rosetta spacecraft and the Philae lander depended on such factors as the flight track executed by the Rosetta spacecraft, the 12.4 h rotation period of the comet, the landing site, and the orientation of Philae on the comet. Although these time windows were nominally calculable, Philae – in particular CDMS – had to be prepared for deviations from the predictions. We clearly anticipated that the operators might have limited, sporadic and time-restricted opportunities for intervention. For safety reasons, we also had to ensure that the stored mission sequencing database had a consistent structure throughout. In addition, the long two-way signal travel time (in the range of 2*20 min) between the ground station and the Rosetta spacecraft almost ruled out the possibility of on-line checkouts and immediate intervention, particularly during comet operations.
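The constant TC and TM packet lengths (41 and 141 16-bit words) lend themselves to simple fixed-size buffer handling on a 16-bit processor. The sketch below illustrates this in C; the split into a 5-word header and a payload is an invented assumption for illustration, since the text only states that the totals include the packet header and the command/data field.

    #include <stdint.h>

    /* Fixed-length packet buffers as 16-bit word arrays, matching the constant
       TC/TM packet lengths quoted in the text (41 and 141 words). The split into
       a 5-word header and a payload is an illustrative assumption only. */
    #define TC_WORDS  41u
    #define TM_WORDS  141u
    #define HDR_WORDS 5u

    typedef struct {
        uint16_t header[HDR_WORDS];
        uint16_t payload[TC_WORDS - HDR_WORDS];
    } tc_packet_t;

    typedef struct {
        uint16_t header[HDR_WORDS];
        uint16_t payload[TM_WORDS - HDR_WORDS];
    } tm_packet_t;

    /* Constant sizes simplify FIFO-style storage in the mass-memories and the
       handshaking link protocol, since no length negotiation is needed. */
    _Static_assert(sizeof(tc_packet_t) == TC_WORDS * sizeof(uint16_t), "TC size");
    _Static_assert(sizeof(tm_packet_t) == TM_WORDS * sizeof(uint16_t), "TM size");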



Fig. 1. Service system, subsystems and scientific instruments aboard the Philae lander [1,2].

Since energy available for the science programmes from the primary and rechargeable batteries and the solar panels was rather limited, especially far from the Sun, the periods of falling back into stand-by mode and waiting for operator instructions had to be minimised. CDMS also had to be prepared for the Long Term Science phase, driven solely by solar power and the rechargeable battery, requiring control and power flow management for fully autonomous battery operations. In order to meet these requirements – especially the latter two – CDMS had to be made capable of performing on-board autonomous control of on-comet Philae operations and of supporting operational flexibility, particularly during the comet science phases. Software re-programmability and hardware reconfigurability were also important design aspects.

2. Functional subunits of CDMS

2.1. Data processing units (DPUs) and processor (CPU)

The central on-board computer (CDMS) of Philae is a critical embedded system in the Philae lander [2], subject to the very strict flight hardware requirements imposed on the Rosetta mission [1]. The electronic components used in CDMS – including a total of 13 highly integrated programmable gate arrays (FPGAs) – had to withstand a high radiation dose during the more than 10-year flight and enable operation within a wide temperature range (thermal-vacuum tested between -60 and +70 °C) at low power consumption (super-low-power, low-power and nominal modes implemented between 1.5 and 3.5 W). In order to

meet these expectations, CDMS was built around a microprocessor type with a long track record in space, the RTX2010, manufactured by Harris Semiconductor. It is a highly reliable, radiation-hardened stack machine processor that implements the Forth programming environment in hardware. Subroutine calls and returns take only one processor cycle and the interrupt latency is consistently very low, only four cycles. These features make it suitable for real-time applications. The 16-bit processor also integrates timers and an interrupt controller, and provides some support for multitasking.

2.2. Code and data memories in the dual modular redundant DPUs

■ 32 kbyte read-only memory (PROM) for the compressed bootstrap, with fall-back software of reduced functionality, including full telecommunication control to receive upgraded (primarily acting) software versions and copy/store them into on-board reprogrammable memory (EEPROM). Telecommands – including those for managing the uplink of upgraded executable code – are received by both hot-redundant DPUs.
■ 4*64 kbyte Hamming code protected EEPROM for the executable code of the primarily acting software version, parameter tables and the mission sequencing database (see later).
■ 4*64 kbyte Hamming code protected RAM for software code, data and variables.

Hamming-code error correction protects the DPU memories against radiation-caused single event upsets (SEUs). Upon booting, the primarily acting software version is copied from EEPROM into RAM for execution.
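As an illustration of the single-error-correcting principle behind the Hamming protection of these memories, the sketch below encodes one data byte into a 12-bit Hamming code word and corrects a single flipped bit on decode. The actual code word width and memory organisation used in CDMS are not specified here; this is a generic textbook scheme, not the flight implementation.

    #include <stdint.h>

    /* Generic single-error-correcting Hamming(12,8) code over one data byte.
       Bit positions are 1-based; positions 1, 2, 4 and 8 hold parity bits,
       the remaining eight positions hold the data bits. */

    static uint16_t hamming12_encode(uint8_t data)
    {
        uint16_t code = 0;
        int d = 0;
        for (int pos = 1; pos <= 12; pos++) {       /* place data bits */
            if (pos == 1 || pos == 2 || pos == 4 || pos == 8)
                continue;
            if (data & (1u << d))
                code |= (uint16_t)(1u << (pos - 1));
            d++;
        }
        for (int p = 1; p <= 8; p <<= 1) {          /* even parity per group */
            int parity = 0;
            for (int pos = 1; pos <= 12; pos++)
                if ((pos & p) && (code & (1u << (pos - 1))))
                    parity ^= 1;
            if (parity)
                code |= (uint16_t)(1u << (p - 1));
        }
        return code;
    }

    static uint8_t hamming12_decode(uint16_t code)
    {
        int syndrome = 0;
        for (int p = 1; p <= 8; p <<= 1) {          /* recompute parities */
            int parity = 0;
            for (int pos = 1; pos <= 12; pos++)
                if ((pos & p) && (code & (1u << (pos - 1))))
                    parity ^= 1;
            if (parity)
                syndrome |= p;
        }
        if (syndrome >= 1 && syndrome <= 12)        /* single event upset: fix it */
            code ^= (uint16_t)(1u << (syndrome - 1));
        uint8_t data = 0;
        int d = 0;
        for (int pos = 1; pos <= 12; pos++) {       /* extract data bits */
            if (pos == 1 || pos == 2 || pos == 4 || pos == 8)
                continue;
            if (code & (1u << (pos - 1)))
                data |= (uint8_t)(1u << d);
            d++;
        }
        return data;
    }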




2.3. Oscillators

Every single functional subunit, including each DPU, has its own local clocking oscillator. The DPUs also receive a permutative feed of the two RTC subunits' clock signals to provide an independent clock signal for their watchdog hardware. A qualifier circuit selects one of the two clock signals for the local processor.

2.4. Power distribution board and latching current limiters (PWR and LCLs)

CDMS is supplied by 2*2 power converters whose outputs are OR'd together pair-wise by diodes to build two internal power buses for the internal CDMS hardware subunits. All of these have dedicated latching current limiters to protect the common power buses against short circuits. Internal subunits with very low power consumption are powered through simple protective resistors.

2.5. Emergency hardware telecommand decoders (ETCDs)

The last line of defence in unexpected emergencies is a top-level mechanism involving triple hot-redundant high-priority hardware telecommand decoders. These provide some basic hardware and software configuration control such as:
– Enable/disable write into the protected EEPROM area while the primarily acting CDMS software version is burned.
– Forced run of the fall-back PROM software.
– Processor reset.
– Forced primary DPU selection.
– Reactivation of the latching current limiters of the DPUs.
– Turn off a DPU, enabling the operators to override the autonomy of the self-repairing CDMS core (see later) and arbitrarily isolate an irretrievably failed DPU from the system.

2.6. On-board time counters (RTCs)

Philae on-board time is kept by dual hot-redundant read-write hardware counters with the required accuracy for TM packet time stamping and other functions that use absolute timing. The time counters are protected at FPGA level against single event upsets (SEUs) caused by radiation in space. The Rosetta ESS unit issues specific TC packets to synchronise Philae on-board time to that of the Rosetta spacecraft. The software maintains a solar-power driven day counter for periods without absolute time synchronisation between Philae and Rosetta, e.g. during the survival mode of the Long Term Science phase (see later).

2.7. Mass-memories (MMs)

Formatted TM packets are stored in dual hot- or cold-redundant FIFO-like mass-memory units with 2*16 Mbit (2*8727 TM packets) Hamming code protected RAM working areas. The contents of the RAM working area can be saved into EEPROM memories of the same size to enable the mass-memories to be used in power-cycling (cold-redundant) mode or to handle temporary power losses of the complete CDMS. The latter was likely to occur, for example, in survival mode during the Long Term Science phase (see later). The parallel-to-serial converted outputs of the mass-memory boards are directly multiplexed to the radio transmitter (TX) units. The software bypasses the mass-memories when feeding the real-time telemetry path.

2.8. Philae internal command/data interfaces/channels (CIUs)

These are dual hot- or cold-redundant bidirectional serial links with two sets of individual command (TC), data (TM) and clock (CLK) signals to/from every single Philae unit. To facilitate failure isolation at Philae system level, the usual bus-structured internal communication interface was rejected in favour of star-structured channels between the DPUs and the Philae units. The data transmission rate of 32 kbit/s was twice that of the radio link between Philae and Rosetta (16 kbit/s). This speed was a compromise. It minimised the software performance demanded from the Philae units when receiving commands and managing data flow towards the DPU, while maintaining optimum use of the 16 kbit/s raw telemetry channel rate. We defined a common command and data exchange protocol for all Philae units. This had to cope with both intelligent units (instruments) and non-intelligent units (mostly subsystems with no processor and software). It enables CDMS to provide services such as
– on-board time and Philae/CDMS system status distribution to Philae units (e.g. request undue, quota exhausted, cometary day/night, link established, etc.),
– individual or broadcast, "waterfall-like" (direct or time-tagged) or buffered (transfer of dedicated telecommand buffers requested by units, then serviced by CDMS) commanding of Philae units,
– individual or grouped housekeeping parameter access from any Philae subsystem or instrument,
– request-service based science data collection from Philae instruments,
– support of data exchange and event flagging between Philae units via CDMS using the so-called backup-RAM mechanism,
– flagging of operation completion by any instrument.

3. Operating system and application tasks

The software had to meet the following low-level requirements (not including the actual functionality, which is described later):

■ Real-time behaviour: low granularity in task switching and predictable response times to external events, especially in critical mission phases (e.g. at comet touchdown).
■ Flexibility and re-programmability: this criterion is self-evident if we consider the 10-year flight time [1,2]. The software was not fully ready at launch and was updated several times during the mission.
■ Fault tolerance.
■ Small memory footprint.



Based on our team's experiences with similar projects in the past [3–8,10,11], we decided to develop a purpose-built proprietary operating system from scratch [12], for the following reasons:

■ Full control of the design allowed us to utilise the unique features of the RTX2010 processor. The whole on-board software was written in Forth, the native language of the processor. Forth results in compact code which can be patched easily during runtime.
■ The proprietary hardware required specialised drivers for the interfaces to the instruments, the communication links, the background data memory (mass-memory) and the on-board timers.
■ We could keep within the memory limitation by restricting features and services to those essential for our purposes. The entire operating system occupied ~4.5 kbyte.

The operating system [12] provides pre-emptive multitasking scheduling, and ensures that no task can hold the Central Processing Unit (CPU) resources longer than a fixed time defined by the real-time clock tick of 4 ms. The stack-oriented Forth system uses two stacks: data and return. The RTX2010 processor has integrated data and return stacks, each 256 words long, and provides the option of splitting them into multiple independent pieces to support multitasking. We utilised this feature to enable eight tasks to run in parallel on two individually selectable, mission phase dependent priority levels. Stack overflow and underflow events are handled in interrupt routines and so do not lead to memory corruption. To further reduce the danger of memory overwriting, we decided not to use dynamic memory allocation.

The Application Program Interface (API) of the operating system is implemented via software interrupt system calls which provide I/O device reservation and release, read/write, task management (activation, suspension, termination, priority setting), software timer handling, synchronised inter-task communication, on-board time setting and other functions. The application program is divided into tasks, and the distribution of the application-level functions among the tasks is static. Each application task has a special role, as follows:

■ Task-1: Establishment and maintenance of the bidirectional telecommunication link with the Rosetta spacecraft
■ Task-2: Experiment control, commanding and telemetry data collection from the Philae instruments and subsystems
■ Task-3: Autonomous operational sequencing of the Philae instruments
■ Task-4: Power and thermal control of the Philae lander
■ Task-5: Philae anchoring and touch-down control
■ Task-6: Self-diagnostics and redundancy management of functional subunits
■ Task-7: CDMS internal memory test routines and housekeeping packet generation
■ Task-8: Idle task for processor load and run-time analysis
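The flight software and its operating system were written in Forth for the RTX2010; the C fragment below is only a rough outline of the idea of a static task table with two mission-phase dependent priority levels, serviced from the 4 ms tick. Task readiness handling and the actual context switch are omitted, and all names are illustrative assumptions.

    #include <stdint.h>

    /* Illustrative sketch (not the flight code): a static table of the eight
       application tasks, each assigned to one of two mission-phase dependent
       priority levels. A 4 ms tick would pre-empt the running task; only the
       "pick next ready task" step is shown here. */

    typedef enum { PRIO_LOW = 0, PRIO_HIGH = 1 } prio_t;

    typedef struct {
        const char *name;
        prio_t      prio;        /* set per mission phase, e.g. raised for Task-5 */
        int         ready;       /* 1 if the task has work to do */
    } task_t;

    static task_t tasks[8] = {
        { "TM/TC link",           PRIO_HIGH, 1 },
        { "Experiment control",   PRIO_LOW,  1 },
        { "Sequencing",           PRIO_LOW,  1 },
        { "Power/thermal",        PRIO_LOW,  1 },
        { "Anchoring/touchdown",  PRIO_LOW,  0 },  /* raised to high before touchdown */
        { "Self-diagnostics",     PRIO_LOW,  1 },
        { "Memory test/HK",       PRIO_LOW,  1 },
        { "Idle",                 PRIO_LOW,  1 },  /* always ready: load measurement */
    };

    /* Called from the 4 ms timer interrupt: round-robin among ready tasks,
       preferring the high-priority level. */
    static int schedule_next(int current)
    {
        for (prio_t p = PRIO_HIGH; ; p = PRIO_LOW) {
            for (int i = 1; i <= 8; i++) {
                int idx = (current + i) % 8;
                if (tasks[idx].ready && tasks[idx].prio == p)
                    return idx;
            }
            if (p == PRIO_LOW)
                break;                 /* nothing ready at either level */
        }
        return current;                /* keep running the current task */
    }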

Fig. 2. Block diagram and redundancy control scheme of CDMS.




4. Fault tolerant hardware architecture and software support

4.1. Design concept

Fault tolerance requires fault recognition, isolation and system recovery functions implemented in hardware structures and, most importantly, specific software mechanisms. If a limited region [2], in particular the intelligent core of a complex system (the processing units; fault tolerant multiprocessor [9]), can be built to be fault tolerant with self-repairing capabilities, the extension of fault tolerance towards the entire system may be simply a question of "intention". This means that such an extension is constrained only by the time and cost of developing and validating the required additional software. The extension involves specific software coding rules, techniques and even tricks, such as self-diagnostic routines, optimised algorithms to cover all conceivable failures, and context data integrity and code consistency checking techniques and mechanisms. In contrast to the intelligent core, the rest of the system (interfaces, functional subunits) does not necessarily need to be provided with self-repairing capabilities, especially where there are no critical real-time requirements to be met and dynamic recovery is allowed. The self-repairing, fault tolerant and intelligent core can reliably test, select and finally activate any other intact functional subunit, provided it has its embedded spare counterpart. This concept was the basic design guideline for the central on-board computer of Philae.

The severe technical constraints on the on-board computer (e.g. mass 1.3 kg, volume 1 dm3, complex interconnecting harness within CDMS and on the Philae system level, low power consumption) left no realistic alternative to a hybrid hardware architectural design and activity sharing between hardware and software. This software-supported fault tolerance contrasts with triple modular redundant fault masking systems [9]. In this hybrid architecture:

■ Only the vital functional subunits are triple modular redundant, for example, the logic circuits for primary DPU selection and the high priority hardware telecommand decoders.
■ Others are dual hot-redundant, such as the DPUs, the on-board time counters and the Philae internal serial communication interfaces.
■ Yet others are dual cold-redundant, such as the mass-memories and the RX/TX radio receiver/transmitter units.

In order to reduce the complexity of the interconnecting harness, serial communication interfaces and lines were implemented between the processor units (DPUs) and all other redundant functional subunits of CDMS (see Fig. 2).

4.2. Fault recognition, isolation and system recovery schemes

Simple, distributed, triple hot-redundant hardware logic decides on the current role (primary vs. secondary) of the two identical, dual hot-redundant processor units (see Fig. 2). Three signals (DPU2/1_primary_A,B,C) are produced by this logic for majority voters that are accommodated in each functional subunit. They select the serial command line driven by the current primary DPU. Both DPUs can command and acquire data from any other functional subunit of CDMS, in any combination of main and redundant counterparts. Both provide constant control of the Philae lander, but only the current primary DPU has effective control. A serial bidirectional crosslink is established between the two DPUs to keep them in sync (Fig. 3). Should the current primary DPU fail, the current secondary DPU takes over the primary role. To enable the handling of transient errors, there is no limit on the number of role-changes.
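The 2-of-3 voting applied to the DPU2/1_primary_A,B,C select signals is performed in hardware in each functional subunit; its logic reduces to the usual majority expression, shown here in C only for clarity.

    #include <stdint.h>

    /* 2-of-3 majority vote over the three primary-DPU select signals (A, B, C).
       A single corrupted select signal is out-voted, so every functional subunit
       still follows the serial command line driven by the current primary DPU. */
    static inline uint8_t majority3(uint8_t a, uint8_t b, uint8_t c)
    {
        return (uint8_t)((a & b) | (a & c) | (b & c));
    }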

Fig. 3. Stack of CDMS boards, a DPU and RTC/ETCD board.



The two DPUs are completely identical, without any distinguishing features. The CDMS hardware architecture does not permit recovery after a DPU role-change at the level of the fine structures (instructions or application tasks) of the running software, as is the case in some other fault tolerant systems. DPU role-change and follow-on recovery therefore had to be adapted to the CDMS hardware architecture, and take place at the level of elementary sequencing items, which are discussed in detail below. Examples are "ignore event and continue", "repeat a step vs. jump backward/forward in a sequence" and "activate a recovery/fall-back sequence". A hardware–software watchdog technique supports fault recognition in the local DPUs. Numerous further software self-diagnostic routines and mechanisms support fault recognition, isolation and recovery in CDMS and at all points where it is specifically required at the level of the Philae lander. We will look at the most prominent of these.

4.2.1. Primarily acting (EEPROM) vs. fall-back (PROM) on-board software

The PROM software contains the compressed bootstrap and a fall-back on-board software with reduced functionality and telecommunication control to receive the executable code of upgraded – primarily acting – software versions. The received software code is copied into the write-protected area of the local EEPROM in both DPUs. In order to prevent the primarily acting on-board software from being overwritten in case of a software crash, EEPROM burning in the software area needs to be explicitly enabled/disabled via high-priority hardware decoded telecommands. The primarily acting on-board software version is double-stored in the local EEPROM memory of a DPU, and both executable code images are checked for integrity prior to copying a non-corrupted one into RAM for execution. This makes 2*2 copies of the same full software code in the two DPUs. If one becomes corrupted, another is executed, and if necessary, a self-initiated DPU switchover takes place. Only if all EEPROM software versions in both DPUs have become corrupted is the PROM fall-back software activated, an event of extremely low probability.
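The double-stored software image selection at boot can be sketched as follows; the checksum scheme, the structure names and the fall-back hooks are assumptions for illustration, since the text only states that both images are integrity-checked before a non-corrupted one is copied into RAM.

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch of the boot-time selection between the two EEPROM copies of the
       primarily acting software. Names, the checksum scheme and the "DPU
       switchover / PROM fall-back" handling are illustrative assumptions. */

    typedef struct {
        const uint16_t *code;     /* start of stored image (16-bit words) */
        size_t          words;    /* image length in words */
        uint16_t        checksum; /* stored reference checksum */
    } sw_image_t;

    static uint16_t checksum16(const uint16_t *p, size_t n)
    {
        uint16_t s = 0;
        while (n--)
            s = (uint16_t)(s + *p++);       /* simple additive checksum (assumed) */
        return s;
    }

    /* Returns 0 on success (image copied to RAM), -1 if both copies are corrupted,
       in which case the caller would trigger a DPU switchover or, ultimately,
       run the PROM fall-back software. */
    static int boot_primary_software(const sw_image_t img[2],
                                     uint16_t *ram, size_t ram_words)
    {
        for (int i = 0; i < 2; i++) {
            if (img[i].words <= ram_words &&
                checksum16(img[i].code, img[i].words) == img[i].checksum) {
                for (size_t w = 0; w < img[i].words; w++)
                    ram[w] = img[i].code[w];   /* copy EEPROM image into RAM */
                return 0;
            }
        }
        return -1;
    }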

4.2.2. Radio receiver (RX) units

Several software mechanisms take care of turning on the RX units with changeable periodicity: RX1 and/or RX2 cyclically and alternately as long as no bidirectional telecommunication link can be established, or both permanently. A hardware mechanism was added to these to raise the margin of safety; it turns the RX units on and off with fixed periodicity, RX1 or RX2 cyclically and alternately. As it did with the umbilical channels to the ESS telecommunication unit aboard Rosetta, the CDMS software keeps checking whether meaningful protocol control patterns are received by a particular RX unit. For periods with an established telecommunication link, the currently active RX unit is kept constantly powered on by the software. The reception of meaningful protocol control patterns is the only criterion for fault recognition, or rather intact RX unit selection, and so the method covers all of the failure containment points (hardware, harness, overload, etc.) in the chain of RX unit control.

4.2.3. Radio transmitter (TX) units

Once meaningful protocol control patterns are received, one of the TX units is powered on and CDMS starts to establish the bidirectional hand-shaking telecommunication link with Rosetta/ESS. If the link cannot be established within a particular time-out, CDMS makes an attempt with the redundant TX unit, then with the main one again, and so forth. The fault recognition/intact unit selection method is the same as for RX, and with the same strengths.

4.2.4. Bunched, checksum-protected uplink of sets of telecommands

An aspect of Philae operations specific to the comet phase is the possibility of unstable and sporadic telecommunication links between Rosetta and Philae when the spacecraft happens to fly on the edge of Philae's radio visibility cone/corridor. For the sake of safety, the whole mission sequencing database must have a consistent structure. This is discussed in detail below. A lossless uplink can be achieved by building checksum-protected bunches of TCs and up-linking them as many times as necessary for the desired level of safety.

4.2.5. "Blind" commanding in TC-backup mode

CDMS can receive telecommands in the so-called TC-backup (unidirectional) mode without having to establish a bidirectional link with ESS. This CDMS command-reception capability is designed specifically for critical and emergency situations, such as insufficient energy to turn on a radio transmitter unit, as required for a bidirectional link. More generally, when the absence of telemetry reception over a long period conflicts with reasonable predictions, TC-backup mode may be used to attempt to command the lander to restore its nominal communication capability on a trial-and-error basis.

4.2.6. On-board time counters

CDMS maintains an on-board software time counter as a local reference for credibility checking of the time values provided by the dual redundant on-board hardware time counters. The hardware on-board time counter found to be "true" is selected as the accurate on-board time provider.

4.2.7. Handling of basic off-nominal events

Three basic off-nominal events are handled by CDMS:
– DPU switchover,
– temporary loss of power to CDMS or the complete Philae system,
– overload of the Power Distribution Subsystem power converters supplying Philae units.
Follow-on system recovery is common for all three and takes place at the level of elementary sequencing items, such as "ignore event and continue" and/or "repeat a step vs. jump backward/forward in a sequence" and/or "activate a recovery/fall-back sequence" (see later).
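A possible shape of the receiver-alternation logic described in Section 4.2.2 is sketched below; the tick period, the helper functions and the variable names are placeholders, not flight parameters.

    #include <stdint.h>

    /* Sketch of the software RX selection logic: while no bidirectional link is
       established, RX1 and RX2 are powered alternately with a changeable period;
       once meaningful protocol control patterns arrive on one receiver, that unit
       is kept permanently on. Period value and I/O helpers are illustrative. */

    enum rx_unit { RX1 = 0, RX2 = 1 };

    extern void rx_power(enum rx_unit rx, int on);          /* drive the RX LCLs */
    extern int  rx_sees_protocol_patterns(enum rx_unit rx); /* pattern detector  */

    static enum rx_unit active_rx = RX1;
    static uint32_t     ticks_on_current = 0;
    static uint32_t     alternation_period = 250;           /* placeholder, ticks */

    void rx_selection_step(int link_established)
    {
        if (link_established || rx_sees_protocol_patterns(active_rx)) {
            rx_power(active_rx, 1);            /* keep the working receiver on */
            ticks_on_current = 0;
            return;
        }
        if (++ticks_on_current >= alternation_period) {      /* try the other RX */
            rx_power(active_rx, 0);
            active_rx = (active_rx == RX1) ? RX2 : RX1;
            rx_power(active_rx, 1);
            ticks_on_current = 0;
        }
    }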




4.2.8. Philae internal command/data interfaces/channels

The CDMS software constantly assesses the functionality of the internal command/data interfaces/channels in each Philae subsystem and instrument. If the main channel to a unit proves to be faulty, the redundant channel is selected.

4.2.9. Mass-memory

The dual redundant FIFO-like mass-memory boards can be individually enabled/disabled from ground. They are not subject to an explicit on-board software appraisal mechanism. Specific software mechanisms save the mass-memory RAM contents into EEPROM at regular intervals and prior to system shut-down, and retrieve the saved contents from EEPROM into the RAM working area after power on. This prepares the system for accidental power losses and for controlled shut-down of the complete Philae system, including CDMS.

4.2.10. Overload protection

The CDMS software constantly checks whether any of its currently active internal hardware functional subunits (including the RX/TX units) has become unusable due to overload. If so, attempts are made to reactivate it autonomously on board or via ground command.

4.2.11. TM data quota checking mechanism

Three optionally selectable TM data quota checking mechanisms have been implemented in the software: "strict", "soft" and "no-quota". The aim of the first two is to prevent the CDMS system resources and services from being hogged by a single Philae unit. The soft-quota checking mechanism, if activated, allows science data to be accepted from a particular Philae unit even if its quota is exhausted, in which case the affected unit is pushed down the list of priorities as a "penalty". This provides dynamic priority management of Philae units. Due to the rather limited storage capability and the FIFO-like structure of the mass-memories, as well as the occasional unpredictability of telecommunication windows, deciding which quota checking mechanism to activate during each mission phase is not always straightforward. The soft-quota checking mechanism, however, can significantly simplify Philae operations and resource planning while providing failsafe operation at Philae system level.
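A sketch of the soft-quota bookkeeping: data are still accepted when a unit's quota is exhausted, but the unit is pushed to the end of the polling priority list as a penalty. The table layout and function names are assumptions for illustration, not the CDMS data model.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t  unit_id;
        uint32_t quota_words;      /* allowed TM volume for this mission phase */
        uint32_t used_words;       /* TM volume already collected */
    } unit_quota_t;

    /* priority[0] is polled first; demote moves one unit to the back. */
    static void demote_unit(uint8_t priority[], size_t n, uint8_t unit_id)
    {
        size_t i = 0;
        while (i < n && priority[i] != unit_id)
            i++;
        if (i == n)
            return;                         /* unit not in the list */
        for (; i + 1 < n; i++)
            priority[i] = priority[i + 1];
        priority[n - 1] = unit_id;
    }

    /* Returns 1 if the packet is accepted (always, in soft-quota mode). */
    int soft_quota_accept(unit_quota_t *u, uint8_t priority[], size_t n,
                          uint32_t words)
    {
        u->used_words += words;
        if (u->used_words > u->quota_words)
            demote_unit(priority, n, u->unit_id);   /* "penalty": lowest priority */
        return 1;
    }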

4.2.12. Anchoring

The dual redundant anchoring subsystem was designed to shoot two harpoons after touchdown on the comet. This was handled by a hard-coded sequence.

4.2.13. Active vs. passive state-of-charge monitoring of the secondary battery

The CDMS software compares the results of the two monitoring methods provided by the power distribution subsystem hardware. It selects the more accurate active method if the difference falls into a predefined interval, and the passive method otherwise.

Fig. 4. Mission sequencing scheme.




5. Operational flexibility and on-board autonomy

5.1. Telecommanding

Specific telecommand (TC) structures have been defined for up-linking:

■ Tables of operational parameters to be stored in CDMS, such as the hardware configuration table, basic mission and mode setting parameters, unit administration parameters, the unit priority order, set-points of the thermal control units and Long Term Science mode control parameters.
■ The mission sequencing database to be stored in CDMS, such as the queue of absolute time-tagged commands, elementary sequencing items, mission sequencing objects, and the list of relative time-tagged commands (see below).
■ Telecommands addressed to any Philae unit.

The list of relative time-tagged TCs may contain commands addressed either to the Philae lander units (instruments and subsystems) or to the processor unit (CDMS) itself. Regardless of where the TCs are addressed, they can be executed either immediately or in an absolute or relative time-tagged manner.

5.2. Mission sequencing scheme [12]

A high degree of operational flexibility is ensured by a three-level scheme for facilitating autonomous operation and sequencing of scientific measurements, executing system level nominal activities, and handling off-nominal events. In order to fit these requirements to the embedded nature of the on-board computer, we defined specific linked data structures and code execution mechanisms (see Fig. 4). These can be interpreted as a kind of object oriented model for mission sequencing. The three levels are as follows:

Level-1: Construction of elementary sequencing items (instances of a specific class stored in CDMS memory) with

■ individual attributes and member variables, including the active lander units to be powered on, selectable power converters, unit priorities, data collection rates, data quotas, and a pointer attribute to the list of relative time-tagged telecommands (TCs) to be executed in conjunction with this item. Further attributes define the conditions under which a particular elementary sequencing item should be deactivated (i.e. terminated upon time-out, occurrence of nominal or off-nominal events, or upon initiation of a particular lander unit after its operation has been completed), and the means of carrying on the sequencing (e.g. step to the next item or conditional vs. unconditional jump to another item),
■ nominal and off-nominal event handler routines,


■ common methods, either "instance" or "static" in Java terminology. These cover such areas as housekeeping and science data collection and the execution of system relevant nominal activities (e.g. primary battery conditioning, secondary battery charging, etc.). The behaviour of the methods depends on the individual attributes and member variables of particular objects. A special feature is provided by a set of stored commands addressed to the processor called "method constructor TC primitives" (see below). This feature allows new methods to be added during runtime, even during flight (in-flight methods).

Level-2: Construction of mission sequencing objects (instances of another class stored in CDMS memory), mostly – but not necessarily – as series of elementary sequencing items.

Level-3: Construction of a complex mission timeline. A timeline is composed of parallel-running mission sequencing objects. In practice, we need to differentiate as-planned from as-run timelines. The as-run (i.e. finally executed) timeline is not necessarily predictable a priori; it may change due to nominal or off-nominal event terminated elementary sequencing items, condition dependent branching in mission sequencing objects, or the initiation of individual methods of elementary sequencing items.

The elementary sequencing items, the mission sequencing objects, and the list and contents of relative time-tagged commands – including method constructor TC primitives addressed to the processor – are all changeable and are pre-stored in the EEPROM memory of the on-board computer by means of arbitrarily structured, checksum protected tables. These tables are linked together by appropriate pointers, actually identifiers of their elements. Because these elements are not necessarily inserted in the tables incrementally, the software uses a search process to identify them individually, e.g. for activation, execution or deletion. The zero reference time of the relative time-tagged commands (TCs) attached to a particular elementary sequencing item is the moment of activation of that sequencing item. Upon activation, the attached TCs are taken from the list/table of relative time-tagged TCs, converted from relative into absolute time-tagged ones, and put into the queue of absolute time-tagged TCs for execution at their due time.

The object oriented data structures, relative time-tagged TCs and in-flight methods are not inherent parts of the hard-coded on-board core software. The elements of these structures/tables can be deleted individually or group-wise and replaced by new ones via standalone telecommands. In order not to lose them upon temporary power losses, they are stored in reprogrammable EEPROM memory. The list of relative time-tagged TCs may contain commands addressed either to the Philae lander units or to the CDMS processor unit itself. The latter feature allows the user to create in-flight methods during run-time in any elementary sequencing item, at any point of the sequences.
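In rough C terms, ignoring the fact that the flight implementation stores these structures as checksum-protected Forth tables in EEPROM, the Level-1 and Level-2 elements and the relative-to-absolute time-tag conversion might look like the sketch below. All field names, widths and the termination-condition encoding are illustrative assumptions.

    #include <stdint.h>

    typedef enum {                     /* how an elementary sequencing item ends */
        TERM_TIMEOUT,
        TERM_NOMINAL_EVENT,
        TERM_OFF_NOMINAL_EVENT,
        TERM_UNIT_COMPLETED
    } term_cond_t;

    typedef struct {                   /* Level-1: elementary sequencing item    */
        uint16_t    item_id;
        uint16_t    active_units;      /* bit mask of lander units to power on   */
        uint8_t     unit_priority[8];  /* polling priority order                 */
        uint16_t    data_rate;         /* data collection rate                   */
        uint16_t    data_quota;        /* TM quota per unit                      */
        uint16_t    rel_tc_list_id;    /* -> list of relative time-tagged TCs    */
        term_cond_t terminate_on;      /* when to deactivate the item            */
        uint16_t    next_item_id;      /* step / conditional jump target         */
    } eseq_item_t;

    typedef struct {                   /* Level-2: a mission sequencing object   */
        uint16_t object_id;
        uint16_t item_ids[32];         /* series of elementary sequencing items  */
        uint16_t item_count;
    } mseq_object_t;

    /* Zero reference of the relative time-tags is the activation of the item:
       on activation each attached TC is converted to an absolute time tag and
       queued for execution at its due time. */
    static uint32_t to_absolute_tag(uint32_t activation_obt, uint32_t relative_tag)
    {
        return activation_obt + relative_tag;   /* on-board time units */
    }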




A specific set of method constructor TC primitives has been defined to realise such in-flight methods by providing functions for
– memory read/write,
– obtaining data from any Philae unit,
– executing arithmetic and logical operations on local variables (+, -, *, /, =, >, <, &, ||),
– condition checking and conditional TC execution,
– timing, branching, looping,
– code overlaying to make more efficient use of the limited code memory space,
– subroutine execution,
– constructing and activating new mission sequencing objects.
The relative time-tagged TCs for subroutine execution accommodate executable machine code with an integrity checking mechanism. Such subroutines may also contain references to/calls of subroutines that are pre-coded in the core software.
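Taken together, these primitives behave like the instruction set of a small stored-program interpreter layered on top of the core software. The toy interpreter below conveys the idea; the opcodes, operand layout and the 16-entry variable pool are invented for illustration and do not correspond to the actual TC primitive encoding.

    #include <stdint.h>

    /* Toy interpreter for a few "method constructor"-style primitives: set/add on
       local variables, conditional skip, relative jump and sending a TC to a
       Philae unit. All encodings are assumptions. */

    enum { OP_END, OP_SET, OP_ADD, OP_SKIP_IF_LT, OP_JUMP_REL, OP_SEND_TC };

    typedef struct { uint8_t op, a, b; int16_t imm; } primitive_t;

    extern void send_unit_tc(uint8_t unit, int16_t arg);   /* TC to a Philae unit */

    void run_primitives(const primitive_t *prog, int len)
    {
        int16_t var[16] = { 0 };                 /* local variables of the method */
        int pc = 0;
        while (pc >= 0 && pc < len) {
            const primitive_t *p = &prog[pc++];
            switch (p->op) {
            case OP_SET:        var[p->a] = p->imm;                           break;
            case OP_ADD:        var[p->a] = (int16_t)(var[p->a] + var[p->b]); break;
            case OP_SKIP_IF_LT: if (var[p->a] < var[p->b]) pc++;              break;
            case OP_JUMP_REL:   pc += p->imm;     /* relative to next instr. */ break;
            case OP_SEND_TC:    send_unit_tc(p->a, var[p->b]);                break;
            case OP_END:
            default:            return;
            }
        }
    }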

5.3. A patch-working technique for operational flexibility and on-board autonomy

When the various functions of all method constructor TC primitives are looked at together, they strikingly resemble the primitives – i.e. fundamental instructions – of a "programming language". Method constructor TC primitives are suitable for introducing an additional software layer on top of the CDMS core software. The TC primitive "Construct/activate a new mission sequencing object" can in practice also be called "Start a patch-working sequence". A particular patch-working sequence has its own unique identifier and is composed of a series of time-tagged TCs to execute. These may be any kind of TCs, addressed either to Philae units or to the CDMS processor alone, including any method constructor TC primitives. A particular patch-working sequence can be started either upon ground command or at any point within any running sequence. The number of user-defined patch-working sequences is limited only by the memory available for stored TCs. Hard-coded patch-working sequences are also inserted in the CDMS core software at many points. These have the form of either single-shot or regularly launched sequences, but are merely place-holders and thus by default have no attached time-tagged TCs. Upon necessity, these place-holder sequences can be updated with arbitrary content in the form of relative time-tagged TCs for execution. The scheme (see Fig. 4) has provided a very high degree of flexibility and the ability to establish on-board autonomy in terms of operational reorganisation and rescheduling, both in the software development phase and after landing on the comet. Eventually, a single master patch-working sequence can manage conditional and/or time-out driven start and termination of any pre-stored sequences and sub-sequences. This allows the software to easily adapt its behaviour to unpredictable operational and environmental conditions on the comet.

6. Philae on-comet specific jobs of CDMS

6.1. Philae Separation-Descent-Landing control

Upon Philae separation from the Rosetta spacecraft, CDMS automatically entered the Separation-Descent-Landing (SDL) control phase.

Fig. 5. A: Examples of simulated, typical solar power profiles at various Philae landing sites, orientations and heliocentric distances (3, 2.5, 2 AU). B: In-situ measured power profiles of the solar panels/generators, and reconstructed Sun trajectories after Philae wake-up at the end of April 2015.



Numerous subsystems and most of the Philae instruments were working and operated in parallel during the SDL phase. This put a high load on the on-board computer. Prior to touching down on the comet, CDMS entered the "touchdown listening mode" by raising the priority of the respective application task to high and focusing on the operation of vital subsystems [2] such as the Landing Gear with its touch-down sensors, the ADS subsystem with its hold-down thrusters and the Anchoring subsystem. A hard-coded sequence was in charge of shooting the two harpoons of the dual redundant Anchoring subsystem after touching down on the comet. In addition to a touch-down sensor provided by the Landing Gear subsystem, another touchdown detection method required significant software support on the CDMS side: a specific algorithm which kept evaluating whether Philae's legs had touched the comet, the condition for allowing the comet science operations to start.

6.2. Autonomous power flow control during the Long Term Science phase

Autonomous battery operations control and power flow management were of particular relevance for the Long Term Science (LTS) operational phase on the comet. The primary aim of implementing the LTS survival mode of the CDMS software was to charge the battery to a rational level. The challenge was to achieve this autonomously, even under extreme environmental conditions, contradictory technical constraints and operational requirements. The main factors to consider were:

■ Thermal cycling of Philae on the comet between extremely high and low temperatures, both on a diurnal basis and in the long term as the comet moved in its elliptical orbit around the Sun. As in-situ measurements showed, Philae's internal compartment temperatures rose increasingly above -50 °C upon sunrise after Philae's wake-up, with a temperature gradient of ~0.7 °C per cometary day.
■ The profile and magnitude of solar illumination and power (see Fig. 5) depended on several factors, such as the finally achieved landing site of Philae, its orientation, the local obstacles and global horizon around it, the rotational kinematics, seasonal changes of the Sun's path and the heliocentric distance of the comet. Most of these were not predictable in advance – especially not in the software design phase – and could change over wide ranges.

Power from the illuminated solar generators (Psolar)
– is first fed into the vital subsystems, i.e. the power distribution subsystem, CDMS and the thermal control units (Pplatform = PPSS + PCDMS + PTCU),
– the rest of the incoming solar power is directed – in order of priority – to the battery heaters (Pheaters) and/or to the RX units (PRX) and/or the mass-memories (PMM),
– and the leftover (Preserved) flows either into the shunt regulators or – if battery charging is enabled by the software – charges the battery (Pcharge).


Pcharge = Preserved = Psolar - (Pplatform + C1*Pheaters + C2*PRX + C3*PMM)

where C1, C2 and C3 are under software control with possible values 1 (on) or 0 (off). The basic principle of control is to prevent frequent power bus collapses in the power distribution subsystem when the incoming solar power is low and the battery is fully depleted, by not routing power to any consumer until there is sufficient surplus/reserved solar power to do so. Prior to turning on an additional consumer, the software shall make sure that Preserved is higher than the power demand of that consumer. In response to turning on a consumer, Preserved – obeying physical laws at the given Psolar – is automatically reduced. Since the in-situ value of Psolar also changes, Preserved needs to be regularly measured/recalculated in the main control loop. The power consumptions of the consumers are pre-stored in a parameter table of CDMS.

Battery charging requires the battery temperature to be above 0 °C. The software therefore provides a hysteresis-like heating and charging control of the battery. Once the upper temperature limit of the hysteresis window is reached, the battery heaters are turned off until the battery cools down to the lower limit. The gradient of battery heating – when both heaters are on – is about 10 °C/h. The battery heaters should be powered only by the solar generators, in other words only if Preserved is higher than Pheaters, so that the battery charging vs. discharging budget does not go negative. Consequently, in LTS survival mode Philae must be completely turned off for the period of the cometary nights. A software controlled shutdown process saves the mass-memory contents and turns off the entire Philae system at sunset on the comet. Philae is powered on again at each sunrise. The power control software for the mass-memories, and especially for the RX and TX units, given the vital importance of communication capability, checks the battery state of charge as well as the solar power availability.

A reasonable way to make battery charging more efficient – especially in periods of low incoming solar power – is to reduce the power demand of CDMS (PCDMS). To achieve this, optionally selectable and battery-state dependent low-power modes of CDMS have been implemented, as follows:

■ Low-power mode: The current secondary DPU turns itself off regularly for a pre-programmed period of time. Note that a similar effect can be achieved by turning off a DPU by means of a hardware decoded telecommand.
■ Super-low-power mode: The current primary DPU turns itself off regularly for a pre-programmed period of time, and in return the current secondary DPU becomes the primary one, which also turns itself off somewhat later. Consequently, the DPUs are powered on alternately, each for a short period of time; in other words, neither DPU consumes power most of the time.
■ Battery charging in rescue mode: After enabling battery charging in the power distribution subsystem, CDMS turns itself off completely for the rest of the cometary day.
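The survival-mode decisions described above can be condensed into a small control sketch: the reserve computed from the power balance gates each optional consumer, and battery heating/charging follows a hysteresis window above 0 °C. Only the power balance relation and the 0 °C charging constraint are taken from the text; the window limits, structure and function names below are illustrative assumptions.

    #include <stdint.h>

    /* LTS survival-mode power sketch. The reserve left after the platform and the
       enabled optional consumers is what may charge the battery; a consumer is
       only switched on if the current reserve exceeds its tabulated demand, and
       battery heating/charging follows a hysteresis window above 0 degC. */

    typedef struct {
        float p_solar, p_platform;          /* measured in the main control loop */
        float p_heaters, p_rx, p_mm;        /* tabulated consumer demands        */
        int   c_heaters, c_rx, c_mm;        /* software switches C1, C2, C3      */
    } power_state_t;

    static float reserved_power(const power_state_t *s)
    {
        return s->p_solar - (s->p_platform
                             + s->c_heaters * s->p_heaters
                             + s->c_rx      * s->p_rx
                             + s->c_mm      * s->p_mm);
    }

    /* Turn a consumer on only when the present reserve covers its demand. */
    static int may_enable(const power_state_t *s, float demand)
    {
        return reserved_power(s) > demand;
    }

    /* Hysteresis control: heat towards the upper limit, charge only above 0 degC,
       stop heating until the battery cools back to the lower limit. The window
       limits are placeholders. */
    static void battery_heat_and_charge(power_state_t *s, float batt_temp,
                                        int *charging_enabled)
    {
        const float T_LOW = 1.0f, T_HIGH = 5.0f;    /* placeholder window, degC */

        if (batt_temp >= T_HIGH)
            s->c_heaters = 0;                       /* cool down to lower limit */
        else if (batt_temp <= T_LOW && may_enable(s, s->p_heaters))
            s->c_heaters = 1;                       /* heaters fed by solar only */

        *charging_enabled = (batt_temp > 0.0f) && (reserved_power(s) > 0.0f);
    }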




Once the required battery state of charge is reached or the incoming solar power is high enough, software control can be passed over to another LTS mode for doing science, either upon ground command or under autonomous on-board control. The flexible patch-working technique allows fully autonomous on-board execution of a complex science programme over a long period of cometary days and nights, even when there is no radio visibility between the Philae lander and the Rosetta spacecraft. The software can autonomously switch between LTS survival mode and LTS science mode according to the battery state of charge, which can be checked regularly (e.g. by a method constructor TC primitive). This permits stepwise execution of the science programme.

7. Conclusions and lessons learnt

CDMS performed well during all three comet phases (SDL, FSS, LTS), although several non-critical and some more severe problems (failures, hardware–software interaction problems, electromagnetic compatibility disturbances) were encountered during the 10-year flight prior to Philae separation. All of these problems at CDMS level were discovered in time and overcome by software workarounds and/or hardware configuration changes. The operation of one mass-memory board, however, became somewhat erratic during the Long Term Science phase. Otherwise, there were no unexpected incidents inside CDMS during the comet phases.

Several critical hardware failures were encountered in some vital Philae subsystems under CDMS control during comet operations. Some affected the touchdown on the comet and others appeared gradually during the Long Term Science (LTS) phase. The latter were likely due to the extremely harsh environmental conditions the lander had to endure over many months on the comet. The built-in redundancy management of some Philae subsystems (e.g. the RX/TX telecommunication units), the "blind" CDMS commanding mode and the flexible patch-working mechanism for reprogramming the CDMS proved to be very useful in critical situations. During sporadic radio visibility periods, housekeeping data from vital Philae subsystems were received on ground via the Rosetta spacecraft. By analysing these data, the team was able to assess the hardware status of the Philae subsystems and make attempts to restore the full complexity of Philae's science capabilities.

The experience has reminded us of a typical problem in modular redundant systems: the negative – and potentially fatal – impact of systematic design, module or component errors. One method of preventing this is design diversity, involving different hardware and/or software control for redundant modules, i.e. "N-version" programming. Good identification of fault-containment regions in a system of high complexity is also an important step when preparing the design concept.

CDMS related work-packages and acknowledgements

CDMS was designed, developed, manufactured, tested, validated on ground and operated in space and on the comet CG/67P in a Europe-wide cooperation as follows:

– CDMS software requirements definition and development performed by: Wigner (former KFKI) Research Centre for Physics, Budapest, Hungary; Space and Ground Facilities Co. Ltd., Budapest, Hungary. Financed by: Hungarian Space Office through the ESA PRODEX Office; Wigner (former KFKI) Research Centre for Physics, Budapest, Hungary.
– CDMS hardware design performed by: Wigner (former KFKI) Research Centre for Physics, Budapest, Hungary; Space and Ground Facilities Co. Ltd., Budapest, Hungary. Financed by: Max-Planck Institute for Solar System Research (MPS), Göttingen, Germany; Wigner (former KFKI) Research Centre for Physics, Budapest, Hungary; Hungarian Space Office through the ESA PRODEX Office.
– CDMS hardware components and CDMS manufacturing financed by: Max-Planck Institute for Solar System Research (MPS), Göttingen, Germany.
– Mass-memory hardware components, design/manufacturing performed and financed by: Finnish Meteorological Institute (FMI), Helsinki, Finland.
– CDMS software validation on the Philae Ground Reference Model and Philae flight operations control performed by: Deutsches Zentrum für Luft- und Raumfahrt (DLR), Lander Control Centre, Cologne, Germany.
– Philae science operations planning supported by: Centre National d'Études Spatiales (CNES), Philae Science Operations and Navigation Centre, Toulouse, France.

References

[1] K.H. Glassmeier, H. Boehnhardt, D. Koschny, E. Kührt, I. Richter, Rosetta mission: flying towards the origin of the Solar System, Space Sci. Rev. 128 (2007) 1–21, http://dx.doi.org/10.1007/s11214-006-9140-8.
[2] J.-P. Bibring, H. Rosenbauer, H. Boehnhardt, S. Ulamec, A. Balazs, J. Biele, et al., Rosetta lander Philae: system overview, Space Sci. Rev. 128 (2007), http://dx.doi.org/10.1007/s11214-006-9138-2.
[3] R.Z. Sagdeev, F. Szabó, G.A. Avanesov, P. Cruvellier, L. Szabó, K. Szegő, A. Abergel, A. Balázs, I.V. Barinov, J.L. Bertoux, J. Blamont, M. Detaille, E. Demarelis, G.N. Dulnev, G. Endrőczy, M. Gárdos, M. Kanyó, V.I. Kostenko, V.A. Krasikov, T. Nguyen-Trong, Z. Nyitrai, I. Rényi, P. Rusznyák, V.A. Shamis, B. Smith, K.G. Sukhanov, S. Szalai, V.I. Tarnapolsky, I. Tóth, G. Tsukanova, B.I. Valnicek, L. Várhalmi, Yu.K. Zaiko, S.I. Zatsepin, Ya.L. Ziman, M. Zsenei, B.S. Zhukov, Television observation of comet Halley from VEGA spacecraft, Nature 321 (1986) 262–266.
[4] R.Z. Sagdeev, G.A. Avanesov, V.I. Kostenko, V.A. Krasikov, V.A. Shamis, K.G. Sukhanov, V.I. Tarnopolsky, Yu.K. Zaiko, S.I. Zatsepin, Ya.L. Ziman, B.S. Zhukov, F. Szabó, L. Szabó, K. Szegő, A. Balázs, G. Endrőczy, M. Gárdos, M. Kanyó, Z. Nyitrai, I. Rényi, P. Rusznyák, B. Smith, S. Szalai, I. Tóth, L. Várhalmi, M. Zsenei, P. Cruvellier, M. Detaille, T. Nguyen-Trong, A. Abergel, J.L. Bertoux, J. Blamont, E. Demarelis, G.N. Dulnev, G. Tsukanova, B.I. Valnicek, TV experiment in VEGA mission: strategy, hardware, software, in: Proceedings of the 20th ESLAB Symposium on the Exploration of Halley's Comet, Heidelberg, 27–30 October 1986, ESA SP-250, December 1986, pp. 289–294.
[5] I. Horváth, Gy. Korga, I. Kovács, Z. Pálos, P. Rusznyák, S. Szalai, A. Steiner, L. Várhalmi, Highly reliable data acquisition system for space research in the Phobos mission, in: Proceedings of the XIII International Symposium on Nuclear Electronics, Varna, Bulgaria, September 12–18, 1988.
[6] A. Baksa, A. Balázs, Z. Pálos, S. Szalai, L. Várhalmi, Embedded computer system on the Rosetta lander, in: DASIA 2003 – Data Systems in Aerospace, Prague, ESA SP-532, 2–6 June 2003, pp. 250–256.


[7] S. Szalai, A. Balazs, A. Baksa, G. Tróznai, Rosetta lander software simulator, in: Proceedings of the 57th International Astronautical Congress, Valencia, Spain, 2006 (on DVD of the 57th IAC).
[8] J. Biró, A. Balázs, S. Szalai, Transputer based onboard computer, in: Workshop on Computer Vision for Space Applications, Antibes, France, September 22–24, 1994, ISBN: 2-7261-0811, C-151.
[9] Web-link: http://www.cs.ucla.edu/~rennels/article98.pdf.
[10] A. Balázs, S. Szalai, L. Várhalmi, A multipurpose computer for Mars space missions, in: Proceedings of the Fifth IASTED International Conference on Reliability and Quality Control, Lugano, 1989, pp. 132–143.


[11] J. Balázs, I. Biró, I. Hernyes, S. Horváth, A. Szalai, V. Grintchenko, G. Kachirine, S. Kozlov, V. Medvedev, Michkiniouk, L. Marechal, Locomotion system of the IARES demonstrator for planetary exploration, Space Technol. 17 (3/4) (1997) 173–182.
[12] A. Balázs, A. Baksa, H. Bitterlich, I. Hernyes, O. Küchemann, Z. Pálos, J. Rustenbach, W. Schmidt, P. Spányi, J. Sulyán, S. Szalai, L. Várhalmi, The central on-board computer of the Philae lander in the context of the Rosetta space mission, in: Proceedings of the Conference on Reliable Software Technologies – Ada-Europe, Madrid, LNCS 9111, 2015, pp. 18–30, http://dx.doi.org/10.1007/978-3-319-19584-1_2.
