Rapid prototyping of real-time control laws for complex mechatronic systems: a case study


The Journal of Systems and Software 70 (2004) 263–274 www.elsevier.com/locate/jss

M. Deppe a, M. Zanella a, M. Robrecht b, W. Hardt c

a University of Paderborn, Mechatronics Laboratory Paderborn (MLaP), Pohlweg 98, Paderborn D-33098, Germany
b iXtronics Ltd., Technologiepark 11, Paderborn D-33100, Germany
c University of Paderborn, Informatics and Process Laboratory (IPL), Warburger Str. 100, Paderborn D-33098, Germany

Received 3 December 2002; accepted 11 January 2003

Abstract

Rapid prototyping of complex systems embedded in even more complex environments raises the need for a layered design approach. Our example is a mechatronic design taken from the automotive industry; it illustrates the rapid-prototyping procedure for real-time-critical control laws. The approach is based on an object-oriented structuring that allows not only central control units but also the distributed control units needed by today's designs. The control laws are implemented by way of a hardware-in-the-loop simulation that is refined in steps, with the simulated part being reduced at each step. On the lower level, common platforms, such as FPGAs and microcontrollers, or specialized platforms can be instantiated.

© 2003 Elsevier Inc. All rights reserved.

Keywords: Mechatronics; Distributed controller realization; Rapid prototyping; Hardware-in-the-loop simulation

1. Introduction

Demands on the information processing in mechatronic systems are steadily increasing. This is particularly evident in the automotive industry, where more and more aggregates with mechatronic functions are integrated and interlinked. The result is a complex network of control units in the car for the basic control of car dynamics, such as ESP (electronic stability program), ABS (anti-lock braking system), ABC (active body control) (Becker et al., 1996) and ASR (anti-spin regulation).


With respect to these requirements, we expound a concept for the rapid prototyping of real-time control laws that is based on decentralized hardware. Our concept allows design and test of interlinked control units. In addition, and as an alternative to microcontrollers, FPGAs (Kasper and Reinemann, 2000; Cumplido-Parra et al., 2000) and asynchronous architectures, e.g. FLYSIG dataflow processors (Hardt and Kleinjohann, 1998a), can be employed as hardware for the controller realization. This reprogrammable hardware is the basis for a subsequent use in mass production in the shape of an ASIC. Realization of control laws on this hardware offers the following advantages:

• Controller sampling rates beyond the microcontroller range can be realized.
• A modular assembly of systems with inexpensive modular information processing is made possible.
• On account of the modularity, the resulting hardware architecture is easily scalable.

The paper is organized as follows: In Section 2 the basic concept of rapid prototyping of mechatronic systems is presented. Section 3 describes our object-oriented software integration platform for distributed simulation. Details about the corresponding modular hardware platforms for controller prototyping can be found in Section 4. Finally, Section 5 details some application examples to demonstrate the applicability of the rapid-prototyping concept. For illustration we use the example of a model vehicle called X-mobile.

1.1. The X-mobile

The X-mobile (Zanella et al., 2001a), an autonomous vehicle at the scale of 1:8, is the visible result of a co-operation between different departments of the University of Paderborn. The most important design challenge of the X-mobile is the modularity and flexibility of the system. The vehicle has four fully independent electrical wheel drives, steering, and an active suspension. Fig. 1 displays the mechanical construction.

Fig. 1. The X-mobile.

The development of the vehicle offered a chance to validate novel approaches in the research on mechatronic systems. Different strategies and control laws can be tested with the X-mobile, e.g., the effects of interconnecting complex controllers such as ESP and ABC. The X-mobile serves as a testing ground for real-time software and hardware.

2. Rapid prototyping of complex mechatronic systems

The development of a mechatronic product requires interdisciplinary work. Integration of computation and design certainly is a precondition for an efficient overall product functionality. For the development of mechatronic systems an integrative software environment is needed that makes co-operative design, simulation, and optimization possible. To this end, the Mechatronics Laboratory Paderborn (MLaP) has developed CAMeL (Computer-Aided Mechatronics Laboratory) (Meier-Noe and Hahn, 1999). Its modelling tool CAMeL-View (Hahn, 1999) allows models to be built on the topological and mathematical levels with the help of object orientation. It offers different derivation formalisms for multibody systems as well as analysis, visualization, and optimization tools.

The software system allows models from different mechatronic domains (mechanics, hydraulics, information processing) to be built. CAMeL-View provides features for hierarchical-modular modelling based on an object-oriented class concept for the description elements. To simplify the modelling and keep it apart from numerical processing, three different levels of model description languages for mechatronic systems were defined at the MLaP:

(1) O-DSS (objective-dynamic system structure) on the structural level specifies the system topology by descriptions proper to the various technical disciplines. For modelling on this level the graphical CAMeL-View workbench is employed.
(2) By using automatic derivation formalisms, the mathematical level O-DSL (objective-dynamic system language) can be generated from O-DSS; it describes the system dynamics by means of explicit ordinary differential equations.
(3) When sorting the equations for evaluation and transforming them to single-assignment code, one reaches the DSC (dynamic system code) level. DSC is a compact yet hardware-independent description of the system behavior. For the DSC description an ANSI-C code generator is available.

In the design of mechatronic systems, roughly four stages can be distinguished: modelling, analysis, controller synthesis, and hardware-in-the-loop simulation (HILS) to test the system in the lab. In HILS the step-wise transition from the pure model representation of the mechatronic system to actually mounted mechanical, hydraulic and electrical components takes place. By using a commercial, centralized HILS hardware platform when running a testbed, one can handle the problems concerning numeric limitations, real-time conditions, I/O coupling, etc. But what about further steps towards a prototype of the distributed ECU hardware (electronic control units) for subsequent mass production? One has to deal with linked processing nodes, each with low computing power. Additionally, the questions about number, type, linkage, and location of the ECUs arise.

At the MLaP, a modular-hierarchical structuring concept is being developed which helps to organize the information-processing structure of mechatronic systems (cf. Section 2.1). Its basic idea is that the topology of the distributed ECUs is oriented according to the physical (aggregate) structure of the mechatronic system. This supports rapid prototyping with its step-wise modular exchange of model parts against physical aggregates because of the matching topology of model and mechatronic system. To preserve the model structure for analysis, off-line simulation and HILS, we use a software-based object-oriented integration platform (cf. Section 3).


After modelling the system, individual model parts can be mapped to designated (software) objects (see step 1 in Fig. 2). This mapping is hardware-independent and hierarchical. Without further manual configuration, non-distributed PC-based off-line simulations for test and debugging are now possible (step 2, No. 1). When software objects are mapped to hardware nodes and the technical coupling is configured (cf. Section 3.3), a HILS can be performed after step 2. Every hardware configuration uses identical C code as a description of the model parts and the data dependencies for implementation. Only for the reprogrammable parts are VHDL specifications provided (cf. Section 3.2).

Fig. 2. Structure-preserving rapid prototyping of distributed controllers.

In view of a rapid prototyping of controllers, the hardware-in-the-loop stage of the system design is also structured into several parts (Stolpe et al., 2000): after off-line simulation with the help of a PC, an HILS test will usually be performed on a powerful HILS hardware platform (see (2) in Fig. 2). Here the simulated part (mechanics, hydraulics, etc.) is still very large. In this process the controller prototypes are implemented on the basis of floating-point variables so that numerical issues can be neglected. During this HILS stage the ranges of the input, output and state variables of the controller can be detected in view of the subsequent process of scaling the control law for fixed-point or integer targets.

In further steps, ever more mechatronic aggregates are constructed physically and therefore require their own local information processing. It is precisely at this point that our modular and decentralized controller realization sets in. In addition, specific HILS hardware, microcontrollers, and reprogrammable hardware modules are employed (see (3) and (4) in Fig. 2). The modular and decentralized controller realization is supported by our rapid-prototyping platform (cf. Section 4.1). Its aim is to provide a modular prototyping of ECUs. For a rapid-prototyping implementation of synchronous modules, FPGAs (field-programmable gate arrays) are commonly used. In some cases the average performance can be greatly increased by asynchronous architectures. Such an architecture is the FLYSIG dataflow processor (cf. Section 4.2), which is employed for the prototyping of periodically iterated algorithms.

2.1. Modular-hierarchical structuring of mechatronic systems

At the lowest level of mechatronic systems, there are the mechatronic function modules (MFM). An MFM consists of a passive mechanical frame, sensors, actuators, and discrete-continuous information processing. The MFM concept combines the idea of information encapsulation, developed in software engineering, with that of the aggregate, a well-established term in engineering. An MFM is assigned a certain task within a mechatronic system, usually the task of controlling its dynamic behavior. It has physical and informational interfaces for interconnection. An example of such an MFM in the X-mobile is the local steering of a single wheel (see local steering in Fig. 3). A structured instantiation of MFMs leads to hierarchical MFMs (see wheel module).

Fig. 3. Hierarchical structure of the X-mobile with its mechanical and informational couplings.


A grouping of hierarchical MFMs with a shared superordinated function is called a mechatronic function group (MFG) in the following. The common feature of this group is the informational coupling of several MFMs and the lack of any actuator or mechanical structure (see steering, drive and suspension control).

The next hierarchical level is that of the autonomous mechatronic system (AMS). An AMS consists of a passive mechanical frame, sensors, and information processing. It has no actuators of its own but uses the mechanically coupled MFMs for actuation. The X-mobile with all its substructures is such an AMS. Information processing at the AMS level, like the human–machine interface, manages the autonomy of the system.

When more than one AMS is to be employed in a co-operating system, the necessary co-ordination has to be realized on the information-processing level. If, for instance, several vehicles of the X-mobile type are going to explore an area, it will be indispensable to co-ordinate the reference values for steering and drive by exchanging global data (e.g., by using a wireless LAN). Those AMSs that operate in a co-operating system make up a so-called crosslinked mechatronic system (CMS) (Honekamp et al., 1997; Lückel et al., 2001). At the CMS level, discrete information processing prevails in comparison to (continuous) controllers. An example of a CMS is the decentralized management of autonomous vehicles at intersections as described in Lückel et al. (1999).

This hierarchical-modular structuring approach increases the composability of a mechatronic system. Extensions or changes at a certain hierarchical level will only affect subsystems.
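Purely as an illustration of the hierarchy just described, the levels can be pictured as nested aggregates; the C type and field names below are our own assumptions and are not taken from CAMeL-View or IPANEMA.

/* Hypothetical sketch of the MFM/MFG/AMS hierarchy (names are assumptions). */
typedef struct {
    const char *name;               /* e.g. "local steering"                   */
    int n_sensors, n_actuators;
    void (*control_step)(void);     /* local discrete-continuous control law   */
} MFM;

typedef struct {                    /* MFG: informational coupling only,       */
    const char *name;               /* no actuators or mechanics of its own    */
    MFM *members;
    int  n_members;
    void (*coordinate)(void);
} MFG;

typedef struct {                    /* AMS: e.g. the complete X-mobile         */
    const char *name;
    MFG *groups;
    int  n_groups;
    void (*manage_autonomy)(void);  /* e.g. human-machine interface            */
} AMS;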

3. Software integration platform

IPANEMA (integration platform for networked mechatronic systems) (Honekamp, 1998) is a platform concept used for distributed real-time simulation, a basic requirement of HILS. It allows a hierarchical and modular processing of control tasks according to the principles of the structuring concept presented above. IPANEMA is coded in C to increase platform independence. The initial implementation was done for an INMOS Transputer system (T80x); our current real-time implementations run on Motorola PowerPC 750 and 555 architectures. For distributed off-line simulation purposes we use IPANEMA on networked Win32 and Linux PCs.

IPANEMA structures each task in a distributed simulation in an object-oriented way. Objects of the calculator class implement the simulation kernels treating the individual partial models (Fig. 4). They do not comprise any functionality as to administration and data management.

Fig. 4. Hierarchical X-mobile control mapped to IPANEMA objects.

These tasks fall to objects of the assistant class, which relieve the calculator objects. To every calculator object an assistant is assigned. By their services, assistants provide a neat distinction between those parts of the simulation environment that have to operate under hard real-time conditions (the calculator objects) and those that may run under soft ones. To meet the demands of the two different real-time conditions, the assistant makes use of low- and high-priority threads exchanging data via ring buffers. Assistant objects encapsulate the corresponding calculator object against an object of the moderator class that serves as an interface to the user and the control panel. Moderator objects co-ordinate the actions of the assistant/calculator team whenever this becomes necessary (e.g., to start or stop a simulation run). To couple a technical process (the physical part of a mechatronic system) with digital information processing (in this case, simulation), another class, the adaptor, is available. Adaptor objects transform the physical values relevant for simulation into their numerical equivalents (including scaling and offset).

Although IPANEMA was originally developed for the software realization of controllers, the structure is also suitable for hardware realization using reprogrammable hardware. For mixed hardware/software implementations the adaptor objects are employed to connect the hardware nodes to the calculator software objects. Hierarchical-modular models, formulated in O-DSS according to the structuring concept presented, can be converted into an equivalent processing structure by means of IPANEMA. For the modelling process, O-DSS is the language of highest importance, while O-DSL and DSC are used for analysis/synthesis and HILS. For each IPANEMA calculator or adaptor the complete toolchain from O-DSS to C is iterated individually. The specification (C code) of the data dependencies between the objects is also generated automatically (see Fig. 5).
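To make the division of labour concrete, the following C sketch shows how a calculator, its adaptor and the generated model code might fit together. The function and structure names are illustrative assumptions, not the actual IPANEMA interfaces; the assistant and moderator objects (administration, data exchange, start/stop) are omitted.

#include <stddef.h>

/* DSC-generated model code: evaluates dx/dt = f(x, u) for one model part. */
typedef void (*dsc_eval_fn)(const double *x, const double *u, double *dx);

typedef struct {             /* adaptor: physical value <-> numerical value */
    double scale, offset;    /* numerical = scale * physical + offset       */
} Adaptor;

typedef struct {             /* calculator: hard real-time simulation kernel */
    dsc_eval_fn eval;        /* generated right-hand side of the ODEs        */
    double *x, *u, *dx;
    size_t n_x, n_u;
    double h;                /* fixed integration step size                  */
} Calculator;

static double adaptor_to_numeric(const Adaptor *a, double physical)
{
    return a->scale * physical + a->offset;
}

/* One explicit Euler step of the partial model treated by this calculator. */
static void calculator_step(Calculator *c)
{
    c->eval(c->x, c->u, c->dx);
    for (size_t i = 0; i < c->n_x; ++i)
        c->x[i] += c->h * c->dx[i];
}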


Fig. 5. IPANEMA in the modular simulation of structured models.

3.1. Rapid prototyping of control laws

Modern control design methods allow complex multivariable control laws to be created. The need for a transparent and straightforward design process often leads to the software implementation of controllers, i.e., to the programming of microprocessor devices in a high-level language using floating-point variables. This approach is inappropriate for applications with high sampling rates (above 20 kHz) as well as for highly modular applications. With high-level design tools, such as VHDL and logic-synthesis CAD tools designed for large, low-cost, reprogrammable FPGAs, a rapid prototyping of complex modular control laws has become possible.

The state-space approach is a unified method for modelling and analyzing non-linear and time-invariant control laws. The mathematical equations are divided into two parts: a set of equations (1) relating the state variables x to the input signal u, and a second set of equations (2) relating the state variables and the current input to the output signal y. The general form of the state-space equations into which all forms of control systems can be converted is:

ẋ(t) = A · x(t) + B · u(t)    (1)
y(t) = C · x(t) + D · u(t)    (2)

For the software realization of controllers, Eqs. (1) and (2) can be solved on-line, e.g., by means of the explicit Runge-Kutta method (Middleton and Goodwin, 1990). This method offers four major advantages (a minimal sketch of one solver step is given after the list):

(1) It is suitable for linear and non-linear systems.
(2) It allows adaptation of the parameters of the differential equations during simulation. This is a basic condition for test and optimization in the HILS.
(3) It is suitable for hard real-time realization.
(4) Its precision is scalable from first-order approximation (Euler's method) to second-order approximation (Heun's method) up to fourth-order approximation (Runge-Kutta 4).
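As a sketch under stated assumptions (a linear right-hand side f(x, u) = A·x + B·u, the input held constant over one step, and arbitrary small dimensions), one fourth-order Runge-Kutta step could be coded as follows; this is an illustration, not the actual HILS kernel code.

/* Minimal sketch: one classical Runge-Kutta 4 step for dx/dt = A*x + B*u,
   with u held constant over the step (zero-order-hold assumption). */
#define NX 2
#define NU 1

static void f(const double A[NX][NX], const double B[NX][NU],
              const double x[NX], const double u[NU], double dx[NX])
{
    for (int i = 0; i < NX; ++i) {
        dx[i] = 0.0;
        for (int j = 0; j < NX; ++j) dx[i] += A[i][j] * x[j];
        for (int j = 0; j < NU; ++j) dx[i] += B[i][j] * u[j];
    }
}

static void rk4_step(const double A[NX][NX], const double B[NX][NU],
                     double x[NX], const double u[NU], double h)
{
    double k1[NX], k2[NX], k3[NX], k4[NX], xt[NX];

    f(A, B, x, u, k1);
    for (int i = 0; i < NX; ++i) xt[i] = x[i] + 0.5 * h * k1[i];
    f(A, B, xt, u, k2);
    for (int i = 0; i < NX; ++i) xt[i] = x[i] + 0.5 * h * k2[i];
    f(A, B, xt, u, k3);
    for (int i = 0; i < NX; ++i) xt[i] = x[i] + h * k3[i];
    f(A, B, xt, u, k4);
    for (int i = 0; i < NX; ++i)                    /* weighted combination */
        x[i] += h * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]) / 6.0;
}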

When restricted to linear controllers, which are appropriate for a large class of applications, the ABCD matrix elements turn from non-linear equations into constant coefficients. Linear controllers are pre-eminently suited for hardware realization because they can be transformed into recursive difference equations which can be evaluated very efficiently without an extra solver:

x_{k+1} = A_d · x_k + B_d · u_k    (3)
y_k = C_d · x_k + D_d · u_k    (4)
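In the linear discrete case, Eqs. (3) and (4) reduce to a few multiply-accumulate loops per sampling period, as the following sketch shows; the dimensions and names are illustrative, not part of the generated controller code.

/* Sketch: one sampling step of Eqs. (3) and (4):
   x[k+1] = Ad*x[k] + Bd*u[k],  y[k] = Cd*x[k] + Dd*u[k]. */
#define CNX 2
#define CNU 1
#define CNY 1

static void controller_step(const double Ad[CNX][CNX], const double Bd[CNX][CNU],
                            const double Cd[CNY][CNX], const double Dd[CNY][CNU],
                            double x[CNX], const double u[CNU], double y[CNY])
{
    double xn[CNX];

    for (int i = 0; i < CNY; ++i) {                 /* output equation (4) */
        y[i] = 0.0;
        for (int j = 0; j < CNX; ++j) y[i] += Cd[i][j] * x[j];
        for (int j = 0; j < CNU; ++j) y[i] += Dd[i][j] * u[j];
    }
    for (int i = 0; i < CNX; ++i) {                 /* state update (3)    */
        xn[i] = 0.0;
        for (int j = 0; j < CNX; ++j) xn[i] += Ad[i][j] * x[j];
        for (int j = 0; j < CNU; ++j) xn[i] += Bd[i][j] * u[j];
    }
    for (int i = 0; i < CNX; ++i) x[i] = xn[i];
}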

There are so many methods available for translating linear time-invariant systems into a discrete equivalent system (which in fact can never be completely equivalent) that this topic could be the subject of a survey of its own. We use the most prominent discretization method, the so-called bilinear transformation, also known as Tustin's method. It corresponds to implicit trapezoidal integration and has the nice property of never generating unstable discrete poles as long as the continuous poles are stable. Another class of widely used methods is based on assumptions about test input signals applied both to the continuous system and to the discrete one. The objective is to achieve the same output values for a step- or a ramp-input signal; this leads to the so-called step-invariant and ramp-invariant discretization methods. In certain cases the results from the ramp-invariant method are better than those from Tustin's method. However, the step-invariant results show a bad frequency response compared to those of the continuous system in practically every application (Hanselmann, 1987).

To deal with controller realization, a step-wise implementation is recommended. Design parameters such as equation type, solver, data type, and word length of variables must be taken into consideration. Table 1 details the properties of typical realization steps.

Table 1
Typical realization steps of controller prototyping

Equations   Solver        Data type                Word length      Target
(1), (2)    Runge-Kutta   Floating-point           32 or 64 bits    HILS hardware
(3), (4)    None          Fixed-/floating-point    32 or 64 bits    ECU prototype
(3), (4)    None          Fixed-point/integer      [8...32] bits    ECU
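To illustrate the last step in Table 1 (fixed-point/integer ECU targets), the sketch below evaluates the state update of Eq. (3) in Q15 arithmetic with 16-bit variables and a wide accumulator; the scaling convention and word lengths are assumptions chosen for the example, not prescribed values.

#include <stdint.h>

/* Sketch: Eq. (3) in Q15 fixed-point; all signals assumed scaled to [-1, 1). */
#define QNX 2
#define QNU 1
#define Q   15

static int16_t sat16(int64_t v)                    /* saturate to 16 bits */
{
    if (v >  32767) return  32767;
    if (v < -32768) return -32768;
    return (int16_t)v;
}

static void state_update_q15(const int16_t Ad[QNX][QNX], const int16_t Bd[QNX][QNU],
                             int16_t x[QNX], const int16_t u[QNU])
{
    int16_t xn[QNX];

    for (int i = 0; i < QNX; ++i) {
        int64_t acc = 0;                           /* wide Q30 accumulator      */
        for (int j = 0; j < QNX; ++j) acc += (int32_t)Ad[i][j] * x[j];
        for (int j = 0; j < QNU; ++j) acc += (int32_t)Bd[i][j] * u[j];
        xn[i] = sat16((acc + (1 << (Q - 1))) >> Q); /* round and rescale to Q15 */
    }
    for (int i = 0; i < QNX; ++i) x[i] = xn[i];
}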


3.2. Controllers in hardware


In CAMeL, controllers can be automatically laid out for hardware realization. The differential equations of the controller-design phase are transformed into difference equations and scaled from the floating-point to the fixed-point or integer range of numbers. The final result of the software-supported controller transformations is a controller specification, including the discrete ABCD matrices and the sampling time, that is automatically converted into VHDL or C code.

For hardware implementation, a VHDL description including two parallel fixed-point MEC components (matrix equation calculators) for solving Eqs. (3) and (4) is used. Internally, every MEC consists of two scalar multipliers used for a parallelized computation of the two scalar products included in Eq. (3) or Eq. (4), respectively. The VHDL code contains generic declarations so that it can be customized to the dimensions of the controller-specific ABCD matrices (Slomka et al., 2002).

The choice of the word length of the variables for hardware implementation is a compromise between the numerical precision of the controller and the resources required. It is useful to provide different word lengths for states, inputs, outputs, and internal multiplication/addition registers. Our approach provides a simulation-based possibility to select the number of bits for the controller variables before starting the target-specific synthesis of the controller. To enable this controller prototyping we designed a CAMeL component for the software-based suggestion of scaled state-space controllers with a word length that is tunable during runtime.
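The simulation-based word-length selection can be pictured as follows: quantize coefficients and signals to a candidate word length, re-run the simulation, and compare the result with the floating-point reference. The sketch below is only an illustration of such a building block; it is not the CAMeL component itself.

#include <math.h>

/* Sketch: quantize a value to a signed fixed-point format with the given
   total word length and number of fractional bits (round to nearest, saturate). */
static double quantize(double v, int word_length, int frac_bits)
{
    const double lsb = ldexp(1.0, -frac_bits);                 /* 2^-frac_bits */
    const double max = ldexp(1.0, word_length - 1 - frac_bits) - lsb;
    const double min = -ldexp(1.0, word_length - 1 - frac_bits);
    double q = lsb * floor(v / lsb + 0.5);
    if (q > max) q = max;
    if (q < min) q = min;
    return q;
}

/* Example criterion: worst-case deviation between the floating-point reference
   output and the quantized controller output over one simulated test run. */
static double max_abs_error(const double *y_ref, const double *y_q, int n)
{
    double e = 0.0;
    for (int i = 0; i < n; ++i) {
        double d = fabs(y_ref[i] - y_q[i]);
        if (d > e) e = d;
    }
    return e;
}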

3.3. Creating the distributed application

After modelling the system with the help of CAMeL-View, individual system model parts at the AMS, MFG and MFM levels can be marked (partitioning) and mapped to calculator objects (see step 1 in Fig. 6). This mapping is hardware-independent and hierarchical. By an analysis of the connections between the controllers and the plant, even the adaptors can be derived automatically. Plant-controller data connections will become electrical connections between the electronics and the physical plant. Up to this point, no specification of the target hardware has been necessary.

Our approach to the distribution of the calculators and adaptors to the implementation hardware is to generate a hardware description. This description is called a hardware model and can be understood as a non-executable specification of the topology of the distributed target hardware. With the help of this model the IPANEMA objects are mapped to their target hardware modules (see step 2 in Fig. 6). A main feature of this proposal is the possibility to do analysis/synthesis by means of the off-line environment without having to reconfigure, redefine or redo the system model (plant and controllers); this means that the hardware components are not physically included in the system model.

Fig. 6. Partial system model, IPANEMA, and hardware model.

The hardware model contains the following information (a minimal data-structure sketch is given after the list):

• hardware structure (multiprocessor description is possible);
• characteristics of hardware elements (processor, I/O board, etc.);
• scale, offset, upper and lower bounds of the I/Os;
• mapping of calculators to CPUs and calculator interfaces to buses;
• mapping of the adaptor interface to physical I/O channels or buses.
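Purely as an illustration of what such a hardware model has to capture (the concrete description format is not given in the paper), the items above could be collected in a structure like this:

/* Hypothetical sketch of a hardware model; field names are assumptions. */
typedef struct {
    double scale, offset;           /* conversion physical <-> numerical     */
    double lower_bound, upper_bound;
    int    channel;                 /* physical I/O channel or bus address   */
} IoChannelMap;

typedef struct {
    const char *cpu;                /* e.g. "MPC555" or "PowerPC 750"        */
    const char *io_board;
    int         bus_id;             /* bus used for calculator interfaces    */
} HardwareNode;

typedef struct {
    HardwareNode *nodes;            /* multiprocessor topology               */
    int           n_nodes;
    int          *calculator_to_node;   /* mapping calculator -> CPU         */
    IoChannelMap *adaptor_channels;     /* mapping adaptor -> I/O channels   */
    int           n_calculators, n_adaptor_channels;
} HardwareModel;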

4. Rapid-prototyping hardware platforms

For a HILS at an early design stage we use a commercial HILS hardware platform by dSPACE Inc. This is a modular system with processor boards (DS1005) connected by optical links (GIGALINK) and I/O boards connected by a bus (PHS). The DS1005 processor board is based on the Motorola PowerPC 750 series and runs at 480 MHz (above 800 MIPS). IPANEMA has been ported to run with the corresponding dSPACE real-time kernel. The real-time kernel is designed for multiprocessing tasks, as is our simulation platform IPANEMA. The hardware platform is used for controller prototyping but is not laid out for the modular prototyping of ECUs (electronic control units). For this reason we have developed our own rapid-prototyping platform.

4.1. Rapid prototyping of ECU hardware

The modular and decentralized controller realization is supported by our own rapid-prototyping platform, called RABBIT. Its aim is to provide a modular prototyping of ECUs. RABBIT comprises three main components: IEEE 1394, the MPC555 microcontroller, and FPGAs. The most important features of RABBIT are its flexibility and extensibility, brought about by an open system interface and high modularity. The platform allows a distributed implementation of control algorithms.

One module of the RABBIT system consists of a rack which can contain the different submodules, as shown in Fig. 7, connected via a local system bus.


The driver submodule consists of power drivers, galvanic isolators for the inputs, and the on-board intelligence, a Xilinx Spartan-II FPGA (Xilinx, 2000). The FPGA is employed for linear control algorithms (cf. Section 3.1); thus the board can also work in stand-alone mode. An example of a RABBIT driver submodule is shown in Section 5.3.

The main component of the DSP (digital signal processing) submodule is a Xilinx Virtex-E FPGA (Xilinx, 2000). In addition to the system-bus interface, the Virtex-E has another, local system-bus interface. Via this bus it is possible to connect I/O devices, e.g., ADCs, DACs, and encoders. These components are mounted on a piggyback board; each DSP submodule can be equipped with two of these piggyback boards, and the piggyback I/O configuration can be adapted to the application demands. The DSP submodule is designed for fast/parallel discrete controllers (e.g., current controllers and ultrasonic motors) as well as for digital filter algorithms. Sampling rates of up to 100 kHz or even higher are possible. For linear controllers or filters the DSP submodule can be configured by using the VHDL code described in Section 3.2.

The microcontroller submodule uses the Motorola PowerPC (Motorola, 1999). It consists of an MPC555 (52.7 MIPS, 40 MHz) with its on-chip peripheral devices and an extra bus interface to transmit the memory-bus signals to the local system bus. The on-chip peripherals are the serial communication (RS232) and CAN interfaces, 32 ADC channels (10 bit/10 µs), PWM (in/out) interfaces, and 50 timers. Hence the PowerPC submodule can also work in stand-alone mode. Its core combines a 32-bit integer ALU and a 64-bit floating-point unit in the PowerPC RISC architecture. We developed a separate lean real-time kernel for the MPC555. Complex control algorithms, e.g., linear/non-linear or continuous/discrete, can be mapped to this unit of the RABBIT system.
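As a rough sketch of how a control law runs on such a microcontroller submodule, a periodic task might look as follows; the kernel and driver calls are placeholders, since the lean real-time kernel's actual API is not described in the paper.

/* Hypothetical periodic control task; kernel/driver functions are assumed. */
extern void   wait_for_period_tick(void);      /* blocks until the next period */
extern double adc_read(int channel);           /* assumed I/O driver           */
extern void   pwm_write(int channel, double duty);

static double x_state;                         /* controller state             */

void control_task(void)
{
    for (;;) {
        wait_for_period_tick();                /* e.g. released at 1 kHz        */

        double u = adc_read(0);                /* read sensor                   */
        /* evaluate scaled difference equations, cf. Section 3.1;
           the coefficients below are purely illustrative numbers              */
        double y = 0.5 * x_state + 0.1 * u;
        x_state  = 0.9 * x_state + 0.2 * u;

        pwm_write(0, y);                       /* drive the actuator            */
    }
}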

Fig. 7. RABBIT system overview.


The fourth element is the IEEE 1394 submodule (Anderson, 1999). The bus is a multimaster bus (tree topology) which configures itself at system start or on hot-plugging of a further network device. Each IEEE 1394 submodule has three communication ports. IEEE 1394 allows isochronous communication at a cycle frequency of 8 kHz with a bandwidth of 400 Mbit/s. This allows real-time communication of distributed control systems and high-speed field-bus systems. In Section 5, some applications making use of the RABBIT microcontroller submodule and the driver submodule are described in detail.

4.2. Asynchronous architecture

Recent solutions at the MLaP make use of FPGAs. However, the use of asynchronous architectures can improve performance and power aspects. A dataflow-oriented architecture like the FLYSIG approach (Hardt, 2000) offers advantages such as flexible asynchronous designs and high performance. The FLYSIG prototyping architecture is designed for periodically iterated algorithms and reflects this characteristic by its ring-like component interconnection. As depicted in Fig. 8, there is an operator network made up of AOB components (arithmetic operator blocks). Each AOB implements one or more arithmetic fixed-point operations. This operator network can be configured by two types of switching blocks: the HSB (horizontal switching block) controls the data routing between two lines of AOBs, while data transportation to further lines of AOBs (not depicted in Fig. 8) is managed by VSBs (vertical switching blocks). The FLYSIG prototyping architecture is implemented asynchronously (Hardt and Kleinjohann, 1998b) and reaches very high throughput rates because the internal AOB implementation is based on a deeply pipelined bit-serial operator architecture.

Fig. 9. FLYSIG prototyping design steps.

Therefore, three main design steps have been implemented by automated design tools. The design flow starts out from the DSC code generated for each controller module (Fig. 9). First of all, an analysis phase is applied. Within this phase the performance is estimated with respect to the FLYSIG architecture; the result shows whether the performance constraints for the controller can be met. The algorithmic details of the performance analysis have been published in Hardt et al. (2000). The next step is a scheduling stage, where standard algorithms such as list scheduling or force-directed scheduling are used (a toy list-scheduling sketch is given at the end of this subsection). Such algorithms are provided within our design environment PMOSS (Eikerling et al., 1996; Hardt et al., 1999a). The scheduling phase ensures correct data processing. The mapping phase is dedicated to the FLYSIG architecture. Here we developed an algorithm for an efficient mapping of all operators from the DSC code to the FLYSIG operator network. The mapping also determines the configuration of all routing blocks (HSB and VSB) and generates the configuration string. The utilization of the FLYSIG operator network depends on the actual implementation of the AOBs; details can be found in Hardt et al. (1999b) and Visarius (2001). Once this stage is reached, the prototyping implementation of the controller is available.

For the final mass-product implementation, an ASIC implementation is projected, and an ASIC version of the FLYSIG architecture can be derived. The ASIC version of FLYSIG removes all configurable functionality as well as all unnecessary AOBs. The savings in terms of chip area are about 30%, a feature which is of extreme importance in view of mass production.
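The toy sketch below illustrates the idea of list scheduling only: operations of an acyclic data-flow graph are assigned to control steps under a resource limit. The dependency data, the resource limit, and the trivial (index-order) priority are assumptions for illustration; the actual PMOSS algorithms are not shown in the paper.

#include <stdio.h>

#define N_OPS         6
#define AOBS_PER_STEP 2        /* assumed resource limit per control step */

/* dep[i][j] = 1 if operation j must finish before operation i starts
   (must describe an acyclic graph).                                     */
static const int dep[N_OPS][N_OPS] = {
    {0,0,0,0,0,0}, {0,0,0,0,0,0}, {0,0,0,0,0,0},
    {1,1,0,0,0,0},               /* op3 needs op0, op1 */
    {0,0,1,1,0,0},               /* op4 needs op2, op3 */
    {0,0,0,0,1,0}                /* op5 needs op4      */
};

int main(void)
{
    int step_of[N_OPS];
    for (int i = 0; i < N_OPS; ++i) step_of[i] = -1;   /* -1 = unscheduled */

    int scheduled = 0, step = 0;
    while (scheduled < N_OPS) {
        int used = 0;
        for (int i = 0; i < N_OPS && used < AOBS_PER_STEP; ++i) {
            if (step_of[i] >= 0) continue;
            int ready = 1;                             /* predecessors done in earlier steps? */
            for (int j = 0; j < N_OPS; ++j)
                if (dep[i][j] && !(step_of[j] >= 0 && step_of[j] < step)) ready = 0;
            if (ready) { step_of[i] = step; ++used; ++scheduled; }
        }
        ++step;
    }
    for (int i = 0; i < N_OPS; ++i)
        printf("op%d -> control step %d\n", i, step_of[i]);
    return 0;
}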

Fig. 8. The FLYSIG prototyping architecture.

4.3. Overview of other rapid-prototyping platforms

Currently available standard platforms for the development of mechatronic systems offer tools for increasing the computational performance, but the majority are not designed for distributed systems. Moreover, they use only processors, not FPGAs, for controller realization; the FPGAs are used for preprocessing and signal analysis. Most rapid-prototyping systems that employ FPGAs are to be found in the field of the rapid prototyping of ASICs (application-specific integrated circuits) or ASIPs (application-specific processors). These systems use the FPGA to emulate the algorithms of the ASIC or the processor in their specific environment. Some rapid-prototyping systems are more flexible and combine FPGAs, processors, and I/O systems to make up a heterogeneous system. Examples are the Spyder (Nitsch et al., 2000), the Raptor (Kalte et al., 2000), and the HPU (Petters et al., 1999) systems. All these systems are more or less PC-based. Hence they are not compact and can still be regarded as experimental platforms for single systems rather than as distributed systems. The greater part of the communication is managed via the PC platform, which requires a complex operating system and large housing.

5. Applications

5.1. X-mobile

The first and currently employed implementation of the X-mobile is based on one processing unit. The embedded hardware chosen is the RABBIT CPU submodule. The control algorithms run on a real-time kernel interface where the programming and scheduling are fixed by the user. The communication between ECU and user was implemented via a radio link (see Fig. 3). I/O drivers and communication routines are supplied by the lower-level software.

• Calculator: the calculator includes the solver for the controller state-space equations (cf. Section 3.1).
• Adaptor: the adaptor defines the information required for the signal conversion. The conversion defines which controller inputs and outputs are assigned to which hardware I/O.

Four main modules make up the X-mobile hardware:

• CPU board: it consists of an MPC555 with its on-chip peripheral devices. This module is composed of the microcontroller, local power supply, connectors with all pins from the MPC555, and a BDM connector for code programming and debugging.
• Power electronics: it comprises two boards, each equipped with 6 independent full-bridge MOSFET power electronics for the drive, suspension, and steering motors. The maximum current for the drive motor is 7 A and for the other motors 3 A. The current of the drive motor is measured by an LEM module, amplified to 0–5 V, and connected to an analog input of the MPC555. Each motor can be enabled and disabled by means of software, an external switch, or an external digital input. This feature allows another processor to be used for redundancy or for testing parts of the X-mobile.
• Radio communication: it is made up of two boards, one being the power supply and the other a Gigaset 101 (RS232 radio link) developed by Siemens.
• Battery: this unit comprises two sets of 12 V/4 Ah NiMH battery cells, one for the motors and the other for the electronics. The battery range is about 2 h.

Fig. 10 shows the CPU and the power-electronics modules.

Fig. 10. Hardware structure of the X-mobile.

5.2. Modular solution for the X-mobile

The X-mobile application described above cannot be regarded as highly modular. This is due to the fact that the X-mobile is a model vehicle at the scale of 1:8; because of space, power consumption and weight, a compact solution was chosen. But a close look at the X-mobile as a prototype of a vehicle scaled at 1:1, as described in Zanella et al. (2001b), reveals that a modular solution has several advantages:

• Scalability: The number of MFMs can be altered quite easily because each MFM carries its own resources as regards actuators, sensors and even information processing.
• Composability: The system manufacturer can buy MFMs as black-box components from suppliers. Internal changes in the realization of the MFM do not require additional testing or redesigning of superordinated system parts. The suppliers are not forced to reveal sensitive specialist knowledge to the manufacturer or to competitors.

Fig. 11 shows a possible modular implementation of the X-mobile. There is no need for a distribution of individual strut controllers for steering, driving, and suspension to the respective RABBIT modules because they are physically very close.


Thus every strut becomes an integral MFM that can be designed and tested individually. The struts are linked to the vehicle via informational and mechanical interfaces. Essential RABBIT modules are employed for control on the AMS and MFG levels.

Fig. 11. Modular controller structure of the X-mobile (top view).

5.3. Step-motor controller

There are some fields of application where the microcontroller reaches the limits of its performance, e.g., the implementation of fast controllers for step-motor drives in the area of precision mechanics (automatic teller machines, bank statement printers, and passbook printers). The control of the motor currents makes especially high demands on the system electronics. Motors with very small inductances require controller sampling rates of up to 40 kHz. This requirement can be met only by a hardware implementation of the controllers. The step-motor module is an example of a RABBIT driver submodule. To the user, the step-motor module (see Fig. 12) offers several control algorithms with sampling rates of up to 80 kHz.

Fig. 12. Step-motor module.

Fig. 13. Structure of the step-motor current control.


Fig. 14. Measured half-step two-phase step-motor currents.

The step-motor current controllers are implemented by the current-controller FPGA. The microstep-controller FPGA generates the setpoints for the current controllers. The system serves as a prototyping platform for use in the lab, where the demands on a subsequent mass production of the step-motor electronics are estimated. The major advantage of this FPGA-based solution is its flexibility compared to commercial IC solutions: it is now possible to test all kinds of control algorithms, from simple switching controllers to PID-T1 controllers, with the help of just one board.

Fig. 13 illustrates the structure of the step-motor current control. The motor currents measured by Hall sensors are compared to the setpoints yielded by the microstep-controller FPGA. The corresponding controller outputs are converted into PWM signals (pulse-width modulation) by pulse generators which are connected to the full bridges of the power electronics. In parallel with the controllers, the motor currents are limited to a configurable maximum value. There are two trace-memory blocks to allow data tracing for further analysis or optimization.

In a first step, the entire controller and the corresponding step-motor were modeled in CAMeL-View. With the help of the model the controller coefficients and parameters can be determined. Some results of a controlled step-motor operation are displayed in Fig. 14, where the measured motor currents for a half-step two-phase mode are plotted. The controller employed is a discrete PI controller with a sampling rate of 40 kHz. The results are very promising, but there is still some potential to optimize the rectangular shape of the motor-current trajectories. Therefore we are planning to connect our optimization tool to the CAMeL-View model of the system and also to the physical system.
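A discrete PI current controller of the kind mentioned above (40 kHz sampling, limited output) can be sketched as below; the gains, limits, and the PWM interface are illustrative assumptions, not the values used on the module.

/* Sketch: discrete PI current controller with output limiting,
   executed once per sampling period (25 us at 40 kHz). */
typedef struct {
    double kp, ki;        /* proportional and integral gains         */
    double ts;            /* sampling period in seconds              */
    double i_sum;         /* integrator state                        */
    double out_max;       /* symmetric output (duty-cycle) limit     */
} PiCtrl;

static double pi_step(PiCtrl *c, double i_setpoint, double i_measured)
{
    double e = i_setpoint - i_measured;
    double u = c->kp * e + c->i_sum;

    if (u > c->out_max)
        u = c->out_max;                        /* saturate output              */
    else if (u < -c->out_max)
        u = -c->out_max;
    else
        c->i_sum += c->ki * c->ts * e;         /* integrate only while the
                                                  output is unsaturated
                                                  (simple anti-windup)         */
    return u;                                  /* duty cycle for the PWM stage */
}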

6. Conclusion

The paper presented the development of mechatronic systems with emphasis on the prototyping of controllers and ECUs in the context of hardware-in-the-loop simulation.


It was shown that for a step-wise system realization, the structure of the information-processing hardware and software must be closely related to that of the mechanical, hydraulic, and electrical components of a mechatronic system. The example of the X-mobile served to demonstrate this hierarchical-modular structuring concept, which is applicable to mechatronic systems in general. Our concept allows a rapid prototyping of decentralized hierarchical control systems with software or even hardware implementation of controllers. We expounded the way a systematic structuring supports the transition from the modelling phase to a distributed HILS. We described the three crucial points: the structured modelling, the object-oriented mapping onto distributed processing objects (IPANEMA), and the use of a flexible distributed hardware platform (RABBIT). Our approach leads to highly modular mechatronic systems with considerably improved composability, an important feature for making the development of complex systems easier.

References

Anderson, D., 1999. FireWire Systems Architecture: IEEE 1394a, second ed. MindShare, Inc., Addison-Wesley, Reading, MA.
Becker, M., Jäker, K.-P., Frühauf, F., Rutz, R., 1996. Development of an active suspension system for a Mercedes-Benz coach (O404). IEEE International Symposium on Computer-Aided Control System Design, Dearborn, MI, 15–17 September.
Cumplido-Parra, R.A., Jones, S.R., Goodall, R.M., Mitchell, F., Bateman, S., 2000. High performance control system processor. 3rd Workshop on System Design Automation (SDA 2000), Rathen, Germany, pp. 60–67.
Eikerling, H.-J., Hardt, W., Gerlach, J., Rosenstiel, W., 1996. A methodology for rapid analysis and optimization of embedded systems. Engineering of Computer-Based Systems, Friedrichshafen, Germany.
Hahn, M., 1999. OMD: Ein Objektmodell für den Mechatronikentwurf. VDI-Fortschrittberichte, Reihe 20, Nr. 299, VDI-Verlag, Düsseldorf, Germany.
Hanselmann, H., 1987. Implementation of digital controllers: a survey. Automatica 23 (1), 7–32.
Hardt, W., Kleinjohann, B., 1998a. FLYSIG: dataflow-oriented delay-insensitive processor for rapid prototyping of signal processing. 9th IEEE International Workshop on Rapid System Prototyping, Leuven, Belgium.
Hardt, W., Kleinjohann, B., 1998b. FLYSIG: towards high performance special purpose architectures by joining paradigms. 7th NASA Symposium on VLSI Design, Albuquerque, NM.
Hardt, W., Rettberg, A., Kleinjohann, B., 1999a. The PARADISE design environment. 1st New Zealand Embedded Systems Conference, Auckland, New Zealand.
Hardt, W., Kleinjohann, B., Rettberg, A., Kleinjohann, E., 1999b. A new configurable and scalable architecture for rapid-prototyping asynchronous designs for signal processing. 12th Annual IEEE International ASIC/SOC Conference, Washington, DC.
Hardt, W., Visarius, M., Kleinjohann, B., 2000. Architecture level optimization for asynchronous IPs. 13th Annual IEEE International ASIC/SOC Conference, Washington, DC.
Hardt, W., 2000. Integration von Verzögerungszeit-Invarianz in den Entwurf eingebetteter Systeme. Habilitation thesis, University of Paderborn, Paderborn, Germany.
Honekamp, U., Stolpe, R., Naumann, R., Lückel, J., 1997. Structuring approach for complex mechatronic systems. 30th International Symposium on Automotive Technology and Automation: Mechatronics/Automotive Electronics, Florence, Italy, pp. 165–172.
Honekamp, U., 1998. IPANEMA: Verteilte Echtzeit-Informationsverarbeitung in mechatronischen Systemen. VDI-Fortschrittberichte, Reihe 20, Nr. 267, VDI-Verlag, Düsseldorf, Germany.
Kalte, H., Porrmann, M., Rückert, U., 2000. Rapid-Prototyping-System für dynamisch rekonfigurierbare Hardwarestrukturen. AES 2000, Karlsruhe, Germany.
Kasper, R., Reinemann, T., 2000. Implementierung sehr schneller Steuer- und Regelalgorithmen für mechatronische Systeme auf Gatterebene. VDI-Berichte, Nr. 1533, VDI-Verlag, Düsseldorf, Germany.
Lückel, J., Hestermeyer, T., Liu-Henke, X., 2001. Generalization of the cascade principle in view of a structured form of mechatronic systems. IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2001), Como, Italy.
Lückel, J., Naumann, R., Rasche, R., 1999. Systematic design of crosslinked mechatronic systems, exemplified by a decentralized intersection management. European Control Conference 1999, Karlsruhe, Germany.
Meier-Noe, U., Hahn, M., 1999. Entwicklung mechatronischer Systeme mit CAMeL. 3. Workshop Transmechatronik: Entwicklung und Transfer von Entwicklungssystemen der Mechatronik, Krefeld, Germany.
Middleton, R.H., Goodwin, G.C., 1990. Digital Control and Estimation. Prentice-Hall, Englewood Cliffs, NJ.
Motorola, Inc., 1999. MPC555 Evaluation Board Quick Reference.
Nitsch, C., Weiß, K., Steckstor, T., Rosenstiel, W., 2000. Embedded system architecture design based on real-time emulation. Rapid Systems Prototyping 2000, Paris, France.
Petters, S., Muth, A., Kolloch, T., Hopfner, T., Fischer, F., Färber, G., 1999. The REAR framework for emulation and analysis of embedded hard real-time systems. 10th IEEE International Workshop on Rapid Systems Prototyping (RSP'99), Clearwater, FL.
Slomka, F., Bednara, M., Teich, J., Oberschelp, O., Deppe, M., 2002. Design and implementation of digital linear control systems on reconfigurable hardware. Pre-selected for publication in EURASIP Journal on Applied Signal Processing.
Stolpe, R., Deppe, M., Zanella, M., 2000. Rapid Prototyping von verteilten, hierarchischen Regelungen am Beispiel eines Fahrzeugs mit hybridem Antriebsstrang. Journal it/ti 42 (2), 54–58.
Visarius, M., 2001. CAP: Ein Werkzeug zur Abbildung von periodischen Algorithmen auf konfigurierbare Zielarchitekturen. Master's thesis, University of Paderborn, Paderborn, Germany.
Xilinx, Inc., 2000. Xilinx: The Programmable Logic Data Book 2000. San Jose, CA.
Zanella, M., Koch, T., Scharfeld, F., 2001a. Development and structuring of mechatronic systems, exemplified by the modular vehicle X-mobile. IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2001), Como, Italy.
Zanella, M., Scharfeld, F., Koch, T., 2001b. X-mobile: a mobile vehicle of research and education. RAAD 2001, 10th International Workshop on Robotics in Alpe-Adria-Danube Region, Vienna, Austria.

Markus Deppe studied mechanical engineering at the University of Paderborn and received his diploma in 1997. Since then he has been a research assistant at the Mechatronics Laboratory Paderborn. His research area is the distributed simulation and multiobjective parameter optimization of mechatronic systems.

Mauro C. Zanella studied information science in Brazil and acquired a Master's degree (M.Sc.) in Industrial Information Science. Since 1997 he has worked as a research assistant at the MLaP. The main field of his activities lies in distributed embedded systems.

Michael Robrecht studied electrical engineering with an emphasis on computer engineering at the University of Paderborn. From 1995 to 2001 he worked as a research assistant at the MLaP. The emphasis of his activities was on laboratory automation and the rapid prototyping of mechatronic systems in precision mechanics. In 2002 he started working as hardware development manager with iXtronics Ltd.

Dr. habil. Wolfram Hardt was born on 09.05.1965 in Germany. He studied computer science at the University of Paderborn, Germany. In 1996 he finished his PhD on hardware/software partitioning.