
Computer Standards & Interfaces 21 (1999) 429–440
www.elsevier.com/locate/csi

Development of virtual data acquisition systems based on multimedia internetworking 1

Giancarlo Fortino 2, Libero Nigro *


Laboratorio di Ingegneria del Software, Dipartimento di Elettronica Informatica e Sistemistica, Università della Calabria, I-87036 Rende (CS), Italy

Received 9 August 1999; accepted 9 August 1999

Abstract

This paper proposes a novel approach centered on multimedia internetworking for the development of Distributed Virtual Instruments (DVI). Multimedia internetworking refers to the network infrastructures, protocols, models, applications and techniques currently deployed over the Internet to support multimedia applications, e.g., videoconferencing, video-on-demand, shared workspaces. It is applied to broaden the concept of virtual instrument and enable new measurement patterns leveraging efficiency and interactivity. A DVI is a virtual instrument split into possibly multiple and independent parts, sender and receiver, which are linked by real-time continuous media and control streams. Senders and receivers are built by using open, composable and modular components based on a time-sensitive actor framework and glued together by multimedia middleware. A prototype is described to demonstrate the potential and the benefits of the proposed approach. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Multimedia internetworking; Distributed Virtual Instruments; IP-multicast; RTP; RSVP; NI-DAQ; Actors

1. Introduction

The development and deployment of the IP multicast infrastructure (the MBone) [3] as an overlay network on the Internet have enabled an efficient and scalable mechanism for multipoint data delivery.


* Corresponding author. Tel.: +0039-0984-494748; fax: +0039-0984-494713; e-mail: [email protected].
1 A preliminary version of this paper was presented at the IEEE Instrumentation and Measurement Technology Conference (IMTC/99), Venice, May 1999, Vol. 3, pp. 1863–1867.
2 E-mail: [email protected].

New real-time protocols (the Real-time Transport Protocol and its control protocol, RTP/RTCP) [13] have been introduced and standardized, along with resource reservation protocols (RSVP) [2], to provide guaranteed quality of service (QoS). A wide range of distributed multimedia applications manipulating audio, video, text and graphics in real time are currently available. They mainly rely on software-based processing (e.g., software codecs, transcoding and image compression) and network data streaming. In addition, advances in processors, plug-in DAQ boards and standard buses have fuelled the introduction of raw sample streams into desktop computers [1].




Thus, new potential sources of multimedia information, such as instrumentation, radio frequency and electrical signals, can be captured and processed.

This paper proposes an approach, mDVI (multimedia-based Distributed Virtual Instruments), which represents an original contribution toward the exploitation of multimedia internetworking and processing facilities to broaden the concept of Virtual Instrument (VI) and enable new measurement patterns leveraging efficiency and interactivity. VIs are specialized software modules that can be easily interfaced with the system under test through DAQ boards, GPIB standards, VXI buses, etc. DVIs are VIs that are split into possibly multiple and independent parts, sender and receiver, which are linked by real-time continuous media and control streams. The sender, interfaced with the system under test, continuously captures signals, optionally applies compression, creates a media stream and sends it on the network. Media streams consist of time-stamped packets carrying the sampled data in the payload field according to the RTP protocol. The receiver, connected to the network, can acquire, process, and display media streams. The payload has to be well-defined, advertised, and possibly standardized so as to enable heterogeneous receivers to use it.

Thus, the sender and the receiver are coupled as in a multimedia Internet-based conference. The basic communication abstraction is the RTP session, which is defined by a collection of transmission sources, receivers, and intermediate agents that participate in the conference. All the session participants share a common multicast channel over which session information is transmitted. Senders transmit to a multicast group address. Receivers interested in a particular transmission "tune in" by subscribing to the multicast group in question. This loosely coupled, lightweight, real-time multimedia communication model is known as the Light-Weight Sessions (LWS) architecture [3]. In this way, a single source of data can be efficiently shared among several users, saving network resources. Receivers can have different views of the same source: different views mean not only different ways of displaying the data on a GUI but also different computations and analyses of the received data. In order to obtain QoS guarantees, RSVP can be used.

Bandwidth reservation is essential to guarantee a minimal level of uninterrupted service throughout the session. Configuration and control of the source can be accomplished in two ways currently deployed over the MBone: by a local user through a stand-alone application, or by a remote one through an RTSP-like protocol or a Web-based interface [4].

The time-critical part of the mDVI sender and receiver rests on a time-sensitive actor-based framework in Java [10], which is adequate for building distributed measurement software [6] and multimedia applications [4]. A virtual one-channel oscilloscope equipped with a spectrum analyzer has been developed on a network of PCs. Signals are captured by using a National Instruments Lab-PC-1200 board plugged into a Pentium running Windows 95. The hardware-independent NI-DAQ driver software [11] is used. Measurement scenarios ranging from real-time distributed computation and monitoring of data sources to control sharing of instruments for teleteaching purposes have been envisaged.

The remainder of this paper is organized as follows. Section 2 summarizes IP-based multimedia internetworking. Section 3 discusses the proposed architecture for DVI and the mDVI approach. Section 4 describes an experimental prototype. Section 5 relates mDVI to existing work. Final remarks and perspectives of future work conclude the paper.

2. Internetworking multimedia

The original Internet service model supported only point-to-point (unicast) communication. Unicast communication is well suited for applications requiring a communication model in which exactly two parties exchange information, e.g., client/server applications such as the WWW. However, for applications requiring multi-point communication, also known as multicast, unicast messaging presents relevant performance drawbacks. In fact, a host that wishes to send a message to n hosts has to establish n connections (if TCP is used) and duplicate each packet n times. This represents an enormous waste of bandwidth.


Moreover, since the number of copies injected into the network by a host group of size n is O(n²) (each of the n members must unicast to the other n-1, injecting n(n-1) copies), the use of unicast for multi-point communication scales quite poorly with the number of participants.

In order to overcome the performance drawbacks of the unicast model, the multicast communication model (IP-multicast) was proposed and developed. With it, an application sends data to multiple receivers by transmitting the data once. In turn, the network infrastructure disseminates the multicast data efficiently, i.e., with a minimum number of copies, to each member of the group, thereby obviating the source's need to generate multiple copies of the data. The basic service model of IP multicast introduces a level of indirection in the addressing, namely the multicast group. In contrast to unicast communication, where hosts address their data to a specified single host or set of hosts, members of a multicast group address their data to a group address (class D of IP addresses). Members join and leave a designated multicast group and address their data to that group. Today, the Internet multicast service is deployed as an overlay network within the Internet, called the MBone, containing a collection of multicast-capable regions (LANs and multicast-routed networks) that are connected through unicast routes, or "tunnels".
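As a minimal illustration of the group abstraction described above (a sketch only: the group address and port below are examples, not values taken from the prototype), a Java receiver joins a class D address and starts reading the datagrams sent to it:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Minimal sketch of an IP-multicast receiver: the group address and port
// are illustrative examples only.
public class GroupListener {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3"); // class D group address
        MulticastSocket socket = new MulticastSocket(5004);     // example port
        socket.joinGroup(group);                                // "tune in" to the session
        byte[] buffer = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        socket.receive(packet);                                 // blocks until a datagram arrives
        System.out.println("received " + packet.getLength() + " bytes");
        socket.leaveGroup(group);                               // leave the host group
        socket.close();
    }
}
```

The sender side is symmetric: it simply addresses its datagrams to the same group address, and the network replicates them towards all joined members.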


The proliferation of multimedia applications over the MBone increased the need for a standard protocol for real-time multimedia transport, both to establish a basis for interoperability and to provide the higher-level information, such as timing and naming information, required by this new class of applications. This led to the development of RTP [13].

RTP is an application-level protocol that provides end-to-end delivery services for data with real-time characteristics. While it is primarily designed to satisfy the needs of multi-party multimedia conferences, it is not limited to that application: interactive distributed simulation and, notably, control and measurement applications also find RTP applicable. The basic communication abstraction is the RTP session, which is based on the LWS architecture. In the IP protocol stack, RTP lies above UDP (User Datagram Protocol); it also runs over AAL5/ATM (ATM Adaptation Layer type 5/Asynchronous Transfer Mode). It actually consists of two protocols: RTP for the real-time transmission of data packets, and RTCP for QoS monitoring and for conveying participants' identities in a session.

The RTP data packet is composed of a header followed by payload data. The main fields in an RTP header (Fig. 1) are:
• Timestamp: reflects the sampling instant of the first octet in the data packet. It is media specific and is used to provide receiver-based synchronization.
• Sequence number: incremented by one for each data packet sent. It can be used to detect lost, duplicated and out-of-order packets.
• Payload type: identifies the format of the data payload, e.g., H.261 for video streams, PCM for audio streams, NI-DAQ for measurement streams (see Section 4).
• Marker (M): signals significant events for the payload, e.g., the end of a frame for video, the beginning of a talkspurt for audio, the end of a measurement data block, etc.
• Synchronization source (SSRC): provides a mechanism for identifying media sources independently of the underlying transport or network layers.
The other fields in the RTP header specify the protocol version (V), extra padding (P) of the payload, an extension to the header (X), the number of contributing sources (CC) and the identifiers of the contributing sources (CSRC).

Fig. 1. RTP Header.
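As a concrete illustration of the fixed header layout of Fig. 1, the following Java sketch packs the 12-byte header according to RFC 1889; the payload type argument is a placeholder, since the NI-DAQ measurement payload type used by the prototype is not a registered, standardized value.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Packs the 12-byte fixed RTP header of Fig. 1 (RFC 1889 layout):
// V=2, no padding, no header extension, empty CSRC list.
public class RtpHeader {
    static byte[] pack(boolean marker, int payloadType, int seq,
                       long timestamp, long ssrc) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream(12);
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeByte(0x80);                                          // V=2, P=0, X=0, CC=0
        out.writeByte((marker ? 0x80 : 0x00) | (payloadType & 0x7F)); // M bit + payload type
        out.writeShort(seq & 0xFFFF);                                 // sequence number
        out.writeInt((int) timestamp);                                // sampling instant of first octet
        out.writeInt((int) ssrc);                                     // synchronization source identifier
        return bytes.toByteArray();
    }
}
```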



It is worth noting that RTP itself does not provide any mechanism to ensure timely delivery or other QoS guarantees, but relies on lower-layer services to do so. Thus, in order to obtain real-time responsiveness and avoid packet losses in the error-prone Internet environment, RSVP [2] can be used to set up both multicast and unicast reserved sessions. It works over IP and assumes the existence of certain mechanisms, such as packet scheduling and classification, in the routers. Bandwidth reservation is essential to guarantee a minimal level of uninterrupted service throughout the session. When RSVP is used as a QoS signaling protocol, the participating end systems establish a closed control loop: the senders inform the network and the receivers about their traffic characteristics, and the receivers, based on the announced traffic profiles, send their reservation requests back towards the sender, triggering the actual reservation.

3. A multimedia architecture for DVI

A DVI (Fig. 2) is a virtual instrument split into two possibly independent parts, sender and receiver, glued together by the multimedia internetworking middleware. The sender can be interfaced with the system under test through DAQ boards, GPIB standards, VXI buses, etc. It is primarily devoted to creating an RTP-based continuous media stream. The basic operations performed by the sender are as follows.

Fig. 2. DVI Architecture.


• Setting. The acquisition process is initialized with the user-defined parameters.
• Capturing. The sender acquires the signals coming from the system under test. The flexibility of this operation is heavily affected by the particular acquisition system (HW and SW drivers) adopted. Double buffering is required for continuous capturing.
• Encoding. An optional operation in which the sampled data stored in the buffers are coded according to loss-less methods preserving the original information. For example, if the acquired data are real values (e.g., 64-bit), they can be transformed, with some acquisition-system-dependent formula, into shorter values (e.g., 16-bit).
• Packetization. The data are packetized according to the RTP protocol. The maximum transfer unit is 1024 bytes: 12 bytes are reserved for the RTP header and 1012 bytes are available for the payload. The payload is divided into a payload-dependent sub-header part, which contains the information needed to correctly interpret the data, and the data part itself.
• Transmission. After opening an RTP session, each RTP packet is sent onto the network. The RTP session can be either multicast or unicast. In the former case, the multicast group should be known a priori, previously advertised by a rendezvous mechanism, or requested on demand (a sketch of the full sender pipeline is given after these lists).

The receiver, connected to the network, joins the RTP session in order to acquire, process, and render a media stream. Its basic functions are as follows.
• Receiving. After tuning in to the chosen RTP session, the receiver begins to receive the RTP packets.
• Unpacketization. According to the payload type, the data are extracted from the packets and stored in temporary buffers.
• Decoding. The data are (possibly) decoded by using the information in the payload sub-header.
• Processing. This step is application-dependent, i.e., each virtual instrument handles the data differently. For example, FFT, filtering and fitting algorithms can be applied. It is worth highlighting that the timing information that allows re-synchronizing the data is contained in the timestamp field of the RTP header; thus, real-time processing is possible.
• Rendering. The data are displayed in an application-dependent format on GUI windows. The visualization process can be real-time.
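A minimal sketch of the sender pipeline, under the 1024-byte MTU stated above, follows. The acquire() and encode() methods are placeholders for the DAQ capture and the optional coding step, the multicast address and the dynamic payload type are examples only, and the header is packed with the RtpHeader sketch shown in Section 2.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Illustrative DVI sender loop: capture -> encode -> packetize -> transmit.
// Only the RTP framing (12-byte header + 1012-byte payload slices) follows
// the text; everything else is a stand-in for the NI-DAQ and codec layers.
public class DviSenderSketch {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        InetAddress group = InetAddress.getByName("239.1.2.3"); // example multicast group
        int seq = 0;
        long timestamp = 0;
        while (true) {                                          // continuous acquisition
            short[] samples = acquire();                        // placeholder for DAQ capture
            byte[] payload = encode(samples);                   // placeholder for optional encoding
            for (int off = 0; off < payload.length; off += 1012) {
                int len = Math.min(1012, payload.length - off);
                boolean last = off + len >= payload.length;     // M bit marks end of data block
                byte[] packet = new byte[12 + len];
                System.arraycopy(RtpHeader.pack(last, 96, seq++, timestamp, 0x12345678L),
                                 0, packet, 0, 12);
                System.arraycopy(payload, off, packet, 12, len);
                socket.send(new DatagramPacket(packet, packet.length, group, 5004));
            }
            timestamp += samples.length;                        // timestamp advances in sample units
        }
    }

    static short[] acquire() { return new short[2048]; }               // dummy capture buffer
    static byte[] encode(short[] s) { return new byte[s.length * 2]; } // dummy 16-bit packing
}
```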


The real acquisition process on the DVI sender is thus turned, through the RTP session, into a virtual acquisition process on the DVI receiver. The Receiving, Unpacketization and Decoding components of the DVI receiver (see Fig. 2) form a Virtual DAQ (V-DAQ). The signal evolves through the DVI architecture in a lifecycle in which it assumes intermediate forms according to the following transformations: from analog to digitized by the DAQ board; from digitized to streamed by the DVI sender; from streamed back to digitized by the V-DAQ; and from digitized to different domain types (e.g., from the time to the frequency domain) by the processing part of the DVI receiver.

A DVI can indeed comprise several sender and receiver parts. For example, a two-channel virtual oscilloscope is assembled from two source senders (one per channel) and two sink receivers. The two generated and temporally coupled media streams need to be synchronized at both the sender and receiver sites.

Sender and receiver can be loosely or tightly coupled. Loose coupling implies that sender and receiver are independent, i.e., there is no direct interaction on the acquisition process settings. The sender sets the parameters specified by a local user and starts the RTP measurement session on a multicast group. The receiver, in turn, can join the multicast group and acquire the measurement streams. In this case the local user advertises the session parameters, i.e., the acquisition process settings and the group (multicast address and port), by a rendezvous mechanism. Over the MBone the advertisement is delivered to a distributed session directory (SDR) shared on a known multicast group. A remote user can listen to the SDR, select a particular session, and run the receiver. Other mechanisms can be used, such as Session Initiation Protocol (SIP) messages [3], a WWW-based SDR, etc.
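In the loosely coupled case, the advertisement can take the form of an SDP-style description delivered through the session directory. The sketch below builds such a description; the a=x-nidaq-* attribute names are hypothetical and serve only to show how acquisition settings could be advertised alongside the multicast address, TTL and port.

```java
// Sketch of an SDP-style description advertising a loosely coupled measurement
// session.  The connection line carries the multicast group and TTL, the media
// line the port and a dynamic RTP payload type; the a=x-nidaq-* attributes are
// hypothetical names illustrating how acquisition settings might be announced.
public class SessionAnnouncement {
    static String describe(String group, int port, int ttl, int rate, int channel) {
        StringBuffer sdp = new StringBuffer();
        sdp.append("v=0\r\n");
        sdp.append("o=dvi 1 1 IN IP4 127.0.0.1\r\n");           // placeholder origin address
        sdp.append("s=DVI measurement session\r\n");
        sdp.append("c=IN IP4 ").append(group).append('/').append(ttl).append("\r\n");
        sdp.append("t=0 0\r\n");
        sdp.append("m=application ").append(port).append(" RTP/AVP 96\r\n");
        sdp.append("a=x-nidaq-rate:").append(rate).append("\r\n");      // hypothetical attribute
        sdp.append("a=x-nidaq-channel:").append(channel).append("\r\n"); // hypothetical attribute
        return sdp.toString();
    }

    public static void main(String[] args) {
        System.out.print(describe("239.1.2.3", 5004, 16, 100000, 0));
    }
}
```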



Tight coupling means that sender and receiver are bound by an interaction and control link. In this case, a server runs permanently at the source site waiting for connections. The remote client connects to the server to request, set up and start the acquisition process. After the server accepts the request and the control connection between the two partners has been created, the client issues a set-up request message containing all the setting parameters. When the server receives the message, it instantiates the DVI sender, initializes it with the parameters, and replies to the client. The client then runs the DVI receiver and sends the start message to the server, which launches the RTP measurement session. The control connection can be based on RTSP (Real-Time Streaming Protocol) [14], i.e., a text-based protocol similar to HTTP (Hyper Text Transfer Protocol) but one that maintains state. RTSP provides standard methods (e.g., set-up, play, teardown) as well as customizable ones (e.g., set_parameter, get_parameter).
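A hedged sketch of the tight-coupling exchange is given below: the client opens the control connection, issues an RTSP-like set-up request carrying the acquisition parameters, and then starts the session. The host, URL and the x-Acquisition header are illustrative inventions, not the prototype's actual message format.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Illustrative tight-coupling client: opens the control connection, issues an
// RTSP-like SETUP carrying the acquisition parameters, waits for the reply and
// then sends PLAY to start the RTP measurement session.  Host, port, URL and
// parameter syntax are examples only.
public class ControlClient {
    public static void main(String[] args) throws Exception {
        Socket control = new Socket("daq-server.example", 554);   // example server and port
        PrintWriter out = new PrintWriter(control.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(control.getInputStream()));

        out.print("SETUP rtsp://daq-server.example/scope RTSP/1.0\r\n"
                + "CSeq: 1\r\n"
                + "x-Acquisition: channel=0;rate=100000;points=2048\r\n\r\n"); // example settings
        out.flush();
        System.out.println(in.readLine());                        // e.g. "RTSP/1.0 200 OK"

        out.print("PLAY rtsp://daq-server.example/scope RTSP/1.0\r\nCSeq: 2\r\n\r\n");
        out.flush();
        System.out.println(in.readLine());
        control.close();
    }
}
```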

The sender and receiver components are built by using a time-sensitive actor model [10] which favors time predictability. The model is suited to fulfil the requirements of multimedia systems [4,5].

3.1. Actor-based multimedia systems

A variant of the Actor model is adopted that centers on light-weight actors and a modular handling of synchronization and timing constraints [10]. Actors are finite state machines. The arrival of an event (i.e., a message) causes a state transition and the execution of an atomic action. At the action's termination the actor is ready to process the next message, and so forth. Actors do not have internal threads for message processing: at most one action can be in progress in an actor at any given time. Actors can be grouped into clusters (i.e., subsystems). A subsystem is allocated to a distinct physical processor. It is regulated by a control machine (Fig. 3) that hosts a time notion and is responsible for message buffering (scheduling) and dispatching. The control machine can be customized through programming. For instance, in [10] a specialization of the control machine for hard real-time systems is proposed, where scheduling is based on messages time-stamped with a time validity window [tmin, tmax] expressing the interval of admissible delivery times.

Fig. 3. Time-sensitive actor architecture.


Message selection and dispatching are based on an Earliest Deadline First (EDF) strategy. Within a subsystem, actor concurrency depends on message processing interleaving; true parallelism is possible among actors belonging to distinct subsystems. A distinguishing feature of the actor model is the modular handling of timing constraints. Application actors are developed according to functional issues only; they are not aware of when they are activated by a message. Timing requirements are the responsibility of RT-synchronizers [9,12], i.e., special actors which capture "just sent" messages (including messages received from the network) and apply to them timing constraints affecting scheduling. The control machines of a distributed system can be interconnected by a network and a real-time protocol so as to fulfill system-wide timing constraints. Implementations of the actor model have been achieved in C++ and Java. This work considers Java.
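The scheduling policy can be illustrated by the following sketch, which is not the actual framework API of [10]: each buffered message carries its [tmin, tmax] validity window, and the control machine dispatches the eligible message with the earliest tmax.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative Earliest-Deadline-First control machine (not the API of [10]):
// each buffered message carries a validity window [tmin, tmax] of admissible
// delivery times, and the message with the smallest tmax is dispatched first,
// provided the current time has reached its tmin.
public class EdfControlMachine {
    static class TimedMessage {
        final long tmin, tmax;
        final Runnable action;                          // the actor transition to fire
        TimedMessage(long tmin, long tmax, Runnable action) {
            this.tmin = tmin; this.tmax = tmax; this.action = action;
        }
    }

    private final List pending = new ArrayList();       // buffered (scheduled) messages

    void schedule(TimedMessage m) { pending.add(m); }

    // Selects and dispatches the eligible message with the earliest deadline.
    void dispatch(long now) {
        TimedMessage next = null;
        for (int i = 0; i < pending.size(); i++) {
            TimedMessage m = (TimedMessage) pending.get(i);
            if (now >= m.tmin && (next == null || m.tmax < next.tmax)) next = m;
        }
        if (next != null) {
            pending.remove(next);
            next.action.run();                           // atomic action: one at a time
        }
    }
}
```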


3.2. Modeling multimedia-based data acquisition systems

Actors can naturally be used to structure a multimedia-based distributed data acquisition system. Two kinds of subsystems are introduced, specialized to handle the requirements existing at the server (transmitter) and client (receiver) sides of the application [6]. The transmitter side is typically devoted to acquiring the multimedia data (measurement samples), e.g., from a data acquisition device or from stored files, and to sending them through a network binding to the client(s) for the final presentation. Specific timing and synchronization constraints exist and should be managed at the server and client sides, respectively, to ensure the quality-of-service parameters. To this purpose, both server and client subsystems are equipped with a multimedia control machine with suitable QoSsynchronizers (Fig. 4). A QoSsynchronizer is an RT-synchronizer [12] which captures and verifies QoS timing constraints. As an example, the QoSsynchronizer in a client subsystem can perform fine-grain inter-media synchronization (e.g., synchronization between two channels of an oscilloscope).

Bindings, i.e., logical communication channels, connect transmitter and receiver subsystems. Bindings can be point-to-point (i.e., unicast) or point-to-multipoint (i.e., multicast). A binding is created by a bind operation originated by media-actors called Binders. A Binder governs the on-going flow of data (e.g., continuous media or control messages) sent into the binding.

Fig. 4. Architecture of a single multimedia session composed of two synchronized measurement streams.



It hides the particular transmission mechanisms (e.g., network and transport protocols) and can also monitor the binding QoS so as to provide information such as throughput, jitter, latency and packet loss statistics. A Streamer is a periodic actor that accesses digital media information through passive media objects (MediaFile, MediaDevice, MediaNetSource) and sends it to Binders or Presenters. Presenters are media-actors specialized to render media objects.

Fig. 4 portrays a multimedia-based data acquisition system concerned with two synchronized measurement streams over IP-multicast. The Transmitter and Receiver(s) are connected by two data streaming bindings, which carry the data of the multimedia session according to the RTP/RTCP protocol [13]. When the data streaming binding is multicast, receiver subsystems can arbitrarily join the on-going multimedia session requested by its initiator. The Transmitter subsystem is responsible for the capturing process and for the enforcement (by using the RTP header information) of timing constraints upon the media streams so as to fulfil the requirements of the multimedia presentation. At the remote site, the Receiver subsystem(s) control and render the requested multimedia session. At the transmitter site, an actor pair, Streamer and Binder, is instantiated for each media stream. A Streamer captures the media (measurement data) from the devices, encodes them, and sends them, as messages, to the Binders. At the receiver site a mirrored situation exists: a Binder polls the bindings and delivers the read messages to the Presenters for rendering purposes. Rate synchronizers are introduced for timing the acquisition, transmission and reception operations of the media actors.
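As an illustration of the fine-grain inter-media synchronization a QoSsynchronizer may enforce between two measurement streams (an illustrative check, not the framework's actual code), the RTP timestamps of the two channels can be mapped to a common time scale and compared against a tolerated skew:

```java
// Illustrative inter-stream synchronization check in the spirit of a
// QoSsynchronizer: the latest RTP timestamps of two measurement streams are
// converted to milliseconds through the common acquisition rate, and a delay
// for the channel running ahead is computed when the skew exceeds a bound.
public class SkewCheck {
    static final double RATE = 100000.0;     // samples per second (common acquisition rate)
    static final double MAX_SKEW_MS = 5.0;   // example tolerated inter-channel skew

    // Returns the delay (ms) to apply to the channel that is running ahead.
    static double correction(long timestampCh1, long timestampCh2) {
        double t1 = timestampCh1 / RATE * 1000.0;   // sample count -> milliseconds
        double t2 = timestampCh2 / RATE * 1000.0;
        double skew = Math.abs(t1 - t2);
        return skew > MAX_SKEW_MS ? skew : 0.0;
    }

    public static void main(String[] args) {
        System.out.println(correction(200000, 199000) + " ms"); // 1000 samples = 10 ms skew
    }
}
```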

4. A prototype implementation

A virtual one-channel oscilloscope equipped with a simple spectrum analyzer has been developed using Java 1.2. The testbed consists of a set of PCs networked by a 10 Mbps Ethernet LAN. Signals, coming from both devices under test and polynomial waveform synthesizers, are acquired by using a National Instruments Lab-PC-1200 board plugged into a 133 MHz Pentium running Windows 95.

The Lab-PC-1200 is a completely SW-configurable, multi-functional I/O ISA board. It is a narrow-band device with a 12-bit A/D resolution, a maximum sampling rate of 100 kS/s, and eight analog input channels. The HW-independent NI-DAQ driver software [11] is exploited; it provides functions for DAQ I/O, buffer and data management (e.g., double-buffered acquisition). Since the NI-DAQ drivers isolate the application from the NI-DAQ HW products, a single payload type can be defined for NI-DAQ. It should be general enough to describe all the features of the existing NI-DAQ HW products so as to enable an independent DVI receiver to correctly interpret the payload data. A payload type was defined whose main sub-header fields are: DAQ product ID, operation settings (channel, gain, etc.), voltage ranges, etc.

The prototyped virtual device operates according to the loosely coupled paradigm. The DVI sender can generate media streams (data) at a rate of up to 1.6 Mb/s; this limit is due to the Lab-PC-1200 board (100 kS/s, presumably carried as 16-bit sample words, gives 1.6 Mb/s). The DVI Sender controls the DAQ board through an implemented Java class which interfaces to native methods developed in C++. It has to set the following parameters, which characterize the measurement stream:
• Address (multicast or unicast) and Port on which the session is going to be sent.
• TTL. The scope of the session, e.g., local (1), regional (16), global (127).
• Number of points. The number of points to be acquired.
• Rate. The acquisition rate.
• Duration. The duration of the session in ms.
• Channel. The DAQ board channel.

The virtual oscilloscope is split into two parts: the DVI Receiver and the GUI. The DVI Receiver joins the address/port set by the sender and, by using the information contained in the RTP header, reconstructs the spatial and temporal characteristics of the stream needed for processing and rendering the stream itself.
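A sketch of that reconstruction is given below. The sub-header layout (product ID, channel, gain, voltage range, number of points, rate) is only a plausible reading of the payload type described above, not its actual wire format, and the time axis is rebuilt under the assumption that the RTP timestamp counts sample periods.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative V-DAQ unpacketization: reads an assumed NI-DAQ sub-header
// layout (the prototype's exact wire format is not published) and rebuilds
// the time axis of the samples from the RTP timestamp and the acquisition rate.
public class MeasurementPayload {
    int productId, channel, gain, numPoints, rate;   // sub-header fields (assumed layout)
    double voltageMin, voltageMax;
    short[] samples;
    double[] timeAxis;                               // seconds, relative to session start

    static MeasurementPayload parse(byte[] payload, long rtpTimestamp) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(payload));
        MeasurementPayload p = new MeasurementPayload();
        p.productId = in.readUnsignedShort();
        p.channel = in.readUnsignedByte();
        p.gain = in.readUnsignedByte();
        p.voltageMin = in.readFloat();
        p.voltageMax = in.readFloat();
        p.numPoints = in.readUnsignedShort();
        p.rate = in.readInt();
        p.samples = new short[p.numPoints];
        p.timeAxis = new double[p.numPoints];
        for (int i = 0; i < p.numPoints; i++) {
            p.samples[i] = in.readShort();                           // 16-bit sample word
            p.timeAxis[i] = (rtpTimestamp + i) / (double) p.rate;    // timestamp counts samples
        }
        return p;
    }
}
```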



Fig. 5. The virtual oscilloscope.

The GUI of the virtual oscilloscope is portrayed in Fig. 5. It mirrors the HP 54504A front panel and has five functional areas: System Control, Entry, Setup, Menus and Function Selection. Keys in the System Control section cause an action when selected. The R/S key toggles the acquisition status of the VOscilloscope: if the VOscilloscope is running (i.e., continuously acquiring), pressing R/S stops it, and vice versa; the last acquired data are buffered and displayed. The S key performs a single acquisition. The Entry panel consists of a multi-function numeric keypad and a knob; the keypad is for direct numeric input. The Setup section controls the automatic setup (e.g., autoscale, save and recall setups, etc.).

Menus access any of nine separate subsystems: timebase (TMB), channel (CHA), waveform math and save (MATH, SAVE), etc. The MATH menu allows applying selected functions to the acquired signal, e.g., an implemented FFT algorithm. Each key in the Function Selection area corresponds to a function shown in the displayed menu. In Fig. 6, the VOscilloscope runs the FFT on a received square waveform signal.

Fig. 6. The display shows the spectrum of a square waveform signal continuously acquired.



5. Related work

This section summarizes some commercial and/or research solutions concerning the development of DVI. Pros and cons are considered with respect to real-time responsiveness, the supported interaction patterns and the user benefits. The first solution refers to the National Instruments SW products, which are the most widely used in both academia and industry. The second describes research at MIT that is very close to the approach proposed in this paper.

5.1. Remote Device Access (RDA)

NI-DAQ for Windows version 6.x includes a feature called RDA [7]. RDA makes it possible to acquire data over a network using a NI-DAQ board installed in a remote computer. RDA uses TCP/IP as the underlying network protocol, so it can be exploited both on LANs and on WANs that are based on TCP/IP. RDA is implemented at the NI-DAQ driver level and extends the NI-DAQ model across the network. A virtual instrument application can be built using LabVIEW, LabWindows, or Virtual Bench. The virtual instrument runs on a client (the RDA Client) whereas the data acquisition devices are located on a server (the RDA Server). A single RDA Client can access remote devices in several RDA Servers as well as local ones, and a remote device in an RDA Server can be used by several RDA Clients. Some care has to be taken to avoid conflicts between RDA Clients: for example, if a client A configures Analog Input Channel 3 in group 0, and a client B configures Analog Input Channel 5 on the same remote device also using group 0, the readings taken by client A will be from channel 5 and not from channel 3. RDA relies on Microsoft's implementation of RPC (Remote Procedure Call), which is based on the OSF/DCE. The distribution aspects of an application are completely transparent to the user. In fact, when a client, for example a LabVIEW application, accesses a certain device and NI-DAQ sees that this device is a remote one, the call is packaged up and sent via RPC to the relevant remote computer. There, the RDA Server application receives the call, unpackages it and passes it to NI-DAQ as a normal local call, which is finally executed.

TCP is not suited for the real-time transmission of data [3], and it scales poorly since it is unicast-centered. In addition, it is not possible to build real-time synchronous WANs [8]. RDA, being based on both TCP and RPC, suffers from these drawbacks. The interaction patterns are limited to a Client/Server paradigm. The user can transparently connect to several remote devices but has to be very careful to avoid conflicts with other users using the same devices. Moreover, the real-time behavior of the received data, in the usual case where the client has spawned a process on the server which uses double buffering for data acquisition, can be critically affected by GUI operations (e.g., sizing or moving a window) and also by unpredictable network latencies. Due to the direct and synchronous connection existing between the RDA client and server, the loss of real-timeness can imply a buffer overwrite in the server. The mDVI approach described in this paper does not suffer from these drawbacks, because the receiver/sender connection is asynchronous and because of the explicit re-synchronization task adopted at the receiver site. Although the sender process depends on double buffering during data acquisition, that process is local and is not disturbed by client interactions.

5.2. Virtual Sample Processing (SpectrumWare)

The Virtual Sample Processing (VSP) approach [1] is software-centered: many media handling functions are transferred from hardware to application-level software. The SpectrumWare testbed is used for experimenting with new media types such as ultrasound, infrared and RF signals. The backbone of the prototyping environment is the VuNet, an ATM desk-area network. Data are captured by using network-based appliances with transducers for RF signals. These signals are processed on hosts (DEC workstations) distributed around the network. The programming environment is based on the VuSystem, which has already demonstrated the feasibility of a software-based approach to conventional media, including audio, video, text and graphics. The VuSystem supports the flow of temporally sensitive information, i.e., time-stamped payloads, through input and output ports. It has been extended to embed a variety of new media types.



In particular, payload types have been added to handle sampled and complex-valued data streams (e.g., Fourier transforms of sampled data).

The mDVI project described in this paper owes much to VSP. However, the design of mDVI was mainly driven by the requirement of deploying virtual devices over the Internet, i.e., on top of the standard network infrastructure and multimedia protocols and using the Java programming language. Novel in mDVI is the adoption of an actor model for the modular structuring of an application and its operating software. The approach purposely avoids dependencies on hidden policies of an operating system.

6. Conclusions

This paper has proposed an original approach to the development of DVI over the Internet. The approach exploits the new multimedia internetworking solutions and embeds them in the distributed measurement research area. Mechanisms are described which permit efficient and scalable access to remote sensing and control devices. Current commercial solutions for DVI (the Internet developers toolkit for NI LabVIEW and LabWindows/CVI, DataSocket, RDA, HP-VEE 5.0) are based on TCP, which is not suitable for real-time, and/or on unicast IP, which scales very poorly. Moreover, only http, ftp, e-mail and proprietary protocols (e.g., dstp) are adopted, without taking into account the new emerging standards addressed by the Internet research community. The paper provides a virtual oscilloscope example to demonstrate the flexibility and the potential of the research project.

The approach opens up new measurement scenarios such as: (a) sharing the same measurement streams among clustered hosts for parallel processing; (b) DVI for teleteaching purposes, where the teacher efficiently shows n students how an instrument works; (c) cooperative instrument control, which enables a workgroup to operate, at the same time and in real time, on a shared resource.

On-going directions of work include:
• Development of a wide range of virtual devices by using different and more powerful DAQ HW.
• Implementation of a "Measurement on-demand" server which makes controllable RTP measurement sessions available, both live and archived.

Archived sessions are RTP measurement sessions previously stored on the server file system [4].
• Running of the implemented DVI on clusters of multicast-routed LANs with RSVP support.

Acknowledgements

This work was partially funded by the Ministero dell'Università e della Ricerca Scientifica e Tecnologica (MURST) in the framework of the project "MOSAICO".

References

[1] V.G. Bose, A.G. Chiu, D.L. Tennenhouse, Virtual sample processing: extending the reach of multimedia, Multimedia Tools and Applications 5 (3) (1997).
[2] R. Braden, L. Zhang, S. Berson, S. Herzog, S. Jamin, Resource Reservation Protocol (RSVP) version 1 functional specification, RFC 2205, IETF, Sept. 1997.
[3] J. Crowcroft, M. Handley, I. Wakeman, Internetworking Multimedia, UCL Press, 1999.
[4] G. Fortino, L. Nigro, An interactive and cooperative video recording on-demand system over MBone, in: Proc. of SCS EuroMedia '99, pp. 120–124.
[5] G. Fortino, L. Nigro, Modeling, analysis and implementation of actor-based multimedia systems, in: Proc. of Parallel and Distributed Processing Techniques and Applications (PDPTA '99), Las Vegas, June 1999, pp. 489–495.
[6] G. Fortino, D. Grimaldi, L. Nigro, Multicast control of mobile measurement systems, IEEE Trans. Instrum. Meas. 48 (1998) 1149–1154.
[7] T. Hayles, Developing Networked Data Acquisition Systems with NI-DAQ, NI Application Note 116, 1998.
[8] K.G. Shin, P. Ramanathan, Real-time computing: a new discipline of computer science and engineering, Proc. IEEE 82 (1) (1994).
[9] B. Nielsen, S. Ren, G. Agha, Specification of real-time interaction constraints, in: Proc. of 1st Int. Symposium on Object-Oriented Real-Time Computing, IEEE Computer Society, 1998.
[10] L. Nigro, F. Pupo, A modular approach to real-time programming using actors and Java, Control Engineering Practice 6 (1998) 1485–1491.
[11] NI-DAQ User Manual for PC Compatibles, Version 6.x, National Instruments, April 1998.
[12] S. Ren, G. Agha, M. Saito, A modular approach for programming distributed real-time systems, J. Parallel Distrib. Comput., Special issue on Object-Oriented Real-Time Systems, 1996.
[13] H. Schulzrinne, S. Casner, R. Frederick, V. Jacobson, RTP: a Transport Protocol for Real-Time Applications, RFC 1889, IETF, Jan. 1996.



[14] H. Schulzrinne, A. Rao, R. Lanphier, Real Time Streaming Protocol (RTSP), RFC 2326, IETF, Apr. 1998.

Giancarlo Fortino was born in Italy in 1971. He graduated in Computer Science Engineering in 1995 at the University of Calabria. He is currently pursuing his PhD in Computer Science at the Department of Electronics, Informatics and Systems Science of the University of Calabria. His main interests include the modeling, design and implementation of distributed applications, including multimedia synchronization and cooperative mechanisms, such as videoconferencing and on-demand applications, and measurement architectures on top of the Internet.

Libero Nigro was born in Italy in 1953. He earned his degree in Electrical Engineering in 1978 at the University of Calabria. He is currently an Associate Professor of Computer Science at the Department of Electronics, Informatics and Systems Science of the University of Calabria. His research interests include object-oriented software engineering, distributed real-time systems, parallel simulation, measurement and multimedia systems. Prof. Nigro is a member of ACM and IEEE.