Signal Processing: Image Communication 28 (2013) 87–96
Virtual world control system using sensed information and adaptation engine

Sang-Kyun Kim a, Yong Soo Joo a, Minho Shin a,*, Seungju Han b, Jae-Joon Han b

a Myongji University, 116 Myongjiro Choin-gu, Yongin-si, 449-728, Republic of Korea
b Samsung Advanced Institute of Technology, Mt. 14-1, Nongseo-dong, Giheung-gu, Yongin-si, Gyeonggi-do, 446-712, Republic of Korea

* Corresponding author. Tel.: +82 31 330 6786; fax: +82 330 6967. E-mail address: [email protected] (M. Shin).
Available online 6 November 2012

Abstract
The sensed information (e.g., temperature, humidity, light intensity, gas intensity) obtained from a user's vicinity plays an important role in the generation of environmental changes, the main character's reactions, and storyline changes in the virtual world. In this paper, a method is proposed to transform sensed information from the real world into standardized XML instances and to control virtual world objects with them. The sensed information from the real world is transformed into XML instances standardized by the MPEG-V Part 5 (Data Formats for Interaction Devices) schemas and then delivered to a Real-to-Virtual (RV) adaptation engine. The engine generates another XML instance, standardized by MPEG-V Part 4 (Virtual World Object Characteristics), that describes the control information of avatars and objects in the virtual world. The system is demonstrated by a real-world mock-up equipped with a sensor set and a virtual space realized in Second Life.

© 2012 Elsevier B.V. All rights reserved.
Keywords: MPEG-V; Sensed information; RV adaptation engine; Virtual world object characteristics; Virtual object control
1. Introduction

Popularized by science-fiction movies and encouraged by recent advances in technology, virtual reality and virtual worlds are becoming real rather than remaining part of one's imagination. Connecting the human brain to a virtual world (as depicted in the 1999 movie The Matrix) and transferring the human mind to an avatar (as in the 2009 movie Avatar) are well-known imagined technologies relevant to virtual reality. Such technologies, however, do not exist yet. Instead, researchers now focus on immersive technologies such as Virtual Cocoon or 3D graphics, both of which enable deep immersion of the user in a virtual world. Fig. 1 shows Virtual Cocoon, a virtual-reality delivery device developed by researchers at the University of York and the University of Warwick in the UK [1]. Virtual Cocoon is a helmet-like device containing a
highly dynamic screen, 3D audio headphones, and various actuators that can deliver smell, taste, temperature, humidity, and wind to the user's senses. It provides an environment in which the user can experience a virtual world with greater realism.

3D games are typical applications that realize virtual worlds using 3D graphics technology. For example, "Second Life" allows users to project themselves onto virtual characters (called avatars). Through the avatar, the user can live a virtual life: communicate with others, perform the activities of daily life, and own virtual assets such as houses and other property. In massively multi-player online role-playing games (MMORPGs) such as "World of Warcraft" or "Lineage", users can operate their characters in a virtual world and cooperate with others to fulfill missions. These 3D games immerse users in a virtual world by providing a fictional environment that could otherwise be experienced only in the imagination.

Another body of research focuses on the effective control of the objects and surroundings in a virtual world. "Nintendo Wii" provides sensor-based virtual-world
Fig. 1. Cocoon for virtual reality experience [1].
control technology. It exploits accelerometers and movement sensors to synchronize the motion of the game character with the real motion of the gamer. "Kinect" by Microsoft, on the other hand, uses a camera and real-time vision processing to recognize user movements and control an avatar or an object in the virtual world accordingly. Brain waves are also considered a possible means of controlling the virtual world [2,3]. These virtual-world control technologies aim to provide an interface for interaction between users, objects, and environments in a virtual world.

A previous study [4] presented a use case of physical exercise for the elderly in the Metaverse project. This study showed how to connect real-world devices such as biosensors to virtual worlds and demonstrated the transport of virtual objects from Second Life to a customized virtual world. Although this study reported the use of a standard-like format, it gave no clear explanation of the format. Other work [5] presented a virtual travel use case in the Metaverse project, creating scripts in a specific language (Linden Script) to animate or manipulate virtual world objects. However, because state-of-the-art technologies thus far target proprietary products, there is no standard way to represent the data of sensors in the real world and of objects in a virtual world. In other words, there is no standard way to connect the data from sensors to virtual world objects.

Since 2008, the MPEG international standardization group has been working on a standard known as MPEG-V (ISO/IEC 23005) [6]. The MPEG-V standard aims to define the data-exchange interfaces between the real world and a virtual world. In particular, Part 4 [7] defines the XML schema for information about avatars and virtual objects, and Part 5 [6] defines the XML schema for sensed information. Part 7 [8] provides reference software that implements the basic functionality for generating, parsing, and modifying XML instances according to the XML schemas defined in each part. Based on the MPEG-V standard, a virtual-world control system was proposed by Han et al. [9]. Their work focuses on controlling the facial expression of an avatar according to the facial expression of the user, as recognized by a camera. The system is implemented based on MPEG-V Parts 2, 4, and 5 (ISO/IEC 23005-2, -4, -5).
In this paper, we propose a virtual-world control system that utilizes sensed information and an adaptation engine based on MPEG-V Parts 4, 5, and 7 (ISO/IEC 23005-4, -5, -7). Our work differs from prior work [4,9] in that the system proposed here can be applied to generic problems, whereas prior projects focused on specific problems, e.g., facial expressions. For example, the proposed virtual-world control system can be applied to build a facility management system that visualizes a manufacturing system in 3D graphics using sensed information. Another possibility is a health service that informs the user of his or her health status via a 3D avatar and guides the user toward desirable workout patterns [10]. The important characteristics of the proposed system are described in detail in Section 3.2.

The remainder of this paper is organized as follows. In Section 2, we describe the metadata for sensed information and virtual world object characteristics. In Section 3, we explain the architecture of the proposed virtual world control system. Section 4 discusses the implementation and results of the proposed system, and Section 5 concludes the paper and discusses future work.

2. Metadata for sensed information and virtual world object characteristics

This section explains how the MPEG-V standard defines the XML schemas for describing the sensed information and virtual world object characteristics used in the virtual world control system proposed in this paper.

2.1. Metadata for sensed information

MPEG-V Part 5 [6] defines the XML schema for describing Sensed Information (SI) and Device Commands. The purpose of this XML schema is to provide the information necessary for interactions between the real world and the virtual world. The schema consists of the Interaction Information Description Language (IIDL), the Sensed Information Vocabulary (SIV), and the Device Command Vocabulary. Fig. 2 shows the description scheme of the IIDL metadata. To date, the sensed information types defined in the SIV of MPEG-V Part 5 include light, ambient noise, temperature, humidity, distance, atmospheric pressure, position, velocity, acceleration, orientation, angular velocity, angular acceleration, force, torque, pressure, motion, and intelligent camera sensors. More sensors will be added in the future.

Sensed information acquired from real-world sensors can be delivered to the virtual world without modification, or it can be transformed into different information by the RV Adaptation Engine before it arrives in the virtual world. For example, an SI instance can be converted into virtual world object (VWO) information, which can then control the virtual world. The SI types used in this paper are described below. The current syntax and semantics of the metadata for MPEG-V are based on the XML schema. However, in this paper an EBNF (Extended Backus–Naur Form)-like
Fig. 2. XML description scheme of IIDL metadata.
overview of standardized sensed information is provided due to the lack of space and the verbosity of XML [11].
SensedInfoBaseType provides the topmost type of the base-type hierarchy, which each individual instance of sensed information can inherit. The TimeStamp provides the time at which the sensed information was acquired. As defined in MPEG-V Part 6 (ISO/IEC 23005-6) [12], there is a choice among three timing schemes: absolute time, clock tick time, and the delta of the clock tick time. The sensedInfoBaseAttributes describes a group of attributes for the sensed information, as follows.
The attribute id is a unique identifier for individual sensed information. The sensorIdRef references the sensor device that generated the information included in this specific instance of sensed information. The linkedlist describes a multi-sensor structure consisting of a group of sensors, such that each record contains a reference to the ID of the next sensor. The groupID is an identifier for the multi-sensor group to which this specific sensor belongs. The attribute activate describes whether the sensor shall be activated: a value of 'true' means the sensor shall be activated, and 'false' means it shall be deactivated. The priority describes the priority of the sensed information with respect to other sensed information sharing the same point in time when the sensed information is adapted. A value of one indicates the highest priority, and larger values indicate lower priorities. The default value of the priority is one. For multiple instances of sensed information with the same priority, the order of processing can be determined by the adaptation engine itself.
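For illustration, a minimal hypothetical instance of sensed information carrying these base attributes may be sketched as follows; the namespace prefixes (iidl, siv, mpegvct), the id values, and the exact timestamp attribute names are assumptions rather than verbatim text from the standard, and namespace declarations are omitted.

  <!-- Hypothetical sensed-information instance; prefixes, id values,
       and timestamp attribute names are assumptions for illustration. -->
  <iidl:SensedInfo xsi:type="siv:TemperatureSensorType"
      id="TSI001" sensorIdRef="TS_01" groupID="SG_01"
      activate="true" priority="1" value="24.5">
    <!-- Absolute time, one of the three timing schemes of Part 6 -->
    <iidl:TimeStamp xsi:type="mpegvct:AbsoluteTimeType"
        absTime="2012-10-01T12:50:05"/>
  </iidl:SensedInfo>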
LightSensorType is a tool for describing sensed information with respect to a light sensor. The value describes the sensed intensity of the light in lux. This attribute can be used to represent "White" when the light sensor senses "RGBW". In addition, unit specifies the unit of the sensed value, if a unit other than the default unit is used, as a reference to the classification scheme term provided by UnitCS as defined in A.2.1 of MPEG-V Part 6 [12]. The unit attribute has the same meaning throughout the remainder of this paper. Color describes the list of colors sensed by the light sensor, either as a reference to a classification scheme that uses the mpeg7:termReferenceType defined in 7.6 of ISO/IEC 15938-5:2003 or as an RGB value. The classification scheme (CS) that may be used for this purpose is the ColorCS defined in A.2.2 of ISO/IEC 23005-6. The element colorValue describes the sensed values of a color sensor with respect to color space models, and its model attribute describes the color model of the sensed values.
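As a sketch, a light-sensor instance carrying both an intensity value and a sensed color might then look as follows; the ColorCS term URN shown is illustrative only, not quoted from ISO/IEC 23005-6.

  <!-- Hypothetical light-sensor instance; the ColorCS term URN is
       an illustrative assumption. -->
  <iidl:SensedInfo xsi:type="siv:LightSensorType"
      id="LSI001" sensorIdRef="LS_01" activate="true" value="150">
    <siv:Color>urn:mpeg:mpeg-v:01-SI-ColorCS-NS:white</siv:Color>
  </iidl:SensedInfo>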
AmbientNoiseSensorType is a tool for describing sensed information using an ambient noise sensor. The term lifespan describes the duration used to measure the information based on the timestamp. The unit of lifespan is the internal clock count. The term value describes the sensed value of the ambient noise with respect to the decibel (dB) level.
The TemperatureSensorType is a tool for describing sensed information with respect to a temperature sensor. The value describes the sensed value of the temperature with respect to the Celsius (°C) scale.
HumiditySensorType is a tool for describing sensed information with respect to a humidity sensor. The term value describes the value sensed by the humidity sensor with respect to percentage (%).
AccelerationSensorType is a tool for describing sensed information with respect to an acceleration sensor.
Fig. 3. XML description scheme of VWOC metadata.
Acceleration describes the value of the acceleration sensor as a three-dimensional vector in m/s². The attribute axis is the number of axes measurable by the acceleration sensor; its value is 1, 2, or 3, with a default of 3. When axis is 1, only X is used; when it is 2, X and Y are used; and when it is 3, X, Y, and Z are used.

2.2. Metadata for virtual world object characteristics

The information about a VWO can be described by virtual world object characteristics metadata (VWOC-metadata), as defined in MPEG-V Part 4 (ISO/IEC 23005-4) [7]. The XML schema for VWOC-metadata describes the information of the VWOs. Fig. 3 shows the VWOC-metadata description scheme. VWOCInfo, the root element, can describe both avatars and virtual objects by containing the AvatarList and VirtualObjectList elements. Each list represents the corresponding element (Avatar for AvatarList and VirtualObject for VirtualObjectList) in the form of a list. The VWOC types used in this paper are described in the following sections. Again, EBNF is used for brevity.
VWOCInfoType provides the basic structure that a VWO characteristics information description should follow through the root element. AvatarList is an optional wrapper element that serves as a placeholder for the list of avatar characteristics information. VirtualObjectList is an optional wrapper element that serves as a placeholder for the list of virtual object characteristics information.
VirtualObjectListType is a wrapper element type which allows multiple occurrences of virtual object characteristic information. VirtualObject specifies the description of the virtual object characteristic information.
VirtualObjectBaseType provides a characteristic description of an individual virtual object. VWOBaseType describes common attributes and elements in both avatars and virtual objects.
Identification describes the identification of the VWO. Description contains a description of the VWO. VWOC describes a set of characteristics of the VWOs. BehaviorModelList describes a list of the behavior models associated with the VWO. The id attribute is a unique identifier for identifying individual instances of VWO information.
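Putting these elements together, the overall shape of a VWOC-metadata instance can be sketched as below; the element ordering, id value, and vwoc prefix are assumptions based on Fig. 3, and the children of the VWOC element (Appearance, Animation, and so on) are described next.

  <!-- Hypothetical VWOC-metadata skeleton; prefix, id value, and
       element ordering are illustrative assumptions. -->
  <vwoc:VWOCInfo>
    <vwoc:AvatarList><!-- avatar entries --></vwoc:AvatarList>
    <vwoc:VirtualObjectList>
      <vwoc:VirtualObject id="VO_LAMP_01">
        <vwoc:Identification><!-- identification of the VWO --></vwoc:Identification>
        <vwoc:Description>A fluorescent lamp</vwoc:Description>
        <vwoc:VWOC><!-- Appearance, Animation, HapticProperty, ... --></vwoc:VWOC>
        <vwoc:BehaviorModelList><!-- behavior models --></vwoc:BehaviorModelList>
      </vwoc:VirtualObject>
    </vwoc:VirtualObjectList>
  </vwoc:VWOCInfo>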
Appearance contains one or more resource links to appearance files describing the visual and tactile elements of the object. Animation contains a set of metadata describing pre-recorded animations associated with the object. HapticProperty contains a set of high-level descriptors of the haptic properties of the VWO, defined by the VWOHapticPropertyType. VirtualObjectComponents contains the list of virtual objects that are concatenated to the virtual object as components.

3. Overview of the virtual world control system

In this section, we describe the architecture of the proposed virtual world control system.

3.1. Architecture of the virtual world control system

Fig. 4 shows the overall architecture of the proposed virtual world control system. The system consists of three modules: the sensed information creation module (SICM), which generates and delivers sensed information metadata from the information acquired from sensors, following MPEG-V Part 5; the adaptation engine module (AEM), which converts the received sensed information metadata into VWO metadata; and the virtual world control module (VWCM), which controls the virtual world based on the VWO metadata.

The SICM comprises Sensors, which generate information about the real world (e.g., temperature, humidity, and vibration sensors); the Sensing Data Acquirer, which acquires information from the sensors; and the Sensed Information Metadata Creator (SI-Creator), which converts the sensed information into a standard data format and delivers it to the AEM.
Fig. 4. Architecture of the proposed virtual world control system.
Fig. 5. A real-world environment mock-up with sensors (a) and its virtual world implementation in Second Life (b).
Sensors from different manufacturers generate sensed information with different representations and sampling methods. To address this incompatibility, the Sensing Data Acquirer implements a sampling method for each sensor, and the SI-Creator converts the differently formatted sensed information into a standard data format so that sensed information, even from different sensor manufacturers, is represented consistently.

The AEM comprises the sensed information metadata parser (SI-Parser), which parses the SI-metadata received from the SI-Creator; the RV Adaptation Engine, which generates VWO information from the parsed sensed information; and the virtual world object information metadata creator (VWO-Creator), which converts the VWO information received from the RV Adaptation Engine into a standard data format and transfers the data to the VWCM. The RV Adaptation Engine is designed to generate VWO control information based on the sensed information acquired from the real world. The actual implementation method may change according to the application system.

The VWCM consists of the virtual world object information metadata parser (VWO-Parser), which parses the VWO metadata received from the AEM, and the virtual world object controller (VWO-Controller), which controls the virtual world according to the parsed VWO information. The implementation of the virtual world may differ across application systems, and different virtual world implementations require different methods of controlling the VWOs. Therefore, the VWO-Controller is designed to generate different control commands for different virtual worlds from the same parsed VWO information.
3.2. Important characteristics of the virtual world control system

The virtual world control system proposed in this paper has the following properties. First, the proposed system uses standard data formats for the information sensed by various sensors (e.g., temperature, humidity, light-level, or vibration). The use of a standard data format for the sensed information allows the building of application systems that are independent of their underlying sensor devices. The proposed SICM can easily be adopted by a sensor management system that acquires multiple instances of sensor data, generates standard-format sensor data, and transfers the data to other consuming modules.

Second, the proposed system employs an adaptation engine (i.e., the AEM) that converts the acquired sensed information into VWO information, which can express the status information of VWOs (avatars, objects, or surroundings). This design can support different methods of controlling the virtual world when a different adaptation engine is employed; depending on the application, the system adopts the most appropriate implementation of the adaptation engine. For instance, the proposed AEM can easily be installed in a server that analyzes sensor data and produces standard VWO metadata.

Third, the proposed system controls the virtual world through the VWO information obtained from the adaptation engine. Because the virtual world is controlled by the standardized VWO metadata, the system can be built independently of the actual implementation of the virtual
world. The proposed VWCM can easily be adopted by any type of consumer device that connects to the virtual world.

The proposed architecture is easily reusable in other applications. A similar system architecture was applied to build a health avatar application. One study [10] presents a health avatar system and an application scenario that obtains personal health data from biometric sensors attached to exercise equipment and converts the
data into a standard metadata format for interfacing sensors with virtual world health avatar applications. This system utilizes the biometric metadata from the sensors to visualize virtual world health avatars, allowing users to easily recognize their health condition. In that system, the SICM is installed in a processing module of the exercise equipment, the AEM in a data analysis server, and the VWCM in a mobile device.
Fig. 6. Graphic user interface of the sensed information creation module.
4. Implementation of the virtual world control system

In this section, we provide the details of our implementation of the proposed system and discuss the test results.
4.1. Implementation

To implement the proposed virtual world control system, we first built a mock-up model of the real world, as shown in Fig. 5-(a). Based on the mock-up model, we implemented a corresponding virtual world, as shown in Fig. 5-(b), using "Second Life". "Second Life" is open-source software consisting of servers and clients. We built our virtual world by implementing our own servers and then using the client programs provided by "Second Life" to connect to them. We prepared light, temperature, humidity, sound, and vibration sensors to collect sensed information from the real world. This information is used to control the virtual objects in the virtual world, which was constructed on the "Second Life" architecture. We implemented all three modules of the virtual world control system, the SICM, AEM, and VWCM, in the Java language under Microsoft Windows XP SP3.

Fig. 6 shows the graphical user interface (GUI) of the Sensed Information Creation Module; this module collects data from the sensors and then converts the data into a standard data format. This GUI allows real-time monitoring of the sensor data as well as the sensed
information metadata converted from the sensor data according to MPEG-V Part 5. The window in Fig. 6-A is deactivated when the sensors are connected to the SICM. The window in Fig. 6-B contains a message box expressing the status of the system, along with an exit button. The window in Fig. 6-C contains sensor-enabling buttons; when a sensor is disabled (i.e., off), the data from that sensor is ignored. The window in Fig. 6-D shows the sensed data from the sensors in real time. The window in Fig. 6-E presents XML descriptions following the schemas of MPEG-V Part 5.

Fig. 7 shows the AEM GUI. The RV Adaptation Engine parses and analyzes the SI-metadata received from the SICM and then creates VWO information based on the analysis results. To map SI to VWO information, we defined a mapping table (shown in Table 1). Based on this table, the adaptation engine creates the corresponding VWO information, which is then used to control the VWOs. The generated virtual object information is converted to VWO metadata and transferred to the VWCM. The adaptation engine GUI allows the user to monitor incoming sensed information and outgoing VWO information in real time. The window in Fig. 7-A confirms the connection between the SICM and the RV Adaptation Engine over TCP/IP socket communication; when the connection is established, the window is deactivated. The window in Fig. 7-B contains a message box expressing the status of the system, along with an exit button. The window in Fig. 7-C presents the sensed data, in the standard XML description format, as received from the SICM. The window in Fig. 7-D presents the sensed data parsed from
Fig. 7. Graphic user interface of the adaptation engine module.
Table 1 Mapping table between the SI and VWO information. Type
Temp
Humidity
Light Noise Vibration
Sensed Information
Type
Sensor value
Unit
Status
0–19 20–26 27–34 35–N 0–49 50–69 70–N 0–149 150–N 0–35 36–N 0–200 201–N
1C
Cold Cool Warm Hot Dry Moist Wet Dark Light Off On Off On
%
lux dB Freq.
the standard XML description. In the window in Fig. 7-E, a user can assign thresholds for each instance of sensed data. The initial values of each threshold are presented in Table 1. In the window in Fig. 7-F, a user can control the delay time of virtual world control commands. This allows the control commands to be transmitted and displayed 7 with a sufficient amount of time. The window in Fig. 7-J presents the virtual world descriptions generated by the AEM, which are described with the standard XML format 8 presents a from MPEG-V Part 4. The window in Fig. 7-J more intuitive description of the virtual world control metadata, which will be rendered in the virtual world. Table 1 presents a mapping table between the SI and VWO information. The data from a temperature sensor is classified into four steps (e.g., cold, cool, warm, and hot) and converted to one of three steps of a virtual world air conditioner (off, first level, and second level). The data from a humidity sensor is classified into three steps (dry, moist, and wet) and converted to the appearance description of virtual world plants. When the data from a light sensor is greater than or equal to 150 lux, the lamp in the virtual world is switched off. Otherwise, it is switched on. When the decibel data from an ambient noise sensor is larger than 35 dB, background music in the virtual world is turned on. Otherwise, it is turned off. Finally, when the data from the vibration sensor (e.g., a one axis accelerometer) is greater than 200, the siren in the virtual world begins flickering. Finally, the VWCM parses the VWO metadata received from the AEM, and then creates and dispatches device commands to control the virtual world according to the received VWO-metadata. To control the virtual world implemented by ‘‘Second Life,’’ VWO-Controller submits control messages to the virtual world in order to execute the VWO control commands, which are pre-defined in Linden Script Language. 4.2. Results We confirmed that the proposed system can transfer data between modules in real time. Fig. 8 illustrates screen shots of Second Life when applying the implemented virtual
VWO Information Status
Aircon. Aircon. Aircon. Aircon. Plant Plant Plant Lamp Lamp BGM BGM Siren Siren, Lamp
Off Off 1st level 2nd level Yellow, Shrink (small) Light green, Enlarge (medium) Green, Enlarge (big) ON OFF OFF ON OFF ON, Flicker
Fig. 8-(a) shows the response of the virtual world (i.e., the fluorescent lamp is turned on) when the light sensor in the real world reports a light level below the threshold (<150 lux), the lighting status thus becoming "Dark" according to Table 1. In Fig. 8-(b), vibration exceeding 200, as reported by the vibration sensor in the real world, causes the siren to sound and the lamp to begin flickering in the virtual world. Our implementation and testing of the proposed system showed that we can control the status of VWOs in real time according to the sensed information acquired from the real world.

Fig. 9 shows an example of the XML instance of the sensed information metadata of a light sensor, as generated by the SICM. The metadata indicates that the sensor ID is "LS_01", the light level is 21 lux, and the data acquisition time is "12:50:05" in absolute time. For brevity, we omit the namespace declarations in this example as well as in the following examples. Fig. 10 shows an example of the XML instance of VWOC-metadata. This metadata corresponds to the case shown in Fig. 8-(a), where low lighting in the real world caused the creation of VWO information that turns on a fluorescent lamp object in the virtual world.

5. Conclusion and future work

In this paper, we proposed a virtual world control system that employs sensed information and an adaptation engine based on the standards in MPEG-V Parts 4, 5, and 7. Our implementation and test results showed that the proposed system can control VWOs in real time according to the sensed information acquired from the real world. Unlike proprietary systems or products, the proposed system can easily be adopted and reused by similar types of applications. The system is easily modifiable, extendable, and independent of the underlying sensors or virtual worlds. These attributes are possible due to the use of the MPEG-V standard specifications and the related reference software.

The proposed virtual world control system can be improved by allowing it to generate VWO information based on object information in the virtual world. In other words, instead of creating arbitrary VWO information (based on a mapping table), the Adaptation Engine can obtain information about the relevant VWOs and then cause changes to the virtual objects accordingly. We plan to examine this approach in the future. We also plan to implement a VR adaptation engine that can realize interaction from the virtual world to the real world. In particular, more research on VR adaptation engines is needed in the gaming area: when a gamer changes the status or movements of VWOs (or characters) in the game space, the feedback of the gamer's control can be delivered back to the gamer through various actuators (such as fans or vibration chairs) to stimulate the gamer's senses, improving the immersive impact on the gamer. Such technology can also be applied to education, where timely and proper sensory feedback triggered by the students' physical condition can enhance the educational effect of online classes.
Fig. 8. Test result of a virtual world control system.
Fig. 9. An XML instance of a light sensor.
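Based on the values quoted in the text (sensor ID "LS_01", 21 lux, absolute time "12:50:05"), this instance can be sketched as follows; the id value and attribute names are assumptions, and namespace declarations are omitted as in the original.

  <!-- Hypothetical sketch of the Fig. 9 instance; id value and
       timestamp attribute names are assumptions. -->
  <SensedInfo xsi:type="siv:LightSensorType"
      id="LSI001" sensorIdRef="LS_01" activate="true" value="21">
    <TimeStamp xsi:type="ct:AbsoluteTimeType" absTime="12:50:05"/>
  </SensedInfo>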
Fig. 10. An XML instance of VWOC metadata.
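This instance conveys a "Turn on" control for the fluorescent lamp object and can be sketched as follows; the placement of the "Turn on" description and the id value are assumptions based on the structure given in Section 2.2.

  <!-- Hypothetical sketch of the Fig. 10 instance; element placement
       and id value are assumptions. -->
  <VWOCInfo>
    <VirtualObjectList>
      <VirtualObject id="VO_LAMP_01">
        <Description>Turn on</Description>
      </VirtualObject>
    </VirtualObjectList>
  </VWOCInfo>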
Acknowledgments

This work was supported by the Global Frontier R&D Program on Human-centered Interaction for Coexistence, funded by the National Research Foundation of Korea grant funded by the Korean Government (MEST) (NRF-M1AXA003-2011-0028363). This work was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2010-0003212).

References
[1] A. Chalmers, D. Howard, C. Moir, Real virtuality: a step change from virtual reality, in: Proceedings of the 2009 Spring Conference on Computer Graphics (SCCG '09), 2009, pp. 9–16.
[2] A. Lécuyer, F. Lotte, R.B. Reilly, R. Leeb, M. Hirose, M. Slater, Brain–computer interfaces, virtual reality, and videogames, Computer 41 (10) (2008) 66–72.
[3] G. Pfurtscheller, R. Scherer, Brain–computer interfaces used for virtual reality control, in: Proceedings of ICABB-2010, October 2010.
[4] Marco Otte, Loren Roosendaal, Johan F. Hoorn, Teleportation of objects between virtual worlds: use case: exer-gaming, Journal of Virtual Worlds Research 4 (3) (2011).
[5] José Manuel Cabello, José María Franco, Antonio Collado, Jordi Janer, Samuel Cruz-Lara, David Oyarzun, Albert Armisen, Roland Geraerts,
Standards in virtual worlds: virtual travel use case, Metaverse1 project, Journal of Virtual Worlds Research 4 (3) (2011).
[6] ISO/IEC FDIS 23005-5, Information technology—Media context and control—Part 5: Data Formats for Interaction Devices, August 2010, ISO Publication.
[7] ISO/IEC FDIS 23005-4, Information technology—Media context and control—Part 4: Virtual World Object Characteristics, August 2010, ISO Publication.
[8] ISO/IEC CD 23005-7, Information technology—Media context and control—Part 7: Reference Software and Conformance, August 2010, ISO Publication.
[9] S. Han, J.J. Han, Y. Hwang, J.B. Kim, W.C. Bang, J.D.K. Kim, C. Kim, Controlling virtual world by the real world devices with an MPEG-V framework, in: Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP) 2010, October 2010, pp. 251–256.
[10] Sang-Kyun Kim, Jonghoon Chun, Hyunmin Park, Dongseop Kwon, Younghee Ro, Interfacing sensors and virtual world health avatar application, in: Proceedings of the 5th International Conference on New Trends in Information Science and Service Science (NISS), Vol. 1, October 2011, pp. 21–25.
[11] Christian Timmerer, Jean Gelissen, Markus Waltl, Hermann Hellwagner, Interfacing with Virtual Worlds, in: 2009 NEM Summit, St. Malo, France, 2009.
[12] ISO/IEC CD 23005-6, Information technology—Media context and control—Part 6: Common Types and Tools, August 2010, ISO Publication.
Multimedia Signal Processing (MMSP)-2010, October 2010, pp. 251–256. [10] Sang-Kyun Kim, Jonghoon Chun, Hyunmin Park, Dongseop Kwon, Younghee Ro, Interfacing sensors and virtual world health avatar application, in: Proceedings of the 5th International Conference on New Trends in Information Science and Service Science (NISS), 2011, Vol. 1, October 2011, pp. 21–25. [11] Christian Timmerer, Jean Gelissen, Markus Waltl, Hermann Hellwagner, Interfacing with Virtual Worlds, 2009 NEM Summit, St Malo, France, 2009. [12] ISO/IEC CD 23005-6. Information technology—Media context and control—Part 6: Common types and tools, August 2010, ISO Publication.