Available online at www.sciencedirect.com
ScienceDirect
Procedia Computer Science 155 (2019) 433–440
www.elsevier.com/locate/procedia
The 14th International Conference on Future Networks and Communications (FNC), August 19-21, 2019, Halifax, Canada
IoT Avatars: Mixed Reality Hybrid Objects for CoRe Ambient Intelligent Environments

Yiyi Shao, Nadine Lessio, Alexis Morris∗

OCAD University, 100 McCaul St., Toronto, ON, M5T 1W1, Canada
Abstract

The Internet of Things (IoT) continues its growth, adoption, toward ubiquitous usage but is not without the inevitable communication bandwidth challenge. Human-computer-interaction in this space must account for the multiple facets of human-in-the-loop considerations in IoT, yet current mechanisms are at present limited by display dimensions and unclear indicators. Mixed Reality (MR) may be a solution to this human communication bandwidth problem, as smart glasses and other head mounted displays could provide an ideal interface platform for IoT human-computer interaction, while handheld mobile MR can be used as a testbed. To bring MR interfaces to the IoT, this work contributes: i) a new IoT-Avatar architectural framework; ii) a bi-directional communication approach between an IoT system and a virtual avatar character representation; and iii) an early descriptive exploration of how such systems could be explored in future applications toward MR interfaces in ambient intelligent environments.

© 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the Conference Program Chairs.

Keywords: Internet-of-things; Avatars; Mixed Reality; Agents; Context-awareness; Human-Computer Interaction; Ambient Intelligence
1. Introduction

The internet of things (IoT) is poised to connect people with the smart-devices that are increasingly becoming embedded in their environment [4], enabling them, as users and information consumers, to access these device properties and interact with their functions. The IoT domain has seen steady research and technological advances, and is approaching viable mainstream usage [4], including advances in wearable technologies, embedded systems, wireless sensor networks, body area networks, domain application frameworks, and also relevant social factors, like privacy and security. Alongside IoT developments, the landscape of computer interfaces has also become ubiquitous and mobile, and is now more immersive, engaging, complex, data-intensive, and information rich. Further, with artificial intelligence and machine learning, engaging experiences and digital platforms are growing in ubiquity [4]. People now commonly engage with such interfaces and devices, but are often distracted from their active tasks, as there is a
∗ Corresponding author. Tel.: +1-416-977-6000; fax: +1-416-977-6006.
E-mail address: [email protected]
1877-0509 © 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the Conference Program Chairs.
10.1016/j.procs.2019.08.060
Yiyi Shao et al. / Procedia Computer Science 155 (2019) 433–440
deluge of incoming information and notifications demanding attention [11]. Displays that support changes in user situational context, and that minimize attention costs, are needed. This is evident in domains such as mission-critical and time-sensitive systems, and environments that demand user attention (i.e., when driving a vehicle, navigating through a busy location, or monitoring important events in a host of industrial settings, like air-traffic control). IoT systems that aid these domains and make them safer offer important avenues of research [1], and would potentially provide benefits in the form of improved situational understanding and response time. However, the design of IoT interfaces that bridge the socio-technical gaps between distributed device technologies and their users, while improving engagement and situational understanding of the IoT system itself, remains a significant research challenge [4].

This work recognizes and proposes to add to the literature on the use of immersive technologies and hybridized physical and virtual objects as a potential solution – effectively providing IoT devices with virtual representations, or IoT Avatars, which can provide in-situ interactions with humans-in-the-loop across the life-cycle of IoT situations. This is becoming increasingly practical as the field of immersive technologies advances, in tandem with both the artificial intelligence and multi-agent systems paradigms, applied to the IoT, leading to the inevitable combination of these core concepts toward improved IoT interaction. The specific goals of this work are two-fold, namely to develop a visual mixed reality (MR) architectural framework for accessing the information dimensions and nuances of physical objects in the IoT, and to explore this toward future ambient intelligent environments and IoT objects.
This addresses the following objectives, namely: i) to develop an architectural framework for hybridized IoT physical and virtual objects; ii) to develop an engaging representation of an IoT-enabled object for future interaction uses (via an early concept IoT-Avatar); iii) to develop a bi-directional communication approach between both IoT and visualization systems; and iv) to design a proof-of-concept hybrid object toward future investigations of how IoT-Avatars may be employed to help improve the relationship between humans-in-the-loop and IoT objects in the environment. This leads to three overall contributions: i) a new IoT-Avatar architectural framework; ii) a bi-directional communication approach between an IoT system and a virtual avatar character representation; and iii) an early descriptive exploration of how such systems could be explored in future applications. This extends the authors' work on context-driven IoT design considerations (including privacy and security) [10] and highlights immersive interfaces for the future IoT.

This section has covered the primary motivation for merging IoT interfaces with immersive and information-rich approaches. Section 2 provides a background on IoT and HCI needs, dashboards, and mixed reality for IoT. Section 3 presents an IoT Avatar architecture based on [10] and presents a design for a Plant-Avatar proof-of-concept. Section 4 provides a discussion of related work and future investigations. Lastly, Section 5 summarizes the research.

2. Background Concepts

Over the last decade, the IoT has seen a shift toward being less focused on low-level technology challenges, and more focused on experience design and how humans interact with connected devices [13]; it is not uncommon now for users to interact with multiple interfaces, platforms, and systems in a single day.
In fact, one of the biggest challenges facing user experience designers with the IoT is how to evoke consistent user experiences across multiple devices [13]. This brings with it unique design challenges in advancing beyond traditional information dashboards toward immersive and contextually relevant interface designs. Currently, how a user interacts with the IoT can depend on different factors pertaining to the devices themselves. For example, a user could be indirectly interacting with an array of embedded sensors in the form of a connected kitchen product like a cutting board [7], or with a camera that tracks movement, such as a connected security or gaming device [7]. Users routinely engage with the IoT through product and service interfaces such as mobile apps, or web applications for controlling home automation devices [13]. Recently there has been increased adoption of "screenless" control, such as the use of voice interfaces [6], and the use of freely accessible mashup-style platforms for users to "program" their own specialized interactions between different objects [6]. Mashups are often personalized, situational, short-lived, non-business-critical applications developed using familiar web development tools and technologies [2], and currently users engage with these to help fill the gaps around getting IoT devices, often from different manufacturers, to communicate with one another. This stacking of connected interactions, with both people and other objects, is described as a consumer-object assemblage: with the addition of networking connectivity, previously unrelated objects and products will now work together as assemblages through a process of ongoing interaction [6]. Smart objects, unlike their predecessors, possess properties that make them active as agent actors rather than being passively driven by what consumers do to them
[6]. For example, an Amazon Alexa 1 operates as both an interface and a data gatherer, whereas a Nest 2 could operate as an actuator but also as a monitoring system. As a result, such devices and products are also now services [13], wherein the user's experience is shaped not only by the device itself, but by the complex systems that shape the whole service [13]. As the IoT expands, the needs of visualizing its underlying systems have also increased. The growth in home automation applications, for example, has increased the need for remote control interfaces. Meanwhile, wearables and automated devices like the Nest have increased the need to see usage over time to observe and learn from behaviour patterns. As homes and personal spaces have become places embedded with information technology, there exists a need to visualize the overall state of the system, or to monitor spaces for activity or security. To address these needs, IoT research has incorporated middleware applications, including platform-as-a-service (PaaS) mobile applications or services that allow users to monitor their environment or self-create programs or routines for automated systems, and which provide users with familiar ways to interact with IoT systems. These middleware applications cater to different experience levels, and allow much customization; this ranges from open-source maker platforms that allow users high degrees of flexibility to commercial consumer applications that allow less flexibility, such as grouping devices and triggering custom routines. Hence, the challenges around creating IoT dashboards and visualizations are numerous, and tend to cross over into multiple domains. User experience in this domain aims to address the challenge of making a service seem cohesive across many different devices, but must account for the devices themselves also adding new challenges [13].
These devices can, for instance, have limited capabilities, or may be subject to specific network protocols. There is also the challenge of asynchronous system development, which can be important when working with a web-of-things toolkit where real-time information transfer is essential [13]. Lastly, there is also the question of complexity, and how to tackle what kind of display is right for the situation at hand.

Conventional IoT Dashboard Interfaces: Currently many IoT dashboards operate as mobile or web applications, as in Figure 1(a). As many users carry mobile devices and have access to computers, development using web-related tools means having more common resources and developers to draw upon, and people are now accustomed to interacting with screen displays [13]. Many home automation systems exist in this sphere, such as commercial applications like Apple HomeKit 3 and Google Home 4. Such commercial system dashboards allow users to sync, integrate, and manage multiple consumer IoT devices in their homes, and are very reminiscent of early Web 2.0 systems, like iGoogle 5, which acted as a personal homepage of personalized widgets that users could interact with. Open-source versions of these dashboards come in many different flavours, such as HADashboard 6, which can be run locally on a Raspberry Pi 7, and integrate with both consumer and maker-themed IoT devices. The design of these is widget-based, and allows users to customize which devices they would like in their dashboards. They are usually built across multiple web-development frameworks, or deployed natively on mobile phones. In some cases they also have tie-ins with voice interfaces, such as the Google Home Assistant 8, which has its own companion application for controlling smart lights and other IoT devices with voice commands, while also enabling users to receive text and image notifications through their phones.
Dashboards can be very useful for providing quick access to a limited number of IoT devices, but face scale challenges when dealing with large numbers of devices [13]. Developers have adapted to this constraint by providing users a way to automate systems by grouping devices and functionality into patterns and modes [13], for example settings such as "vacation" or "arriving home", which when triggered will cause multiple things to occur. This is similar to the mashup platforms mentioned earlier, as it provides users a way to program and customize their interactions with devices. Another way dashboards are commonly used is to monitor activity feeds, to help aggregate and watch patterns over time. Adafruit IO 9 provides good examples of this, in which one can track sensors over time.
1 Amazon Alexa: https://developer.amazon.com/alexa
2 Nest: https://nest.com/
3 HomeKit: https://www.apple.com/ca/ios/home/
4 Google Home: https://www.blog.google/products/home/new-home-app/
5 iGoogle: https://en.wikipedia.org/wiki/IGoogle
6 HADashboard: https://www.home-assistant.io/docs/ecosystem/hadashboard/
7 Raspberry Pi: https://www.raspberrypi.org/
8 Google Home Assistant: https://assistant.google.com/
9 Adafruit IO: https://io.adafruit.com/
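The grouping of devices into triggered modes described above can be sketched as a simple routine table. This is an illustrative sketch only, not drawn from any particular platform; the device names, commands, and mode labels are hypothetical assumptions.

```python
# Hypothetical sketch of dashboard-style "modes" that group device actions,
# as in the "vacation" or "arriving home" routines described above.
# Device names and commands are illustrative assumptions.

def make_routines():
    """Map a mode name to the list of (device, command) pairs it triggers."""
    return {
        "vacation": [
            ("thermostat", "eco"),
            ("lights", "random_schedule"),
            ("camera", "arm"),
        ],
        "arriving_home": [
            ("thermostat", "comfort"),
            ("lights", "on"),
            ("camera", "disarm"),
        ],
    }

def trigger(mode, routines, dispatch):
    """Fire every grouped action for a mode via a dispatch callback."""
    for device, command in routines.get(mode, []):
        dispatch(device, command)

# Example: collect the commands that one mode would send.
sent = []
trigger("vacation", make_routines(), lambda d, c: sent.append((d, c)))
```

In a real dashboard the dispatch callback would call out to each device's API; here it simply records the grouped actions, which is enough to show why a single trigger can "cause multiple things to occur".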
The web-based dashboard has several graphical representations, such as feeds, dials, and switches, that help users understand at a glance what is happening in their system. Some of the limitations of this kind of dashboard approach are generally related to the devices they are displayed on. Mobile phones are not always available, and computers may not always be accessible [13]. There is also the issue of power, and of not always having access to a device that is able to display a complex system. If the dashboard is hosted in the cloud as a platform service, there is also the issue of connectivity; should the internet not be available, users cannot access the service. In the case of commercial dashboards like HomeKit, many IoT devices may be unsupported, as manufacturers only support certain devices, leaving users with less customization than may be desired. Further issues arise in the opposite case, of having too many options. Since it is common for each consumer IoT device to have its own application dashboard, there is significant overlap and redundancy. While the maker-themed or open-source options address this, scenarios arise where a user can only use one application with a particular device or suite of devices; this is like having multiple remotes for an entertainment system rather than a universal remote. Hence, unique challenges remain for IoT dashboards.

Conventional Video-Game Dashboard Interfaces: Much of the study of dashboard and information design in 3D is seen in video game design. In video games, information visualization is presented to the user in many ways, as is necessary for continuing and advancing through the game [16]. However, since games are considered an immersive experience, game interface designers prefer interfaces that are minimally intrusive and task relevant [16], as a player is usually focused on a very specific area of the screen, where the central task or action is happening [14].
Since players must also be informed of important off-screen events, items, or approaching danger, these information displays tend to operate in the periphery [14]. There are different types of techniques for periphery display in games, but one of the most common is the Heads-Up Display (HUD), a standard technique used in first-person games [16]. Other display techniques include docking stations of action buttons, inventory, and edge enhancements (as in real-time strategy games), and icon-based docks and mini-maps (as in massively multiplayer online (MMO) games) [16]. HUDs have stylistic and influential tie-ins to VR, AR, and other immersive applications beyond games, which may be useful when considering the IoT. These HUDs, as in Figure 1(b), generally deal with showing immediate threats and quantitative information like health, weapon choice, or ammunition levels [16], but can also be used to get contextual information from items within the player's center of attention, such as showing where resources are, or getting the health levels of a fellow player or enemy [16]. HUDs also have the option of operating non-intrusively in one's peripheral vision. As HUD designs can suffer from periphery crowding [14], most game interface designers balance between keeping them minimal but informative [16], via rendering techniques like silhouetting and color coding [16]. Overlays displaying contextual information, such as the name of a landmark, can also be presented visually in-world, to help players orient themselves in 3D environments [16]. In more practical applications, as in transportation, HUDs are used to improve operator situation awareness while the vehicle is being operated [8]. These are most common in military and commercial aviation at present, but are also making their way into general aviation and personal vehicles [8]. These displays typically present information like the speedometer, GPS, and in some cases even directions.
More recently, for vehicles, consumers can now purchase devices to convert a phone display into a temporary HUD, with products such as Hudwayglass 10. Such applications of HUDs are meant to enhance and inform the driver of the vehicle situation, and may also be relevant to IoT-related information presentation.

Mixed Reality for the IoT: Mixed Reality (MR) refers to a "subclass of Virtual Reality (VR) related technologies that involve the merging of real and virtual worlds" and has been explored across a spectrum of immersive devices [9], including augmented reality (AR) and augmented virtuality (AV). Some current applications for mixed reality include gaming, entertainment, collaboration, and contextual interfaces. These depend on the display and technology being used; for example, the Hololens 2 11, with its focus on enterprise, presents multiple example applications built around collaboration and information sharing (in the conventional Windows operating system) that are relevant to IoT interfaces. Also, from the mobile device domain, many applications currently apply MR for way-finding and gaming, with the most famous example being Pokemon Go 12. As virtual items could be representative of a physical real-world object, there are multiple design parallels between gaming HUDs and speculative, or experimental, visualizations for
10 Hudwayglass: http://hudwayglass.com
11 Hololens 2: https://www.microsoft.com/en-us/hololens
12 Pokemon Go: https://www.pokemongo.com/
(a) IoT Dashboards
(b) Video-Game Dashboards
Fig. 1. Traditional IoT dashboards versus dashboards common to 3D games, as an inspiration for AR dashboards as part of future IoT interfaces.
mixed reality. For example, in [3], a contextual interface is designed for in-home devices that shows temperature if pointed at a thermostat, or humidity and water levels if pointed at a plant. This is similar to in-game HUDs that display floating health bars for enemies or friends during combat situations, or floating name tags identifying people or structures. Parallels can also be drawn when looking at navigation; this is most noticeable in driving and flying simulations, toward real-world MR dashboards 13. Hence it is worth investigating what interface designs apply when considering MR for the IoT, and a detailed exploration is left for future research.

3. CoRe Architectural Framework for IoT Avatars

CoRe (Contextual Reality) and IoT perspectives are proposed in [10], and the CoRe architecture reflects a focus on developing ambient intelligence systems for merging multiple perspectives. This specifically includes mixed reality visualizations, IoT objects, computer vision and ML sensors, agent system logic, and networking middleware. The CoRe avatar system aims to provide the interface front-end to IoT environment objects, toward the coming landscape where the environment leverages more smart devices, and where high-fidelity smartglass hardware is ubiquitous. This work expands on the virtual, ambient, and informational context perspectives in CoRe, as in Figure 2(a). Here, the virtual perspective refers to 3D object models and agent behavior components being presented to humans-in-the-loop of IoT environments, and the graphical systems to support this (such as Unity3D). The ambient perspective refers to the IoT objects and components physically available within the local environment and the interaction between controllers, display viewers (head-mounted and hand-held augmented reality displays), and platform services available for interaction with these devices.
The informational context perspective refers to the dynamics and contextual properties of data for IoT devices, including states of controllers and sensors, dashboard display mechanisms, the overall presentation of interfaces, and interaction with content available to humans-in-the-loop. It is noted that this work does not address low-level details of multiuser and multiagent collaboration, detailed agent system designs, inferencing approaches, or low-level networking in CoRe, as these will be explored in future research into dynamic applications. The exploration of CoRe avatar systems is a central theme, with a focus on the design, development, and high-level interaction with such systems as an early step toward more extensive IoT avatar system deployments. Here, the IoT Avatar is a virtual object for anchoring object information, and for engaging interactions with the object alongside object manipulations, inspired by video game characters and HUD concepts.

The kinds of relationships addressed by an IoT avatar system account for Human-to-Human interactions, Human-to-Agent interactions, Agent-to-Agent interactions with other IoT-enabled devices (or services), and Agent-to-Human

13 Video-game dashboard examples in Figure 1(b): Top Left: Overwatch, Blizzard Entertainment, 2016. Top Right: The Elder Scrolls V: Skyrim, Bethesda, 2011 - Floating Health Bar Mod (image link). Bottom Left: Frontier Pilot Simulator, RAZAR s.r.o., 2018. Bottom Right: Forza Horizon 3, Microsoft Studios, 2016.
(a) CoRe Architecture [10]
(b) Avatar Agent Interactions
(c) Avatar Architecture
(d) Avatar Network Framework
Fig. 2. An initial proof-of-concept IoT avatar system and its use case scenario, highlighting the connectivity and pointing toward the implementation of more immersive and complex agent system behaviors, as well as multi-agent approaches.
interactions, as shown in Figure 2(b). It assumes that humans in the IoT loop make use of either head-mounted displays (HMDs) or handheld devices (HHDs) as primary devices. Also, interaction is with mixed reality (MR, or XR) enabled IoT devices in the environment. The conventional IoT stack model (e.g., hardware, networking, and application logic) is conceptualized as extending from low-level hardware (motors, LEDs, etc.), to software (servers), to Agent Logic, to Agent Avatars at the highest level. Likewise, core communication involves MR presentation to humans-in-the-loop via visual interfaces, the Avatar character, animations, notifications, and other indicators. Core communication initiated by the user's system is via anchored interactions (which may be through codes and image markers), or possibly via computer vision gestures and other measures from sensor sources, in addition to interface manipulations via controllers. Specific MR components would involve scene components such as 3D objects, 2D/3D widgets, lights, sounds, and player controllers (such as cameras), alongside 3D MR frameworks like Unity3D, Unreal, or WebXR. Interaction methods may include 2D and 3D buttons, selections, sliders, labels, and potentially hand-tracked gestures. The MR IoT-enabled Avatar system architecture, as in Figure 2(c), consists of four key components: i) a Unity3D-based Avatar system for AR development and runtime deployment; ii) a CoRe network server with a web-of-things inspired communication framework; iii) one or more IoT-enabled objects or embedded devices; and iv) specialized sensors, actuators, radios, or other custom display elements. The Unity Avatar System consists of 3D models of the Avatar and the scene components, behavior scripts for the model, widget canvases for display of data elements, an AR camera, and server communication scripts.
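As one illustration of how the server communication scripts might drive the Avatar's behavior components, the following sketch maps a JSON sensor state to animation and notification triggers along the Agent-to-Human channel. It is written in Python rather than as a Unity3D behavior script, and the field names, thresholds, and trigger labels are hypothetical assumptions rather than part of the implemented system.

```python
import json

# Hypothetical mapping from IoT sensor state (received as JSON from the CoRe
# server) to Avatar animation/notification triggers, sketching the
# Agent-to-Human channel. Field names and thresholds are assumptions.

def avatar_triggers(state_json):
    """Return the list of display triggers for a given sensor state."""
    state = json.loads(state_json)
    triggers = []
    if state.get("humidity_pct", 100) < 30:
        triggers.append("thirsty_animation")     # prompt the human-in-the-loop
    if state.get("ambient_light", 1000) < 100:
        triggers.append("sleepy_animation")
    if state.get("servo") == "on":
        triggers.append("actuator_active_icon")  # HUD-style status indicator
    return triggers or ["idle_animation"]
```

In the actual architecture, each returned trigger would be consumed by the model's behavior scripts and widget canvases; the point here is only that bi-directional communication reduces, on the presentation side, to a state-to-trigger mapping of this kind.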
This involves practical network connectivity foundations, as in Figure 2(d), based on: a peripheral server for connecting with core IoT devices (such as the IoT Avatar viewer, a monitor, or a phone); server-based forms for
Fig. 3. Hybrid MR Plant IoT Avatar early proof-of-concept design, anchored to an existing IoT object.
JSON data transmission; and General Purpose I/O (GPIO) for connections between the peripheral server's physical embedded device and any connected sensors, servos, or LEDs. The peripheral server (PS) operates as an isolated access point (Soft AP mode) and is not connected to the wider CoRe network, nor to the internet. The PS broadcasts a wifi network that CoRe devices can connect to in client mode using a password. The PS runs a simple server that serves up a web form, and JSON which acts as the endpoint of a small REST API. The web form is accessible to clients at a standard URL (e.g., http://196.168.4.1/servo). A client can then send a POST request, the equivalent of pressing submit on the form, to execute the servo. The PS, a small microcontroller (Adafruit Feather Huzzah) with GPIO, then drives the servo into an "on" state and updates the JSON (e.g., hosted at http://196.168.4.1/ambientroom) to indicate that the servo is running. Other CoRe devices connected to the PS can issue a GET request and retrieve the status of the servo. When the servo is toggled to "off", the JSON will update to reflect this change. The JSON endpoint also has information regarding the temperature, ambient light, and humidity of the room, which is accessible to connected CoRe clients, again by using a GET request.

To begin exploring the utility of the IoT Avatar concept, a simple proof-of-concept IoT Plant-Avatar, as in Figure 3, has been considered as an early design, where an overall question is whether a more engaging mechanism for interfacing with the plant metrics, and for communication with persons in the environment via an MR interface, can be a helpful engagement mechanism. Ultimately, this will form the basis for a real-time sensory presentation of plant data and states via its Avatar, and of interaction with the backend plant-IoT device via Avatar widgets. This early system design can be extended to multiple kinds of IoT objects.
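The peripheral server's two endpoints can be sketched as follows. This is a minimal Python stand-in for the microcontroller firmware (the actual PS runs on the Adafruit Feather Huzzah), with the HTTP layer omitted; the endpoint roles follow the description above, while the JSON field names and sample sensor values are assumptions for illustration.

```python
import json

# Minimal sketch of the peripheral server (PS) state and its two endpoints:
# POST /servo (toggle the servo via the web form) and GET /ambientroom
# (JSON status of the servo plus room sensors). Field names are assumptions.

class PeripheralServer:
    """Holds servo state and ambient readings, mimicking the PS REST API."""

    def __init__(self):
        self.servo_on = False
        # Placeholder readings a real PS would take from attached sensors.
        self.ambient = {"temperature_c": 21.5, "ambient_light": 340, "humidity_pct": 40}

    def post_servo(self):
        """Equivalent of submitting the web form: toggles the servo state."""
        self.servo_on = not self.servo_on
        return {"status": "ok", "servo": "on" if self.servo_on else "off"}

    def get_ambientroom(self):
        """GET endpoint: servo status plus room sensor readings as JSON."""
        payload = dict(self.ambient, servo="on" if self.servo_on else "off")
        return json.dumps(payload)
```

A connected CoRe client would POST to the servo endpoint to actuate it, then GET the ambient-room JSON to confirm the servo is running and read the room conditions, matching the request flow described above.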
In this instance, plant needs involve moisture, light, temperature, and even the amount of personal attention. As these would be detectable via IoT system sensors, they can form the basis for an MR representation Avatar for the plant, which would be visible to the person, ideally via a head-mounted display. This is left for future work, but an early conceptualization is shown, via a handheld MR display, to consider how a hybrid IoT object with an avatar overlay may accentuate an otherwise limited dashboard visualization approach. This early system design will have future evaluation against such approaches.

4. Discussion and Future Work

This section provides a discussion of the early IoT avatar system concept in the context of related research, relying on existing works to indicate the potential evaluation directions of such systems. Augmented reality can provide both service providers and consumers an ideal interface with additional information for IoT applications, as in [15], where it has been shown that service providers can use augmented reality to debug and repair IoT devices with real-time Quality of Service (QoS) factors, while for consumers, such augmented reality approaches help to reduce the barrier to interacting with such devices in a context-aware manner. This concept can be extended further and will become more practical as head-mounted devices become ubiquitous. Other researchers have considered the smart city scenario, where mixed reality interfaces can help with managing heterogeneous and dynamic services, and indicate that mixed reality can improve human-computer interaction in a more engaging and entertaining way by incorporating gamification [12].
Further, there are similar approaches combining IoT systems with mixed reality avatars, as in [12], where an AR-based avatar game system is introduced that helps to promote environmental context information; the avatar is triggered through AR, and its appearance is affected by real-time air quality data from the ekoNET
service. It is noted that more tutorials and additional explanation are needed when incorporating avatars with the game concept [12]. Likewise, research in [5] also combines mixed reality avatars with IoT applications in a notification concept called Ambient Bot. Ambient Bot is designed for HMD users and provides daily information with gaze-based interaction. The researchers highlight that this approach can allow the user to develop an intimate relationship with the avatar, enriching their daily routine. They further conducted a user study whose results show that virtuality features such as gamification can be useful for providing virtual context from IoT devices, and also present possibilities for encouraging human interactions [5]. Together, these lend credibility to the IoT Avatar concept, and similar studies in future iterations of the system presented in this work will aim to explore these impacts on IoT system interfaces.

5. Summary

As the IoT becomes ubiquitous, the demands on its human-computer interface increase; however, the main themes of research do not adequately account for interaction designs that are adaptive, flexible, and engaging. This research has explored the design of an architectural framework for hybrid mixed reality objects, with IoT Avatars, toward better explainability and engagement with IoT-enabled objects and their computing components. It contributes a new IoT-Avatar architectural framework; a bi-directional communication approach between an IoT system and a virtual avatar character representation; and an early design scenario for how such systems could be applied in future applications. Evaluating the proposed IoT avatar concept in detail remains an avenue for future research, toward improved interaction with IoT objects in ambient intelligent environments.

Acknowledgements

This work acknowledges funding by the Tri-Council of Canada under the Canada Research Chairs program.
References

[1] Bernal, G., Colombo, S., Al Ai Baky, M., Casalegno, F., 2017. Safety++: Designing IoT and wearable systems for industrial safety through a user centered design approach, in: Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments, ACM. pp. 163–170.
[2] Blackstock, M., Lea, R., 2012. IoT mashups with the WoTKit, in: 2012 3rd IEEE International Conference on the Internet of Things, IEEE. pp. 159–166.
[3] Chang, I.Y., 2018. Augmented reality interfaces for the internet of things, in: ACM SIGGRAPH 2018 Appy Hour, ACM. p. 1.
[4] Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M., 2013. Internet of things (IoT): A vision, architectural elements, and future directions. Future Generation Computer Systems 29, 1645–1660.
[5] Gushima, K., Nakajima, T., 2017. A design space for virtuality-introduced internet of things. Future Internet 9, 60.
[6] Hoffman, D.L., Novak, T.P., 2017. Consumer and object experience in the internet of things: An assemblage theory approach. Journal of Consumer Research 44, 1178–1204.
[7] Kranz, M., Holleis, P., Schmidt, A., 2009. Embedded interaction: Interacting with the internet of things. IEEE Internet Computing, 46–53.
[8] Liu, A., Jones, M., et al., 2017. A Preliminary Design for a Heads-Up Display for Rail Operations. Technical Report. United States Federal Railroad Administration.
[9] Milgram, P., Kishino, F., 1994. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems 77, 1321–1329.
[10] Morris, A., Lessio, N., 2018. Deriving privacy and security considerations for CoRe: An indoor IoT adaptive context environment, in: Proceedings of the 2nd International Workshop on Multimedia Privacy and Security, ACM. pp. 2–11.
[11] Okoshi, T., Nozaki, H., Nakazawa, J., Tokuda, H., Ramos, J., Dey, A.K., 2016. Towards attention-aware adaptive notification on smart phones. Pervasive and Mobile Computing 26, 17–34.
[12] Pokric, B., Krco, S., Drajic, D., Pokric, M., Rajs, V., Mihajlovic, Z., Knezevic, P., Jovanovic, D., 2015. Augmented reality enabled IoT services for environmental monitoring utilising serious gaming concept. JoWUA 6, 37–55.
[13] Rowland, C., Goodman, E., Charlier, M., Light, A., Lui, A., 2015. Designing Connected Products: UX for the Consumer Internet of Things. O'Reilly Media, Inc.
[14] Tilford, B., . Perceiving without looking: Designing HUDs for peripheral vision. gamasutra.com.
[15] White, G., Cabrera, C., Palade, A., Clarke, S., 2018. Augmented reality in IoT, in: ICSOC Workshops.
[16] Zammitto, V., 2008. Visualization techniques in video games, in: EVA Electronic Information, the Visual Arts and Beyond, London, UK.