Exploring multi-user interactions with dynamic NFC-displays




Pervasive and Mobile Computing 9 (2013) 242–257


Gregor Broll a,∗, Eduard Vodicka b, Sebastian Boring c

a DOCOMO Euro-Labs, Munich, Germany
b Ludwig-Maximilians-Universität München, Munich, Germany
c Department of Computer Science, University of Calgary, Calgary, Canada

Article info

Article history: Available online 7 November 2012

Keywords: Mobile interaction; Near Field Communication; Dynamic NFC-displays; Physical user interfaces; Multi-user interaction

Abstract

Near Field Communication (NFC) is an emerging technology for touch-based mobile interactions with single- and multi-tagged objects. Although the latter may allow for simultaneous and collaborative interactions, most prototypes were not designed for multiple users and were only evaluated with single-user interactions. In this paper, we investigate the design, usability and user experience of multi-user interactions on dynamic NFC-displays. These interactive surfaces use a grid of NFC-tags for the direct manipulation of projected application user interfaces. In two user studies with three prototypes for multi-user interaction, we evaluated the performance of dynamic NFC-displays, interactions among users and the interplay between mobile devices and large displays.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

During the last few years, Near Field Communication (NFC) [1] has become a popular technology for touch-based mobile interactions with real-world objects. This wireless technology allows for exchanging data over short distances and provides a standard for the integration of passive Radio Frequency Identification (RFID) technology [2] into mobile devices – most prominently mobile phones. NFC-enabled mobile devices can (1) exchange data with each other, (2) emulate tags during interactions with reading terminals and (3) read/write passive tags that can be attached to almost any object – making NFC a highly versatile tagging technology. Due to the short operating distance of NFC (about 3 cm), NFC-devices and -tags have to touch or be very close to each other to exchange data. This physical, touch-like interaction is adopted by an increasing number of applications like mobile payment (e.g., Google Wallet [3]) or ticketing (e.g., Touch & Travel [4]). These and other use cases for mobile interaction with NFC, like information retrieval, access control or home care, have also been investigated by projects and field trials like SmartTouch [5] or Cityzi [6]. Mobile Computing research uses NFC to investigate mobile interactions with different kinds of tagged objects. The small size of NFC-tags and their simple assembly allow a more flexible and unobtrusive tagging of everyday objects with single and multiple tags than other technologies, especially visual markers, whose noticeable design affects the visual appearance of tagged objects. The simplest tagged objects feature a single tag that serves as a physical hyperlink [7] and reduces interaction sequences on mobile devices, e.g., opening a website, to touching a single tag. The next generation of tagged objects has multiple tags that represent different application features or options, e.g., for ordering tickets [8] or controlling a multimedia player [9].
These multi-tagged objects can serve as physical user interfaces (UI) that complement mobile application UIs and extend them to the real world. Typical single- and multi-tagged objects like posters (e.g., [10,8]) or maps (e.g., [11,12]) are static and only partially tagged. Dynamic NFC-displays on the other hand have completely interactive physical UIs that



∗ Corresponding author. E-mail addresses: [email protected], [email protected] (G. Broll), [email protected] (E. Vodicka), [email protected] (S. Boring).
1574-1192/$ – see front matter © 2012 Elsevier B.V. All rights reserved. doi:10.1016/j.pmcj.2012.09.007


Fig. 1. Users interact with the projected application UI of a dynamic NFC-display by touching the tags of its physical UI with NFC-enabled mobile phones.

consist of a grid of NFC-tags and can be used for the direct manipulation of dynamic application UIs that are projected onto them [13]. In order to interact with the contents of these application UIs, users can touch the tags of the subjacent physical UI with NFC-enabled mobile devices (Fig. 1). The larger NFC-based physical UIs become, the more space they provide for multiple users who can interact with them simultaneously, either independently or collaboratively. However, most examples of mobile interactions with NFC-based physical UIs were not explicitly designed for multiple users, and most evaluations in this area have only focused on single-user interactions. In previous work, we evaluated a multi-player game on a dynamic NFC-display and showed that this technology allows for multi-user interactions despite notable inaccuracies in the recognition of tags [14]. In this paper, we extend this preliminary work to investigate multi-user interactions with dynamic NFC-displays in more detail. We aim to better understand (1) the design of multi-user interactions with dynamic NFC-displays, (2) the usability of this specific technology, and (3) the user experience during joint and parallel interactions in multi-user scenarios. In the following sections, we introduce dynamic NFC-displays in more detail before we discuss them in the context of related work. In order to evaluate multi-user interactions with this technology, we have designed and implemented three prototypes for information retrieval, information sharing and scheduling. An initial usability study with single users showed a general appreciation for the interaction with dynamic NFC-displays, despite rather high error rates in the recognition of NFC-tags, which seem to depend on application type and task complexity.
The following study with groups of users showed that while users liked multi-user interactions in general, they seemed to prefer parallel and independent interactions to simultaneous interactions with other users. And while users did not care much about the privacy of publicly displayed information and public interactions with it, they appreciated means to accomplish these interactions on their own.

2. Dynamic NFC-displays

Dynamic NFC-displays are an approach to touch-based mobile interaction with large screens that combines the visual output capabilities of the latter with the physical, touch-like interaction between NFC-enabled mobile devices and NFC-tagged objects. The basic setup of dynamic NFC-displays was first introduced by Vetter et al. [15] and later refined by Hardy et al. [13]. It comprises a server that runs applications and projects their GUIs onto a vertical grid of NFC-tags that serves as the physical UI (Fig. 2). Unlike multi-tagged posters or maps, the physical UIs of dynamic NFC-displays are completely covered with NFC-tags and are thus fully interactive. These tags are not linked to specific items of information, such as options for ticketing or points-of-interest (POIs) on a map. They only indicate their own two-dimensional position within the grid. They are decoupled from output and can be dynamically mapped to different UI elements of different applications. Users can manipulate projected application UIs by touching the tags of the subjacent physical UI with NFC-enabled mobile phones. These phones read tags and return their 2D position in the grid to the application server, for example via Bluetooth. The server receives tag input events from mobile phones and dynamically updates the application UI accordingly. During interactions with dynamic NFC-displays, NFC-enabled mobile phones serve as smart pointing devices.
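As a concrete illustration of this decoupling of tags and output, the server-side dispatch can be sketched in a few lines. This is a hypothetical sketch, not the implementation used in the paper; all class and method names (DynamicNfcDisplay, on_tag_read, etc.) are invented for illustration.

```python
# Sketch: a phone reads a tag, reports its (col, row) grid position,
# and the server maps that position to whatever UI element is
# currently projected onto that cell.

class Button:
    def __init__(self, label):
        self.label = label
        self.pressed_by = []     # phones that activated this element

    def activate(self, phone_id):
        self.pressed_by.append(phone_id)

class DynamicNfcDisplay:
    def __init__(self, cols, rows):
        self.cols, self.rows = cols, rows
        # Tags are decoupled from output: this mapping from grid cell
        # to UI element is replaced whenever the projection changes.
        self.cell_to_element = {}

    def project(self, element, cells):
        """Map a UI element onto a set of (col, row) grid cells."""
        for cell in cells:
            self.cell_to_element[cell] = element

    def on_tag_read(self, phone_id, col, row):
        """Called when a phone reports a tag position, e.g. via Bluetooth."""
        element = self.cell_to_element.get((col, row))
        if element is not None:
            element.activate(phone_id)   # update the projected UI
        return element

display = DynamicNfcDisplay(cols=30, rows=32)
ok = Button("OK")
display.project(ok, [(0, 0), (1, 0)])    # the button spans two tags
display.on_tag_read("phone-A", 1, 0)     # phone-A touches the button
```

Remapping `cell_to_element` on every screen change is what makes the same static tag grid serve arbitrary, dynamically updated application UIs.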
Their touch-like interactions with the tags of the physical UI allow direct and precise interactions with text, pictures, links or widgets of application UIs. These phones also support many other interactions with dynamic NFC-displays that can be applied to mobile interactions with large displays in general: They allow for the identification of different users and thus the personalization of their interactions. They can give multimodal feedback (e.g., visual, audio, haptic) during interactions, provide additional input through controllers, keys or touch-screens, and implement different interaction techniques


Fig. 2. Basic setup of a dynamic NFC-display including a server, a projector, a physical UI with a grid of NFC-tags and one or several mobile devices.

(see [16,13,15]). They are also context-aware, connected to different wireless networks, and can store data and exchange it with displays. Finally, mobile phones are highly personal devices that can complement large displays by showing details or private information while the latter give a public overview of information. Previous work on a multi-player game on dynamic NFC-displays showed that this technology allows for usable, touch-based mobile interactions with large displays for multiple users [14]. This work also demonstrated how dynamic NFC-displays push the limits of mobile interaction with NFC, especially regarding the accuracy of tag recognition. These limits result from the properties of NFC-enabled mobile phones, NFC-tags and their assembly in physical UIs. For example, the input resolution of dynamic NFC-displays depends on the size of NFC-tags (about 4 cm), NFC-enabled mobile phones and targets on application UIs. On the one hand, smaller tags can increase the input resolution of NFC-displays and thus enable more precise interactions with small targets. On the other hand, mobile devices inherently occlude tags and targets while interacting with them, making them harder to select the smaller they are. This ‘‘fat phone’’ problem is amplified by the fact that mobile devices featuring NFC are often smartphones with large bodies. In order to interact with targets that are on the same tag, for example POIs on a map, Hardy et al. [13] suggest enlarging the selected area on the NFC-display or showing targets on the mobile device and using its controller or keys to select them. Another solution is overlapping tags, which can increase the input resolution but also cause errors during tag recognition [14]. Another limitation results from the fact that currently available NFC-enabled mobile phones can only read one tag at a time and thus cannot recognize multiple (occluded) tags at the same time.
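The resolution limit described above can be made concrete with a small sketch: with tags of roughly 4 cm, on-screen positions are quantized to tag cells, so two targets closer together than one tag width map to the same cell and become indistinguishable. The helper name and coordinate convention are assumptions of this sketch, not taken from the paper.

```python
# Back-of-the-envelope illustration of the input-resolution limit.
TAG_SIZE_CM = 4.0  # approximate tag edge length mentioned in the text

def tag_cell(x_cm, y_cm, tag_size=TAG_SIZE_CM):
    """Quantize a physical position on the display to its tag cell."""
    return (int(x_cm // tag_size), int(y_cm // tag_size))

# Two POIs only 2 cm apart land on the same tag,
# so touching that tag cannot disambiguate between them.
poi_a = tag_cell(9.0, 10.0)
poi_b = tag_cell(11.0, 10.0)
```

This is exactly the situation for which Hardy et al. [13] propose enlarging the selected area or moving the final selection onto the mobile device.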
Reading a tag also causes a short but noticeable delay of about 0.5 s, which prohibits the continuous reading of tags and limits the use of continuous interaction patterns or gestures [16]. In this paper, we continue our preliminary work on multi-user interaction with dynamic NFC-displays [14] in order to investigate their design, usability and user experience in greater detail. For that purpose, we build on the basic design of dynamic NFC-displays (see Fig. 2) instead of using alternative designs or technologies for touch-based mobile interactions with interactive surfaces, like PhoneTouch [17], tagged displays [18], horizontal physical UIs or a grid of RFID-readers [19]. This approach allows us to compare new results with previous work on dynamic NFC-displays in general and multi-user interaction in particular.

3. Related work

Our work on multi-user interaction with dynamic NFC-displays relates to research on mobile interactions with NFC-tagged objects, public displays and other interactive surfaces for single and multiple users.

3.1. Mobile interactions with NFC-tagged objects

Mobile interaction with NFC/RFID-tagged objects has considerably evolved during the last few years. In 1999, Want et al. [20] tagged books, documents and business cards with RFID-tags to link them with electronic documents, URLs or email addresses. In order to access and use these digital resources, users simply touched the tags with an RFID-reader that was attached to a tablet computer. Later, RFID and NFC were adopted for physical selection [21] and physical mobile interactions [22] that use mobile devices for physical interactions with tagged objects in order to facilitate mobile interactions with associated digital resources. Examples include games [23], access control, ticketing [10], Bluetooth


setup [24], information retrieval in museums [25], product recommendation [26] or retail applications [27]. While these and other examples take advantage of the simple interaction with single-tagged objects to facilitate specific application features, they neglect tagged objects as means for further physical interactions. The increasing availability of NFC-enabled mobile phones and small, cheap NFC-tags has spurred the development of multi-tagged objects. While single-tagged objects reduce application features to physical hyperlinks [7], multi-tagged objects spread features for printer maintenance [28], ticketing options [8], menu items [29], POIs on maps [12], or multimedia-player controls [9] across physical objects and allocate them to multiple tags. That way, multi-tagged objects can serve as physical UIs that complement mobile application UIs, extend them to the real world and allow more physical interactions than single-tagged objects. They also enable multi-tag interactions that accumulate or combine different tags, for example to place an order in a restaurant or to apply different actions to POIs on a map (e.g., [11,30]). Broll and Hausen [11] also showed that interactions on multi-tagged objects can be carried out faster than interactions that are split between tagged objects and mobile devices or are mostly carried out on the latter. Multi-tagged objects like posters or maps are only partially tagged and thus only partially interactive. They are also static, so that users must rely on their mobile devices for dynamic feedback. For the next step in the evolution of NFC-based physical UIs, Vetter et al. [15] and Hardy et al. [13] have created and refined dynamic NFC-displays that use a completely interactive grid of NFC-tags as a physical UI. Users can touch its tags with NFC-enabled mobile phones to manipulate projected application UIs that are dynamically updated according to these interactions. In a similar approach, Ramírez-González et al.
[31] have built an interactive NFC panel that also uses a grid of NFC-tags to manipulate application UIs. In order to match the functional volume of applications like interactive maps or digital pinboards, dynamic NFC-displays allow for new interaction techniques like path selection, polygon selection, bounding-box selection, pick-and-drop or shape-based input (e.g., [16,13,15]). Hardy et al. [13] also showed that interactions between mobile devices and dynamic NFC-displays perform worse than interactions with a touch screen, but much better than remote interactions that use the joystick of a mobile phone to control a cursor on a display. In [32], they also compared interactions on a dynamic NFC-display and a static poster with a grid of tags regarding workload and user preferences, pointing out the advantages and disadvantages of both technologies. Most of these examples of NFC-based mobile interaction with multi-tagged objects have not been explicitly designed for multiple users, although they may technically support parallel or joint usage, e.g., with different sessions for different users. And while NFC-based physical UIs may provide enough space for multiple users at the same time, most evaluations have only investigated interactions with single users. Nevertheless, our evaluation of a real-time multi-player Whack-a-Mole game for dynamic NFC-displays showed that this technology is suitable for multi-user interactions and applications, despite a considerable average failure rate of 31.4% in the correct recognition of tags [14]. In this paper, we build on these preliminary results to further investigate multi-user interactions with dynamic NFC-displays.

3.2. Mobile interactions with large displays

Researchers motivate dynamic NFC-displays as a touch-based approach to mobile interaction with large displays (e.g., [14,13]). This line of research investigates combining mobile devices and large displays for their mutual benefit.
The latter provide ample screen space but often lack interaction capabilities. Mobile devices, on the other hand, suffer from small screens but provide various input features like keypads, controllers, touch screens or sensors that can be used for interactions with large displays (e.g., [33,34]). Some approaches to mobile interaction with large displays use SMS [35], email [36], or Bluetooth [37] to transfer data from mobile devices to displays. Other approaches use mobile devices as remote controls for interactions on large displays. In Sweep [38], the mobile phone acts as a mouse-like controller: users can control a remote pointer by moving the phone. Boring et al. [34] have compared techniques (e.g., sensing through the phone's joystick, the camera, and the accelerometers) for controlling pointers on remote displays and verified Ballagas et al.'s approach [38]. Point & Shoot [38] allows users to take a picture of content they intend to interact with. In C-Blink [39], a camera attached to the large display detects and recognizes a mobile phone's display and maps it to a cursor on the large display. Touch Projector further extends these approaches to allow for accurate interactions at varying distances [40] and on displays of varying sizes [41]. Dynamic NFC-displays support direct, touch-screen-like interactions between mobile devices and screens as opposed to using them as remote controls for large displays. On the other hand, this directness does not allow for interactions with large displays at a distance and constrains the overall size, especially the height, of dynamic NFC-displays. Other approaches to mobile interaction with large displays are based on touch-sensitive screens that can be better compared to dynamic NFC-displays.
For example, users can upload pictures, videos or other data from their mobile devices to touch screens (e.g., [42]) or interactive tabletops (e.g., [43]), which allow users to manipulate on-screen items by touching them with their fingers. While dynamic NFC-displays use mobile phones as pointing devices for direct interactions with specific items, touch screens usually do not support such direct interactions between displays and mobile phones but rather use the latter for user identification or personal data storage. Touch screens and tabletops also have a higher input resolution than dynamic NFC-displays and allow users to interact with smaller targets. PhoneTouch [17] combines the advantages of touch screens and dynamic NFC-displays. It detects touch-events between mobile phones and interactive surfaces by matching their independent measurements of these events, e.g. through computer vision (tabletop) or accelerometers (mobile phones), according to timestamps. That way, users can interact with items on tabletops or other interactive surfaces


by touching them with mobile phones or their fingers. However, PhoneTouch requires all devices involved in an interaction to detect touch-events, making this technology more expensive, more complex and harder to deploy in the real world than dynamic NFC-displays. While dynamic NFC-displays consist of passive NFC-tags that are read by NFC-enabled mobile phones, other interactive surfaces follow the opposite approach and comprise active RFID-readers that detect RFID-tagged objects. Olwal and Wilson [44], for example, correlate data from computer vision and an RFID-reader to identify and track tagged objects on a tabletop surface. DataTiles [45] uses a grid of RFID-readers behind an LCD screen to detect tagged, transparent tiles that are put on the screen. Caretta [19] projects application UIs onto a tabletop with a grid of 20 × 24 RFID-readers that can recognize tangible objects with passive RFID-tags that users put on it. Caretta can also recognize tagged PDAs that are used like a stylus to touch fields of the grid, for example to share data that users have prepared on their personal mobile devices. While these approaches use technologies similar to dynamic NFC-displays, they feature a lower input resolution (e.g., [45,19]) or only use RFID as an auxiliary technology [44]. Equipping interactive surfaces with active RFID-readers is also more expensive and harder to deploy than dynamic NFC-displays.

3.3. Multi-user interactions

Research on multi-user interaction often extends research on (mobile) interaction with large displays and other interactive surfaces. This section gives an overview of different approaches to multi-user interaction that have influenced our own work on mobile interaction with dynamic NFC-displays for multiple users. In order to support multi-user interaction, interactive surfaces have to be able to differentiate input from multiple users while they interact with the same surface at the same time.
While dynamic NFC-displays take advantage of the direct contact between mobile devices and NFC-tags for that purpose, large displays or tabletops follow other approaches: Dynamo [46] uses input from multiple mice to let users interact simultaneously on interactive surfaces. Users can also create private workspaces on the shared surface. DiamondTouch [47] is a tabletop system that identifies users by sensing an electric signal that runs through them. Eriksson et al. [48] use motion detection with mobile phone cameras to let multiple users control colored cursors on a shared display. Cheverst et al. [49] explored different means to identify users who interact with a display at the same time, e.g., IDs, colors or complementary GUIs on mobile devices. Touch Projector [40] allows multi-user interaction with displays at a distance, using mobile devices for video-based interaction with display contents. PhoneTouch [17] also supports multi-user interactions by correlating touch-events from a shared tabletop and individual end-user devices according to timestamps. Using interactive surfaces together with others is new for most people. Therefore, the design of multi-user interactions has to consider social interactions between users as much as interactions between users and displays. Brignull et al. [50] identified three stages of interaction with public displays and two thresholds between them that have to be crossed. The first threshold lies between peripheral and focal awareness. First, users only see a display as part of their environment and then focus their attention on it. Next, they have to cross the threshold to the actual interaction with the display. A multi-user display should be designed to help users cross these thresholds; for example, it should be clear from a certain distance what the display is for so that more people can become interested in it.
When multiple users interact with a large display, they have to consider different social issues as there are no established protocols on how to use these new shared interaction spaces. Russell et al. [51] found that users who share a display have to establish rules on how to use it at the same time. Some users prefer working together, whereas others choose a leader who does most of the interaction while the others watch and make comments. Morris et al. [52] focused on the lack of predefined protocols. They identified conflicts that can arise when an interactive surface is used by multiple users and proposed different solutions. They introduced policies according to which objects could be public or private for specific users. Users could also have ranks defining who is allowed to do what. The evaluation of CityWall [53] confirms the idea of different thresholds and the assumption that users work out basic social rules. The authors noticed that people were using the system in teams or in rivalry with each other. Occasional conflicts were managed in different ways, either in cooperation or by withdrawal, when one user simply moved away from the screen. Because the display was not big enough for everybody, users had to develop a system of turn-taking. Mostly, the users were able to recognize the appropriate moment to step up to the display. However, not everybody was able to anticipate this moment and some users felt awkward occupying screen space while other users were waiting.

4. Use cases for multi-user interaction

Dynamic NFC-displays can be used for various use cases in private (e.g., at home), semi-public (e.g., in companies), and public spaces (e.g., in train stations, malls, arcades), including information retrieval, interactive advertisements, maps or games. In order to investigate multi-user interactions with this technology, we implemented three use cases that focus on different aspects of interactions among multiple users.
The design of these use cases is affected by several aspects regarding the basic technology of dynamic NFC-displays or the characteristics of mobile interaction with large displays:

• Directness: The direct interaction between mobile devices and dynamic NFC-displays allows precise and personal interactions with on-screen items. With dynamic NFC-displays, users can easily see what their fellow users are doing,


Fig. 3. NFC Newsboard with a shared interaction space.









which facilitates the coordination of individual interactions and collaborations. Interactions at a distance make it harder to distinguish actions of individual users and lack social protocols (see also [41]). On the other hand, this directness requires a short distance between users and NFC-displays, which restricts their height and makes interactions at a distance impossible.

• Number of users: The direct interaction with dynamic NFC-displays and the constraints regarding their size also limit the number of users who can interact with the same display simultaneously. This can be partially compensated by wider physical UIs, providing more space for more users.

• Distribution of displays and interaction spaces: Users can interact with each other on the same dynamic NFC-display or across multiple, spatially separated displays. Users who interact with the same display can either share the whole screen or work on separate areas of the display in parallel.

• Separation and interplay of mobile and physical UIs: Mobile interactions with interactive surfaces often split application UIs into complementary UIs on mobile devices and surfaces. This separation has a considerable effect on their interplay and their roles during the interaction process, the allocation of features and content, the focus of interaction and privacy. For example, large displays can give a public overview of information while mobile devices can serve as personal displays that show details or private information.

• Interactions among users: Multi-user interaction is also affected by how users interact with each other. They can interact with the same display jointly or independently. They can avoid each other, collaborate, share data or compete with each other. During these interactions, users may follow different social protocols, depending on the location of the display, cultural background or relations with other users who may be family, friends or strangers.

4.1. Information retrieval with the NFC Newsboard

The NFC Newsboard provides an overview of news headlines that are ordered by category (‘‘News’’, ‘‘Sports’’, ‘‘Business’’). It can be used in public spaces like train stations or airports and was designed for multiple users who share the same display simultaneously but interact with it independently and in parallel. First, users connect with the Newsboard by touching it with their NFC-enabled mobile phones. Then, they can touch a headline to open the full article. They can touch buttons on the physical UI for each article to close it on the large screen or to download it to their mobile devices. That way, users can read articles either on the public display or on their personal mobile devices while on the go. We have created two designs for the NFC Newsboard. The first design shows lists of available headlines for each of the categories ‘‘News’’, ‘‘Sports’’ and ‘‘Business’’ (Fig. 3). All users have to share the same display and can interact with it independently and simultaneously. In order not to interfere with other users, they can explicitly agree on the order of interaction or implicitly follow social protocols and wait for opportune moments to interact, as they would when interacting on a large interactive surface, such as a tabletop. The second design divides the NFC Newsboard into four separate panels that provide the same content. Each panel has tabs for all categories of headlines (Fig. 4). Users can touch panels to connect with them and to switch between their different tabs. By touching a fourth tab, users can filter the headlines according to personal interests from a profile stored on the mobile device. Although this design only supports a maximum of four users at a time, it allows individual and personalized interactions with private panels, reducing interference between users, who can interact independently and in parallel.
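A minimal sketch of how the four-panel design could dispatch touches, assuming the tag grid is split into four vertical regions and each panel is bound to the first phone that touches it. All names and the ownership policy are illustrative assumptions, not the prototype's actual code.

```python
# Sketch: dispatch a tag touch to one of four Newsboard panels and
# enforce that each panel belongs to the user who connected to it.
NUM_PANELS = 4

def panel_index(col, total_cols):
    """Which of the four vertical panels a tag column belongs to."""
    return col * NUM_PANELS // total_cols

class Newsboard:
    def __init__(self, total_cols):
        self.total_cols = total_cols
        self.owners = [None] * NUM_PANELS   # panel -> connected phone

    def on_touch(self, phone_id, col):
        panel = panel_index(col, self.total_cols)
        if self.owners[panel] is None:      # first touch connects the user
            self.owners[panel] = phone_id
        # Only the owner's touches are dispatched to the panel.
        return panel if self.owners[panel] == phone_id else None

board = Newsboard(total_cols=32)
```

Binding panels to phones is one simple way to realize the "private panel" behavior described above, since NFC interaction identifies the touching device for free.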


Fig. 4. NFC Newsboard with separate panels.


Fig. 5. Users can collect information about animals from tagged leaflets (a) and upload it to the ItemBrowser on a dynamic NFC-display to share it with others (b).

4.2. Sharing information with the ItemBrowser

The ItemBrowser was designed to support information sharing among multiple users. In a possible use case, school children visit a museum, exhibition or zoo and retrieve information from tagged exhibits using their mobile phones. Later, the children can present the information they collected on a dynamic NFC-display and share it with others. We simulated this museum scenario by putting up tagged leaflets about animals. Each leaflet had three NFC-tags that users can touch to obtain text information about an animal, to see a picture of it, or to open a link on the mobile device (Fig. 5(a)). The focus of this use case lies on information sharing between and collaboration among multiple users. They can exchange items by touching other NFC-enabled devices or by uploading them to the (shared) dynamic NFC-display (Fig. 5(b)). The ItemBrowser UI on the display comprises a shared area in the middle, up to four private areas on both sides and a toolbar at the bottom (Fig. 6). In order to upload items from mobile devices to the display, users can touch one of the private areas, where only their respective owners can manipulate items. In order to share items, owners have to move them to the public area of the display, where everybody can use them. This separation of private and public display areas gives users control over their data. Items on the display can be opened to show their details, e.g., a text, a picture or a URL (Figs. 5(b) and 6). Each opened item has four options: close, delete, download, or move. The last option is similar to Hardy et al.'s Pick-and-Drop [13] and allows users to move items by seemingly picking them up with their mobile devices and dropping them at another place on the display. The same options can also be selected from the phone menu or the toolbar at the bottom of the display.
Users can touch these options with their mobile devices to pick them up and touch an item to apply the selected option.
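The "pick up an option, then touch an item" mechanic described above can be sketched as a small state machine: touching an option tag only remembers the option, and the next item touch applies it. The class and method names below are illustrative, not the paper's actual implementation.

```java
// Sketch of the ItemBrowser's two-step option interaction (names hypothetical).
import java.util.HashMap;
import java.util.Map;

public class ItemBrowserSketch {
    enum Option { CLOSE, DELETE, DOWNLOAD, MOVE, NONE }

    static Map<String, String> publicArea = new HashMap<>();
    static Option pickedUp = Option.NONE;

    // Touching an option tag on the toolbar (or picking it from the phone
    // menu) only remembers the option; nothing happens yet.
    static void touchOption(Option o) { pickedUp = o; }

    // Touching an item afterwards applies the pending option to it.
    static String touchItem(String itemId) {
        switch (pickedUp) {
            case DELETE:   publicArea.remove(itemId);
                           return "deleted " + itemId;
            case DOWNLOAD: return "sent " + publicArea.get(itemId) + " to phone";
            default:       return "no pending option";
        }
    }

    public static void main(String[] args) {
        publicArea.put("zebra", "Zebra fact sheet");
        touchOption(Option.DELETE);
        System.out.println(touchItem("zebra"));   // deleted zebra
        System.out.println(publicArea.isEmpty()); // true
    }
}
```

Separating option selection from option application is what lets the same actions live on the phone menu, the item itself, and the shared toolbar.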


Fig. 6. The physical UI of the ItemBrowser with open and closed items in the public area (middle), the private areas (left and right) and the toolbar (bottom).

Fig. 7. Scheduling with the NFC-Doodle.

4.3. Scheduling with the NFC-Doodle

The third use case emulates Doodle (www.doodle.com), an online scheduling service. The prototype was designed for collaboration among multiple users, who can use this application in the same or in different, remote places to schedule meetings or other events together. The application UI on the dynamic NFC-display resembles a calendar and shows timeslots for different events (Fig. 7). Users can touch the UI and its options with their mobile devices to browse the calendar, to switch from the monthly overview to a more detailed weekly view, to highlight events and their dates, to add new ones, or to confirm their participation in suggested dates. They can create dates for an event by touching timeslots in the weekly view and add details with a complementary form on the mobile UI.

This use case supports collaboration between users, who can use the NFC-Doodle to schedule meetings either on the same or across different NFC-displays. The NFC-Doodle server maintains the consistency of the system as it updates all displays upon changes on any of them.

5. Technical setup and implementation

The hardware setup for the three use case prototypes implements the basic setup for dynamic NFC-displays (Fig. 2) and is the same as for the Whack-a-Mole game [14], against which we compare our results for multi-user interactions. The physical UI is composed of 6 by 4 reconfigurable tiles (Fig. 8). Each tile is 30 × 21 cm and contains 5 by 8 tags (Fig. 9(a)), resulting in


Fig. 8. Setup of the dynamic NFC-display, including the ceiling-mounted short throw projector and the physical UI that is composed of 960 NFC-tags on 24 tiles and covered by a blank sheet of paper.

Fig. 9. Backside of a tile for the physical UI with 8 × 5 overlapping NFC-tags (a) and the second dynamic NFC-display for NFC-Doodle with a grid of 20 by 15 NFC-tags and an LCD projector in front of the physical UI (b).

a physical UI with a total size of 164 cm × 69.5 cm (smaller than the sum of the tile sizes due to overlapping tiles) and a total number of 960 NFC-tags.

We used Trikker-UL CL42 NFC-tags from Top Tunniste [54] for the physical UI. They have a round shape, which is not ideal for building a closed grid of NFC-tags: to form a fully interactive surface, adjacent tags either have to leave gaps between them or overlap. Touching gaps or overlapping tags with an NFC-enabled mobile phone can cause reading errors, since currently available devices can only read one tag at a time. Nevertheless, we decided to use the round tags after a comparison with rectangular tags showed that the latter were less reliable. In order to get a fully interactive physical UI, we tested different configurations of tags to determine the degree to which tags can overlap but still be recognized. The diameter of the NFC-tags is 42 mm and we ended up with a grid spacing of 35 mm, resulting in an interactive surface that is completely covered with NFC-tags. The assembled physical UI with its overlapping tiles was covered with a sheet of paper to provide a smooth surface that does not indicate the position of single tags.

The physical UI is complemented by a ceiling-mounted short throw projector that projects the application UIs of the different prototypes onto the physical UI from above the users, producing less shadow than projectors in front of the physical UI (Fig. 8). We have also set up another, smaller dynamic NFC-display to run a second instance of NFC-Doodle and to allow users to schedule events across different displays. Its setup comprises a regular LCD projector and a physical UI with a grid of 20 by 15 adjacent, non-overlapping NFC-tags, resulting in an interactive area of 90 cm by 67.5 cm (Fig. 9(b)).
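The geometry above (24 tiles of 8 × 5 tags forming a 48 × 20 grid with a 35 mm pitch) determines how a touched tag maps to a position on the projected UI. The following sketch illustrates that mapping under these assumptions; the actual framework's coordinate handling is not described in the paper.

```java
// Geometry sketch of the large physical UI: 6x4 tiles, 8x5 tags each,
// i.e. a 48x20 grid at a 35 mm pitch (tags are 42 mm wide, so they overlap).
public class TagGridSketch {
    static final int COLS = 48, ROWS = 20;  // 6 tiles * 8 tags, 4 tiles * 5 tags
    static final double PITCH_MM = 35.0;    // grid spacing from the paper

    static int tagCount() { return COLS * ROWS; }

    // Center of a tag in millimeters, measured from the top-left tag center.
    static double[] tagCenter(int row, int col) {
        return new double[] { col * PITCH_MM, row * PITCH_MM };
    }

    // Resolve a position on the surface (mm) to the nearest tag's grid index,
    // e.g. to decide which projected widget a touch selected.
    static int[] nearestTag(double xMm, double yMm) {
        int col = (int) Math.round(xMm / PITCH_MM);
        int row = (int) Math.round(yMm / PITCH_MM);
        return new int[] { Math.min(Math.max(row, 0), ROWS - 1),
                           Math.min(Math.max(col, 0), COLS - 1) };
    }

    public static void main(String[] args) {
        System.out.println(tagCount());          // 960, as in the paper
        int[] t = nearestTag(100, 40);
        System.out.println(t[0] + "," + t[1]);   // row 1, col 3
    }
}
```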
The setup also features two laptops that run Java SE application servers for each of the prototypes as well as several NFC-enabled Nokia 6212 phones that run Java ME clients for the interaction with the physical UI and the manipulation of application UIs. The laptops and the phones are connected via Bluetooth. The implementation of the prototypes is based on the MULTITAG-framework [32] that handles the communication between servers and clients and supports the creation of complementary UIs.
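The round trip between a Java ME client and an application server can be sketched as follows: the phone reads a tag ID via NFC, sends it over Bluetooth, and the server maps it to a UI action. The message format here is an assumption for illustration; the MULTITAG framework's actual protocol is not detailed in this section.

```java
// Minimal sketch of the phone-to-server round trip (message format assumed).
public class TouchEventSketch {
    // Client side: encode which tag was touched and by which phone.
    static String encode(String phoneId, int tagId) {
        return phoneId + ";" + tagId;
    }

    // Server side: decode the event and dispatch it to the application UI.
    static String dispatch(String message) {
        String[] parts = message.split(";");
        String phoneId = parts[0];
        int tagId = Integer.parseInt(parts[1]);
        // A real server would look the tag up in the current UI layout
        // and update the projection accordingly.
        return "phone " + phoneId + " touched tag " + tagId;
    }

    public static void main(String[] args) {
        System.out.println(dispatch(encode("nokia-6212-a", 417)));
    }
}
```

Including the phone ID in every event is what allows the server to distinguish the interactions of multiple simultaneous users, e.g. to enforce the ItemBrowser's private areas.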


6. Usability study

In order to investigate multi-user interaction with dynamic NFC-displays, we conducted two complementary user studies with our prototypes. The first study was carried out with single users to assess the overall performance, usability and user acceptance of mobile interaction with dynamic NFC-displays. The second study was carried out with groups of users to evaluate their behavior during the interaction with NFC-based interactive surfaces together with other users.

6.1. Study setup and experimental design

In order to evaluate the usability of the prototypes and to assess the interaction with dynamic NFC-displays in general, we conducted a study with 11 students and researchers from our lab (9 male, 2 female, average age 28). They rated their expertise with computers and mobile phones with high median values (4 and 4) on a Likert-scale from 1 (‘‘none’’) to 5 (‘‘excellent’’). 10 participants had heard about NFC before the study, but only 6 had used it. We used a within-subjects design for the study, so that each participant tested each prototype. The order of prototypes was counterbalanced with a Latin Square design. Before a participant tested a prototype and carried out different tasks with it, the investigator gave a short introduction to its features, which were then covered by the following tasks:

• Newsboard: The participants had to test both designs of the Newsboard. On the shared panel, they had to read articles, download two of them to their mobile phones and have a look at them there. Next, the participants had to set the profile on their mobile phone, apply it to the private panel by touching it and read filtered articles.
• ItemBrowser: The participants had to collect information items from tagged leaflets, browse through them on the mobile device and upload any four of them to a private area on the NFC-display (Figs. 5(a) and 6). There, the participants had to open the content of the items, move four of them to the public area of the display and use options to delete and download items. Finally, the participants had to share items with the investigator via NFC by holding their phones against his.
• NFC-Doodle: The participants had to browse the calendar to participate in two dates of an existing event, create a new event with different dates and finally select the date with the most participants for another event.

After the participants had tested a prototype, they had to rate its usability with a subset of the IBM Post-Study System Usability Questionnaire [55]. After the participants had tested all three prototypes, they filled out a final questionnaire about mobile interaction with the dynamic NFC-display in general. During the study, the participants were recorded on video. In a post hoc video analysis, we counted the number of errors during the interaction with the dynamic NFC-display to assess the error rate. Errors occurred when a participant missed a tag or when the mobile phone did not detect a tag.

6.2. Results

Fig. 10 summarizes the median results of the usability questionnaires for the three prototypes. The participants had to indicate their agreement with selected statements from the IBM Post-Study System Usability Questionnaire [55] on a Likert-scale from 1 (‘‘strongly disagree’’) to 7 (‘‘strongly agree’’).
The results show that all prototypes were well received in general and got above-average ratings for all criteria. The Newsboard got the best ratings in total, followed by the ItemBrowser and the NFC-Doodle. While the participants appreciated the usability and the interaction design of the prototypes in general, they also thought that the recognition of tags on the physical UI was not always accurate and that the transfer of data between phones and the display via Bluetooth was sometimes slow. They also thought that the prototypes should give better feedback during the interaction. The NFC-Doodle often received bad ratings because the participants had to switch their attention from the display to their mobile devices, e.g., to type the name of a new event. Due to such macro-attention-shifts [56], the participants often did not know what to do next and when to switch between the UIs.

The final questionnaire evaluated the interaction with the dynamic NFC-display across all three prototypes, using a Likert-scale from 1 (‘‘strongly disagree’’) to 5 (‘‘strongly agree’’). The median values in Fig. 11 show that the interaction with the physical UI was easy to learn and easier to carry out than regular mobile interactions. It also made the applications easy to use. The participants were undecided whether the interaction with the physical UI produced many errors and whether the recognition of NFC-tags was accurate. Nevertheless, they thought that the recognition of tags was fast enough for the tested applications.

The average error rate during the interaction with the tags of the NFC-display was 21%, but varied between the prototypes: 26% for the ItemBrowser, 23% for the NFC-Doodle and 13% for the Newsboard. An explanation for this variation of error rates between the different prototypes could be the varying task complexity.
For example, the task for the ItemBrowser included the more complex manipulation of on-screen items while the task for the Newsboard only required the activation of on-screen options. The average error rate of 21% across all three prototypes is still much lower than the average error rate of 31% that we measured during another study with a multi-player game on the same dynamic NFC-display [14]. However, this previous study assessed the interaction with a competitive real-time game that required users to react as quickly as possible, provoking a higher number of errors. These results indicate that the error rate depends on the type of application and the complexity of tasks. Applications that put stress on their users and require them to interact with the NFC-display as quickly as possible provoke higher error rates than applications that allow users to interact with the NFC-display at their own pace.
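The error-rate bookkeeping from the video analysis reduces to a simple ratio: an interaction counts as an error when the user misses a tag or the phone fails to detect one, divided by the total number of touch attempts. The touch counts in the example below are illustrative, not the study's raw data.

```java
// Error-rate computation as counted in the post hoc video analysis
// (missed tags + undetected tags over all touch attempts).
public class ErrorRateSketch {
    static double errorRate(int missedTags, int undetectedTags, int totalTouches) {
        return (missedTags + undetectedTags) / (double) totalTouches;
    }

    public static void main(String[] args) {
        // Illustrative: 3 missed + 2 undetected out of 24 touch attempts.
        System.out.printf("%.1f%%%n", 100 * errorRate(3, 2, 24)); // 20.8%
    }
}
```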


Fig. 10. Overall usability of the use case prototypes (median values, Likert-scale from 1 = ‘‘strongly disagree’’ to 7 = ‘‘strongly agree’’).

Despite the constraints regarding the recognition of tags, the participants did not ask for indications of their positions on the physical UI to facilitate the interaction with them. According to the participants, advantages of mobile interactions with the dynamic NFC-display are the overview on the large display, the intuitive interaction with it and the possibility to download items from the public display to personal mobile devices. On the other hand, the inaccuracy of the interaction between mobile phones and tags was still seen as the greatest disadvantage.

7. Multi-user study

After the first study had evaluated the general usability of the prototypes and dynamic NFC-displays, we focused on multi-user interactions in the second study.

7.1. Study setup and experimental design

We conducted the study with 15 participants (10 male, 5 female, average age 28.3) from inside and outside our lab. They rated their expertise with computers and mobile phones with high median values (4 and 4) on a Likert-scale from 1 (‘‘none’’) to 5 (‘‘excellent’’). 11 participants had heard about NFC before the study, 9 had already used it and 7 had also participated in the first study. In order to evaluate multi-user interactions, we used five groups with three participants each. We organized heterogeneous groups that mixed participants from inside and outside our lab, who did and did not know the prototypes (7/8) and who did and did not know each other (11/4). We used a within-subjects study design so that all participants in all groups tested all three prototypes, whose order was counterbalanced with a Latin Square design. The participants in each group had to interact with a prototype at the same time and carry out different tasks. Before a group tested a prototype, the investigator gave an introduction to its features. We reused the tasks from the first study and modified some of them to stimulate interactions among the participants.


Fig. 11. General assessment of mobile interaction with the dynamic NFC-display across all three prototypes (median values, Likert-scale from 1 = ‘‘strongly disagree’’ to 5 = ‘‘strongly agree’’).

• Newsboard: The tasks for the shared-display design had all participants from a group read and download different articles so that they had to manage their interactions. The tasks for the interaction with the private panels remained the same for all participants.
• ItemBrowser: The tasks for this prototype remained the same and all participants of a group interacted with it in the same way.
• NFC-Doodle: Two participants of a group used the big dynamic NFC-display and one participant used the smaller one. Both displays were in the same room. Each participant had to create an event with different dates, select dates for the events of the other participants and finally select the date of their own event that was preferred by the other participants. That way, a participant had to wait for the other participants in order to continue his own interactions.

After a group had tested one of the three prototypes, its members had to fill out a custom questionnaire and indicate their agreement with different statements about interacting with the prototype and with other users on a Likert-scale from 1 (‘‘strongly disagree’’) to 5 (‘‘strongly agree’’). Again, the participants were recorded on video to analyze their behavior among each other after the study.

7.2. General results

The results in Fig. 12 show how the participants experienced interactions with other users across all prototypes. Again, all three applications were well received and most participants would use them in public places, except for the NFC-Doodle, probably because of the handling of private information. While most participants would use the ItemBrowser and the NFC Newsboard with private panels together with known and unknown people alike, they were more reluctant to use the shared Newsboard and especially the NFC-Doodle together with strangers than with friends or colleagues.
Other results also showed that the participants liked (certain aspects of) multi-user interaction less when using these two prototypes instead of the ItemBrowser or the paneled Newsboard. They thought that they could not interact with the shared Newsboard or the Doodle as freely as they wanted to, preferred to interact with the NFC-display on their own and regarded the simultaneous interaction with other people as annoying. An explanation for these results could be the higher number of interactions among participants that the shared Newsboard and the NFC-Doodle elicited. Using the ItemBrowser or the paneled Newsboard, the participants could interact with (parts of) the dynamic NFC-display on their own most of the time.

Despite their exposure in front of the large display, the participants did not feel awkward during the interaction with it and were mostly undecided whether their privacy was disturbed. While the participants did not feel overly observed, they were more aware of what the other participants were doing, especially during the collaboration task with the Doodle. Finally, the participants thought that the ItemBrowser and the NFC-Doodle, which required them to interact with one another intentionally, made it easy to connect with other people.

The video analysis showed that the participants hardly hesitated to use the physical UIs together. At first, the interactions were more chaotic and disturbed by collisions as the participants had to figure out how to coordinate their interactions with the three prototypes. These collisions usually happened by accident or, in rare cases, were provoked for fun. After the participants had figured out how the prototypes worked, they developed strategies to coordinate their work among each


Fig. 12. General assessment of multi-user interaction (median values, Likert-scale from 1 = ‘‘strongly disagree’’ to 5 = ‘‘strongly agree’’).

other to avoid collisions and to carry out their tasks. In most cases, the participants only worked with the display area directly in front of them and waited for others to finish their task before disrupting them or asking them to switch positions. In some cases, participants began to compete for screen space and interacted with objects in their neighbors' interaction area to gain more screen space for themselves. Such interactions were usually accompanied by short discussions. Other communication between the participants during the tasks was uncommon. Most of the time, the participants were focused on their tasks and tried to complete their work without having to bother the other members of the team. They talked and interacted with each other only when collisions occurred or when they were asked to collaborate on a task. In some situations, participants helped each other with technical difficulties with the mobile phones or with the prototypes.

7.3. Results – ItemBrowser

Apart from the general evaluation of multi-user interactions, the participants also assessed features of the single prototypes. While the public area of the ItemBrowser was clearly seen as useful for collaboration and sharing information (median = 5), the private areas did not increase the trust in the application (median = 2) or the feeling of privacy (median = 2) for the participants. They argued that data is not private anymore when everybody can see it on a display. Therefore, 9 out of 15 participants preferred to upload their data to the public area of the display instead of the private areas. The participants also clearly preferred to interact with the public display (11 votes) instead of using direct phone-to-phone interaction (4 votes) to share information with other people. Regarding the allocation of application features between the large display and mobile phones, the participants preferred the display for an overview of items (9 votes) and for sharing them (10).
On the other hand, they preferred mobile devices for showing details of items (8) and starting or closing the application (9/13). The participants wanted to have these features on


either the display or the mobile device, but not on both of them – a third option that got hardly any votes. The participants liked the display and the phone equally for selecting actions (7 votes). More specifically, they preferred to apply actions to items by selecting actions from the phone menu (7) rather than from item options (5) or the taskbar (4).

7.4. Results – Newsboard

Fig. 12 showed that the Newsboard with private panels often received better ratings than the shared Newsboard and was seen as more user-friendly. These results are confirmed by the fact that all but one participant preferred this layout over the shared-display design. The participants clearly thought that the filtering of articles according to personal interests (median = 5) was useful and that the private panels increased their feeling of privacy (median = 4). Regarding the allocation of application features, the participants preferred the large display only for an overview of articles (11 votes). The more private mobile devices were preferred for showing details about articles or reading them (8), setting up personal profiles (15), filtering articles according to them (10) as well as starting and closing the application (9/12). The participants also preferred to read articles on the phone (7) rather than on the public display (4) or on both (5). Some of them explained that they preferred to read in private or felt bad about occupying the large screen.

7.5. Results – NFC-Doodle

While the participants agreed that the NFC-Doodle can help them schedule events (median = 4), 11 of them preferred the original Doodle online service to the NFC-Doodle. Similar to the results for the ItemBrowser, the participants preferred the large display for showing an overview of the calendar (12 votes), available events (13) and event details (9).
The mobile phone, on the other hand, was preferred for more private actions like participating in an event (9), setting up different dates for own events (8) and selecting a final date for an own event (9), or critical actions like starting (11) and closing (13) the application. This study also confirmed problems from the first study regarding attention shifts during the interaction with the NFC-Doodle. Again, it was not always clear when the participants had to switch between the public display and the mobile device to carry out a certain interaction (median = 3).

8. Discussion

The goal of this paper was to investigate the design, usability and user experience of mobile interactions with dynamic NFC-displays for multiple users. For that purpose, we have designed, implemented and evaluated three prototypes for multi-user interaction with this technology: a Newsboard with shared and private panels for simultaneous but independent information retrieval, an ItemBrowser for informal information sharing and an NFC-Doodle for collaborative scheduling.

The initial usability study with single users showed that the interaction with the prototypes and the dynamic NFC-display was appreciated in general, although the recognition of NFC-tags was not always accurate and had an error rate of 21% across all prototypes. This number is well below the error rate of 31.4% that we measured during the evaluation of a multi-player game with the same technical setup [14]. All of these results indicate that the error rate depends on the type of application and the complexity of tasks. While the game required its users to hit targets in real time, the three multi-user prototypes did not have time constraints and let users interact at their own pace. Despite these high error rates, the subjective perception of interactions with dynamic NFC-displays was again good (see [14]), except for unannounced attention shifts.
The second study then focused on multi-user interactions across all three prototypes. It showed that while multi-user interactions were appreciated in general, users seemed to prefer parallel and independent interactions to simultaneous interactions together with other users. This observation was confirmed by the video analysis, which showed that the participants of the study preferred to work on the dynamic NFC-display on their own, without unnecessary interactions with other users. In the beginning, the interactions with the dynamic NFC-display were more chaotic and the users collided with each other accidentally as they figured out how to interact with the prototypes. As the participants learned and mastered these interactions, they developed strategies and talked to each other to coordinate interactions, to avoid collisions and to carry out their tasks more effectively.

The second study was also interesting regarding the roles of mobile devices and dynamic NFC-displays. The latter were the focus of attention, and the participants preferred them for giving an overview of data or for sharing it across all prototypes. Mobile devices, on the other hand, had a more auxiliary role and were preferred for showing private information and details or for carrying out more critical functionalities like starting and stopping an application or setting up a user profile.

The results regarding the need for privacy are mixed and seem to depend on the applications. In general, the participants did not seem to be too concerned about (the loss of) privacy, being observed by other users or observing them. The private areas of the ItemBrowser did not increase the trust or the feeling of privacy of the participants, who stated that information on a public display could not be regarded as private anymore.
On the other hand, the participants preferred the private panels of the Newsboard, which allowed them to use it in parallel but independently of each other. In line with the other results, the participants may not care too much about privacy once information is shown on a public display, but they appreciate the means to interact with the dynamic NFC-display on their own. However, these results about public interactions and privacy are limited as they were assessed during a laboratory study.


9. Conclusion

This paper showed that dynamic NFC-displays are suitable for touch-based mobile interactions with large displays for multiple users. However, the results can hardly be generalized to any (touch-based) mobile interaction with large displays and are only valid for the very specific technology of dynamic NFC-displays. The purpose of this technology is not to challenge other technologies for interactive surfaces, but to push the limits of what is technically possible with a simple tagging technology such as NFC, to serve as a prototyping technology and to anticipate technologies for more direct and physical interactions between mobile devices and large displays. Despite its shortcomings, this technology can provide preliminary results to inform the design and development of future, touch-based mobile interactions with large displays for multiple users.

References

[1] R. Want, Near field communication, IEEE Pervasive Computing 10 (3) (2011) 4–7.
[2] R. Want, An introduction to RFID technology, IEEE Pervasive Computing 5 (1) (2006) 25.
[3] Google Wallet website. www.google.com/wallet/ (accessed 30.09.12).
[4] Touch and Travel website. www.touchandtravel.de/ (accessed 30.09.12).
[5] T. Tuikka, M. Isomursu (Eds.), Touch the Future with a Smart Touch, VTT, 2009.
[6] Cityzi website. www.cityzi.fr/ (accessed 30.09.12).
[7] J. Schwieren, G. Vossen, Implementing physical hyperlinks for mobile applications using RFID tags, in: Proc. of IDEAS‘07, IEEE Computer Society, Washington, DC, 2007, pp. 154–162.
[8] G. Broll, E. Rukzio, M. Paolucci, M. Wagner, A. Schmidt, H. Hussmann, Perci: pervasive service interaction with the internet of things, IEEE Internet Computing 13 (6) (2009) 74–81.
[9] I. Sánchez, J. Riekki, M. Pyykkönen, Touch & control: interacting with services by touching RFID tags, in: Proc. of IWRT 08, June 12–13, 2008.
[10] H. Ailisto, M. Isomursu, T. Tuikka, J. Häikiö, Experiences from interaction design for NFC applications, Journal of Ambient Intelligence and Smart Environments 1 (4) (2009) 351–364.
[11] G. Broll, D. Hausen, Mobile and physical user interfaces for NFC-based mobile interaction with multiple tags, in: Proc. of MobileHCI’10, ACM, NY, USA, 2010, pp. 133–142.
[12] D. Reilly, M. Rodgers, R. Argue, M. Nunes, K. Inkpen, Marked-up maps: combining paper maps and electronic information resources, Personal Ubiquitous Computing 10 (4) (2006) 215–226.
[13] R. Hardy, E. Rukzio, Touch & interact: touch-based interaction of mobile phones with displays, in: Proc. of MobileHCI’08, ACM, New York, NY, 2008, pp. 245–254.
[14] G. Broll, R. Graebsch, M. Scherr, S. Boring, P. Holleis, M. Wagner, Touch to play—exploring touch-based mobile interaction with public displays, in: Proc. of NFC2011, Hagenberg, Austria, February 22, 2011.
[15] J. Vetter, J. Hamard, M. Paolucci, E. Rukzio, A. Schmidt, Physical mobile interaction with dynamic physical objects, Demo at MobileHCI’07, Singapore, September 9, 2007.
[16] G. Broll, W. Reithmeier, P. Holleis, M. Wagner, Design and evaluation of techniques for mobile interaction with dynamic NFC-displays, in: Proc. of TEI’11, ACM, New York, NY, USA, 2010, pp. 205–212.
[17] D. Schmidt, F. Chehimi, E. Rukzio, H. Gellersen, PhoneTouch: a technique for direct phone interaction on surfaces, in: Proc. of UIST’10, ACM, New York, NY, USA, 2010, pp. 13–16.
[18] K. Seewoonauth, E. Rukzio, R. Hardy, P. Holleis, Touch & connect and touch & select: interacting with a computer by touching it with a mobile phone, in: Proc. of MobileHCI’09, ACM, New York, NY, 2009, pp. 1–9.
[19] M. Sugimoto, K. Hosoi, H. Hashizume, Caretta: a system for supporting face-to-face collaboration by integrating personal and shared spaces, in: Proc. of CHI’04, ACM, New York, NY, USA, 2004, pp. 41–48.
[20] R. Want, K.P. Fishkin, A. Gujar, B.L. Harrison, Bridging physical and virtual worlds with electronic tags, in: Proc. of CHI’99, ACM, New York, NY, 1999, pp. 370–377.
[21] H. Ailisto, L. Pohjanheimo, P. Välkkynen, E. Strömmer, T. Tuomisto, I. Korhonen, Bridging the physical and virtual worlds by local connectivity-based physical selection, Personal Ubiquitous Computing 10 (6) (2006) 333–344.
[22] E. Rukzio, K. Leichtenstern, V. Callaghan, P. Holleis, A. Schmidt, J. Chin, An experimental comparison of physical mobile interaction techniques: touching, pointing and scanning, in: Proc. of Ubicomp 2006, 2006, pp. 87–104.
[23] P. Coulton, O. Rashid, W. Bamford, Experiencing ‘touch’ in mobile mixed reality games, in: Proc. of the 4th International Conference in Computer Game Design and Technology, Nov 15–16, Liverpool, UK, 2006.
[24] T. Salminen, S. Hosio, J. Riekki, Enhancing bluetooth connectivity with RFID, in: Proc. of PERCOM’06, IEEE Computer Society, Washington, DC, USA, 2006, pp. 36–41.
[25] C. Santoro, F. Paternò, G. Ricci, B. Leporini, A multimodal mobile museum guide for all, in: Proc. of Mobile Interaction with the Real World, vol. 24, 2007, pp. 21–25.
[26] F. von Reischach, D. Guinard, F. Michahelles, E. Fleisch, A mobile product recommendation system interacting with tagged products, in: Proc. PerCom’09, 2009.
[27] F. Resatsch, S. Karpischek, U. Sandner, S. Hamacher, Mobile sales assistant: NFC for retailers, in: Proc. MobileHCI’07, vol. 309, 2007, pp. 313–316.
[28] J. Riekki, T. Salminen, I. Alakärppä, Requesting pervasive services by touching RFID tags, IEEE Pervasive Computing 5 (1) (2006) 40–46.
[29] J. Häikiö, A. Wallin, M. Isomursu, H. Ailisto, T. Matinmikko, T. Huomo, Touch-based user interface for elderly users, in: Proc. of MobileHCI’07, vol. 309, ACM, New York, NY, 2007, pp. 289–296.
[30] D. Reilly, M. Welsman-Dinelle, C. Bate, K. Inkpen, Just point and click? using handhelds to interact with paper maps, in: Proc. of MobileHCI’05, vol. 111, ACM, 2005, pp. 239–242.
[31] G. Ramírez-González, M. Muñoz-Organero, C.D. Kloos, Á.C. Astaiza, Exploring NFC interactive panel, in: Proc. of Mobiquitous 2008, Dublin, Ireland, July 21–25, 2008.
[32] R. Hardy, E. Rukzio, P. Holleis, M. Wagner, Mobile interaction with static and dynamic NFC-based displays, in: Proc. of MobileHCI’10, ACM, New York, NY, USA, 2010, pp. 123–132.
[33] R. Ballagas, J. Borchers, M. Rohs, J.G. Sheridan, The smart phone: a ubiquitous input device, IEEE Pervasive Computing 5 (1) (2006) 70–77.
[34] S. Boring, M. Jurmu, A. Butz, Scroll, tilt or move it: using mobile phones to continuously control pointers on large public displays, in: Proc. of OZCHI’09, vol. 411, ACM, New York, NY, 2009, pp. 161–168.
[35] A. Ferscha, S. Vogl, Pervasive web access via public communication walls, in: Friedemann Mattern, Mahmoud Naghshineh (Eds.), Proc. of Pervasive’02, Springer-Verlag, London, UK, 2002, pp. 84–97.
[36] T. Paek, M. Agrawala, S. Basu, S. Drucker, T. Kristjansson, R. Logan, K. Toyama, A. Wilson, Toward universal mobile interaction for shared displays, in: Proc. of CSCW’04, ACM, New York, NY, USA, 2004, pp. 266–269.

G. Broll et al. / Pervasive and Mobile Computing 9 (2013) 242–257

257

[37] K. Cheverest, A. Dix, D. Fitton, C. Kray, M. Rouncefield, C. Sas, G. Saslis-Lagoudakis, J. Sheridan, Exploring bluetooth based mobile phone interaction with the hermes photo display, in: Proc. of MobileHCI’05, ACM, New York, NY, USA, 2005, pp. 47–54. [38] R. Ballagas, M. Rohs, J.G. Sheridan, Sweep and point and shoot: phonecam-based interactions for large public displays, in: CHI’05 Extended Abstracts on Human Factors in Computing Systems, CHI EA’05, ACM, New York, NY, USA, 2005, pp. 1200–1203. [39] K. Miyaoku, S. Higashino, Y. Tonomura, C-Blink: a hue-difference-based light signal marker for large screen interaction via any mobile terminal, in: Proc. of UIST’04, ACM, New York, NY, USA, 2004, pp. 147–156. [40] S. Boring, D. Baur, A. Butz, S. Gustafson, P. Baudisch, Touch projector: mobile interaction through video, in: Proc. of CHI’10, ACM, New York, NY, 2010, pp. 2287–2296. [41] S. Boring, S. Gehring, A. Wiethoff, A.M. Blöckner, J. Schöning, A. Butz, Multi-user interaction on media facades through live video on mobile devices, in: Proc. of CHI’11, ACM, New York, NY, USA, 2011, pp. 2721–2724. [42] J.F. McCarthy, B. Congleton, F.M. Harper, The context, content & community collage: sharing personal digital media in the physical workplace, in: Proc. of CSCW’08, ACM, New York, NY, USA, 2008, pp. 97–106. [43] A.C. Wilson, R. Sarin, BlueTable: connecting wireless mobile devices on interactive surfaces using vision-based handshaking, in: Proc. of GI’07, ACM, New York, NY, USA, 2007, pp. 119–125. [44] A. Olwal, A.D. Wilson, SurfaceFusion: unobtrusive tracking of everyday objects in tangible user interfaces, in: Proc. of GI’08, Canadian Information Processing Society, Toronto, Ont, Canada, 2008, pp. 235–242. [45] J. Rekimoto, B. Ullmer, H. Oba, DataTiles: a modular platform for mixed physical and graphical interactions, in: Proc. of CHI’01, ACM, New York, NY, USA, 2001, pp. 269–276. [46] S. Izadi, H. Brignull, T. Rodden, Y. Rogers, M. 
Underwood, Dynamo: a public interactive surface supporting the cooperative sharing and exchange of media, in: Proc. of UIST’03, ACM, New York, NY, USA, 2003, pp. 159–168. [47] P. Dietz, D. Leigh, DiamondTouch: a multi-user touch technology, in: Proc. of UIST’01, ACM, New York, NY, USA, 2001, pp. 219–226. [48] E. Eriksson, T. Riisgaard Hansen, A. Lykke-Olesen, Reclaiming public space: designing for public interaction with private devices, in: Proc. of TEI’07, ACM, New York, NY, USA, 2007, pp. 31–38. [49] K. Cheverest, A. Dix, D. Fitton, C. Kray, M. Rouncefield, G. Saslis-Lagoudakis, J. Sheridan, Exploring mobile phone interaction with situated displays, in: PERMID Workshop Pervasive 2005, 2005. [50] H. Brignull, S. Izadi, G. Fitzpatrick, Y. Rogers, T. Rodden, The introduction of a shared interactive surface into a communal space, in: Proc. of CSCW’04, ACM, New York, NY, USA, 2004, pp. 49–58. [51] D.M. Russell, C. Drews, A. Sue, Social aspects of using large public interactive displays for collaboration, in: Gaetano Borriello, Lars Erik Holmquist (Eds.), Proc. UbiComp’02, Springer-Verlag, London, UK, 2002, pp. 229–236. [52] M.R. Morris, K. Ryall, C. Shen, C. Forlines, F. Vernier, Beyond ‘‘social protocols’’: multi-user coordination policies for co-located groupware, in: Proc. of CSCW’04, ACM, New York, NY, USA, 2004, pp. 262–265. [53] P. Peltonen, E. Kurvinen, A. Salovaara, G. Jacucci, T. Ilmonen, J. Evans, A. Oulasvirta, P. Saarikko, It’s mine, don’t touch!: interactions at a large multitouch display in a city centre, in: Proc. of CHI’08, ACM, New York, NY, USA, 2008, pp. 1285–1294. [54] ToP Tunniste website: http://www.toptunniste.fi (accessed 30.09.12). [55] J.R. Lewis, IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and Instructions for Use, Technical Report 54.786, IBM Corporation, Boca Raton, FL, USA, 1993. [56] P. Holleis, F. Otto, H. Hussmann, A. Schmidt, Keystroke-level model for advanced mobile phone interaction, in: Proc. 
of CHI’07, ACM, New York, NY, USA, 2007, pp. 1505–1514.