Mutual awareness in collaborative design: An Augmented Reality integrated telepresence system


Computers in Industry 65 (2014) 314–324


Xiangyu Wang a,b, Peter E.D. Love c, Mi Jeong Kim b,*, Wei Wang d

a Curtin-Woodside Chair Professor for Oil, Gas & LNG Construction and Project Management & Co-Director of Australasian Joint Research Centre for Building Information Modelling (BIM), Curtin University, Perth, WA 6845, Australia
b Department of Housing and Interior Design, Kyung Hee University, Seoul, Republic of Korea
c School of Civil and Mechanical Engineering, Curtin University, Perth, WA 6845, Australia
d University of Sydney, Sydney, NSW 2008, Australia

ARTICLE INFO

ABSTRACT

Article history: Received 27 November 2010 Received in revised form 5 April 2013 Accepted 18 November 2013 Available online 30 December 2013

Remote collaboration has become increasingly important and common in designers' working routines. It is critical for geographically distributed designers to accurately perceive and comprehend remote team members' intentions and activities with a high level of awareness and presence, as if they were working in the same room. More specifically, distributed cognition places emphasis on the social aspects of cognition and asserts that knowledge is distributed by placing memories, facts, or knowledge on the objects, individuals, and tools in the environment in which people work. This paper proposes a new computer-mediated remote collaborative design system, TeleAR, to enhance distributed cognition among remote designers by integrating Augmented Reality and telepresence technologies. The system affords a high level of externalization of shared resources, including gestures, design tools, design elements, and design materials. This paper further investigates how the system may affect designers' communication and collaboration, with a focus on distributed cognition and mutual awareness. It also explores critical communication-related issues addressed by the proposed system, including common ground and social capital, perspective invariance, trust, and spatial faithfulness. © 2013 Elsevier B.V. All rights reserved.

Keywords: Collaborative design; Communication; Awareness; Distributed cognition; Telepresence; Augmented Reality

1. Introduction

With recent developments and applications of computer-mediated design software tools that utilize the Internet, a plethora of opportunities and challenges have emerged for accommodating communication and collaboration issues in remote collaborative design. Traditional Face-to-Face (F2F) interaction is being replaced by computer-mediated human interaction. This trend provides new opportunities and tools to support Computer-Supported Collaborative Work (CSCW), especially for remote designers working together virtually in a distributed environment. However, CSCW can hinder important features of F2F communication that people are used to and rely upon for enhanced perception and cognition. For instance, there is limited support for gaze perception and awareness when using teleconferencing technologies. Telepresence focuses on interaction with live, real objects and places instead of virtual presence in a simulated environment. CSCW tools, for example, may be rejected by users if work processes and tasks are subject to change. Thus, the higher the degree of telepresence provided, the closer the experience comes to what designers are already familiar with. This may increase users' acceptance toward adopting such technology into their regular design processes.

This paper presents a computer-mediated remote collaborative design system that can be used to enhance distributed cognition among remote designers by integrating Augmented Reality and telepresence technologies. Traditionally, cognition has been explained in terms of information processing at the level of the localized individual. Distributed cognition emphasizes the distributed nature of cognitive phenomena and focuses on the interaction of a person with tools, objects, and other persons [13,28]. The proposed system is named TeleAR. Technologically, the system consists of a camera component and a tabletop Augmented Reality component. The camera component focuses on conveying each individual's presence to others, and the tabletop, coupled with the camera component, provides an Augmented Reality environment for remote collaboration. Augmented Reality has been extensively explored in collocated single-user and collaborative contexts, for example, in the areas of individual design [36,37,39], collaborative design [38,40–42], and collaborative learning [43]. The paper commences with a review of related work and then describes and discusses how the proposed TeleAR system can affect designers' communication and collaboration in remote locations.

* Corresponding author at: Department of Housing and Interior Design, Kyung Hee University, Seoul, Republic of Korea. E-mail address: [email protected] (M.J. Kim). 0166-3615/$ – see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.compind.2013.11.012

2. Related work

There have been a number of studies that have examined the nature of collaborative work using technology. For example, Distributed Designers' Outpost [9] employs real Post-it notes as interactive media. The remote digital notes are synchronized with real local notes on a vertical display for collaboration. A novel approach to increasing efficiency in remote collaborative design in this system is the adoption of two mutual awareness mechanisms: (1) transient ink and (2) blue shadow [9]. The transient ink conveys position information about specific notes that are added, removed, or moved by a remote user, while the blue shadow provides simulated feedback on the approximate location of the remote user [9]. The simulated shadow is limited in its fidelity of approximation. For instance, it is difficult to distinguish the shadows of two or more designers when they overlap. Tuddenham and Robinson [34] also developed a tabletop system that supported remote and mixed-presence collaboration by visualizing distant designers' arms as shadows on the tabletop. In their experiments, three designers created poetry by moving and reorienting words simultaneously.

Real video streaming has been explored to bring a richer experience compared to voice-only media such as the telephone. For instance, ClearBoard [16,17] was one of the early implementations that leveraged real-time video textures to simulate traditional whiteboard-aided design. One of its key contributions was that it allowed eye contact through video by using a half-mirror polarizing film projection screen. The screen can be rapidly switched between transparent and light-scattering states by varying the control voltage.
Thus, during the design experiments, together with the collaborative digital whiteboard drawing, a user's reflected image was captured by a camera behind the screen and transmitted to the other user. Blue-C [10] adopted and further investigated this idea of using a half-mirror projection screen, referred to as an ''active panel'' in their work. It introduced a spatially immersive environment for telepresence by combining the simultaneous acquisition of multiple video streams with 3D stereo projection. It provided a 3D portal that can facilitate collaborative design via body language such as gestures and movements. One limitation of the systems mentioned here is that they focused on one-to-one communication environments.

Compared with ClearBoard and Blue-C, which placed cameras behind the screens and manipulated screen transparency to capture images of the users, DigiTable [7] drilled a hole in the wooden screen and used a small hidden camera peeping through it for video communication. Another important feature introduced by DigiTable was masks of the detected objects, which can be hands, arms, or any object above the table. The masks were synchronized among the geographically distributed users' tabletops to act as remote embodiments, so that local users could maintain an awareness of each other's actions.

The efforts mentioned above used various technologies to place the camera in a natural location, where the remote user can see and be seen by the local user as if they were communicating in a F2F manner. The location and orientation of the camera embody the eyes and gaze direction of the remote user. There is possible misalignment between the actual scene and the one perceived via the projection screen (captured by the camera). The projection screen and camera are intermediate components for externalizing one's presence, whereas in F2F they are embodied in the human eyes, resulting in a seamless, natural situation for others' perception.
It is therefore critical to understand the sensitivity of human eyes by quantitatively measuring the differences between the actual gaze and the camera-embodied one. In video auditorium research, Chen [6] conducted experiments to determine how accurately humans perceive eye contact and adopted the results in the Video Auditorium design. He designated areas just below the cameras, serving as a ''teleprompter'', from which the user can direct his or her gaze to the intended students during a class.

3. Issues in remote collaboration

Tuddenham and Robinson [35] suggested that many remote tabletop projects have been inspired by co-located tabletop research, including Single Display Groupware (SDG). Collaborative Virtual Environments (CVEs) tend to mimic physical F2F environments to reduce the cognitive load on users. For example, the artificial shadows mentioned above afford information about illumination and the approximate positions of others, which is akin to F2F communication. However, due to current technical limitations, not all sensory stimuli can be replicated by network devices and computers. Smell and taste are two examples that computers cannot directly support. Several issues arise from these limitations, leading to a gap between F2F collaboration, for example, Single Display Groupware (SDG) [33], and remote collaboration such as Mixed Presence Groupware (MPG) [34]. Spatial faithfulness is a concept that may be used to close this gap. Nguyen and Canny [25] defined three levels of spatial faithfulness:

- Mutual spatial faithfulness, which simultaneously enables each observer to know whether or not they are receiving attention from other observers/objects;
- Partial spatial faithfulness, which provides a one-to-one mapping between the perceived direction and the actual direction (up, down, left, or right) for the observers; and
- Full spatial faithfulness, which is an extension of partially spatially faithful systems.
It provides a one-to-one mapping to both observers and objects. As shown in Fig. 1, F2F collaboration provides full spatial faithfulness, while most remote collaboration systems only support mutual or partial spatial faithfulness. Hence, these systems cannot precisely preserve cues such as directions and orientations for each user. Consequently, perspective invariance [30,31] may arise, which can interfere with correct awareness of the environment. This phenomenon occurs when images and streaming videos of remote users are captured from angles inconsistent with those from which the local users perceive them. This can lead to misunderstandings, as illustrated in Fig. 1(b). Suppose three users are using a remote collaboration groupware platform for a design task. They would like to perceive each other as if they were talking F2F. Typically, a camera facing the local user is used at each site to capture the streaming video of the local user for both remote users. Additionally, two monitors or projection screens facing the local user display the videos received from the corresponding remote users. When A looks straight ahead, both B and C will have the feeling that A is watching them, even though they have different virtual positions in the CVE. However, what they would expect and recognize is the situation represented in Fig. 1(a), in which everyone perceives others' gaze correctly. Perspective invariance creates a false impression for the users and leads to a distorted mental model of the working environment. Additional cognitive processing is required to map the directions faithfully in their minds.
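The consequence of sharing one camera stream among all viewers can be captured in a toy model. The sketch below is purely illustrative (the participant names and the function are invented here, not part of any implementation described in this paper): with a single shared frontal camera, each viewer reads the speaker's gaze as aimed at themselves, whereas a dedicated camera per viewer preserves the actual gaze target.

```python
def perceived_target(actual_target, viewer, dedicated_cameras):
    """Return who `viewer` believes the speaker is looking at."""
    if dedicated_cameras:
        # One camera per remote viewer: relative gaze direction survives.
        return actual_target
    # Single shared camera: every viewer receives the same frontal frame,
    # so each reads the gaze as directed at themselves (as in Fig. 1b).
    return viewer

# A looks at B. With a shared camera, C wrongly feels watched:
assert perceived_target("B", "C", dedicated_cameras=False) == "C"
# With a camera per viewer, C correctly sees A looking at B:
assert perceived_target("B", "C", dedicated_cameras=True) == "B"
```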


Fig. 1. Overview of various collaboration system setups: (a) co-located F2F environment, (b) MPG system.

Another issue in remote collaboration systems is the sense of touch. Many traditional remote collaboration systems are built around the conventional PC and assume the keyboard/mouse approach as the Human–Computer Interaction (HCI) method. However, other novel approaches have emerged to support different interaction methods and improve the user experience. Tangible and simultaneous user interactions are two typical types of novel interaction that can be supported in remote collaboration [17]. Experiments have demonstrated that Tangible User Interfaces (TUIs) change designers' spatial cognition, and that these changes are associated with creative design processes in 3D design [17]. That is, tangible interaction provides a richer sensory experience of digital information as it off-loads designers' cognition, thereby promoting visuospatial discoveries and inferences [21]. Designers work together to design goods and products. The final products can be tangible, for instance, dresses, furniture, and buildings. They can also be intangible, for instance, ideas, poetry, and music. Both forms of collaboration can benefit from TUIs.

Simultaneous user interaction is another interaction paradigm that remote design collaboration systems should support. Unlike remote training and education systems, where a small portion of users (trainers and teachers) dominate interactions, remote collaboration systems require and value each designer's participation. Remote collaboration in the context of this paper refers to synchronous collaboration, where users can communicate back and forth simultaneously.

4. TeleAR: Augmented Reality-based telepresence system

4.1. Theory and model

One of the issues involved in Computer-Mediated Communication (CMC) is awareness; more specifically, workspace awareness

for designers to understand their collaborators' up-to-the-moment interactions with a shared workspace [12]. Design collaborations rely on what and how much designers can perceive from the common workspace in order to make effective responses to common design tasks. Those responses, perceived by others, can in turn induce new responses. Owing to this iterative nature of collaborative design activities, Gutwin and Greenberg [11] proposed a conceptual workspace awareness framework linking environment, knowledge, exploration, and action. They stated that workspace awareness is maintained through a perception–action cycle containing these four elements. They also suggested mechanisms aimed at maintaining each of them, so as to compensate for the information and features lost in CMC compared with F2F communication.

The psychobiological model by Kock [22] proposed a reasonable explanation of humans' preference for F2F communication based on Darwinian evolutionary theory. The model claims that naturalness decreases in CMC when using media such as video-conferencing, Internet chat, email, and so on, which can lead to higher cognitive effort. The F2F medium, with the highest degree of naturalness, sits at the center of the media naturalness scale and requires the least cognitive effort. This suggests that an ideal CMC system should be configured with as many features as are inherent in F2F communication, no more and no fewer, rather than merely focusing on media richness. Previous studies have shown how different elements of F2F communication can be realized or simulated in CMC, for example, VoIP for voice and sound, video streaming for vision, keyboard and mouse for control and input, and the various technologies mentioned in the previous section for mutual presence (e.g., transient ink and blue shadow in Distributed Designers' Outpost, masks in DigiTable, and so on).


Previous work in design team interaction provides a well-established understanding of group interaction modes. It is apparent that certain elements of physical interaction cannot be mimicked with simple audio and video communication. To facilitate the flow of conversation in group discussions, the element of spatial interaction is critical. Spatial interaction, which is embedded in real space, refers to full-body interaction in space, embracing the richness of human senses and skills and thereby acquiring communicative and performative functions [3,14,28]. The CMC system presented in this paper addresses the workspace awareness issue by combining two technological components: telepresence and Augmented Reality. The TeleAR system can enable and promote different levels of workspace awareness by addressing several issues found in collaborative environments. These issues are discussed in detail after the technical setup is presented.

4.2. Technical setup

As shown in Fig. 2, the TeleAR system consists of identical equipment deployed at three locations. Each location connects to the other two via an IP network and is equipped with two components: a telepresence component and an Augmented Reality component. The system is envisaged to enhance awareness in a collaborative design context. The following sub-sections describe several key components of the interface.

4.2.1. Telepresence component

Multiple cameras and full-scale vertical displays are used as interfaces to immerse a local physical designer in a shared environment with remote virtual designers. Each remote designer is represented by one dedicated camera and display that face the local physical designer directly (see Fig. 2). The corresponding remote designer is located in the virtual shared environment. The TeleAR system is explained using the following collaboration scenario, which involves three geographically distributed designers, A, B, and C, who collaborate by means of the TeleAR system. They take pre-defined seats, evenly positioned around the tabletop in the virtual shared environment. Taking A as a common reference point, suppose A is at the twelve o'clock direction; then B is at eight o'clock and C is at four o'clock (Fig. 2). A's image is transmitted to B, on B's twelve o'clock display, through A's camera at the eight o'clock direction. Meanwhile, C views A's image on C's twelve o'clock display, captured from A's four o'clock camera. Similarly, B sees A and C, and C sees A and B, by means of the corresponding displays and cameras. Chen [6] suggested that humans can hardly perceive the difference between the actual gaze direction and the captured one if the camera is placed on top of the display and the visual angle between the camera and the eye on the display is less than 5°. This result justifies the way the displays and cameras are mounted in the system, so that high fidelity of eye contact and gaze direction can be preserved. The background can be subtracted from the video stream using a software toolkit such as OpenCV [2,3] to cut down distraction. To address system performance, bandwidth, and latency concerns, MPEG-2 compression and RTP (Real-time Transport Protocol) can be implemented in the prototype. The video and audio interface in TeleAR facilitates the exchange of synchronized audio/video. The synchronization of multimedia channels is essential given the unavoidable random delays of networks. Integration between video and audio communication is critical for an effective distributed interaction process.
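The camera routing in this scenario follows a simple rule: the camera that feeds a given remote designer stands at that designer's virtual seat, expressed in the local designer's frame of reference. The stdlib sketch below encodes this rule under the seat assignments from the scenario above (the function name is ours, not part of the prototype):

```python
SEATS = {"A": 12, "B": 8, "C": 4}  # clock positions around the virtual table

def camera_direction(local, remote):
    """Clock direction of the camera at `local`'s site whose stream feeds
    `remote` -- i.e. `remote`'s seat rotated into `local`'s frame, where
    each designer treats their own seat as twelve o'clock."""
    pos = (SEATS[remote] - SEATS[local]) % 12
    return pos if pos else 12

# A's eight o'clock camera feeds B; A's four o'clock camera feeds C.
assert camera_direction("A", "B") == 8
assert camera_direction("A", "C") == 4
# With evenly spaced seats, every designer sees the other two at
# their own eight and four o'clock directions.
assert camera_direction("B", "C") == 8
assert camera_direction("C", "A") == 8
```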

Fig. 2. Conceptual overview of the TeleAR system with a vision telepresence component and an Augmented Reality tabletop component for design activities.


Fig. 3. (1) Traditional method vs. (2) TeleAR system (Initial Implementation).

4.2.2. Shared Augmented Reality whiteboard

The shared Augmented Reality whiteboard on top of the horizontal tabletop reconstructs a natural office environment in which participants can freely communicate digitized visual information (e.g., sketches, images, 3D virtual solid models) about designs (Figs. 3 and 4). The shared AR whiteboard fulfills the information-sharing requirement, one of the critical success factors for a collaborative workspace. The horizontal tabletop system can be coupled with the telepresence component in remote collaborative design, with a focus on events that happen on the design tabletop. The contours of one designer's arms can be identified and rendered with real-time texture before being transmitted to the other designers' tabletops. Rather than the shadows implemented in Distributed Tabletops [35], designers can instantly see more detail about what others are doing and how they are doing it. Hence, they would be expected to perceive more explicit or implicit cues for efficient communication. These cues include gestures, sound, gaze, facial expression, and body language. For example, when one designer puts an arm out for a handshake, the other designers not only see this movement on the display, but also see the hand on the tabletop, or may even shake hands visually on the tabletop (Fig. 3d–f). Apart from the arm-contour feature, the tabletop is also expected to support spatial interaction through gesture manipulation such as object movement, resizing, and re-orientation. Interaction with interactive spaces exploits the intuitive human spatial skills of everyday life and thus has the potential to employ full-body interaction [14]. OpenCV [8] can be integrated with the ARToolKit software toolkit for implementation.
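In practice the arm contours would come from OpenCV routines such as background subtraction followed by cv2.findContours; the stdlib sketch below only illustrates the underlying idea on a toy binary frame, treating every foreground cell that touches the background as part of the "contour" mask that would be textured and transmitted.

```python
def boundary_cells(frame):
    """Return foreground cells adjacent to background: a crude 'contour'.
    `frame` is a 2D list with 1 for foreground (arm) and 0 for background."""
    rows, cols = len(frame), len(frame[0])
    contour = set()
    for r in range(rows):
        for c in range(cols):
            if not frame[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                # A foreground cell on the image edge, or next to a
                # background cell, lies on the contour.
                if not (0 <= nr < rows and 0 <= nc < cols) or not frame[nr][nc]:
                    contour.add((r, c))
                    break
    return contour

frame = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
assert boundary_cells(frame) == {(1, 1), (1, 2), (2, 1), (2, 2)}
```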

Fig. 4. Conceptual System Architecture and Implementation of TeleAR Shared Space.


4.3. System integration by considering critical issues

This section explains how different aspects of the TeleAR design address certain critical issues.

4.3.1. Common ground and social capital

Carroll et al. [4] described a framework for enhancing teamwork activity awareness and efficiency comprising four facets: common ground, communities of practice, social capital, and human development. They argued that most research has focused on the common ground level, which underpins the other facets by providing common, casual communication channels among collaborators. The system is based on a scenario in which three geographically distributed designers are supported in collaborative design. As shown in Fig. 3d–f, the telepresence component is adopted as an important enabler of the common ground and social capital aspects of workspace awareness. Designers can see each other through the video display, so that body language, attire, and facial expressions appear as natural as they would in F2F communication. Moreover, the location of each display is carefully chosen to embody the virtual point where the represented designer would be standing in the shared workplace. For this reason, the three users are required to take predefined fixed seats. The virtual locations/representations of those seats should not overlap with each other. One critical reason behind this arrangement is to facilitate the use of certain communicative gestures, such as pointing at another designer, as one would in F2F communication. Beyond this constraint, the seating arrangement itself is flexible (with the number of designers subject to space availability): designers may sit in a line, around a large table, and so on. According to Scott et al. [31], this fulfills the first guideline: natural interpersonal interaction.
Designers will not only notice an action, but will also know who performed it, directly from the video display. This achieves a higher level of awareness than other methods, such as the name list used in online chat programs. Fig. 3 describes how TeleAR works compared with collaboration through tele-conferencing. Fig. 3a–c, in the left column, shows three designers using traditional collaborative methods through tele-conferencing; Fig. 3d–f, in the right column, illustrates them using TeleAR, which adopts AR technology to collaborate on the design task. With traditional methods, each designer can see the other two designers' video images and talk to them, as shown in Fig. 3a–c. However, they would not know whom the other designers are currently looking at. They draw on a piece of paper individually based on their discussion, and can show the other designers the drawing through the camera. The design process differs when using TeleAR. The three designers have an Augmented Reality supported, desktop-based shared design environment, as shown in Fig. 3d–f. The system is physically distributed in three geographically separated rooms. TeleAR adopts AR technology to represent virtual objects in the shared design space.

TeleAR also offers a separation between a private work zone and a public work zone (see Fig. 4). Fig. 4 depicts the high-level system architecture for an overview of the system. This concept originates from the notion of public and private spaces. When people work in their office or home, that location is considered a private area where they would not want to be disturbed by incoming events. However, once individuals leave the confines of a private space to, for example, sit around a conference table for a meeting, they have entered a public space. In the private work zone, people can focus on their own assigned duties with a certain level of privacy, just as when working in front of a personal computer. The work done in the private zone can then be transferred into the public zone for further discussion with the other remote collaborators. In the public work zone, everyone accesses and modifies the same set of design information. The modifications that have been agreed upon are then sent back to each designer's private work zone for further work.

The ARToolKit (www.artoolkit.com) is adopted for implementation. It is an Augmented Reality software library that makes virtual objects appear in the physical world in real time using pre-defined markers. It uses video tracking libraries to calculate the real camera position and orientation relative to physical markers in real time. Applications that overlay virtual imagery on the real world can be developed on top of ARToolKit [20]. According to the basic principles of the ARToolKit, the tracking works in the following steps [20,23]:

- The camera captures video streaming of the real world and sends it to the computer;
- Software on the computer searches through each video frame for any square shapes;
- If a black square is found, the software uses mathematics to calculate the position of the camera relative to the black square;
- Once the position of the camera is known, a computer graphics model is drawn from that same position;
- This model is drawn on top of the video of the real world and therefore appears stuck to the square marker;
- The final output is shown on the display (either a monitor or a head-mounted display), so when users look at/through the display they see graphics overlaid on the real world.

Fig. 5 shows the visualization process of ARToolKit. In TeleAR, digital markers are adopted instead of paper-based markers. These digital markers can be created and moved around on the touch screen (Fig. 6). On the left side, a digital marker is shown on the screen; users can move this marker around by touching and dragging it.
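As a rough illustration of the square-detection step in ARToolKit's pipeline (not its actual algorithm, which recovers a full 3D pose matrix from the marker's corners), the stdlib sketch below finds an axis-aligned dark square in a toy binary frame and computes the point where a virtual model would be anchored:

```python
def find_square_marker(img):
    """Return (row, col, size) of the largest all-dark square region in a
    binary image (1 = dark pixel), or None if no 2x2-or-larger square exists."""
    rows, cols = len(img), len(img[0])
    for size in range(min(rows, cols), 1, -1):  # try largest squares first
        for r in range(rows - size + 1):
            for c in range(cols - size + 1):
                if all(img[r + i][c + j]
                       for i in range(size) for j in range(size)):
                    return (r, c, size)
    return None

img = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
r, c, size = find_square_marker(img)
# The virtual model would be drawn centred on the detected marker.
anchor = (r + size / 2, c + size / 2)
assert (r, c, size) == (1, 1, 2)
assert anchor == (2.0, 2.0)
```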
Meanwhile, a virtual object is drawn on the marker, as shown on the right side of Fig. 6.

Fig. 5. ARToolKit visualization process (adapted from Ref. [20]).

Fig. 6. Digital marker on the tabletop.

The TeleAR setup in each room also includes two monitors with cameras attached. The cameras in TeleAR serve the following three functions:

- Tele-conferencing: The term tele-conferencing here differs from its traditional definition. In traditional methods, cameras are usually placed in front of the users to capture their faces. In TeleAR, two cameras are placed separately at the 2 o'clock and 10 o'clock positions, with two individual monitors. These monitors show the other two designers separately; for instance, as shown in Fig. 3d, when designer A (the person sitting in front of TeleAR) is looking at designer B (whose video image is shown on the left monitor), designer B sees A looking at her from her side, and designer C sees A looking at B from his side. In this way, each designer knows whom the others are looking at and talking to, so a better communication environment between designers can be provided.

- Capturing digital markers: The software packages 3ds Max, OpenSceneGraph (OSG), and osgART, developed by HITLab, are used to support virtual object modeling. ARToolKit provides transformation matrices for each marker it recognizes in the camera frame. 3ds Max provides a convenient environment for building 3D models, which are then transferred to the OSG scene by the osgExp plug-in. osgART combines the transformations generated by ARToolKit with the virtual scenes generated by OSG. Each designer has a tabletop with a touch screen on which they can move and edit the digital markers, as shown in Fig. 3d–f. Local scene information, such as the virtual 3D models shown in Fig. 3d–f and their positions on each designer's desktop, is gathered by TeleAR. All data captured on the designer side are transferred over the network to an SQL database on the server. The central server's database stores the data from each designer's room and merges them together. The shared Augmented Reality scene supported by the merged data is then sent back to each designer's side for synchronization.

- Capturing positions and gestures of designers' hands: The cameras can also be used as optical sensors for tracking the positions of human hands in TeleAR.
These sensors can track the current positions of designers' hands in their local rooms; therefore, an accurate location for the virtual hand of each designer in the shared design environment can be determined for animation. The cameras can also sense designers' finger gestures. By collecting data on the positions and gestures of fingers and the positions of virtual objects, the system can determine whether the fingers have ''touched'' the virtual objects, and what kind of operations the fingers are performing on those objects. For instance, as shown in Fig. 3d, designer A can see a pink virtual hand (representing designer B) trying to grab the virtual TV, and a blue virtual hand (representing designer C) editing the virtual couch.
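The touch test just described reduces to a point-in-bounds check. The sketch below is a minimal illustration (the object names, coordinates, and function are invented here, not taken from the TeleAR implementation): a tracked fingertip position is compared against each virtual object's 2D bounding box on the tabletop.

```python
def touched_object(finger, objects):
    """Return the name of the virtual object the fingertip is inside, or None.
    `objects` maps name -> (xmin, ymin, xmax, ymax) tabletop bounds."""
    x, y = finger
    for name, (x0, y0, x1, y1) in objects.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Hypothetical scene: a virtual TV and couch on the shared tabletop.
scene = {"tv": (0, 0, 2, 1), "couch": (3, 0, 6, 2)}
assert touched_object((1.0, 0.5), scene) == "tv"
assert touched_object((4.5, 1.0), scene) == "couch"
assert touched_object((2.5, 0.5), scene) is None  # between the objects
```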

4.3.2. Perspective invariance

In a traditional F2F meeting, each participant has his/her own unique perspective defined by his/her position. However, conventional video conferencing systems usually have only a single camera. That single video stream is then shared by all remote participants on a single-view display. No matter what angle the remote participants view the display from, they all take on a shared, and incorrect, perspective defined by the position of the camera. This problem will always be present wherever multiple participants look at a shared single-view display.

Separate cameras and displays are used to address the perspective invariance issue [5,36], which typically exists in single-camera video conferencing systems. In those systems, the position of the camera determines what the receivers will see and share, regardless of their intended viewing angles and locations. For instance, when designer B is looking directly at A, the camera at B’s twelve o’clock direction will capture this and send B’s gaze to A, as demonstrated in Fig. 1b. Without a doubt, A would properly perceive this as in F2F communication. For C, on the other hand, if the same image is transmitted instead of the one captured by the camera at B’s four o’clock direction, C will perceive B’s gaze just as A does and think B is looking at him/her. That issue would cause confusion in communication. In site-to-site communication mode, this issue is hidden and will not cause any confusion or misunderstanding, since only one remote party is involved and the camera and display are normally located to embody the remote party perfectly, whether that party is one user or a group of users. In one-to-many communication mode, this might even be a desired feature; for example, in TV programs or teaching scenarios, the hosts or teachers actually want to be seen as looking directly at the audience or students. However, in other settings such as remote collaborative design, where each member’s participation and contribution is equally important, unlike the previous one-to-many broadcast situations, this becomes an obstacle to effective communication. TeleAR was designed to support gaze awareness for multi-user video conferencing. It supports multi-party conferencing by providing a camera/display surrogate that occupies the space that would otherwise be occupied by a single remote participant.
As each person has his/her own individual camera and each remote participant is represented by his/her own screen, it is possible to direct a gaze or gesture at a particular participant and have the recipient register it correctly. Multiple cameras are used to capture a unique perspective for each participant. If the cameras are arranged so that the geometry of a F2F meeting is preserved (e.g., three persons sitting around a round table), then each person sees a unique and correct perspective, providing full spatial faithfulness for all participants in the meeting. Fig. 3 compares what each of the three local participants would see in TeleAR and in non-directional video conferencing when the two remote participants gaze toward participant A (Fig. 3a–c). Fig. 3a provides a view from position 1, Fig. 3b from position 2, and Fig. 3c from position 3. Fig. 3a–c show what is viewed using a non-directional video conferencing system; Fig. 3d–f show the views seen in TeleAR. As can be seen, the left column of images in Fig. 3 demonstrates perspective invariance; that is, from all viewing positions, it appears that the remote participants are looking one position to the left. By solving the perspective invariance issue, workspace awareness may be improved at the common ground and social capital levels. It is suggested that the improvement in perspective may aid and maintain the common ground level of workspace awareness by making three-way communication more natural for the designers. They can sense more information about others’ gazes, which is limited or misleading in a single-camera setup. Social capital in maintaining workplace awareness can be defined as the ‘‘accumulation of the social benefits of past social interactions in order to mitigate conflict and other risks in future interactions’’ [4]. It comprises the trust, mutual understanding, shared values and behaviors, and cultures that team members develop and share over time during design tasks and daily life.
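The routing rule behind this camera/display arrangement can be sketched compactly. The stream names below are hypothetical; the point is only that each sender-side camera occupies the seat a remote participant would hold face-to-face, and its stream is delivered to exactly that participant:

```python
# Hypothetical sketch of spatially faithful stream routing: each station
# has one camera per remote participant, mounted in that participant's
# surrogate seat.  The stream from R's surrogate camera at S's station is
# the one R should receive from S.
def route_streams(participants):
    routes = {}
    for sender in participants:
        for receiver in participants:
            if sender != receiver:
                routes[(sender, receiver)] = f"{sender}:camera_at_{receiver}_seat"
    return routes

routes = route_streams(["A", "B", "C"])
# When B looks toward A, A receives the head-on view from A's surrogate
# camera at B's station, while C receives B in profile from C's surrogate
# camera, so both register B's gaze correctly.
print(routes[("B", "A")])  # B:camera_at_A_seat
```

This is exactly what breaks down in a single-camera system: with only one stream per sender, the (B, A) and (B, C) routes are forced to carry the same image, producing the perspective invariance described above.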
X. Wang et al. / Computers in Industry 65 (2014) 314–324

Gutwin and Greenberg [11] observed that when using tabletop collaborative systems, designers invariably turned their heads to observe each other’s work. This can be interpreted as their intention to understand others’ skills, beliefs, and personality so as to achieve higher levels of trust, formed through improved mutual understanding and shared common values. This observation can take place at any time between designers during tasks that require collaboration. What can be observed from designers watching each other, and how such observations help collaborative design in the context of distributed cognition, are essential questions. Distributed cognition theory [27–30] states that human cognition does not take place only in brains, but also arises from interrelations with other humans and objects in the working environment. Thus, cognition can be carried through the interactions among a number of humans and technological devices. Observation, as one of the design activities in collaborative environments, plays an important role in social communication [26]. Firstly, by observing the way others behave, people rationalize others’ behavior to understand what they are doing. Through understanding behaviors, people can further infer or reason about the intent behind those behaviors and even predict what others will do next. Accurate inferences of such intents can create a higher level of shared understanding and trust. Secondly, supplementary task-dependent processes are carried out to facilitate or restrain others’ work based on what has been observed and how the observer feels about it; the latter is further based on the observer’s existing knowledge and experience. Hence, designers develop social capital, and possibly new knowledge [4], from observation.

4.3.3. Trust and spatial faithfulness

Trust is another important component of social capital. Hossain and Wigand [15] proffered that successful cooperative work requires a high level of social trust among participants. They suggested that trust can mitigate the negative factors in communication, such as emotional fear, awkwardness, complexity, and uncertainty in cognition.
An experiment undertaken by Nguyen and Canny [25] demonstrated that natural gaze interaction, enabled by multiple cameras, reinforces spatial faithfulness, which is the extent to which a system can preserve spatial relationships such as up, down, left, and right [24]. Consequently, this resulted in better trust among team members. The system proposed in this paper addresses this concern, as it supports social capital by promoting trust, which is improved by better spatial faithfulness among the design team. According to Nguyen and Canny [24], spatial faithfulness is enabled by multiple cameras and displays, which help participants observe gaze and gestures in social activities. Returning to Fig. 2, each designer can know whether someone is looking at them (mutual spatial faithfulness), sense which direction others are looking in (partial spatial faithfulness), and precisely tell the current object of someone else’s visual attention (full spatial faithfulness). In addition to the telepresence component, the tabletop Augmented Reality component is devised to enable awareness of design activities. The round tabletop shown in Fig. 3d–f provides a platform to serve multiple designers by slightly re-allocating the seats so that they are evenly distributed around the circle. Since each participant’s contribution is equally important in collaborative design, the round table and the evenly distributed seating reflect this concern and make designers feel more comfortable communicating with each other, both physically and emotionally. In addition, the idea of shadow representations of arms [35] is adopted in the tabletop component and enhanced with real-time texture from each remote designer’s arms, so that others can easily identify who is performing the design task and how he/she is doing it. This feature further enhances common ground awareness and natural interpersonal interaction for efficient remote collaboration [31,32].
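The enhanced shadow-arm idea amounts to compositing a live arm texture over the shared tabletop image. A toy greyscale version, assuming hypothetical scanline data rather than the real video pipeline, might look like:

```python
# Toy sketch of arm-texture overlay on the tabletop.  Assumed data model:
# greyscale pixel rows (ints 0-255); None marks pixels not covered by the
# remote designer's arm, where the tabletop shows through unchanged.
def overlay_arm(table_row, arm_row, alpha=0.6):
    """Alpha-blend a remote arm scanline over a tabletop scanline."""
    out = []
    for table_px, arm_px in zip(table_row, arm_row):
        if arm_px is None:
            out.append(table_px)  # tabletop shows through
        else:
            out.append(round(alpha * arm_px + (1 - alpha) * table_px))
    return out

print(overlay_arm([100, 100, 100], [None, 200, 50]))  # [100, 160, 70]
```

Keeping the arm semi-transparent (alpha below 1) preserves the shadow-like quality of [35] while the real-time texture still identifies whose arm it is.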


Augmented Reality technology is adopted here to convey one’s arm and hand activities to others through reconstructed 2D video streaming overlaid on the tabletop system. Both the vertical display and the horizontal tabletop are used as sources of awareness, but with different foci and in different situations. The vertical display provides an easy and direct interface for identifying remote users, together with their faithful gaze and posture, while the tabletop enhanced with augmented arms supports more detailed awareness specific to the design activities. These two independent features are logically connected to support awareness in remote collaborative design. For instance, when designers begin to communicate, they would typically try to obtain a big picture of the other party and then look into the other designer’s eyes instead of their hands, so the vertical display would be more favorable for initiating a conversation. The non-verbal cues viewed from the display, such as facial expressions, gaze directions, and gestures, are preserved to help maintain turn-taking protocols and pointing actions, which are underpinning facilitators found mostly in F2F communication. On the other hand, when the designers come to a common understanding of the task and begin to focus on the design process, such as poetry editing or interior design, they would typically turn their attention to the tabletop system and focus on the details of the design objects. In line with distributed cognition theory, the vertical display tends to support human–human interaction while the horizontal tabletop supports human–object interaction. Thus, the system allows socially distributed cognitive activities across individuals and objects. These two awareness mechanisms work independently yet cohesively at different levels of remote collaboration. They afford functions that are designed with respect to the system guidelines proposed by Scott et al.
[31]: natural interpersonal interaction, accessing shared physical and digital objects, flexible user arrangement, and simultaneous user interactions.

5. TeleAR evaluation

This section presents a preliminary evaluation of the early-stage prototype of TeleAR; future work will focus on an extensive evaluation of TeleAR in all aspects. This preliminary experiment was devised and conducted to evaluate TeleAR in order to gain an understanding of the potential issues involved in the system. The issues revealed can then be used to improve future prototypes. Before the start of the actual experiment, all the participants familiarized themselves with TeleAR and were given enough time to practice using the platform. The main findings are summarized in the following paragraphs. Non-verbal gestures, such as facial expressions, fine motor movements, bodily postures, and interpersonal distances, were noted throughout the experiment and analyzed. Notably, all participants made extensive eye contact with each other. Facial expressions were displayed, though they were not particularly expressive of any notable emotion. Hand gestures were made to assist in explaining or communicating. Verbal and auditory cues, such as speech volume and amplification, language and syntax, and auditory exchange, were noted throughout the experiment and analyzed from the recording. Through observation of the verbal and auditory cues, it was noted that participants communicated verbally with each other extensively. Through observation of the emotional states displayed in the recordings, it was observed that participants displayed an active and engaged psychological state. The level of involvement in the task at hand was of a medium level: participants were aware of their surrounding environment and events, but were also absorbed in the task. Common ground, social capital, perspective invariance, and trust and spatial faithfulness were identified as important issues that


this system is designed to tackle. In view of the fact that a laboratory experiment is a subset of real-world activities, it is rational to infer that some of those facets which support complex activities would still benefit small-scale activities such as real-time remote collaborative design. Since the research context of this paper is remote collaborative design, which is in the domain of ‘laboratory phenomena’ in a sense, the system concept and implementation were initiated with a focus on supporting common ground and social capital. The TeleAR presented in this paper takes a multi-viewpoint video capture approach that leverages multiple cameras to capture real-time texture of the remote user, and was designed to support gaze awareness for multi-user video conferencing. It may introduce less delay into the design process and is less complex to implement, which means that the distributed users can understand each other better and work more efficiently. However, it usually takes time for people to accept and get used to a new technology; in a future study, we will conduct experiments to investigate to what extent the TeleAR system fatigues users and whether there are ways to mitigate this. Separate cameras and displays are used to address the perspective invariance issue. By solving the perspective invariance issue, workspace awareness would be improved at both the common ground level and the social capital level. It is believed that this improvement can aid and maintain the common ground level of workspace awareness by making three-way communication more natural for the three designers. They can sense more information about others’ gazes, which is limited or misleading in a single-camera setup. In addition to the telepresence component, the tabletop Augmented Reality component is devised to enable awareness of design activities.
Augmented Reality technology is adopted here to convey one’s arm and hand activities to others through reconstructed 2D video streaming overlaid on the tabletop system. Both the vertical display and the horizontal tabletop are used as sources of awareness, but with different foci and in different situations. The vertical display provides an easy and direct interface for identifying remote users, together with their faithful gazes and postures, while the tabletop enhanced with augmented arms supports more detailed awareness specific to the design activities. These two independent features are logically connected to support awareness in remote collaborative design. In considering the major factor of ‘workspace awareness’, the TeleAR system, with a telepresence component and an Augmented Reality component, is proposed as a strong cognitive design tool for remote collaborative design. Knowledge of distributed cognition and mutual awareness has been a great source of inspiration for the development of this novel design system. For future studies to proceed, further evaluation will be carried out based on the following identified methods: Groupware Heuristic Evaluation [6], Groupware Walkthrough [18,19], Collaboration Usability Analysis [22], Performance Analysis [5], and Human-Performance Models (HPM) [4]. There is a variety of methods for evaluating usability and cognitive activities, so it is very important to choose evaluation methods selectively and comprehensively. Some methods focus on data gathered from users, while others rely on experienced evaluators for the captured data. The details of how experimental evaluations are conducted depend upon the specifics of the issue in question (e.g., awareness), and such knowledge is accumulated through experience with numerous prototypes such as TeleAR, each one adding to the corpus of knowledge about that technology’s usability aspects.
That is why we proposed many different evaluation methods above to suit different needs; the knowledge is developed gradually through aggregation across this area of inquiry. The attributes of this class of technology and its application dictate the appropriate considerations in the usability

evaluation. In this paper, we put forward such an approach, TeleAR, which we aim to demonstrate as an appropriate application of this class of technology to general collaborative design tasks. As such, the usability evaluation is to be tested by application against the present and future prototypes developed for research and industry. In this article, we intend only to present and illustrate the theoretical work that has been done to create the formal concept of TeleAR as a class of technology for that purpose. Two future evaluations are planned, as described in the following subsections.

5.1. Social presence

Questionnaires are the most common method of measuring social presence in similar environments [13,28,29]. In the presence area, Witmer and Singer’s presence questionnaire [38] is accepted by most researchers as a standard instrument for measuring people’s sense of being in a distributed collaborative virtual environment; the questionnaire used in this study was therefore adapted from those questionnaires. After collaborating on certain tasks using the TeleAR system, participants will be asked to fill in an online questionnaire about their feelings regarding the collaborative virtual environment, their awareness, and their intention to use such systems.

5.2. Trust and spatial faithfulness

Measuring trust and spatial faithfulness is also a complicated process, because ‘‘trust’’ and ‘‘faithfulness’’ are products of the mind. However, they can be reflected in people’s performance when they are doing a task. Nguyen and Canny [25] conducted a cooperative investment task to measure task performance based on Bos et al.’s [1] social dilemmas game ‘‘Daytrader’’. Borrowing their methodology, an experiment that measures task performance will be conducted.
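As an illustration of this borrowed methodology, the credit-based trust measure could be computed as sketched below. The function is hypothetical; the paper only specifies that each group receives 120 credits to allocate between collaborative and individual work:

```python
# Hypothetical scoring for the planned investment-game trust measure:
# sum the credits a group invested in collaboration, within its budget.
def collaborative_investment(allocations, budget=120):
    """allocations maps designer -> credits invested in collaboration;
    the group may allocate at most `budget` credits in total."""
    total = sum(allocations.values())
    if total > budget:
        raise ValueError("group allocated more than its credit budget")
    return total

# Three designers invest part of their shared 120 credits in collaboration:
print(collaborative_investment({"A": 30, "B": 40, "C": 20}))  # 90
```

A higher total would be read, following Nguyen and Canny [25], as greater willingness to rely on the other designers, and hence as a behavioral proxy for trust.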
The measure of trust and spatial faithfulness can be operationalized as the sum total of collaborative investments between the three designers using the TeleAR system, compared to another group using traditional teleconferencing. Each group will be given 120 credits, which are allocated when they invest their time in collaboration rather than working individually. After finishing the task, a post-questionnaire and a post-interview will be conducted as well.

6. Conclusion

This paper presents a conceptual TeleAR system to facilitate remote collaborative design, which consists of a telepresence component and an Augmented Reality component, so as to provide better awareness in shared environments. The theoretical investigation of this paper concentrates on how these two components support different levels of awareness among geographically dispersed designers so that they can work efficiently. The system was configured with as many features as are inherent in F2F communication, no more and no less, rather than merely focusing on media richness. The TeleAR system can enable and promote different levels of workspace awareness by addressing issues found in collaborative environments. In the telepresence component of the setup, the video and audio interface in TeleAR facilitates the exchange of synchronized audio/video. The synchronization of multimedia channels is essential, considering that random network delays are unavoidable. It is critical to integrate video and audio communications for an effective distributed interaction process. The shared Augmented Reality whiteboard on top of the horizontal tabletop is innovative, as it re-constructs a natural office


environment where participants can freely communicate digitalized visual information (e.g., sketches, images, 3D virtual solid models) about designs. The shared AR whiteboard fulfills the information-sharing requirement, which is one of the critical success factors for a collaborative workspace. The advantages of the system are as follows. First, the system supports common ground and therefore increases social capital, as it provides natural interpersonal interaction. Designers will not only notice an action but also know who performed it, directly from the video display. This achieves a higher level of awareness than other methods, such as the name list used in online chat programs. The second advantage comes from solving the issue of perspective invariance. TeleAR was designed to support gaze awareness for multi-user video conferencing; by solving the perspective invariance issue, workspace awareness may be improved at the common ground and social capital levels, and the improvement in perspective may aid and maintain the common ground level of workspace awareness. The third advantage of the system is the support of trust and spatial faithfulness. This feature further enhances common ground awareness and natural interpersonal interaction for efficient remote collaboration. The non-verbal cues viewed from the display, such as facial expressions, gaze directions, and gestures, are preserved to help maintain turn-taking protocols and pointing actions, which are underpinning facilitators found mostly in F2F communication. The limitation of the system is its scalability, which refers to the scale-up of hardware equipment required once the size of the team increases; this induces another limitation, namely the network bandwidth constraint.

Acknowledgement

The authors acknowledge Rui Wang for her assistance in taking the screenshots of certain images in this paper.

References

[1] N. Bos, J. Olson, D. Gergle, G. Olson, Z. Wright, Effects of four computer-mediated communications channels on trust development, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, vol. 4(1), 2002, pp. 135–140.
[2] G. Bradski, OpenCV: examples of use and new applications in stereo, recognition and tracking, in: Proceedings of the Conference on Vision Interface (VI’2002), 2002, pp. 347–356.
[3] J. Buur, M.V. Jensen, T. Djajadiningrat, Hands-only scenarios and video action walls: novel methods for tangible user interaction design, in: DIS’04, 2004, pp. 1–9.
[4] J.M. Carroll, M.B. Rosson, G. Convertino, C.H. Ganoe, Awareness and teamwork in computer-supported collaborations, Interacting with Computers 18 (1) (2006) 21–46.
[5] J. Cerella, Pigeon pattern perception: limits on perspective invariance, Perception 19 (2) (1990) 141–159.
[6] M. Chen, Design of a virtual auditorium, in: Ninth ACM International Conference on Multimedia, Ottawa, Canada, 2001, pp. 19–28.
[7] F. Coldefy, S. Louis-dit-Picard, Digitable: an interactive multiuser table for collocated and remote collaboration enabling remote gesture visualization, in: Proceedings of the 4th IEEE Workshop on Projector-Camera Systems, 2007, pp. 1–8.
[8] P. Dietz, D. Leigh, DiamondTouch: a multi-user touch technology, in: Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology (UIST ’01), Orlando, FL, 2001, pp. 219–226.
[9] K.M. Everitt, S.R. Klemmer, R. Lee, J.A. Landay, Two worlds apart: bridging the gap between physical and virtual media for distributed design collaboration, in: Proceedings of CHI, vol. 5(1), 2003, pp. 553–560.
[10] M. Gross, S. Wuermlin, M. Naef, E. Lamboray, C. Spagno, A. Kunz, et al., Blue-C: a spatially immersive display and 3D video portal for telepresence, ACM Transactions on Graphics 22 (3) (2003) 819–827.
[11] C. Gutwin, S. Greenberg, A descriptive framework of workspace awareness for real-time groupware, Computer Supported Cooperative Work 11 (3) (2002) 411–446.
[12] C. Gutwin, S. Greenberg, Workspace awareness in real-time distributed groupware: framework, widgets, and evaluation, in: People and Computers XI (Proceedings of HCI’96), 1996.
[13] C. Halverson, Distributed cognition as a theoretical framework for HCI: don’t throw the baby out with the bathwater: the importance of the cursor in air traffic control, Technical Report 9403, Department of Cognitive Science, University of California, San Diego, 1994.


[14] E. Hornecker, J. Buur, Getting a grip on tangible interaction: a framework on physical space and social interaction, in: CHI 2006, 2006, pp. 437–446.
[15] L. Hossain, R.T. Wigand, ICT enabled virtual collaboration through trust, Journal of Computer-Mediated Communication 10 (1) (2004), online journal.
[16] H. Ishii, T.M. Group, Tangible Bits: Towards Seamless Interface between People, Bits, and Atoms, NTT Publishing Co., Ltd., Tokyo, Japan, 2000.
[17] H. Ishii, M. Kobayashi, J. Grudin, Integration of inter-personal space and shared workspace: ClearBoard design and experiments, in: ACM Conference on Computer-Supported Cooperative Work, 1992, pp. 33–42.
[18] C.A. Johnson, Enhancing a Human–Robot Interface Using a Sensory EgoSphere, Vanderbilt University, Nashville, TN, 2003.
[19] T. Kanade, P. Rander, P.J. Narayanan, Virtualized reality: constructing virtual worlds from real scenes, IEEE Transactions on Multimedia 4 (1) (1997) 34–47.
[20] I.P.H. Kato, M. Billinghurst, ARToolKit User Manual (Version 2.33), 2000.
[21] M.J. Kim, M.L. Maher, The impact of tangible user interfaces on designers’ spatial cognition, Human–Computer Interaction 23 (2) (2008) 101–137.
[22] N. Kock, The psychobiological model: towards a new theory of computer-mediated communication based on Darwinian evolution, Organization Science 15 (3) (2004) 327–348.
[23] P. Milgram, H. Colquhoun, A taxonomy of real and virtual world display integration, in: Y. Ohta, H. Tamura (Eds.), Mixed Reality: Merging Real and Virtual Worlds, Ohmsha/Springer-Verlag, Tokyo/Berlin, 1999, pp. 1–16.
[24] D. Nguyen, J. Canny, Multiview: spatially faithful group video conferencing, in: CHI 2005, 2005, pp. 799–808.
[25] D.T. Nguyen, J. Canny, Multiview: improving trust in group video conferencing through spatial faithfulness, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2007, pp. 1465–1474.
[26] I.M. Pepperberg, The importance of social interaction and observation in the acquisition of communicative competence: possible parallels between avian and human learning, in: T.R. Zentall, B.G.J. Galef (Eds.), Social Learning, Lawrence Erlbaum Associates, Hillsdale, 1988, pp. 279–300.
[27] B.W. Ricks, C.W. Nielsen, M.A. Goodrich, Ecological displays for robot interaction: a new perspective, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 2004, pp. 2855–2860.
[28] T. Robertson, Cooperative work and lived cognition: a taxonomy of embodied actions, in: E-CSCW’97, Kluwer, 1997.
[29] Y. Rogers, J. Ellis, Distributed cognition: an alternative framework for analysing and exploring collaborative working, Journal of Information Technology 9 (2) (1994) 119–128.
[30] G. Salomon, Distributed Cognitions: Psychological and Educational Considerations, Cambridge University Press, Cambridge, United Kingdom, 1993.
[31] S.D. Scott, K.D. Grant, R.L. Mandryk, System guidelines for co-located, collaborative work on a tabletop display, in: Eighth European Conference on Computer Supported Cooperative Work, 2003, pp. 159–178.
[32] T. Sheridan, Musings on telepresence and virtual presence, Presence: Teleoperators and Virtual Environments 1 (1) (1992) 120–126.
[33] J. Stewart, B.B. Bederson, A. Druin, Single display groupware: a model for co-present collaboration, in: SIGCHI Conference on Human Factors in Computing Systems, 1999, pp. 286–293.
[34] A. Tang, M. Boyle, S. Greenberg, Understanding and mitigating display and presence disparity in mixed presence groupware, Journal of Research and Practice in Information Technology 37 (2) (2005) 193–210.
[35] P. Tuddenham, P. Robinson, Distributed tabletops: supporting remote and mixed-presence tabletop collaboration, in: Second Annual IEEE International Workshop on Horizontal Interactive Human–Computer Systems (TABLETOP ’07), 2007, pp. 19–26.
[36] X. Wang, P.S. Dunston, Compatibility issues in Augmented Reality systems for AEC: an experimental prototype study, Automation in Construction 15 (3) (2006) 314–326.
[37] X. Wang, P.S. Dunston, Potential of augmented reality as an assistant viewer for computer aided drawing, Journal of Computing in Civil Engineering 20 (6) (2006) 437–441.
[38] X. Wang, P.S. Dunston, User perspectives on mixed reality tabletop visualization for face-to-face collaborative design review, Automation in Construction 17 (4) (2008) 399–412.
[39] X. Wang, R. Chen, An empirical study on augmented virtuality space for teleinspection of built environments, Tsinghua Science and Technology 13 (S1) (2008) 286–291, ISSN 1007-0214.
[40] X. Wang, P. Love, M. Kim, L. Wang, Studying the effects of information exchange channels in different communication modes on trust building in computer-mediated remote collaborative design, Journal of Universal Computer Science 17 (14) (2011) 1971–1990.
[41] X. Wang, P.S. Dunston, A user-centered taxonomy for specifying mixed reality systems for AEC, Journal of Information Technology in Construction 16 (2011) 493–508. http://www.itcon.org/2011/29.
[42] X. Wang, P.S. Dunston, Comparative effectiveness of mixed reality based virtual environments in collaborative design, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 41 (3) (2011) 284–296.
[43] X. Wang, P. Love, R. Klinc, M. Kim, P. Davis, Integration of E-learning 2.0 with Web 2.0, Journal of Information Technology in Construction 17 (2012) 387–396. http://www.itcon.org/2012/26.



Prof. Xiangyu Wang holds the Curtin-Woodside Chair Professorship for Oil, Gas & LNG Construction and Project Management and is the Co-Director of the Australasian Joint Research Centre for Building Information Modelling (BIM), Curtin University. Professor Wang is an internationally recognised leading researcher in the fields of construction IT, Building Information Modelling (BIM), lean construction, visualization technologies, project management, and training, and has published numerous peer-reviewed technical papers. He is the Chair of the Australian National Committee of the International Society for Computing in Civil and Building Engineering. He has presented numerous keynote speeches at international and industrial conferences on BIM, construction and project management, and VR and AR research and practice. He is currently the Editor-in-Chief of Visualization in Engineering, an international research journal hosted by Springer-Verlag. His work with Woodside Energy Ltd. and other industry partners has won numerous awards, including runner-up for the 2012 Curtin Commercial Innovation Award.

Dr. Peter E.D. Love is a John Curtin Distinguished Professor at Curtin University and a Fellow of the Royal Institution of Chartered Surveyors. He has varied research interests, which include engineering design and management, and information systems evaluation. He has published more than 600 research papers, which have appeared in journals such as the European Journal of Operational Research, Journal of Management Studies, IEEE Transactions on Engineering Management, International Journal of Production Research and International Journal of Production Economics.

Dr. Mi Jeong Kim is an Assistant Professor of Housing and Interior Design at Kyung Hee University, Republic of Korea. She received her Ph.D. (2007) in the Key Centre of Design Computing and Cognition at the University of Sydney. She worked as a postdoctoral fellow in the Engineering Research Support Organization at UC Berkeley before joining Kyung Hee University. Her current research interests include design studies, housing studies, and human-computer interaction. Her email address is [email protected] and her postal address is Department of Housing and Interior Design, College of Human Ecology, Kyung Hee University, Hoegi-dong, Dongdaemun-gu, Seoul 130-701, Republic of Korea.

Wei Wang is a postgraduate research student at the University of Sydney. He specializes in human-computer interaction, design computing, and computer-mediated human-human interaction. He holds a bachelor's degree from China and a master's degree in information technology from the University of Sydney.