Knowledge Bricks—Educational immersive reality environment

Tomasz Hachaj a,∗, Danuta Baraniewicz b

a Pedagogical University of Cracow, Institute of Computer Science and Computer Methods, 2 Podchorazych Ave, 30-084 Cracow, Poland
b Pedagogical University of Cracow, Institute of Special Needs Education, R. Ingardena 4, 30-060 Cracow, Poland
Article history: Available online xxx

Keywords: Virtual reality; Gesture Description Language; Natural user interface; Gestures and actions recognition; Gestures knowledge base management
Abstract

The purpose of this paper is to describe our newest achievement – the scientific concept of a novel Virtual Reality Educational System and its implementation – the Knowledge Bricks application. The synthesis of the various scientific methodologies and technical approaches that we use led to this unique, novel and fully practical solution. In our opinion, one of the most valuable elements of Knowledge Bricks is the application of the Gesture Description Language technology as both a natural user interface and an easy-to-manage gesture knowledge base that teachers can use when designing virtual worlds or visual information management applications. The task-based architecture of the virtual world enables a clear definition of the interaction goal. What is more, our implementation based on off-the-shelf hardware has proven that deploying such an application in schools does not require expensive hardware or dedicated classrooms. We evaluated our approach on a group of students of different ages, particularly of pre-school and primary school age, for whom we believe our proposition will be the most valuable and attractive. Based on this evaluation we can conclude that our natural user interface can be successfully used by all the considered age groups. We believe that 6–9-year-olds are the best target group for our system. © 2015 Elsevier Ltd. All rights reserved.
1. Introduction

The increasing computational power, the relatively low cost of personal computers and the development of cloud computing have made it possible to run multimedia applications with virtually unlimited functionalities on off-the-shelf hardware. In addition, the new, affordable natural user interface devices can be used in various applications in which the human–computer interaction becomes easier and more attractive (Middleton et al., 2013; Ruppert, Reis, Amorim, de Moraes, & da Silva, 2012). Computer programs utilizing the Natural User Interface (NUI) find their commercial applications not only in entertainment (like computer games) but also in medicine (de Bruin, Schoene, Pichierri, & Smith, 2010; Ogiela & Hachaj, 2015), scientific and technical simulations (Middleton et al., 2013) and in education (Richard, Tijou, Richard, & Ferrier, 2006). The aim of this paper is to present the results of our research and the practical implementation of this type of NUI-based, virtual reality educational application. What is more, the action recognition methods we applied enable easy management of the gesture knowledge base through a rule-based approach to action descriptions, which allows us to create a unique and efficient visual information management system.
∗ Corresponding author. Tel.: +48 12 662 63 22; fax: +48 12 662 61 66.
E-mail addresses: [email protected] (T. Hachaj), [email protected] (D. Baraniewicz).
1.1. State of the art

Virtual reality and its applications form a very broad subject (Houliez & Gamble, 2013; Zhao, 2009; Zhou & Deng, 2009). Despite the easy accessibility of the necessary hardware and software, there are few immersive virtual reality applications designed specifically for educational purposes (Virtual Reality Educational applications – VREa).

1.1.1. Virtual reality in education

The literature describes some interesting ideas for VREa, but they are mainly at the conceptual stage of development. For example, Eggarxou and Psycharis (2007) present a Virtual Reality Modeling Language (VRML) exploration of the Erechtheum in Athens. It is addressed to 4th grade students and constitutes a teaching approach based on various representations created in the VRML language. The design principles and technical characteristics of the application are described. Cheung et al. (2008) present a virtual, interactive, student-oriented learning environment (VISOLE) which represents a game-based constructivist pedagogical approach encompassing the creation of an online interactive world modeled on a set of
interdisciplinary domains in which students participate as “citizens”, cooperatively and competitively shaping the development of the virtual world as a means to build their knowledge and skills. The virtual world deployed is a farming system including the domains of cultivation, horticulture and pasturage, situated in a competitive economy governed by good public policies. The design and implementation of FARMTASIA follow three key principles. The first is to make the game as realistic as possible so that students can learn in an almost real-life environment; the second is to provide elements motivating the students to continue learning and gaining various knowledge and skills in the game; and the third is to make it easy for teachers to conduct various VISOLE facilitation tasks. In Richard et al. (2006), it has been suggested that immersive virtual reality technology gives a knowledge-building experience and thus provides an alternative educational process. Important key features of constructivist educational computer-based environments for science teaching and learning include interaction, size, transduction and reification. The authors of that paper also describe an application allowing students to experience the abstract concept of the Bohr atomic model and the quantization of energy levels. Different configurations supporting interaction, size and reification through the use of immersive and multi-modal (visual, haptic, auditory and olfactory) feedback are proposed for further evaluation. The literature emphasizes the role of the way the user is immersed in a VREa. Mikropoulos (2006) investigates the effect of presence on learning outcomes in educational virtual environments on a sample of 60 pupils aged 11–13; the effects of personal presence, social presence and the participant’s involvement on certain learning outcomes had already been studied. Results show that the existence of avatars as the pupils’ representations enhanced their presence and helped them to successfully perform their learning tasks. The pupils had a high sense of presence in both modes of VREa presentation, the projection on a wall and the head mounted display (HMD). According to the authors, the socialized virtual environment seems to play an important role in learning outcomes. The pupils had a higher sense of presence and completed their learning tasks more easily and successfully if their egocentric representation model used an HMD. Virvou and Katsionis (2008) emphasize the problem of VREa interface design. According to this article, educational software games aim at increasing the students’ motivation and engagement when they learn. However, if software games are targeted at school classrooms, all students must be able to use them and like them. The usability of virtual reality games may be a problem because these games tend to have complex user interfaces to make them more attractive. Moreover, if these games acquire an educational content, they may become less attractive and appealing to users who are familiar with commercial games. In addition, the main empirical contribution of Harrington’s (2012) report concerns the impact of two user interface design parameters, namely graphical fidelity and navigational freedom, on learning outcomes.
According to it, there is strong empirical evidence to support the use of desktop virtual environments, built using high-fidelity, photo-realistic, free-navigation game engine technology, as educational simulations for informal education. Similar results are reported by Smith and Ericson (2009). Their study was conducted as part of a larger, more comprehensive long-term research project aimed at combining the two techniques and demonstrating a novel application of the result, using immersive VR to help children learn about fire hazards and practice escape techniques. Results indicate that students were more engaged by the new game-like learning environment and said that they found the experience fun and intriguing.
According to Sánchez, Barreiro, and Maojo (2000), one of the main problems with virtual reality as a learning tool is that there are hardly any theories or models that could serve as the basis and rationale for developing applications. That paper presents a model advocating the metaphorical design of educational virtual reality systems. The goal is to build virtual worlds capable of embodying the knowledge to be taught: the metaphorical structuring of abstract concepts looks for bodily forms of expression in order to make knowledge accessible to students. Among VREa there are those which facilitate so-called virtual classes. For example, Di Blas and Poggi (2007) present the project Learning@Europe, its main features and its outcomes, finally discussing the role of social virtual presence in building effective and lively communities. Learning@Europe is a VR-supported educational service that has involved more than 1000 students from 6 different European countries. These types of VREa are reported to have been successfully used in higher education. For example, Morales, Bang, and Andre (2013), Glowacz and Glowacz (2007), and Mengoni, Germani, and Peruzzini (2011) summarize case analyses of a new high school learning method making use of VR simulation and design systems. Of course, we have to be aware that virtual environments permit the exchange of useful (and useless) information, but their educational value is limited because no genuine human contact is experienced, even when they do provide highly mediated social interactions (Noonan & Coral, 2013).

1.1.2. Gesture-based natural user interfaces

The subject of gesture recognition and natural user interfaces includes touchless interfaces based on full body movement analysis. Hachaj and Ogiela (2014a) present a state-of-the-art discussion of gesture recognition approaches. In addition, a large part of the publication by Ogiela and Hachaj (2015) is devoted to methods and applications of natural user interfaces. The solution proposed by Hachaj and Ogiela (2014a) for creating and managing a gesture knowledge base uses syntactic approaches that are known for their applications, for example, in economy (Ogiela, 2013; Ogiela & Ogiela, 2014a) and cryptography (Ogiela & Ogiela, 2010, 2012, 2014b). The process of designing an immersive VR system comprises challenging tasks that require not only adequate hardware and a gesture recognition method but also an exhausting calibration and validation process. For this reason, this type of interface should also enable the appropriate reduction of the vast quantity of data gathered during the interaction, the integrated analysis of online and offline events, and interactions between qualitative and quantitative data (Feldon & Kafai, 2008).

1.2. Our motivation

In summary, to be usable for educational purposes, a VREa has to satisfy certain conditions. From the teacher’s point of view, the potential benefits of using the application in the educational process have to be clear. The target group of the application also has to be clearly identified. What is more, the application must feature an easy-to-use content management system which the teacher can use to prepare the educational materials for the students (or download these materials from a dedicated website). An attractive data presentation method has to motivate students to learn and solve the given tasks.
In a system where human–computer interaction is based on action recognition, there is a need for a method of creating a gesture description database that is easy to manage, meaning that adding, removing, altering and combining gesture descriptions can be done relatively easily. Finally, the visual appearance of the application should not obstruct the main goal of working with the program, that is, education.
From the student’s point of view, tasks to be solved have to be presented clearly, attractively and in a challenging way. The goal of the interaction with the application also has to be clear and the interface intuitive. Finally, if virtual reality is used, the principles of the virtual environment have to be known so as not to create a sense of alienation that makes the interaction uncomfortable. The reason for writing this paper was to describe our newest achievement – the scientific concept of a novel VREa and its implementation – the Knowledge Bricks application. The synthesis of the various scientific methodologies and technical approaches we use creates a unique and novel solution fully usable in practice. Knowledge Bricks satisfies all of the VREa requirements mentioned at the beginning of this section. We have to emphasize that we did not intend to create a global, virtual-classroom-type cooperative environment but rather a VREa dedicated to a single class. In our opinion, one of the most valuable elements of Knowledge Bricks is the application of the Gesture Description Language (GDL) technology in both the natural user interface and the easy-to-manage gesture knowledge base usable by teachers when they design virtual worlds. The task-based architecture of the virtual world allows the goal of the interaction to be clearly defined. Our implementation is also based on off-the-shelf hardware, which proves that deploying such an application in schools does not require expensive hardware or dedicated classrooms. We evaluated our approach on a group of students of different ages, but especially of pre-school and primary school age, for whom we believe our proposition will be the most valuable and attractive. Because our interface is based mainly on full-body actions including walking, we had to take into consideration some cognitive phenomena such as distance perception in VR (Kelly, Donaldson, Sjolund, & Freiberg, 2013). What is more, even in the most sophisticated and costly VR systems people do not necessarily perceive and behave the way they would in the real world. This might be related to our inability to use embodied (and thus often highly automated and effective) spatial orientation processes (Riecke, Sigurdarson, & Milne, 2012). However, based on research (Ambinder, Wang, Crowell, Francis, & Brinkmann, 2009), we knew from the start that people with a basic knowledge of geometry can learn to make spatial judgments of the length of, and the angle between, line segments embedded in a VREa.

2. Materials and methods

We have created a scientific concept (and then an implementation) of a virtual reality environment based on new research and several components, both our own and third parties’. In this section, we describe the role of each of these modules in the overall solution.

2.1. Full body gesture recognition

The basic part of our VREa solution is a full-body-activity-based natural user interface. Using it, one can control the behavior of his or her avatar in the virtual environment. We decided to use this type of interface instead of a traditional one (like a mouse, a keyboard or a game pad) for several reasons. Firstly, the system should be very intuitive to control, even for children who are not familiar with modern computer technology; children who cannot read yet should also be able to interact with our solution. Secondly, the gesture-based NUI reinforces the impression of being embodied in an avatar.
We also wanted to motivate the user to do some physical exercise, which is important at every stage of human development. In order to create this interface, we needed both hardware (a motion capture device) and a software solution (drivers and a gesture classifier). We used a Microsoft Kinect sensor as the hardware with Kinect
SDK drivers and a software segmentation algorithm (Shotton et al., 2011). We used the GDL method as the classifier. GDL has already proven useful as an NUI (Hachaj & Ogiela, 2014a; Ogiela & Hachaj, 2015) and as a classifier for Shorin Ryu Karate techniques (Hachaj, Ogiela, & Piekarczyk, 2014). The GDL specification has been published elsewhere (Hachaj & Ogiela, 2014a; Ogiela & Hachaj, 2015). The left part of Fig. 1 presents the schema of our solution’s configuration during user interactions. It requires a PC, a Microsoft Kinect sensor and an LCD screen or a multimedia projector, as well as about 6 m² of free space. The GDL is a rule-based classifier that uses a knowledge base (similar to that of an expert system) on which the inference engine performs forward-chaining reasoning. The gestures to be recognized are described in a special context-free grammar called a GDL script (GDLs). In our system, we used GDLs 1.1 and the same implementation as in our other applications, for example the Gesture Description Language Studio (Official website of GDL project). There are two possible ways to create a GDLs description: it can be written by a programmer or it can be automatically generated by an unsupervised learning method (the so-called R-GDL approach (Hachaj & Ogiela, 2014b, 2014c)). Automatically generated rules can also be modified manually, and hand-written rules can be enhanced with automatically generated scripts. There are no restrictions on managing the GDLs gesture knowledge base: rules can be manually added, removed or edited, which is an innovation for this type of NUI-driven system. Thanks to this, a user who designs a virtual world has great freedom in developing interaction scenarios. In the following sections we will identify the gestures used in our NUI.
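To make the rule-based mechanism more concrete, the sketch below shows, in C# (the language of our implementation), how a minimal GDL-style engine could forward-chain over pose rules evaluated against tracked joints. The Rule type, the joint naming and the example conditions are our own illustrative assumptions; this is not the actual GDLs 1.1 grammar nor the real engine API.

```csharp
using System;
using System.Collections.Generic;

// A tracked joint position (meters, sensor space), as delivered by the Kinect SDK.
public record Joint3D(float X, float Y, float Z);

// One rule of the knowledge base: a condition over the skeleton and the
// conclusions already derived for this frame; firing it asserts a new fact.
public class Rule
{
    public string Conclusion = "";
    public Func<IReadOnlyDictionary<string, Joint3D>, ISet<string>, bool> Condition
        = (_, _) => false;
}

public static class GdlStyleEngine
{
    // Forward chaining: rescan the rule list until no rule adds a new
    // conclusion. Rules may reference earlier conclusions, which is how
    // composite gestures can be built from simple poses.
    public static ISet<string> Infer(
        IList<Rule> rules, IReadOnlyDictionary<string, Joint3D> skeleton)
    {
        var facts = new HashSet<string>();
        bool changed;
        do
        {
            changed = false;
            foreach (var rule in rules)
            {
                if (facts.Contains(rule.Conclusion)) continue;
                if (rule.Condition(skeleton, facts))
                {
                    facts.Add(rule.Conclusion);
                    changed = true;
                }
            }
        } while (changed);
        return facts;
    }
}
```

In this sketch, an editable rule such as “the right hand is above the head” would simply be `new Rule { Conclusion = "HandUp", Condition = (s, _) => s["RightHand"].Y > s["Head"].Y }`; in the real system such conditions live in GDLs text, not in compiled code, which is precisely what makes the knowledge base manageable by teachers.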
2.2. Virtual reality environment

In order to create a virtual reality environment, a rendering algorithm must be used. Two groups of rendering engines are nowadays used for this task: photorealistic (for example, Hachaj & Ogiela, 2012a, 2012b) and non-photorealistic engines. The first group includes e.g. modern computer game engines, but in order to create a new virtual world (we will also call it a map or a level) in which the user is situated, the world architect very often needs to know a specialized CAD system. Hence it is often impossible to create a new location in a photorealistic engine without previous experience or extensive training. On the other hand, there are now many non-photorealistic engines that use some simplifications in order to make virtual world modeling very easy and intuitive. They include voxel engines. In a voxel engine, the world is modeled with piles of cubical elements of a uniform size (voxels) that differ from one another mainly in the color pattern (the texture). World creation is based on arranging these voxels to resemble objects from the real world. One can also define multiple light sources. One of the best known engines of this type is used in the computer game Minecraft (Official website of Minecraft). The popularity of Minecraft in all age groups proves that despite the extreme simplification of the world architecture, users accept it in exchange for easy world modeling. In our VREa, we used a public domain voxel engine called the Voxel Game (Official website of Voxel Game engine). It has capabilities similar to Minecraft, but it is open source and there are no restrictions on its use. It utilizes the OpenTK graphics library (a wrapper of OpenGL) and the C# programming language. To make it applicable to our VREa, we had to replace its keyboard- and mouse-based interface with our GDL NUI. We also had to add some elements that were not previously available in the Voxel Game in order to supply the educational context. All of these will be described in the following section. Fig. 2 presents an example visualization we have made with the Voxel Game engine. The Voxel Game has a built-in, very intuitive interface that supports building the virtual environment from pre-designed blocks.
Fig. 1. Left: the schema of our solution configuration during user interaction. Right: a diagram visualizing dependencies between entities existing in our system. This image corresponds to a class diagram of the implementation.
2.3. Knowledge Brick

We had to introduce several types of enhancements to the Voxel Game engine in order to make it a part of our VREa. A student immersed in a virtual world has to know the goal of the interaction, and the teacher has to have a tool enabling them to program a model of the interaction/exercises the student will do. To this end, we had to make it possible to implement task-based scenarios in virtual worlds. In this section, we propose the scenario-building method we implemented in our VREa. The basic assumption of the Knowledge Bricks concept is that the user has to explore a virtual world in search of special task boxes containing information and exercises placed there by the teacher. After the user reaches such a task box, they are presented with a particular task in the form of a two-dimensional virtual screen (we will later call it a page) with which they can interact using gestures. Each task can consist of multiple pages. The navigation between pages is designed by the page creator and is event-based.

2.3.1. Task structure

Fig. 1 presents a diagram visualizing dependencies between entities that exist in our system. A level is an entity containing multiple resources stored in a single compressed level configuration file. These resources include e.g. additional files used by web pages (images, sounds, CSS etc.). In addition, this entity contains the inner configuration of the virtual reality environment (the day start time, whether the game uses the fog effect etc.). Each level can have zero or more tasks (box tasks). There is one task displayed at the start of the interaction with a level and multiple tasks that can be found in boxes. These boxes can be found inside the virtual world and are
placed there by the world architect. Each task has zero or more pages. Each page represents a two-dimensional screen with which the user interacts through the NUI. A page is written in HTML (it supports the same version of HTML as the IE browser installed on the system). Apart from the HTML code, every page can also have GDLs code; zero, one or many events attached to it; and some display configuration data, e.g. whether we would like to hide the cursor and whether we want to show the depth stream of the Kinect sensor and, if so, in a small or a large window. When an event is fired, it redirects the user to another page from the same task, or it closes the 2D window and sends them back to the 3D world. There are three types of events, as sketched in the example below. The first is an HTML object click event – this event is fired if the user holds the cursor over the defined HTML object for 3 s. We define a “NUI-active” HTML object by giving it a name tag in the HTML page definition. The second is a time event – this event is fired after a defined number of seconds. The third is a GDLs rule satisfied event – this event is fired if a rule with a conclusion defined in the GDL section of a page is satisfied at least Count times, where Count is specified in the event definition. All these configurations are defined using a dedicated managing application we have developed, called the Knowledge Brick LVL Editor (see Fig. 3).
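To make this structure concrete, the C# sketch below models the level → task → page → event hierarchy and the three event types described above. All type and member names are our own naming assumptions for illustration, not the actual classes of the Knowledge Bricks implementation.

```csharp
using System.Collections.Generic;

// Types of events that can redirect the user to another page
// or close the 2D window and return them to the 3D world.
public enum EventKind
{
    HtmlObjectClick, // fired after the cursor dwells on a named HTML object for 3 s
    Timer,           // fired after a defined number of seconds
    GdlRuleSatisfied // fired when a GDLs conclusion is satisfied Count times
}

public class PageEvent
{
    public EventKind Kind;
    public string HtmlObjectName = ""; // name tag of the "NUI-active" HTML object
    public double Seconds;             // delay for Timer events
    public string GdlConclusion = "";  // conclusion name for GdlRuleSatisfied events
    public int Count = 1;              // required number of satisfactions
    public string? TargetPageId;       // null = close the 2D window, back to 3D
}

public class Page
{
    public string Id = "";
    public string Html = "";           // rendered by the embedded IE engine
    public string GdlScript = "";      // optional GDLs code attached to the page
    public bool HideCursor;
    public bool ShowDepthStream;       // small or large Kinect depth preview
    public List<PageEvent> Events = new();
}

public class BoxTask
{
    public (int X, int Y, int Z)? BoxPosition; // null for the task shown at level start
    public List<Page> Pages = new();
}

public class Level
{
    // Inner configuration of the VR environment plus embedded resources
    // (images, sounds, css), stored in one compressed level file.
    public double DayStartTime;
    public bool FogEnabled;
    public Dictionary<string, byte[]> Resources = new();
    public List<BoxTask> Tasks = new();
}
```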
2.3.2. Natural user interface

Our gesture-based user interface is strictly connected with the tasks the user executes in the virtual world. The NUI is disabled when the user is too close to the data capturing sensor. We set the distance threshold to 220 cm, because our observations show that when an adult user (about 180 cm tall) crosses this distance, our data tracking software is very often unable to track all of their body joints.
Fig. 2. An example visualization we have made using the Voxel Game engine. A user can travel through these locations and interact with them.
Fig. 3. A screenshot of the Knowledge Brick LVL Editor with an example level file loaded.
From the NUI perspective, there are three basic types of tasks a user can execute. The first is Select and point – these are tasks a user performs on two-dimensional screens (pages). The position of the cursor on the screen is calculated as the position of the right hand relative to the point in the middle of the user’s chest. The “click” action is fired when the cursor stays over a “clickable” object for at least 3 s. This select and point interface is implemented outside the GDL paradigm. The second are Movement tasks – these are examined with GDLs stored inside the page definition. They are used when the page uses a GDL rule satisfied event. A task designer can use them e.g. to check if the user performed some type of physical activity and, if so, redirect them to another page. The last are Explore the world tasks – this part of the interface is used while the user is traveling through the virtual world. There are four types of explore the world movements: walking (or running) straight (walking/running can be performed simultaneously with any other command), turning right, turning left and jumping. All these commands are implemented as GDL heuristics. In order to make the system immersive, we have designed these movements to be as similar to natural movements as possible. However, we must be aware that the NUI should be accessible to people of different ages, agility levels and body proportions. Consequently, some simplifications are necessary, and we have decided not to apply the R-GDL technology here but rather to design the movements manually. Jumping is described as a continued rise of the y (vertical) coordinate of the central part of the body (the spine). A jump also moves the person forward in the same manner as if the “walk forward” command was executed. In our VR, one can jump over a single voxel block, but walls two voxels high are too high to leap. The right and left turns are realized by the shoulder balance. Rotation can be executed at two speeds (depending on the angle balance threshold): for the first threshold, the rotation speed is 0.5 rad/s; for the second, it is 2 rad/s. As one can only walk forward, in order to move in the opposite direction one has to turn 180◦, which takes about 1.5 s. Walking is represented by a sequence of two key frames. The GDL examines the right and left knee flexion and the relative vertical positions of the knees. These descriptions are satisfied when a person is walking (or walking in place). The GDLs for running are very similar, but the knee flexion must be greater than during “walking” and the movement frequency must be at least two times higher than in the “walking” definition. The walking speed is 4 voxels/s while the running speed is 16 voxels/s.
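As a minimal sketch of the select and point interface described above, the following C# fragment maps the right hand position (relative to the chest center) onto screen coordinates and fires a “click” after a 3 s dwell; it also applies the 220 cm distance gate from Section 2.3.2. The gain constants and member names are our own assumptions for illustration, not the values of the actual implementation.

```csharp
using System;

public class SelectAndPointCursor
{
    private const double DwellSeconds = 3.0; // dwell time that triggers a "click"
    private const double MinDistanceM = 2.2; // NUI disabled closer than 220 cm
    private DateTime _dwellStart;
    private string? _hoveredObject;

    // Maps the right hand, taken relative to the middle of the chest (meters),
    // onto pixel coordinates; here a +/-0.3 m reach spans the whole screen.
    public (int X, int Y) ToScreen(
        float handX, float handY, float chestX, float chestY, float chestZ,
        int screenW, int screenH)
    {
        if (chestZ < MinDistanceM)
            return (-1, -1);                 // user too close: cursor inactive
        int x = (int)(screenW * (0.5 + (handX - chestX) / 0.6));
        int y = (int)(screenH * (0.5 - (handY - chestY) / 0.6));
        return (Math.Clamp(x, 0, screenW - 1), Math.Clamp(y, 0, screenH - 1));
    }

    // Returns the name of the clicked "NUI-active" HTML object once the cursor
    // has stayed over the same object for the full dwell time, otherwise null.
    public string? Update(string? objectUnderCursor, DateTime now)
    {
        if (objectUnderCursor != _hoveredObject)
        {
            _hoveredObject = objectUnderCursor; // hover target changed: restart timer
            _dwellStart = now;
            return null;
        }
        if (_hoveredObject != null && (now - _dwellStart).TotalSeconds >= DwellSeconds)
        {
            _dwellStart = now;                  // rearm so the click fires only once
            return _hoveredObject;
        }
        return null;
    }
}
```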
Forward movements have inertia, which modifies the movement speed over time:

\[
v(j) =
\begin{cases}
v_0 \cdot \left(\dfrac{1}{1.1}\right)^{j} & \text{if } v(j-1) > 0.1 \text{ or } j = 0,\\
0 & \text{otherwise,}
\end{cases}
\]

where v_0 is the initial movement speed and j is the index of the discrete time step (1/60 s each) that has passed since the beginning of the movement (j = 0 at the start of the movement).
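Read as code, the formula is a per-frame geometric decay that zeroes the speed once it is no longer above 0.1; a minimal C# sketch of our reading of the formula (the function name is ours):

```csharp
// Forward speed j frames (1/60 s each) after the walk command stopped:
// decays geometrically by a factor of 1/1.1 per frame and is cut to zero
// once the previous-frame speed is no longer above 0.1.
public static double ForwardSpeed(double v0, int j)
{
    double v = v0;                 // v(0) = v0
    for (int i = 1; i <= j; i++)
    {
        if (v <= 0.1) return 0.0;  // v(i-1) <= 0.1: the movement stops
        v /= 1.1;                  // v(i) = v0 * (1/1.1)^i
    }
    return v;
}
// Example: starting from the walking speed v0 = 4 voxels/s, the avatar
// coasts for about 39 frames (roughly 0.65 s) before stopping.
```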
Fig. 4. A bird’s eye view of our test map drawn on a square grid that corresponds to virtual world voxels. This plan lacks the third dimension, but is not distorted by the projection transform. The boxes indicate the most important parts of the map discussed in Section 3.
3. Evaluation

This section presents the evaluation setup and the results obtained for our VREa. Because the content of the application depends on the loaded map, which should match the age and development of the child, we concentrate on the NUI performance of our solution. One of the most important aspects we must investigate is the minimum age of a person capable of comfortably interacting with our NUI, and how well people of different ages overcome the embodiment problem mentioned in Section 1.2. After finding this out, we will be able to determine whether the body awareness we acquire during development affects our performance in an immersive virtual reality. The results obtained will be important not only for VREa but for the entire field of immersive VR systems. We will also know whether people of different ages enjoy solving tasks while immersed in a virtual reality and are motivated by this. Finally, we will be able to indicate the best target group for our solution, i.e. the youngest group of people capable of interacting with our VREa who are young enough to benefit from this type of task-based education. In order to do so, we have to formally (numerically) evaluate our approach using a test map. We measure the time each user spends performing a particular task on our test map: the shorter the time a person struggled with a task, the more intuitive the solution, so this is a clear indicator that the embodiment was successful. We decided to use the same map for all age brackets. For this reason, the content of the tasks on the map has to be understandable to all users. No user taking part in the experiment had ever used this type of VREa before. This lack of experience allowed us to evaluate the intuitiveness of our interface.

3.1. Tasks preparation

This sub-section describes the test map we have prepared for our experiment. The map is a nearly flat surface containing four task boxes which serve as check-points for users. On this map we will check all three types of our NUI listed in Section 2.3.2.
Fig. 5. A bird’s eye view visualization of our map in a voxel-based virtual environment. This visualization is enhanced by different patterns (textures) of voxels and “tree” objects that make the interaction with the system more attractive. The most important parts of the map that are discussed in Section 3 are labeled in boxes.
Figs. 4 and 5 present a bird’s eye view of the map and its final visualization. The evaluation procedure is organized as follows. First, the person collecting test data (called the experimenter below) explains the aim of the study and the goal of the interaction with the virtual world to the experiment participant. As stated above, this is the participant’s first contact with our VREa. The tester is informed that they need to find four colored boxes on the virtual map and follow the instructions “inside” those boxes. The experimenter shows them how to move, rotate and jump in the virtual reality. They explain that the correct answers on two-dimensional screens (the “select and point” interface) should be selected using the hand-controlled cursor. If the person cannot read yet, the experimenter reads the instructions from the 2D screens to them. At the second stage, the user stands in front of a screen – see Fig. 1 (we used a digital projector, but it could also be a large LCD TV) – displaying the visualization from the system. They have to perform several actions displayed on pages that prepare them to interact with the virtual reality. At first, the person is asked to raise their right hand above their head. This is the easiest task, which aims at calibrating the Kinect SDK tracking and checking whether the user understands instructions from the system. We will later refer to this task as Hand up.
Fig. 6. Four example visualizations from the map we prepared for our system evaluation. The visualizations are from the first person perspective, i.e. the way a user sees it. Descriptions that correspond to those from Figs. 4 and 5 are shown in boxes.
On the next 2D screen the user is asked to select one of two images (a test of the select and point interface). The exact instruction is to “Select the tree” when a house and a tree are shown. We will later refer to this task as 1 from 2. Then, the user is asked to select one of three images (again a test of the select and point interface). The exact instruction is to “Select the image on the wall” when an image, a bookshelf and a house are shown (see Fig. 7 on the left). We will later refer to this task as 1 from 3. The remaining pages test Movement tasks. These actions are also parts of the explore the world interface for the 3D visualization and aim at teaching the interface. On this page, the user is asked to walk in place for about 6 s (though this task may be completed much faster because the GDL script that governs this activity depends on the frequency of motions). We will later refer to this task as Walking. After walking, the user runs in place for about 6 s (they should raise their knees higher than during walking and move a bit faster) – Running. Finally, the user turns right – Turn r., turns left – Turn l. and at last jumps two or more times – Jump. At the third stage, the user is transferred to the three-dimensional environment and positioned at the Start location facing Box 1 (see Figs. 4–6). The aim of this task is to reach Box 1 traveling along a relatively Straight path using the explore the world interface. This task prepares the user for more advanced navigation in our VREa. We will later refer to this task as Box 1.
Fig. 7. The two-dimensional interface (pages) displayed to the user when they are performing the Select and point and Movement tasks. The captions and the graphical layout are prepared with HTML scripts as described in Section 2.3. The visualization may also contain visual feedback that shows the depth data of the Kinect sensor with the user’s segmented body.
Table 1
The results of evaluating our test map. Each row represents a single attempt by a user to interact with our test map; each user had only one attempt. The last 10 rows are the means and standard deviations of the values in each column, grouped by the participants’ age: Pre is the preschool group, Pri are primary school students, Sec are secondary school teenagers, Adult is the adult group and All are the statistics for all participants together. The first column shows the ordinal number of the participant, the second contains the participants’ sex, and the third their age. The remaining columns show the time, measured in seconds, the particular person spent performing each task.

| No. | Sex   | Age  | Hand up | 1 from 2 | 1 from 3 | Walking | Running | Turn r. | Turn l. | Jump | Box 1 | Box 2 | Box 3 | Box 4 | Clapping | BWS  | JJ   |
|-----|-------|------|---------|----------|----------|---------|---------|---------|---------|------|-------|-------|-------|-------|----------|------|------|
| 1   | M     | 5    | 7.1     | 11.9     | 11.9     | 8.5     | 5.0     | 8.0     | 5.5     | 7.7  | 11.0  | 91.4  | 10.1  | 22.9  | 5.1      | 9.1  | 6.4  |
| 2   | M     | 6    | 2.8     | 9.7      | 6.6      | 6.7     | 3.5     | 3.3     | 3.8     | 2.3  | 14.9  | 64.5  | 10.2  | 16.4  | 5.3      | 7.4  | 7.7  |
| 3   | M     | 6    | 2.6     | 6.2      | 6.8      | 5.8     | 4.9     | 5.2     | 8.5     | 2.6  | 10.7  | 71.8  | 10.0  | 53.7  | 3.7      | 4.3  | 5.8  |
| 4   | M     | 6    | 0.0     | 8.0      | 14.2     | 2.6     | 9.5     | 1.0     | 6.5     | 5.8  | 10.7  | 173.0 | 5.9   | 11.5  | 3.0      | 4.4  | 5.4  |
| 5   | F     | 7    | 4.2     | 5.2      | 5.2      | 13.9    | 2.6     | 1.8     | 2.5     | 2.7  | 30.1  | 25.6  | 2.4   | 6.9   | 3.9      | 5.4  | 3.7  |
| 6   | F     | 7    | 4.2     | 7.5      | 8.2      | 8.5     | 1.0     | 2.0     | 2.7     | 4.1  | 29.7  | 88.7  | 20.0  | 15.6  | 3.6      | 14.0 | 8.5  |
| 7   | M     | 9    | 1.9     | 11.5     | 4.8      | 7.3     | 2.3     | 2.1     | 2.5     | 2.2  | 11.5  | 38.5  | 10.3  | 9.5   | 0.7      | 6.3  | 6.1  |
| 8   | M     | 9    | 4.1     | 7.4      | 8.1      | 10.1    | 3.4     | 2.7     | 2.7     | 4.7  | 9.1   | 59.3  | 22.5  | 70.2  | 0.8      | 5.5  | 11.2 |
| 9   | M     | 9    | 1.6     | 7.0      | 5.1      | 6.5     | 2.6     | 1.2     | 2.5     | 2.4  | 4.4   | 28.9  | 7.0   | 6.8   | 3.0      | 5.5  | 4.3  |
| 10  | M     | 16   | 0.3     | 3.9      | 5.5      | 7.6     | 4.3     | 3.3     | 2.2     | 2.1  | 4.7   | 55.1  | 7.8   | 10.2  | 2.2      | 4.5  | 8.2  |
| 11  | F     | 16   | 0.7     | 4.8      | 5.5      | 3.9     | 5.7     | 2.8     | 2.6     | 2.0  | 11.8  | 48.8  | 9.2   | 11.6  | 1.0      | 6.9  | 19.0 |
| 12  | F     | 16   | 1.1     | 4.6      | 6.3      | 5.1     | 9.9     | 3.3     | 2.2     | 2.0  | 4.4   | 34.7  | 6.4   | 9.9   | 1.6      | 5.0  | 3.7  |
| 13  | F     | 16   | 0.7     | 3.9      | 4.6      | 5.1     | 5.3     | 2.9     | 2.2     | 2.6  | 7.3   | 55.4  | 4.6   | 12.7  | 1.1      | 5.6  | 13.6 |
| 14  | M     | 17   | 0.7     | 5.4      | 3.6      | 8.5     | 3.5     | 2.8     | 2.1     | 2.0  | 3.1   | 18.7  | 5.9   | 4.9   | 2.2      | 4.2  | 22.3 |
| 15  | F     | 20   | 0.3     | 4.0      | 6.7      | 15.0    | 5.8     | 5.0     | 1.9     | 2.2  | 7.2   | 88.2  | 15.4  | 14.0  | 0.9      | 10.3 | 3.4  |
| 16  | F     | 20   | 0.9     | 4.5      | 5.1      | 8.7     | 4.6     | 3.9     | 2.4     | 2.0  | 14.6  | 42.9  | 6.1   | 26.3  | 2.5      | 6.1  | 24.6 |
| 17  | F     | 20   | 0.2     | 3.3      | 5.1      | 2.5     | 5.7     | 3.0     | 2.6     | 1.7  | 11.2  | 45.5  | 17.9  | 12.6  | 1.1      | 5.3  | 3.6  |
| 18  | F     | 20   | 0.5     | 4.4      | 6.6      | 4.9     | 4.7     | 2.3     | 3.8     | 2.7  | 14.0  | 34.1  | 11.8  | 11.6  | 0.9      | 4.8  | 3.4  |
| 19  | M     | 20   | 1.3     | 4.4      | 4.7      | 4.3     | 4.8     | 3.0     | 2.6     | 1.8  | 10.9  | 20.4  | 10.6  | 4.6   | 1.3      | 5.8  | 4.5  |
| 20  | F     | 20   | 1.9     | 3.4      | 5.2      | 5.9     | 5.8     | 4.2     | 2.9     | 1.9  | 11.7  | 38.9  | 6.4   | 7.0   | 1.7      | 4.6  | 3.5  |
| 21  | F     | 20   | 1.1     | 4.2      | 3.3      | 9.0     | 5.1     | 4.2     | 4.8     | 1.1  | 5.5   | 36.9  | 3.5   | 47.7  | 1.6      | 5.6  | 3.8  |
| 22  | F     | 24   | 1.1     | 3.9      | 6.1      | 7.8     | 5.5     | 4.1     | 1.9     | 2.0  | 7.2   | 41.5  | 11.7  | 14.6  | 1.6      | 4.8  | 3.6  |
| 23  | M     | 24   | 0.5     | 3.5      | 4.8      | 5.4     | 6.7     | 3.0     | 2.4     | 2.4  | 7.2   | 53.7  | 7.7   | 13.6  | 2.0      | 4.7  | 3.8  |
| 24  | F     | 24   | 3.3     | 3.5      | 8.0      | 16.4    | 6.2     | 4.5     | 1.0     | 2.8  | 11.3  | 136.5 | 19.9  | 12.6  | 0.9      | 5.9  | 3.4  |
| 25  | F     | 24   | 3.4     | 4.5      | 3.7      | 5.7     | 4.9     | 4.2     | 6.8     | 5.7  | 7.2   | 51.2  | 75.9  | 77.4  | 1.1      | 10.6 | 6.1  |
| 26  | M     | 24   | 0.6     | 3.0      | 6.1      | 4.5     | 6.5     | 3.5     | 2.1     | 3.1  | 8.4   | 42.2  | 16.2  | 13.5  | 1.8      | 5.6  | 10.6 |
| 27  | M     | 24   | 1.8     | 3.3      | 6.3      | 5.4     | 5.5     | 8.2     | 1.8     | 1.2  | 11.1  | 29.1  | 21.7  | 10.0  | 1.3      | 6.4  | 4.2  |
|     | Pre   | Avg. | 3.1     | 9.0      | 9.9      | 5.9     | 5.7     | 4.4     | 6.1     | 4.6  | 11.8  | 100.2 | 9.0   | 26.1  | 4.3      | 6.3  | 6.3  |
|     | Pre   | SD   | 2.9     | 2.4      | 3.8      | 2.5     | 2.6     | 3.0     | 2.0     | 2.6  | 2.1   | 49.8  | 2.1   | 19.0  | 1.1      | 2.3  | 1.0  |
|     | Pri   | Avg. | 3.2     | 7.7      | 6.3      | 9.3     | 2.4     | 2.0     | 2.6     | 3.2  | 17.0  | 48.2  | 12.4  | 21.8  | 2.4      | 7.3  | 6.7  |
|     | Pri   | SD   | 1.3     | 2.3      | 1.7      | 2.9     | 0.9     | 0.6     | 0.1     | 1.1  | 12.1  | 26.1  | 8.6   | 27.3  | 1.5      | 3.7  | 3.1  |
|     | Sec   | Avg. | 0.7     | 4.5      | 5.1      | 6.0     | 5.7     | 3.0     | 2.3     | 2.1  | 6.3   | 42.6  | 6.8   | 9.9   | 1.6      | 5.2  | 13.4 |
|     | Sec   | SD   | 0.3     | 0.6      | 1.0      | 1.9     | 2.5     | 0.2     | 0.2     | 0.3  | 3.5   | 15.7  | 1.8   | 3.0   | 0.6      | 1.1  | 7.6  |
|     | Adult | Avg. | 1.3     | 3.8      | 5.5      | 7.4     | 5.5     | 4.1     | 2.8     | 2.3  | 9.8   | 50.8  | 17.3  | 20.4  | 1.4      | 6.2  | 6.0  |
|     | Adult | SD   | 1.0     | 0.5      | 1.3      | 4.1     | 0.7     | 1.4     | 1.5     | 1.2  | 2.9   | 30.3  | 18.5  | 20.3  | 0.5      | 2.0  | 5.9  |
|     | All   | Avg. | 1.8     | 5.4      | 6.2      | 7.3     | 5.0     | 3.5     | 3.2     | 2.8  | 10.8  | 56.1  | 13.2  | 19.6  | 2.1      | 6.2  | 7.6  |
|     | All   | SD   | 1.7     | 2.5      | 2.4      | 3.4     | 1.9     | 1.7     | 1.8     | 1.5  | 6.4   | 34.8  | 13.7  | 19.3  | 1.3      | 2.3  | 5.9  |
Fig. 8. How much time each participant spent accomplishing a particular task while interacting with our test map. Each vertical bar represents a single user and is marked with the No. of the participant (compare to Table 1). Bars are of several colors; each color corresponds to a single task, as explained in the data labels of the figure. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 9. The average time a user spent accomplishing a particular task. The results are grouped by the participants’ age just like in Table 1. The red bar is the plus–minus standard deviation of this time. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
After reaching Box 1, the user is asked on a 2D page to reach Box 2. In order to do this, they must travel along a Winding path, which requires several left and right turns (see Figs. 4 and 5). Finalizing this task requires more advanced body awareness than the Box 1 task. We will later refer to this task as Box 2. After reaching Box 2, the user has to reach Box 3, which is situated on top of a Pile of boxes (see Figs. 4–6). We will later refer to this task as Box 3. The last test of the explore the world interface is to reach Box 4, located inside a small house; the user has to go through a Narrow door that is relatively difficult to pass. We will later refer to this task as Box 4 (see Figs. 4–6). The last three tasks are displayed to the user on a set of pages and they are all Movement tasks. Completing them requires executing more advanced whole body movements. The aim of this test is to check whether the GDLs rules generalize well enough across different ages, body proportions and movement patterns. The user has to clap
their hands – Clapping, then do three body weight squats – BWS and finally three Jumping Jacks – JJ. The right-hand side of Fig. 7 shows the page view presented to the user when they are performing JJ. This is the typical visual feedback that visualizes the depth data of the Kinect sensor with the user’s segmented body; we show the same type of visualization in all Movement tasks. After finishing the last task, the interaction with our VREa ends. We use manual GDLs definitions for the hand up, walking, running, turn l., turn r., jump and clapping actions. In the case of the BWS and JJ, we utilized the R-GDL methodology (Hachaj & Ogiela, 2014b, 2014c). The training dataset consists of 87 complete body weight squat SKL recordings of 10 persons and 92 jumping jack recordings, also of 10 adults, aged 28+. It has to be emphasized that no individual from the training dataset later took part in the experiment presented in this paper. The shortest path between the Start location and Box 1 measured 29 voxels (7.25 s of walking, given the 4 voxels/s walking speed), between Box 1 and Box 2 it was 79 voxels (19.75 s), between Box 2 and Box 3 14 voxels (3.5 s) and between Box 3 and Box 4 16 voxels (4 s).
Fig. 10. How much time a user spent accomplishing the Box tasks. The time data is grouped by task name so it is easier to see the distribution of time among the samples. Colored bars represent users; each color is assigned to a single user, as shown in the data labels of the figure. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
However, it must be noted that these walking times do not include the rotation or jumping actions which are often necessary to finish these tasks.

3.2. Numerical results

The experiment was performed on two consumer-quality PCs. The first was equipped with an Intel Core i7-4470 CPU at 3.40 GHz, 8 GB of RAM and an Nvidia GeForce GTX 770 graphics card, and ran 64-bit Windows 7 Home Premium. The second had an Intel Core i5-2320 CPU at 3.00 GHz, 8 GB of RAM and an AMD Radeon HD 6570 graphics card, and also ran 64-bit Windows 7 Home Premium. This hardware and software configuration is sufficient for our needs and enables Kinect data tracking at a frequency of 30 Hz (the maximum possible) and 3D rendering at 60 Hz, which is typical for multimedia systems of this type. Our VREa was evaluated by 27 persons: 13 men and 14 women of different ages, education, agility and body proportions. They can be divided into four age groups: preschool children (4 boys), primary school children (2 girls and 3 boys), secondary school teenagers (3 girls and 2 boys) and adult students (9 women and 4 men). All of them declared that they had no previous experience with this type of immersive virtual reality system. It has to be noted that typical “Kinect-based” games do not allow the same freedom of movement and free navigation through the environment. The distance from the data capturing sensor varied from 2.2 to 3.5 m. If the user left this area, the system indicated this by changing the color intensity of the visualization. Table 1 presents the results of evaluating our test map. Figs. 8–10 visualize the results from Table 1 in order to simplify their analysis and discussion.

4. Discussion

The results appear to be quite satisfactory. All experiment participants accomplished all tasks within a reasonable time. First, we can note that the overall time a user needs to finish the tasks correlates with his or her age. In the case of preschool children it amounts to 212.7 ± 40.1 s, while in the primary school group it is 152.4 ± 62.7 s, in the secondary school group 115.2 ± 19.3 s and in the adult group 144.9 ± 53.4 s. Apparently teenagers need the shortest time to finish these tasks, and the standard deviation in this dataset is also the lowest.
The standard deviation is relatively high (in our opinion mainly because the number of samples in each age group is limited), but the results support our observations during the experiment and are consistent with expectations. What is more, in nearly all cases the average time a teenage user needs to accomplish a particular task is shorter than the corresponding time for other age groups (see Figs. 8 and 9). This observation is easy to justify: teenagers seem to be the most agile of all the considered groups and have physical education at school. Teenagers also have a lot of experience with modern computer technologies. For all these reasons, we have decided to use teenagers as the reference group in further discussion. We can easily notice that the differences in average times between teenagers and other groups are not very large. In the case of the select and point tasks, no participants had any problems adapting to this interface type. Pre-school children required about 9–10 s to indicate the correct answer (including the 3 s in which they had to hold the hand cursor over the image to select it); adults, secondary school teenagers and primary school children needed about 3–6 s. This means that this type of interface does not cause any serious interaction problems. With regard to Movement tasks, the GDL classifier approach proved capable of generalizing to different body proportions and movement patterns. The Hand up and Clapping gestures were quite obvious, but even in the case of more complicated full-body actions like the BWS and JJ, almost all gestures were correctly recognized at once. According to our observations, JJ was the most difficult for teenage users: compared to other groups, they needed a relatively long average time to accomplish this task (13.4 s). This phenomenon is easy to explain: three persons from this age group struggled to do jumping jacks correctly, because this exercise requires good, simultaneous coordination of hands and legs. However, the overall results reassured us that the R-GDL approach to unsupervised GDL rule generation is right for our system. The last group of tasks we examined were Explore the world tasks. They included Walking, Running, Turn r., Turn l., Jump (as mentioned before, these are parts of the Explore the world interface) and the Box tasks (1–4). The first five tasks were completed very quickly by all age groups. The Box tasks are the most interesting because they evaluate the user’s cognition and behavior in a three-dimensional immersive environment. Fig. 10 shows how much time a user spent accomplishing a particular Box task. The data is grouped by task name to make the distribution of time among samples easier
to see. The minimum possible completion times (while walking) mentioned in Section 3.1 are unrealistic, as they do not take into account the jumping and turning actions. However, several participants came very close to the minimum, or were even faster because they were running (see participant 14 in Table 1 as an example). The most difficult task (the one that required the longest time to complete) was Box 2, because it had the longest path and also required several turns. The narrow door of Box 4 could also have been difficult, but users accomplished this last Explore the world task in a mean time of 19.6 s. The standard deviation for all considered NUI elements is high; in the case of Explore the world, its value is close to the average. This is, of course, a sign that the time the participants needed to complete the tasks differed substantially. However, as discussed before, even the longest time occurring in our dataset does not disqualify our NUI on the grounds of latency, because it stays within an acceptable range, and we did not observe any signs of impatience or frustration in the examined group.
5. Conclusions

The discussion in the previous sections allows us to conclude that all three elements of our NUI can be successfully used by all the considered age groups. Of course, in the case of teenagers and adults, we cannot use this system for task-based educational purposes, for obvious reasons. Still, the results obtained are very important because they prove that our full body gesture-based interface can be operated by a wide group of potential users. In our opinion, the best target group for our system consists of children aged from 6 to 9, because at this early stage of education many school exercises have a form similar to our select and point interface. What is more, we observed that the children greatly enjoyed exploring the three-dimensional world and were motivated to find boxes with new tasks and quickly solve them in order to learn something new and continue the exploration. Movement and Explore the world tasks also motivated children to do physical exercises. The proposed solution offers tremendous opportunities in the early education process. The teacher can show the class any type of data that can be embedded in the HTML technology and can enhance the presentation with multimedia virtual reality features. For example, when teaching vocabulary during a foreign language class, the teacher can very easily build a three-dimensional world visualizing the discussed environment. The teacher could use a similar approach during biology, geography or native language classes, etc. Our solution is a very powerful tool for teachers, because the proposed implementation is portable and allows school teachers to easily exchange their teaching aids (created levels and maps). Because the generalization properties of the GDL technology were demonstrated in this paper, a person who develops new movement tasks can very easily create exercises requiring different types of physical actions. A teacher can also use other accessible tools which help develop GDLs (Official website of GDL project). As we already mentioned, there are no restrictions on managing the GDLs gesture knowledge base: rules can be manually added, removed or edited, which is an innovation for this type of NUI-driven system. Thanks to this, a user who designs a virtual world has great freedom in developing interaction scenarios. The solution also requires relatively inexpensive hardware, because it only needs an off-the-shelf PC and an inexpensive multimedia tracking sensor. Obviously, our pilot solution is only an addition to the traditional education process and cannot replace standard methods of teaching. The future goal of our scientific research is to determine (or estimate) the shape of the learning curve of the proposed NUI. This
is the amount of time or number of attempts users of different ages need to fully explore the potential of the interface. We believe that there are some time constraints toward which users can optimize their interaction with our VREa. This future research will require collecting data from a group of people similar to the one in the current research project, but the evaluation will have to be repeated several times within some time period. We also plan to deploy our solution in some pioneering primary schools and preschool institutions, just as we have commercialized our previous GDL-based scientific projects. We plan to create and support a community of teachers using our approach in practice to get the necessary feedback from the teaching community. This feedback will allow us to optimize the proposed approach and indicate goals of future research work toward improving the proposed visual information management system.
References

Ambinder, M. S., Wang, R. F., Crowell, J. A., Francis, G. K., & Brinkmann, P. (2009, October). Human four-dimensional spatial intuition in virtual reality. Psychonomic Bulletin & Review, 16(5), 818–823.
Cheung, K. K. F., Jong, M. S. Y., Lee, F. L., Lee, J. H. M., Luk, E. T. H., Shang, J. J., et al. (2008). FARMTASIA: An online game-based learning environment based on the VISOLE pedagogy. Virtual Reality, 12, 17–25. http://dx.doi.org/10.1007/s10055-008-0084-z
de Bruin, E. D., Schoene, D., Pichierri, G., & Smith, S. T. (2010, August). Use of virtual reality technique for the training of motor control in the elderly. Zeitschrift für Gerontologie und Geriatrie, 43(4), 229–234.
Di Blas, N., & Poggi, C. (2007). European virtual classrooms: Building effective “virtual” educational experiences. Virtual Reality, 11, 129–143. http://dx.doi.org/10.1007/s10055-006-0060-4
Eggarxou, D., & Psycharis, S. (2007). Teaching history using a Virtual Reality Modelling Language model of Erechtheum. International Journal of Education and Development using Information and Communication Technology (IJEDICT), 3(3), 115–121.
Feldon, D. F., & Kafai, Y. B. (2008, December). Mixed methods for mixed reality: Understanding users’ avatar activities in virtual worlds. Educational Technology Research and Development, 56(5–6), 575–593.
Glowacz, Z., & Glowacz, A. (2007). Simulation language for analysis of discrete-continuous electrical systems (SESL2). In Proceedings of the 26th IASTED international conference on modelling, identification, and control (pp. 94–99).
Hachaj, T., & Ogiela, M. R. (2012a). Visualization of perfusion abnormalities with GPU-based volume rendering. Computers & Graphics, 36, 163–169.
Hachaj, T., & Ogiela, M. R. (2012b, November). Framework for cognitive analysis of dynamic perfusion computed tomography with visualization of large volumetric data. Journal of Electronic Imaging, 21(4), 043017. http://dx.doi.org/10.1117/1.JEI.21.4.043017
Hachaj, T., & Ogiela, M. R. (2014a, February). Rule-based approach to recognizing human body poses and gestures in real time. Multimedia Systems, 20(1), 81–99. http://dx.doi.org/10.1007/s00530-013-0332-2
Hachaj, T., & Ogiela, M. R. (2014b, July). Unsupervised learning of GDL classifier. In IMIS 2014 – The Eighth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS-2014) (pp. 186–191). Birmingham, UK: Birmingham City University. http://dx.doi.org/10.1109/IMIS.2014.22
Hachaj, T., & Ogiela, M. R. (2014c, August). Full-body gestures and movements recognition: User descriptive and unsupervised learning approaches in GDL classifier. In SPIE Optics + Photonics 2014, Applications of Digital Image Processing XXXVII (Conference OP326), San Diego, USA.
Hachaj, T., Ogiela, M. R., & Piekarczyk, M. (2014). Real-time recognition of selected karate techniques using GDL approach. In Image Processing and Communications Challenges 5, Advances in Intelligent Systems and Computing, 233, 99–106.
Harrington, M. C. R. (2012). The Virtual Trillium Trail and the empirical effects of Freedom and Fidelity on discovery-based learning. Virtual Reality, 16, 105–120. http://dx.doi.org/10.1007/s10055-011-0189-7
Houliez, C., & Gamble, E. (2013). Dwelling in second life? A phenomenological evaluation of online virtual worlds. Virtual Reality, 17, 263–278. http://dx.doi.org/10.1007/s10055-012-0218-1
Kelly, J. W., Donaldson, L. S., Sjolund, L. A., & Freiberg, J. B. (2013). More than just perception–action recalibration: Walking through a virtual environment causes rescaling of perceived space. Attention, Perception & Psychophysics, 75, 1473–1485. http://dx.doi.org/10.3758/s13414-013-0503-4
Mengoni, M., Germani, M., & Peruzzini, M. (2011, June). Benchmarking of virtual reality performance in mechanics education. International Journal on Interactive Design and Manufacturing (IJIDeM), 5(2), 103–117.
Middleton, K. K., Hamilton, T., Tsai, P. C., Middleton, D. B., Falcone, J. L., & Hamad, G. (2013, November). Improved nondominant hand performance on a laparoscopic virtual reality simulator after playing the Nintendo Wii. Surgical Endoscopy, 27(11), 4224–4231.
Mikropoulos, T. A. (2006). Presence: A unique characteristic in educational virtual environments. Virtual Reality, 10, 197–206. http://dx.doi.org/10.1007/s10055-006-0039-1
Morales, T. M., Bang, E., & Andre, T. (2013, October). A one-year case study: Understanding the rich potential of project-based learning in a virtual reality class for high school students. Journal of Science Education and Technology, 22(5), 791–806.
Noonan, J., & Coral, M. (2013). Education, social interaction, and material co-presence: Against virtual pedagogical reality. Interchange, 44, 31–43. http://dx.doi.org/10.1007/s10780-013-9195-x
Official website of GDL project. http://www.cci.up.krakow.pl/gdl/
Official website of Minecraft. https://minecraft.net/
Official website of Voxel Game engine. http://www.voxelgame.com/
Ogiela, L. (2013). Data management in cognitive financial systems. International Journal of Information Management, 33, 263–270.
Ogiela, M. R., & Ogiela, U. (2010). The use of mathematical linguistic methods in creating secret sharing threshold algorithms. Computers & Mathematics with Applications, 60(2), 267–271.
Ogiela, M. R., & Ogiela, U. (2012). DNA-like linguistic secret sharing for strategic information systems. International Journal of Information Management, 32, 175–181.
Ogiela, L., & Ogiela, M. R. (2014a). Cognitive systems for intelligent business information management in cognitive economy. International Journal of Information Management, 34(6), 751–760.
Ogiela, L., & Ogiela, M. R. (2014b). Cognitive systems and bio-inspired computing in homeland security. Journal of Network and Computer Applications, 38, 34–42.
Ogiela, M. R., & Hachaj, T. (2015). Natural user interfaces in medical image analysis. Berlin, Heidelberg: Springer Verlag. ISBN 978-3-319-07799-4.
Richard, E., Tijou, A., Richard, P., & Ferrier, J.-L. (2006). Multi-modal virtual environments for education with haptic and olfactory feedback. Virtual Reality, 10, 207–225. http://dx.doi.org/10.1007/s10055-006-0040-8
Riecke, B. E., Sigurdarson, S., & Milne, A. P. (2012). Moving through virtual reality without moving? Cognitive Processing, 13(Suppl. 1), S293–S297. http://dx.doi.org/10.1007/s10339-012-0491-7
Ruppert, G. C., Reis, L. O., Amorim, P. H., de Moraes, T. F., & da Silva, J. V. (2012, October). Touchless gesture user interface for interactive image visualization in urological surgery. World Journal of Urology, 30(5), 687–691. http://dx.doi.org/10.1007/s00345-012-0879-0
Sánchez, A., Barreiro, J. M., & Maojo, V. (2000, December). Design of virtual reality systems for education: A cognitive approach. Education and Information Technologies, 5(4), 345–362.
Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., et al. (2011). Real-time human pose recognition in parts from single depth images. In CVPR 2011, 3.
Smith, S., & Ericson, E. (2009). Using immersive game-based virtual reality to teach fire-safety skills to children. Virtual Reality, 13, 87–99. http://dx.doi.org/10.1007/s10055-009-0113-6
Virvou, M., & Katsionis, G. (2008, January). On the usability and likeability of virtual reality games for education: The case of VR-ENGAGE. Computers & Education, 50(1), 154–178.
Zhao, Q. A. (2009, March). A survey on virtual reality. Science in China Series F: Information Sciences, 52(3), 348–400.
Zhou, N.-N., & Deng, Y.-L. (2009, November). Virtual reality: A state-of-the-art survey. International Journal of Automation and Computing, 6(4), 319–325. http://dx.doi.org/10.1007/s11633-009-0319-9

Dr. Tomasz Hachaj works in the Chair of Computer Science and Computational Methods at the Pedagogical University of Cracow. In 2006 he graduated from the Tadeusz Kosciuszko Krakow University of Technology. In 2010 he was awarded the title of Doctor of Computer Science at the AGH University of Science and Technology. He is the author of more than 40 international scientific publications on image processing, pattern recognition, computer graphics and artificial intelligence.

Dr. Danuta Baraniewicz works in the Institute of Special Needs Education at the Pedagogical University of Cracow. In 2006 she was awarded the title of Doctor of Pedagogy at the Pedagogical University of Cracow. She conducts research on supporting the development of people with intellectual disabilities and on issues of school integration.