Accepted Manuscript
A Visual Recommender Tool in a Collaborative Learning Experience
Antonio R. Anaya, Manuel Luque, Manuel Peinado

PII: S0957-4174(15)00662-4
DOI: 10.1016/j.eswa.2015.01.071
Reference: ESWA 10305

To appear in: Expert Systems With Applications

Received date: 31 October 2014
Revised date: 23 January 2015
Accepted date: 25 January 2015
Please cite this article as: Antonio R. Anaya, Manuel Luque, Manuel Peinado, A Visual Recommender Tool in a Collaborative Learning Experience, Expert Systems With Applications (2015), doi: 10.1016/j.eswa.2015.01.071
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights

• We propose a tool that visually guides the recommendation process to increase collaboration.
• It analyzes interactions and an influence diagram warns about the collaboration circumstances.
• A visual explanatory decision tree presents the collaboration circumstances in an understandable way.
• Our tool provokes self-reflection and promotes sensemaking about collaboration.
A Visual Recommender Tool in a Collaborative Learning Experience
Antonio R. Anaya (a,∗), Manuel Luque (a), Manuel Peinado (b)

(a) Dept. of Artificial Intelligence. UNED. Juan del Rosal, 16. 28040 Madrid. Spain
(b) Universidad Politécnica de Valencia. Camino de Vera, s/n. 46022 Valencia. Spain
Abstract
Collaborative learning incorporates a social component in distance education to minimize the disadvantages of studying in solitude. Frequent analysis of student interactions is required for assessing collaboration. Collaboration analytics arose as a discipline to study student interactions and to promote active participation in e-learning environments. Unfortunately, researchers have been more focused on finding methods to solve collaboration problems than on explaining the results to tutors and students. Yet if students do not understand the results of collaboration analysis methods, they will rarely follow their advice. In this paper we propose a tool that analyzes student interactions and visually explains the collaboration circumstances to provoke self-reflection and promote sensemaking about collaboration. The tool presents a visual explanatory decision tree that graphically highlights student collaboration circumstances and helps to explain the reasoning followed by the tool when prescribing a recommendation. An assessment of the tool demonstrated that: 1) the student collaboration circumstances shown by the tool are easy to understand; 2) the students could identify possible actions to improve the collaboration process.
Keywords: Recommender tool, data mining, influence diagram, collaborative learning
∗ Corresponding author.
Email addresses: [email protected] (Antonio R. Anaya), [email protected] (Manuel Luque), [email protected] (Manuel Peinado)
Preprint submitted to Elsevier
October 8, 2015
1. Introduction
In recent years, collaborative learning has become an appropriate pedagogical strategy for minimizing the disadvantages of distance education because the social component has been incorporated into e-learning environments (Barkley et al., 2004). However, frequent and regular analysis of student actions is necessary so that tutors know that collaborative learning is taking place (Johnson and Johnson, 2004). Moreover, the lack of standards and comparative studies in the collaboration analytics field is a drawback (Strijbos, 2011). Some approaches have focused on student monitoring and participant assessments to analyze collaboration (Martinez-Maldonado et al., 2013). Other approaches have focused on data mining (DM) techniques to carry out collaboration analytics (Gaudioso et al., 2009; Romero et al., 2011). Still others have proposed comparing a student's collaboration model with an a priori model to infer the student's state (Chronopoulos and Hatzilygeroudis, 2012). Despite the increasing interest in collaboration analytics in the e-learning community (Wang et al., 2010), there is no common strategy for improving the collaborative process, as mentioned above. We note that, in addition to the corrective actions that different researchers have proposed to improve collaboration, student self-reflection and self-regulation can only be enhanced if the results are clearly shown to the students, following the open learner model strategy (Bull and Kay, 2010). The open learner model strategy therefore requires that the information shown be self-explanatory. Thus, the development of explanation facilities is crucial for the acceptance of expert systems (Lacave et al., 2007): humans do not usually accept the advice provided by a computer if they cannot understand how the system reasoned to reach its conclusions.
Thus, the system must communicate the knowledge in a way that is easily understandable to a person without any expertise in the inference methods used. The motivation of our research is to develop a tool that offers recommendations to students to improve their collaboration process. The tool has the following functionalities: 1) tracking and analysis of the students' interactions in a collaborative learning experience; 2) warnings about possibly problematic collaboration circumstances of the students; 3) guiding the process of creating a recommendation and making it understandable. We would like to emphasize this last point. The objective of the tool is not to offer a recommendation such as a learning object or an exercise, which the students
could study or complete; that is the typical objective of recommender systems in other learning experiences (Drachsler et al., 2015). The objective of the tool proposed in this paper is to show the analysis so as to promote the sensemaking of students and teachers. According to Knight et al. (2013), exploring the sensemaking process offers an opportunity to understand how learners and educators identify the value of learning through data, and the best ways to support this. Once the tutor understands a student's collaboration circumstances, which the tool shows, s/he can make a personal recommendation to that student. Once the students understand their collaboration circumstances, they are more likely to follow the recommendation because they understand the reasons for it. In previous research (Anaya et al., 2013) we used the visual features of a decision tree (Quinlan, 1986) to explain the pedagogical implications of the proposed analysis method. In this paper we propose a tool that performs collaboration analytics and visually explains the results to students to enhance their self-reflection about collaboration and promote student and tutor sensemaking. The tool guides tutors so that they can create a personal recommendation for a student who may need one, and it explains the reason for the recommendation by showing the student's collaboration circumstances. The tool presents to the student a visual explanatory decision tree (VEDT), a graphic representation highlighting the student's personal circumstances in the collaboration process. We hypothesize that visually showing metainformation about the collaboration process eases the understanding of the problem and encourages students to think about how they are collaborating, thus provoking self-reflection (Klerkx et al., 2014) and sensemaking (Knight et al., 2013). The tool shows the collaboration circumstances as a route in a hierarchical logical tree.
The ordering of the nodes, which represent collaboration indicators, in the hierarchical logical tree helps to focus student attention on the indicators that are most important for prescribing the recommendation (Klerkx et al., 2014). Each route is labeled to advise the tutor in the process of making, or not making, the recommendation. The tool does not require human intervention, so tutor and student workloads are not increased during the collaborative learning experience. The proposed tool consists of: 1) a data mining module, which calculates student collaboration indicators using DM techniques (Anaya and Boticario, 2011a); 2) a recommender module, which can identify students who need a recommendation according to the student collaboration indicators and an influence diagram (Anaya et al., 2013); 3) a visual recommendation module,
which warns the tutor about the students who may need a recommendation, explains to the student the reasons for the warning and visually shows the student's collaboration circumstances; 4) an administration module, which helps the teacher or the tutor to configure the automatic operation of the tool and adapt it to the specific learning context. The tool has been assessed by a set of students. The assessment showed that the students are able to understand the collaboration circumstances displayed by the tool and can identify possible corrective actions to improve the collaboration process. The rest of the paper is organized as follows. The next section describes previous research that has focused on collaboration analytics. Section 3 presents the theoretical background of our research. Next, we describe the objective and structure of the tool we have developed. Section 5 illustrates how to use the tool effectively in a collaborative learning experience. An assessment of the tool is described and analyzed in Section 6. We finish this paper with the conclusions and future work.

2. Related work on collaboration analytics
Several researchers have recently studied computer-supported collaborative learning (CSCL). In this paper we briefly review those works on CSCL that have proposed methods for collaboration analytics. As mentioned above, frequent analysis of student actions is necessary to understand the collaboration process (Johnson and Johnson, 2004) and the usefulness of these analyses (Wang et al., 2010). For this reason, we concentrate on other researchers' collaboration analytics work in the fields of collaborative learning improvement and collaboration modeling.
2.1. Collaborative learning improvements

According to Soller et al. (2005), the possible types of tools in a collaborative environment are: monitoring, metacognitive and guiding tools. We take into account the conditions of Johnson and Johnson (2004): during collaborative learning, systems should perform frequent and regular processing of collaborative teamwork. Thus, metacognitive and guiding tools are the most appropriate types of tools for improving collaborative learning, as they perform inferences on student collaboration. Metacognitive tools generally offer metainformation that students or tutors can use to understand the actual student learning process. Next, students and tutors can carry out corrective activities to improve the learning process. This is called self-regulated learning. Self-regulated learning is guided by metacognition (thinking about one's thinking), strategic action (planning, monitoring, and evaluating personal progress against a standard), and motivation to learn (Boekaerts and Corno, 2005). Steffens (2001) stated the advantage of also using self-regulation to improve social skills and hence the collaborative learning process. Some research focused on analyzing collaboration to establish assessments or indicators to infer information on student collaboration (Redondo et al., 2003; Talavera and Gaudioso, 2004; Perera et al., 2007). Other researchers focused on displaying tracking or monitoring assessments (Bratitsis and Dimitracopoulou, 2006; Daradoumis et al., 2006; Martínez et al., 2006). Their hypothesis was that the monitoring assessments shown could cause student self-regulation and thus improve collaborative processes. Collaboration analysis is advisable due to the self-regulation features of the information that is displayed to students. Yet this information on collaboration should enhance student self-regulation (Bull and Kay, 2010) and be self-explanatory (Lacave et al., 2007). The third possible type of tool is the guiding one (Soller et al., 2005), also called the recommender tool. Casamayor et al. (2009) proposed assistance for tutors in collaborative e-learning environments. After student participation had been assessed, the rule-based assistant warned tutors about conflictive situations where tutor intervention might be necessary. Chronopoulos and Hatzilygeroudis (2012) proposed a system that aims to support users by advising them on the collaborative learning process. The system built a representation of the learning behaviors of learners and groups in the collaborative activities using a fuzzy model and quantitative and qualitative data on their performance and participation.
An intelligent agent, monitoring the learning behaviors, issued recommendations to the instructors. Both approaches monitored the interactions and used an a priori set of rules to infer warnings and advice. Although recommender systems are becoming more popular as a means of supporting learning (Drachsler et al., 2015), few approaches have been applied to the educational context of collaborative learning. Bieliková et al. (2014) proposed the platform ALEF for adaptive collaborative learning. One of the functionalities of ALEF is to store and maintain information in the corresponding user and domain models, which can provide learners with recommendations on how to achieve more successful collaboration.
2.2. How to model collaboration

Nowadays, in the e-learning and distance education field, learning analytics and collaboration analytics present important advantages for improving teaching and learning effectively (Wang et al., 2010). We can distinguish two branches in collaboration modeling: 1) the construction of a collaboration model; 2) the identification of collaboration indicators. A model contains indicators, but its purpose is different from that of the indicators it possesses: a model is a description "of the world" serving a variety of purposes, whereas indicators assume an implicit model and focus on decision making. On the one hand, collaboration models propose a set of attributes that are related to each other; therefore the knowledge that the model contains is wider than the knowledge described by the attributes. Duque and Bravo (2007) proposed an approach where student actions were turned into fuzzy logic attributes. This fuzzy collaboration model was compared with an ideal model of the interactions built by experts; the comparison gave the student's collaboration state. Baghaei and Mitrovic (2007) proposed a collaboration model based on constraints. This collaborative model consisted of a set of 25 constraints that represent collaboration. Each constraint consists of a relevance condition, a satisfaction condition and a feedback message, which is issued when the constraint is violated. Duque et al. (2013) proposed an ontology for the collaboration model and collaboration modeling analytics. The authors described the analytic process as a three-phase lifecycle: (i) observation, which captures the user actions and stores the products or artifacts generated; (ii) abstraction, which calculates analysis indicators; and (iii) intervention, which uses the analysis indicators to enhance group work. On the other hand, other researchers have focused purely on collaboration indicators.
We describe just a couple of examples. Collazos et al. (2007) proposed high-level indicators, socio-cultural and personality indicators (obtained from questionnaires), and low-level attributes (observations of student actions). The high-level indicators were calculated using mathematical relationships over the low-level attributes. The aim of the authors was to monitor student collaboration. Martinez-Maldonado et al. (2013) proposed a set of dimensions (i.e. mutual understanding, dialogue management, information pooling, consensus reaching, etc.). Experts quantified each dimension with an integer ranging from -2 (very bad) to 2 (very good). Bieliková et al. (2014) proposed, in their platform ALEF, two core models: 1) a domain model, a rich yet lightweight model that semantically describes resources
within a course; 2) a user model, an overlay model that represents the current state of the user's knowledge and goals. The collaboration models studied above are a priori models. We have already mentioned the difficulties of having complete a priori knowledge in the educational context, where some variables are unknown (Johnson and Johnson, 2005). In spite of this, the aforementioned researchers proposed different sets of collaboration attributes. We warn about the lack of agreement on modeling student collaboration (Strijbos, 2011). Also, we have observed that the intervention of participants (expert or teammate assessments, questionnaires) was needed to value or assess some attributes in the above research. Collaboration analyses should not increase the participant workload, which is a disadvantage when analyzing collaboration regularly and frequently, as Johnson and Johnson (2004) claimed. We propose different types of automatic analyses to infer the state of student collaboration. Once the collaboration state is known, warnings appear. However, the tutor should validate and create the recommendation to the student. The tool offers monitoring and metacognitive information on the students to guide the tutor in the recommendation process. In this way the tool restricts the need for a priori knowledge, which is incomplete in the educational context (Johnson and Johnson, 2005). In addition, we have also developed a visual tool to explain collaboration circumstances easily to students (Bull and Kay, 2010; Lacave et al., 2007).

3. Theoretical development
The motivation of our research is to improve collaborative learning without increasing tutor or student workloads. Thus, we have focused on collaboration analytics and we propose an automatic tool, where the data acquisition phase and the DM analysis phase do not need human intervention, except for the configuration and administration of the tool. The tool focuses on the collaborative learning experience that we have offered to higher education students during recent academic years. We centered our approach on forum communication acts. This service is quite common in CSCL systems (Dringus and Ellis, 2005), so we point out the possibility of transferring our approach to other environments. After describing the collaborative learning experience, the paper continues with an explanation, based on Anaya et al. (2013) (we advise reading this paper to study in depth the theoretical background of the tool described in
this paper), of the analysis method that the tool uses, and we discuss the visual tool in depth. This visual tool recommends improvements in the collaboration process to students and displays student collaboration circumstances, which can provoke student self-reflection and self-regulation.
3.1. Collaborative learning experience

We have designed a collaborative learning experience at UNED (Spanish National University for Distance Education). It is divided into two phases: a first phase of individual work lasting a few days, and a second phase of collaborative work in three-member teams lasting around two months (Anaya and Boticario, 2011b). During the first phase students become familiar with the problem that they have to solve and the environment that they will face in the second phase. The learning management system used was the dotLRN platform (1), which provides an e-learning space for all the students (general space) and team subspaces in which teams work during the second phase. In the general space the students can find all the teaching documentation and services for completing the assigned tasks, and can communicate with their peers using the forums. After the first phase, students are grouped into three-member teams based on their answers to an initial questionnaire. Each team has to perform five tasks in the second phase. The main features of the tasks are (listed in sequential order): Task-1, students should communicate with their team members to choose a problem; Task-2, team members should solve part of the problem individually; Task-3, team members should cooperate with one another to join the individual solutions; Task-4, team members should extend the problem or create a new one collaboratively; and Task-5, team members should write a report together. Collaboration increases from the first task to the fourth, where the students should collaborate and not merely communicate and cooperate. The students are thus able to adjust to the collaboration process gradually. The tasks encouraged interpersonal communication. Finally, team functioning is frequently and regularly processed. We next describe the methods for analyzing student collaboration.
(1) http://dotlrn.org
3.2. Analysis methods

In educational environments, where the student is responsible for the collaboration process, or where there is a large number of students, regular and frequent processing of collaboration poses serious challenges if tutors have to address this task. To tackle this issue, DM techniques based on various machine learning (ML) techniques have been introduced, since they have shown their value in computer-supported educational environments (Romero et al., 2009). The objective of these domain-independent techniques is to analyze the collaboration by obtaining inferences while students are collaborating. To achieve these aims, we have developed a quantitative acquisition method that records forum interactions. Dringus and Ellis (2005) provided strong evidence of the large amount of knowledge that can be discovered from forum analysis. Our analysis is based on a set of statistical indicators derived from the forum interactions (Anaya and Boticario, 2011a). In earlier works we proposed a set of quantitative statistical indicators of student forum interactions. These indicators are domain independent and are related to active interactions. The indicators aim to disclose student features such as initiative, activity, constancy and regularity, and acknowledgment by their peers (Anaya and Boticario, 2011a). We have also proposed two DM approaches, based on the statistical indicators, to assess student collaboration: a clustering approach and a metric approach (Anaya and Boticario, 2011b). The clustering approach groups student interactions into three clusters using an EM algorithm and compares the clusters with the expert-based analysis; the clustering algorithms have been shown to group students according to their collaboration (Anaya and Boticario, 2011b). The metric approach is an ML-based method that provides a set of student collaboration indicators.
The metric approach uses decision tree algorithms to classify students according to their collaboration label, then selects the most relevant indicators in the resulting decision trees, and, finally, calculates a metric from the selected indicators.
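As an illustration of the two DM approaches just described, the sketch below groups hypothetical indicator vectors with an EM algorithm and derives a simple weighted metric from the indicators a decision tree finds most relevant. All data, labels and the final weighting scheme are invented for the example; the authors' actual indicators and algorithms are those of Anaya and Boticario (2011a,b).

```python
# Illustrative sketch only (not the authors' implementation):
# hypothetical per-student forum indicators are clustered with EM
# (GaussianMixture) and a decision tree selects relevant indicators.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical indicators: initiative, activity, regularity, reputation
X = rng.random((60, 4))
# Hypothetical expert collaboration labels: 0=low, 1=medium, 2=high
y = (X.mean(axis=1) * 3).astype(int).clip(0, 2)

# Clustering approach: EM groups the students into three clusters
em = GaussianMixture(n_components=3, random_state=0).fit(X)
clusters = em.predict(X)

# Metric approach: a decision tree classifies students by label; its
# feature importances pick out the most relevant indicators, which are
# then combined into a single collaboration metric per student
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
weights = tree.feature_importances_
metric = X @ weights  # weighted combination of selected indicators
```

The weighting by feature importance is only one plausible way to turn the selected indicators into a metric; the paper does not specify the exact combination formula.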
3.3. Decision analysis based on an influence diagram

Some variables of the collaboration problem are out of our control in a typical e-learning environment. Under these circumstances, analyzing collaboration requires dealing with uncertainty and probability (Anaya et al., 2013). Bayesian networks (Pearl, 1988) constitute a very well-known graphical framework for probabilistic inference that eases communication with the domain expert. Bayesian networks conveniently represent independence and
dependence probabilistic relations among a set of chance variables. Influence diagrams (Howard and Matheson, 1984) are a framework for representing and solving decision problems under uncertainty. Influence diagrams extend Bayesian networks with two new types of variables: decision variables, which are under the control of the decision maker, and utility variables, which quantify the decision maker's preferences. An influence diagram has the two main components of a decision problem under uncertainty: (1) a qualitative part, consisting of a graph where each node represents a variable of the decision problem, and (2) a quantitative part, formed by a set of probability and utility functions that include all the input numerical parameters that the solution algorithm must consider, jointly with the graph, to calculate the best action policies for the decision maker. Given the uncertainty inherent in the collaboration problem, we used the interaction data obtained from the collaboration analysis experience (Anaya and Boticario, 2011a,b) to build an influence diagram for the collaboration analysis (Anaya et al., 2013). We selected some collaboration attributes to include as chance variables in the influence diagram. The collaboration attributes were obtained from a statistical analysis of the student interactions. The selected attributes are: student initiative (Initiative); regularity of the initiative (I-regularity); student activity (Activity); regularity of the activity (A-regularity); student leadership (Leadership); and acknowledgment or reputation (Reputation). We also used two variables from the DM approaches proposed by Anaya and Boticario (2011a): collaboration level (Level) and collaboration metric (Metric). A variable named Collaboration represents the expert-based collaboration label.
The decision of the problem is represented by the variable D, indicating whether or not we should act to solve a collaboration issue detected by the system. The utility variable U represents the tutor's preferences. The values of all the variables, except D and U, are high, medium and low. Figure 1 shows the graph of the influence diagram. The probability parameters of the model by Anaya et al. (2013) were automatically learned from a database of the recorded academic years. The utilities were elicited by the tutor using his expert knowledge; they tried to encourage student collaboration, but not to disturb a student who was already collaborating and for whom no intervention was necessary.
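The expected-utility reasoning an influence diagram performs can be sketched in miniature. The probabilities and utilities below are invented for the example (they are not the parameters learned in Anaya et al., 2013), and the model is collapsed to a single hidden Collaboration variable observed through one indicator:

```python
# Toy expected-utility computation behind an influence diagram.
# All numbers are invented; the real model has many more variables.

# Invented prior P(Collaboration) and likelihood
# P(Reputation = "low" | Collaboration)
prior = {"high": 0.3, "medium": 0.4, "low": 0.3}
likelihood = {"high": 0.1, "medium": 0.3, "low": 0.7}

# Invented utility U(Collaboration, D): intervene when it helps, but do
# not disturb a student who is already collaborating (as stated above)
utility = {
    ("high", "yes"): -10, ("high", "no"): 0,
    ("medium", "yes"): 20, ("medium", "no"): -5,
    ("low", "yes"): 30, ("low", "no"): -20,
}

# Bayes rule: P(Collaboration | Reputation = "low")
joint = {c: prior[c] * likelihood[c] for c in prior}
z = sum(joint.values())
posterior = {c: p / z for c, p in joint.items()}

# Expected utility of each option; the optimal policy picks the argmax
eu = {d: sum(posterior[c] * utility[(c, d)] for c in posterior)
      for d in ("yes", "no")}
best = max(eu, key=eu.get)  # decision suggested for this scenario
```

With these invented numbers a student whose reputation indicator is low yields a posterior concentrated on low collaboration, so the expected utility favors intervening; the solution algorithm in OpenMarkov performs this kind of computation for every decision scenario at once.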
Figure 1: Influence diagram of collaboration analysis.
Figure 1 displays the influence diagram implemented in OpenMarkov (2), a software tool for editing, debugging and performing inference with probabilistic graphical models. The graphical representation of the influence diagram helps the expert (the tutor) to understand the system reasoning and the variable relations. The tutor can easily modify the influence diagram to adapt it to different circumstances in the future. For example, the tutor can add new nodes to the graph, corresponding to new indicators or machine learning measures. The tutor can even add nodes modeling preferences (utility nodes), such as student emotional or motivational factors. If new dependence or independence relations between variables are discovered as a result of some DM analysis of forum interactions, then the tutor can modify the probabilistic variable relations by adding or removing arcs. The tutor can also modify the model's numerical parameters: the conditional probabilities of chance variables or the collaboration preference values in the utility node U. The tutor can perform changes on the ID and analyze their effect by using the OpenMarkov GUI explanation capabilities (similar to those implemented in Elvira; Lacave et al., 2007).

3.4. Visual explanation of the inference

The main purpose of solving an influence diagram is to obtain an optimal policy for every decision of the problem. We solved our influence diagram of
(2) See www.openmarkov.org.
collaboration analysis with an inference algorithm available in OpenMarkov. We thus obtained the optimal policy for the variable D (see Figure 1), the decision on whether or not to recommend. A decision scenario (Nielsen and Jensen, 1999) is an assignment of values to the variables that are known in the decision problem, whose values can therefore be observed by the decision maker when he/she has to make a decision. For example, in the influence diagram in Figure 1, the variables known by the decision maker when deciding on D are Initiative, I-regularity, Activity, A-regularity, Leadership, Reputation, Metric and Level. These variables are the parents of D in the graph in Figure 1. Note that no arc exists from Collaboration to D, indicating that Collaboration is not known by the decision maker when deciding on D. An optimal policy is usually represented as a table containing a tuple for each decision scenario. It indicates, for the decision scenario in question, which decision option (yes or no in our case) is suggested by the system as optimal. Unfortunately, given the large size of the domain space (the Cartesian product) of the variables observed in our influence diagram, the policy table of decision D is so huge that a human expert can barely understand and debug the policy. Human beings are reluctant to accept the advice provided by a computer system if they are not able to understand how it arrived at those recommendations (Lacave et al., 2007). Thus, presenting a huge policy table directly to a tutor or a student is out of the question, as they would not understand the system reasoning and, consequently, would not accept its advice. We therefore need to hide the complexity of the computations and data structures used by the probabilistic inference algorithms, and then present the results in a way that is understandable to the system user.
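A toy illustration of how such a huge policy table can be compressed into an understandable form: the sketch below enumerates a hypothetical policy table over a few indicators (the policy rule itself is invented, standing in for the one computed from the influence diagram) and summarizes it with a shallow classification tree.

```python
# Illustrative sketch: summarize a (hypothetical) optimal policy table
# with a small classification tree. The toy rule "do not recommend when
# reputation is high" is invented and stands in for the real policy.
from itertools import product
from sklearn.tree import DecisionTreeClassifier, export_text

indicators = ["Initiative", "Activity", "Reputation", "Metric"]

# Hypothetical policy table: one row per decision scenario
# (indicator values 0=low, 1=medium, 2=high)
rows, actions = [], []
for combo in product(range(3), repeat=len(indicators)):
    rows.append(list(combo))
    reputation = combo[indicators.index("Reputation")]
    actions.append("no" if reputation == 2 else "yes")  # toy policy

# A shallow tree (depth limit acts as pruning) predicts the action for
# every scenario while remaining readable
tree = DecisionTreeClassifier(max_depth=2).fit(rows, actions)
print(export_text(tree, feature_names=indicators))
```

Here 81 scenarios collapse into a tree with a single test on Reputation, which mirrors the paper's finding that the policy table, though huge, admits a compact and human-readable summary.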
We analyzed and synthesized the optimal policy obtained from the influence diagram to make it more understandable (Anaya et al., 2013). We used a decision tree learning algorithm to create a classification tree (Quinlan, 1986) in order to predict the best action in each decision scenario of the policy table. Decision tree algorithms classify, explain the classification obtained, and can simplify the classification tree to make it easier to understand. We selected the best tree by trying to strike a balance between the precision of the tree and its complexity. Then the primitive decision tree was pruned and the branches with a small number of events were removed. The decision tree was accordingly simplified. The expert analyzed the scenarios of the simplified decision tree to assess
the recommendation decision process. He realized that the acknowledgment of the student by teammates is the most relevant indicator for deciding whether to make a recommendation, and he identified four collaboration scenarios that helped him to understand the intelligence of the system when trying to solve a collaboration problem (Anaya et al., 2013).

4. Visual explanation of the students' circumstances
Continuing the work initiated by Anaya et al. (2013), who presented a decision tree to explain the pedagogical implications of the ID decisions, we propose here to also use a decision tree to ease the understanding of the students' collaboration circumstances, to provoke self-reflection and promote sensemaking. Thus, in this paper we have evolved the ideas described in our previous research to offer a new tool that guides the collaboration process. We have followed an approach based on information visualization because it is a powerful means of making sense of data (Heer and Shneiderman, 2012); this approach has emerged from research in, for example, human-computer interaction. Duval (2011) pointed out that it can be extremely useful for learners and teachers alike to have a visual overview of their activities and how they relate to those of their peers or other actors in the learning experience. An information visualization approach represents an abstract information space in a dynamic way to facilitate human interaction for exploration and understanding. Information visualization makes use of the principles of Gestalt theory regarding the human visual capacity as a pattern-finding engine to provide a powerful means of making sense of the abundance of available data (Klerkx et al., 2014). These authors classify visual tools around five basic activities in the learning process; three of them are present in our case: 1) collaborative learning; 2) (self-)reflection about the learning process; 3) designing environments to facilitate learning processes. Our proposal attempts to enhance student (self-)reflection about the collaboration process circumstances. The visual tool displays student collaboration indicators with the values high, medium, or low (easy to understand) in a hierarchical way using a decision tree (see Figure 4). Klerkx et al.
(2014) observed that for hierarchical data the most explanatory way of displaying information is usually the tree. The student attribute values are highlighted in the decision tree, so the student can understand his/her collaboration process circumstances, that is, his/her path in the decision tree (see Figure 4). 14
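The highlighted route just described can be computed by a simple tree walk. The following is a minimal sketch of that idea; the tree shape, attribute names, values, and decisions below are invented for illustration and do not reproduce the tool's actual model.

```python
# Hypothetical sketch: walking a decision tree to obtain the "highlighted
# route" for a student's collaboration indicator values.

TREE = {
    "attribute": "Reputation",
    "branches": {
        "low": {
            "attribute": "Metric",
            "branches": {
                "low": {"decision": "yes"},    # advise a recommendation
                "medium": {"decision": "no"},
                "high": {"decision": "no"},
            },
        },
        "medium": {"decision": "no"},
        "high": {"decision": "no"},
    },
}

def highlighted_path(tree, student):
    """Return the list of (attribute, value) pairs plus the leaf decision."""
    path = []
    node = tree
    while "decision" not in node:
        attr = node["attribute"]
        value = student[attr]
        path.append((attr, value))
        node = node["branches"][value]
    return path, node["decision"]

path, decision = highlighted_path(TREE, {"Reputation": "low", "Metric": "low"})
```

The pairs in `path` are exactly the nodes that would be highlighted in red for this student, and `decision` labels the terminal node of the route.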
The VEDT offers metacognitive information that helps the students to self-reflect on their behavior. Finally, the VEDT constitutes a guiding tool that offers advice in an understandable way so that the student accepts the recommendation. The highlighting of the particular student's circumstances differentiates our decision tree from the one used by Anaya et al. (2013). While Anaya et al. (2013) were interested in giving the tutor an overall insight into the collaboration process, in this paper we propose highlighting the path of the student's circumstances to promote sensemaking and student self-reflection about their actions in the learning environment (Klerkx et al., 2014). This should help the student to better understand the reason for the recommendation and thus increase his/her acceptance of the system (Lacave et al., 2007), so that he/she follows the system's advice for solving the collaboration problem. The proposed tool also collects and analyzes student forum interactions to guide the recommendation process. The analysis of student actions and collaboration is quite useful for inferring knowledge (Wang et al., 2010; Dringus and Ellis, 2005). The influence diagram built by Anaya et al. (2013) can take advantage of the knowledge inferred from the forum analysis; the influence diagram thus calculates the best decision for each student by means of probabilistic reasoning. The influence diagram was built under the principles of intervening when necessary and not disturbing the student when he/she is already collaborating. The tool was implemented as a collaboration management package for the dotLRN platform, but it can be exported to other e-learning environments. Figure 2 shows its structure. Each light green ellipse indicates a module. Each brown cylinder indicates a set of relational database tables. The tool stores and maintains student forum collaboration data in a dotLRN database table. The data-mining module collects, prepares and analyzes student interaction data to make them available for the recommender module. The recommender module computes the decisions about the recommendation. The visualization module builds and presents visual information on the recommendation decision in a personalized view based on the individual student's collaboration circumstances. The administration module maintains the analysis configuration data, the portlet and the view settings. The student and the tutor access the recommender, administration and visualization modules through a dotLRN portlet. If we wanted to export the tool to an e-learning environment other than dotLRN, then we would
Figure 2: Structure of the complete tool.
just have to implement the corresponding access portlet. Thus, exporting the tool would be reasonably simple. We next give details about each module.
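As a rough orientation before the per-module details, the chain of modules can be sketched as a function pipeline. The function names and stub logic below are illustrative assumptions; the real implementation is a dotLRN package, not these Python stubs.

```python
# Hedged sketch of how the modules chain together in one analysis cycle.

def data_mining(forum_posts):
    """Collect and discretize student interaction data (stub)."""
    return {"Activity": "low", "Reputation": "low"}

def recommender(indicators):
    """Stand-in for the influence-diagram inference: toy rule on Activity."""
    return "yes" if indicators["Activity"] == "low" else "no"

def visualization(indicators, decision):
    """Build the personalized view shown to the student (stub)."""
    return {"highlighted": indicators, "decision": decision}

def analysis_cycle(forum_posts):
    indicators = data_mining(forum_posts)
    decision = recommender(indicators)
    return visualization(indicators, decision)

view = analysis_cycle(["post 1", "post 2"])
```

Each stub corresponds to one module in Figure 2; the administration module would configure how often `analysis_cycle` runs.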
4.1. Data mining module
The data mining module starts by collecting data on student contributions in the forums. The data are stored in a set of relational database tables. This module computes the attributes of each student's collaboration record and stores them in the database. It also analyzes the data with machine learning and clustering algorithms to group students according to their collaboration. This module is executed automatically at time intervals set by the tutor, which allows the tool to keep the information on student collaboration up to date.
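A central step of this module is turning raw per-student forum statistics into the low/medium/high indicator values used downstream. The thresholds, field names, and indicators in this sketch are assumptions for illustration, not the tool's actual configuration.

```python
# Illustrative sketch of the discretization step of the data mining module.

def discretize(value, low_threshold, high_threshold):
    """Map a raw count to the low/medium/high scale used by the tool."""
    if value < low_threshold:
        return "low"
    if value < high_threshold:
        return "medium"
    return "high"

def collaboration_record(raw):
    """raw: hypothetical per-student counts collected from the forums."""
    return {
        "Activity": discretize(raw["messages_written"], 5, 15),
        "Reputation": discretize(raw["replies_received"], 3, 10),
    }

record = collaboration_record({"messages_written": 4, "replies_received": 7})
```

In the real tool this record would be stored in the relational tables and refreshed at each configured interval.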
4.2. Recommender module
The recommender module uses the influence diagram and the information obtained from the data mining module to calculate the recommendation decision for the student. Figure 3 illustrates an execution cycle of the recommender module for a particular student. The module first uses the student's collaboration data and the influence diagram to run an inference algorithm
Figure 3: Recommender module.
that calculates the decision on the recommendation. The result of this operation is stored in the database of historical recommendation data. This database allows the tutor or the student to consult the recommendation decision later on.
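One execution cycle of this module can be sketched as a lookup in the policy obtained from evaluating the influence diagram, followed by an append to the history table. The policy entries and the in-memory `history` list below are invented stand-ins for the evaluated diagram and the database.

```python
# Minimal sketch of one recommender-module cycle: policy lookup + storage.

POLICY = {  # hypothetical (Reputation, Metric, A-regularity, Activity) -> decision
    ("low", "low", "medium", "low"): "yes",
    ("low", "low", "medium", "medium"): "no",
}

history = []  # stand-in for the historical-recommendation database table

def run_cycle(student_id, reputation, metric, a_regularity, activity):
    decision = POLICY[(reputation, metric, a_regularity, activity)]
    history.append({"student": student_id, "decision": decision})
    return decision

decision = run_cycle("s42", "low", "low", "medium", "low")
```

The stored rows are what the tutor and student later consult when reviewing past decisions.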
4.3. Recommendation visualization module
The recommendation visualization module offers different visualization possibilities to the tutor and the student, as follows. The tutor can consult the recommendation decisions that were calculated by the recommender module. Student collaboration indicator values and the tool's decision for each student are presented in a table to the tutor, who can then decide whether to validate the decision and attach a comment for the student. The student can read the recommendation and the comment (if any) that was attached by the tutor. The student can also visualize an explanatory decision tree in which his/her particular collaboration circumstances (obtained from the forum interactions) are highlighted. The explanatory decision tree is the central element of the explanation provided by the tool. We describe it in detail below and afterwards explain how the student can interact with it.
4.3.1. A visual explanatory decision tree
A VEDT consists of a graph that allows the student to self-reflect and be aware (Klerkx et al., 2014) of his/her collaboration (see, for example, Figure 4). The VEDT constitutes an effective way for the student to observe his/her behavior and promotes sensemaking. The VEDT shows the collaboration indicators ordered in a logical tree. A route in the tree corresponds to the student's collaboration circumstances and is highlighted in red (see Figure 4). The highlighted nodes represent the student's collaboration attribute values, i.e., the student's collaboration circumstances. The terminal node in the highlighted route is labeled with "yes" (the tool advises a
recommendation) or "no" (the tool does not advise any recommendation). The visual presentation of the student's collaboration circumstances in comparison with those of other students (Duval, 2011), together with the hierarchical order (Klerkx et al., 2014), provokes self-reflection and promotes sensemaking about the collaboration process. Thus, patterns in the reasoning of the tool can be detected in a short time, which should increase confidence in the tool (Lacave et al., 2007). The VEDT eases navigation, enabling students to think about the hypothetical scenario of modifying their behavior. The VEDT also allows students to contrast their own path with the same visual data that the tool applies to their peers. The hierarchical presentation of the VEDT helps to focus student attention on the variables that are more important for prescribing the recommendation. Finally, the VEDT is based on the same principles as many visualization tools used in mathematics, as it presents information that cannot be easily inferred from the input data (in our case the policy table) (Klerkx et al., 2014). Figure 5 shows the life cycle of an execution of the recommendation visualization module for a particular student. The base decision tree is calculated before operating with any student. It is calculated by synthesizing the policies obtained from evaluating the influence diagram, as explained in Anaya et al. (2013). We use the word base to name the initially calculated decision tree in order to distinguish it from the VEDT that will be built for each student. The computation of the base decision tree is omitted from the diagram in Figure 3.
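The synthesis of the base decision tree from the policy table can be sketched as a recursive collapse: whenever all remaining scenarios under a branch share the same decision, that branch becomes a single leaf. The variable order, value set, and toy policy below are assumptions for illustration, not the actual 4452-scenario policy of the paper.

```python
# Hedged sketch of synthesizing a compact "base" decision tree from a
# policy table: agreeing branches collapse into leaves.

VALUES = ["low", "medium", "high"]

def synthesize(policy, variables, prefix=()):
    # Decisions of all scenarios consistent with the current branch prefix.
    decisions = {d for scenario, d in policy.items()
                 if scenario[:len(prefix)] == prefix}
    if len(decisions) == 1:          # all remaining scenarios agree -> leaf
        return {"decision": decisions.pop()}
    attr = variables[len(prefix)]
    return {"attribute": attr,
            "branches": {v: synthesize(policy, variables, prefix + (v,))
                         for v in VALUES}}

# Toy policy over two variables: recommend only when both are low.
policy = {(a, b): ("yes" if a == b == "low" else "no")
          for a in VALUES for b in VALUES}
tree = synthesize(policy, ["Reputation", "Metric"])
```

Here nine scenarios collapse to five leaves; the same idea, at larger scale, is how a 4452-scenario policy table can be summarized by a 25-leaf tree.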
4.3.2. Student's interaction with the visual explanatory decision tree
As explained in Section 3.4, the decision tree is a compact, visual representation that synthesizes the output of the tool's reasoning. Thus, in our tool, a 25-leaf decision tree synthesizes the policy table of the influence diagram, which contained 4452 decision scenarios (data obtained from Anaya et al., 2013). The decision tree contains several explanatory elements for the student (Duval, 2011), as we will see with an example that also illustrates the power of the decision tree as a self-reflection tool (Klerkx et al., 2014). Let us assume a student who presents a low value in the variables (or collaboration attributes) Reputation and Metric, a medium value in A-regularity, and a low value in Activity. The visualization module builds a VEDT for that student, as illustrated in Figure 4. The highlighted path starting from the root of the VEDT corresponds to the student's collaboration indicator
Figure 4: Visual explanatory decision tree for a student.
Figure 5: Visualization module.
values. That highlighted path ends on a leaf labeled with the decision yes (meaning "to recommend"). The student can see in the VEDT the variables appearing in the path (Reputation, Metric, A-regularity, and Activity); these variables have determined the recommendation that the tool has calculated for this student. By following the highlighted path, the student can see the order of importance of these variables: Reputation is the most important in the VEDT. Therefore, we can infer that student actions that provoke teammate reactions, thereby increasing the student's reputation, are very important in the collaboration process. Of course, the other collaboration attributes (Metric, A-regularity and Activity) cannot be overlooked when trying to understand the student's collaboration circumstances. If the student analyzes his/her circumstances from a global perspective of the tree, he/she can observe that the recommendation is yes even though his/her A-regularity is medium; he/she can thereby infer that the reason for the recommendation yes lies in the low values of the other variables (Reputation, Metric, and Activity). Thus, the student can understand that if he/she increases his/her activity and provokes teammate reactions, he/she will improve the collaboration process. With all the graphic information contained in the VEDT, the student can perform high-level what-if reasoning on the VEDT to self-reflect about his/her interactions and to think about hypothetical situations. For example, the hypothetical path with a low value in the variables Reputation and Metric, and medium values in A-regularity and Activity, ends on a leaf labeled with the decision no; observe in Figure 4 that this leaf is just to the right of the leaf of the highlighted path. This hypothetical path differs from the highlighted path only in the value of the variable Activity, which is medium in the hypothetical path and low in the highlighted path.
Thus, the student knows that if he/she increases Activity from low to medium (for example, by being more active in the forum, i.e., writing more messages), then the tool would not recommend him/her to collaborate more. There are more leaves in the tree labeled with the decision no, and thus there are different ways in which the student could modify his/her behavior so that the tool's recommendation would be no. In this way, the decision tree gives the student enough freedom to choose the best way to improve his/her collaboration.
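This kind of what-if reasoning can be expressed as varying one indicator and re-evaluating the decision. The toy decision function below is a stand-in for the real decision tree, chosen so that it reproduces the example in the text; its rule is an invented simplification, not the actual policy.

```python
# Sketch of the student's what-if reasoning on the VEDT: change one
# indicator, observe whether the tool's decision flips.

def decision(reputation, metric, a_regularity, activity):
    """Toy stand-in for the decision tree: no recommendation is needed
    when the student shows at least medium activity and regularity."""
    if activity != "low" and a_regularity != "low":
        return "no"
    return "yes"

current = {"reputation": "low", "metric": "low",
           "a_regularity": "medium", "activity": "low"}
what_if = dict(current, activity="medium")  # the hypothetical path

before = decision(**current)
after = decision(**what_if)
```

With the current circumstances the decision is yes (a recommendation is advised); raising Activity from low to medium in the hypothetical scenario flips it to no, mirroring the adjacent leaves in Figure 4.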
4.4. Administration module
The administration module allows the tutor to configure the tool. On the one hand, the tutor can choose which ML methods, from those explained in Section 3.2, will be applied in the collaboration analytics, which course will be the object of analysis, and at what time intervals the analysis process will be performed. Once the tool has been configured, the whole collaboration analytics process is executed periodically at the specified time intervals. On the other hand, the presentation of the data views can also be configured. The tool creates a data view for each team. The tutor can edit or remove a view record and can choose which information is presented to the students. The data view information is updated whenever the collaboration analytics process is executed.

5. How to use the tool
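The configuration managed by this module and the periodic-execution check can be sketched as follows. The configuration keys, values, and function names are invented examples, not the tool's actual settings.

```python
# Illustrative sketch of the administration module's configuration and the
# check that triggers the periodic analytics run.

config = {
    "course": "course-101",          # course under analysis (hypothetical id)
    "ml_methods": ["clustering"],    # methods from Section 3.2 to apply
    "interval_hours": 24,            # how often the analytics process runs
}

def analytics_due(hours_since_last_run, cfg):
    """True when the configured interval has elapsed."""
    return hours_since_last_run >= cfg["interval_hours"]

due_now = analytics_due(25, config)
due_later = analytics_due(3, config)
```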
5.1. From the tutor's point of view
The tutor directs the recommendation process. He/she validates the recommendations and gives tips to students. A tutor session in the tool can typically be divided into three phases: (1) logging into the learning environment, (2) reviewing the assessments and deciding which to authorize, and (3) validating and forwarding recommendations to students. When the tutor has logged into the platform, the tool displays a screen with access to different virtual communities. In each community the tutor can access the general forums and each team forum. The tutor can participate in the forums by answering questions or helping in the collaboration, or he/she can just browse the student interactions. Although the tutor can participate in both the general and the team forums, the tutor usually only intervenes in the general forum and leaves the team forums for collaboration between the team's individual members. The tutor can validate and forward recommendations to students. The tool presents the tutor a list of students with their collaboration indicator values and a recommendation decision or suggestion for each student (see Figure 6). Initially, all recommendation decisions appear as not validated. To validate a decision, the tutor must review it, and if the tool prescribes that the best action is to recommend to the student, then the tutor can also attach a description or comment for the student. The tutor can read the student's collaboration indicator values; this helps the tutor to understand the decision process and to better explain the recommendation to the student. It
Figure 6: List of recommendations awaiting validation by the tutor.
is important to note that the tutor must validate all decisions, both those that prescribe a recommendation and those that do not. For example, Figure 6 shows a table displaying the recommendation data for three students, as presented to the tutor. Each row corresponds to a student. For instance, the first row of the table corresponds to a student for whom the tool has suggested a recommendation (value yes), as indicated in the suggestion column. The tutor can visualize the student's attribute values. In the particular case of this student, most of the attributes have a low value and only two have a medium value. This is a case in which the recommendation is clearly advisable. The tutor can attach a comment for the student. In this case, as shown in the description column, the tutor encourages the student "to add more content to the posts to arouse more interest and provoke more discussion", because the tutor understands that this student does not stimulate teammate activity. When the tutor has finished writing the description, he/she can validate the recommendation by indicating it in the validation column.

5.2. From the student's point of view
The student is the main user for whom the tool is designed. The normal behavior of a student in the tool is as follows: (1) log into the learning environment, (2) interact via discussions with other colleagues, (3) check the
Figure 7: Message indicating the absence of recommendations.
status update periodically, and (4) view and self-reflect on the recommendation (if any). The student can edit and view information of interest about the collaboration. The student can access this information through the collaboration panel, as shown in Figure 7. Figure 7 shows that the team's indicators Metric and Level of team collaboration are very low. The collaboration panel also offers the student a link to view collaboration metric details about the student's teammates. Yet the most interesting information that the student can visualize is at the bottom of the collaboration panel. Based on his/her contributions to the forums, the student may or may not have a recommendation from the tool. If the student has a recommendation, the tool shows the text message that the tutor attached for the student. For example, in Figure 7 we can see that the tutor encouraged the student "to add more content to the posts to arouse more interest and provoke discussion". In conjunction with the recommendation, the tool shows a thumbnail of the VEDT; if the user clicks on it, he/she visualizes his/her collaboration circumstances in the VEDT (see Figure 4). This VEDT highlights the student's collaboration circumstances and the tool's decision. In Section 4.3, we saw that the VEDT is a compact and visual explanatory tool for the student. By following the highlighted path (which corresponds to the student's collaboration circumstances) he/she can observe which collaboration attributes have been more relevant (and their order of relevance), because they have determined the decision. More importantly, however, the student can perform what-if reasoning to reflect on his/her interactions. The student can thus learn how he/she could modify his/her interactions to increase collaboration; the student can investigate this by analyzing the possible variations of the highlighted path leading to a leaf labeled with the decision no. If the student understands his/her collaboration circumstances, then he/she can understand the reason for the recommendation and act on the tutor's advice. Students can access a tutorial that explains how to use the tool and, in particular, gives details about how to interact with the VEDT. Some students may not read the tutorial or may just skim it. Nevertheless, the visual information in the VEDT is intuitive enough that those students should not have problems using it.

6. Survey
We conducted a survey among the students to evaluate their opinion of the tool. As an introduction to the survey, we gave them a text explaining the basic concepts of a VEDT and the meaning of each collaboration indicator. In the survey we presented them with three different hypothetical students, called cases, each one with different collaboration circumstances. The cases were labeled A, B and C. The values of the indicators and the decision for each case are as follows:
Case A The values of the indicators Reputation, Metric, A-regularity and Activity are low. The decision value is yes. This case means that the student has a serious collaboration problem. S/he does not interact, and neither do her/his teammates.
Case B The value of the indicator Reputation is high. The decision value is no. In this case the student interacts with the teammates without problems.
Case C The values of the indicators Reputation, Metric and Activity are low, and the value of A-regularity is medium. The decision value is yes. This case means that the student has a collaboration problem. S/he interacts, but her/his teammates do not.
For each case we presented the corresponding visual explanatory decision tree, and then asked the survey participants the following two types of questions:
1. Which description better reflects the situation shown by the figure?
(a) The student has no problem collaborating. He's doing very well.
(b) The student is in a good situation for collaboration, but not perfect. There is no problem but there could be some in the future.
(c) The student shows a lower activity, but it seems that his colleagues ignore him.
(d) The student shows low activity and there is little interaction with peers.

2. What can a student do to improve the situation shown in the figure?
(a) The student does not have to do anything, because he/she is collaborating very well.
(b) The student should continue collaborating, in order to avoid future problems, and should try to interact with peers a little more.
(c) The student has a problem of collaboration and should interact more with her/his peers.
(d) The student has a serious problem of collaboration and should interact more frequently as well as motivate her/his peers to interact more with her/him.
Questions 1 to 6 of the survey were about cases A, B, and C. The first question about each case was of type 1 presented above, and the second one was of type 2. Questions 1 and 2 were about case A, questions 3 and 4 about case B, and questions 5 and 6 about case C. Taking into account the cases described above, the answer considered correct for questions 1 and 2 (case A) is option d, for questions 3 and 4 (case B) it is option a, and for questions 5 and 6 (case C) it is option c. Questions 7 and 8 of the survey concerned the general evaluation of the tool:

7. If the tool presents your collaboration circumstances in figures similar to those of cases A, B or C, but somewhat different, do you think you would understand your collaboration circumstances? Please answer using a scale of 0-5, where 5 means you would understand them perfectly and 0 means you would not understand anything.
Table 1: Percentage of students answering correctly each question.

Case    Question type 1   Question type 2   Mean
A       86.96             60.87             73.91
B       65.22             60.87             63.04
C       43.48             52.17             47.83
Total                                       61.59
Table 2: Overall evaluation of the tool on the scale 0-5.

Question   Mean   Standard deviation
7          3.17   0.98
8          3.26   0.92
8. If the tool shows your collaboration circumstances in figures like those of cases A, B or C, but somewhat different, would you know what to do to improve your collaboration? Please answer using a scale of 0-5, where 5 means you would know perfectly what to do and 0 means you would have no idea what to do.
6.1. Results
The survey was anonymous and not compulsory. A total of 23 students answered it. The percentages of questions answered correctly are presented in Table 1. The overall evaluation of the tool (questions 7 and 8) is presented in Table 2 on the scale 0-5. As shown in Table 1, the percentage of students answering each question correctly is clearly higher than the 25% that would be expected if a student answered every question randomly. The overall 61.6% of correct answers in the survey implies that the average number of questions answered correctly was 3.70 out of 6. We therefore consider that the students correctly understood the cases presented and could identify the appropriate corrective actions. We can also see in Table 1 that the students had a lower percentage of correct answers in case C. We also calculated the frequencies of the different options for each question, and we observed that the percentages of students choosing option d in question types 1 and 2 of case C were 43.48% and 21.74%, respectively. This suggests that many students who answered wrongly in case C did so because they confused this case with case A (for which the correct option was d). This confusion is reasonable
because options c and d both represented a scenario where there is a collaboration problem with low student activity. We suspect students did not have the same confusion when answering case A because all the indicators in that case were low, while in case C some of them were low but one indicator was medium. We conclude that the students have difficulties differentiating between two similar collaboration circumstances using the VEDT. The values of the students' overall evaluation of the tool, 3.17 and 3.26, are both above the mean value of the evaluation scale (2.5). Thus, we can conclude that the students evaluated the tool positively.

7. Conclusions and future works
We have presented the research we performed to build a tool that can help tutors and students to identify and correct gaps in student behavior in the context of preventive collaborative learning. From the tutor's perspective, the tool addresses the difficulty of facing a large group of students interacting in an online learning environment while simultaneously trying to follow every student's behavior and establish a priori patterns to identify which students are working incorrectly. From the student's perspective, the tool constitutes an automatic collaborative learning tool that attempts to guide him/her toward a powerful yet simple improvement of the collaboration process. The collaboration analytics process of the tool offers a recommendation based on the student's particular collaboration circumstances. This contrasts with approaches based on a priori recommendations created before the collaborative learning experience. Our tool warns the tutor in real time about those students whose circumstances require a recommendation and offers the tutor clear information with which to communicate a comment or advice effectively to the student. The tool supports the student with meta-information about his/her collaboration circumstances. The continuous feedback that the tool receives from forum interactions updates the student's recommendation status when his/her collaboration circumstances change. The tool offers a visual explanatory decision tree that tries to provoke self-reflection (Klerkx et al., 2014) about the students' collaboration circumstances and to promote sensemaking about the collaboration process (Knight et al., 2013). The visual tree illustrates the collaboration circumstances using a hierarchical order that should help students to understand the tool's reasoning
(Duval, 2011; Klerkx et al., 2014) and should eventually help to increase confidence in the recommendation (Lacave et al., 2007). Student confidence in the tool is essential in the e-learning environment because it makes it more likely that he/she will follow the tool's advice and will thus modify his/her behavior to increase collaboration. According to the assessment, we can conclude that the students have confidence in the tool: they can easily understand the collaboration circumstances, and they could identify the appropriate corrective actions to improve the collaboration process. As future work, we could make the tool independent of the learning environment (in our case dotLRN) to increase the transferability of our approach. This could be done with a single Java application using the OpenMarkov API, the SQL database and WEKA. Since we implemented an influence diagram, the administrator can modify the collaboration model (see Figure 1) during the collaborative learning experience. However, using a new model with new attributes in the tool would require additional future work. Another line of future work is to automate the whole recommendation process. We could explore mechanisms to assist the tutor in writing a recommendation by including other indicators within the tool, or mechanisms that create the recommendation and ask the tutor for its validation. For instance, the tool could offer a list of recommendations according to the student's collaboration scenario and previous recommendations. In this case, the tool should also provide an explanation for the proposed recommendation so that students can understand the reasoning behind it. We have observed the students' difficulties in differentiating between similar collaboration circumstances.
The VEDT could highlight the collaboration circumstances with one of the three colors typically used in traffic lights (red, amber and green), where red could indicate a serious collaboration problem, green could indicate that the student is collaborating perfectly, and amber could be used for intermediate situations. This use of colors could help the students to perceive differences among collaboration circumstances. It would also be interesting to investigate how the tool could automatically modify the numerical parameters of the influence diagram after collecting collaboration data from students. We have assumed in our tool that the model parameters are independent of the domain and of the course and group of students to which the tool is applied. However, the collaboration patterns could be different, for example, on a Massive Open Online Course (MOOC)
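The proposed traffic-light coloring could be implemented as a simple mapping from the severity of the circumstances to a color. The severity rule below (counting low indicators) is an invented example of such a mapping, not a design the paper commits to.

```python
# Sketch of the proposed traffic-light coloring of the VEDT.

def traffic_light(indicators):
    """Map a student's indicator values to a red/amber/green severity color.
    The thresholds on the number of low indicators are illustrative."""
    lows = sum(1 for v in indicators.values() if v == "low")
    if lows == 0:
        return "green"   # the student is collaborating well
    if lows >= 3:
        return "red"     # serious collaboration problem
    return "amber"       # intermediate situation

color = traffic_light({"Reputation": "low", "Metric": "low",
                       "A-regularity": "medium", "Activity": "low"})
```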
and on a university undergraduate course. We should expect our tool to perform better if it can adapt its parameters to a varying environment. Our tool could thus use automatic learning methods for Bayesian networks (Neapolitan, 2004) to update the influence diagram without human assistance.

References
Anaya, A. R., Boticario, J. G., 2011a. Application of Machine Learning techniques to analyze student interactions and improve the collaboration process. Expert Systems with Applications 38, 1171–1181.
Anaya, A. R., Boticario, J. G., 2011b. Content-free collaborative learning modeling using data mining. User Modeling and User-Adapted Interaction 21, 181–216.

Anaya, A. R., Luque, M., García-Saiz, T., 2013. Recommender system in collaborative learning environment using an influence diagram. Expert Systems with Applications 40, 7193–7202.
Baghaei, N., Mitrovic, A., 2007. From modelling domain knowledge to metacognitive skills: Extending a constraint-based tutoring system to support collaboration. In: Conati, C., McCoy, K., Paliouras, G. (Eds.), UM 2007. Vol. 4511 of LNAI. pp. 217–227.
Barkley, E., Cross, K., Major, C., 2004. Collaborative Learning Techniques: A Practical Guide to Promoting Learning in Groups. San Francisco, CA: Jossey-Bass.
Bieliková, M., Šimko, M., Barla, M., Tvarožek, J., Labaj, M., Móro, R., Srba, I., Ševcech, J., 2014. Alef: From application to platform for adaptive collaborative learning. In: Recommender Systems for Technology Enhanced Learning. Springer, pp. 195–225.
Boekaerts, M., Corno, L., 2005. Self-regulation in the classroom: A perspective on assessment and intervention. Applied Psychology: An International Review 54, 199–231.

Bratitsis, T., Dimitracopoulou, A., 2006. Indicators for measuring quality in asynchronous discussion forae. In: The 12th International Workshop on Groupware, CRIWG 2006. Springer Verlag, pp. 54–61.
Bull, S., Kay, J., 2010. Open learner models. In: Advances in Intelligent Tutoring Systems. Springer, pp. 301–322.
Casamayor, A., Amandi, A., Campo, M., 2009. Intelligent assistance for teachers in collaborative e-learning environments. Computers & Education 53, 1147–1154.

Chronopoulos, T., Hatzilygeroudis, I., 2012. An advising system for supporting users of collaborative activities in LMSs. In: Fourth International Conference on Intelligent Networking and Collaborative Systems. pp. 81–88.
Collazos, C. A., Guerrero, L. A., Pino, J. A., Renzi, S., Klobas, J., Ortega, M., Redondo, M. A., Bravo, C., 2007. Evaluating collaborative learning processes using system-based measurement. Educational Technology and Society 10, 257–274.

Daradoumis, T., Martínez-Mónes, A., Xhafa, F., 2006. A layered framework for evaluating online collaborative learning interactions. International Journal of Human-Computer Studies 64, 622–635.
Drachsler, H., Verbert, K., Santos, O., Manouselis, N., 2015. Panorama of recommender systems to support learning.
Dringus, L. P., Ellis, E., 2005. Using data mining as a strategy for assessing asynchronous discussion forums. Computers & Education 45, 140–160.
Duque, R., Bravo, C., 2007. A method to classify collaboration in CSCL systems. In: Adaptive and Natural Computing Algorithms. Vol. 4431 of Lecture Notes in Computer Science. pp. 649–656.
Duque, R., Bravo, C., Ortega, M., 2013. An ontological approach to automating collaboration and interaction analysis in groupware systems. Knowledge-Based Systems 37, 211–229.
Duval, E., 2011. Attention please!: learning analytics for visualization and recommendation. In: Proceedings of the 1st International Conference on Learning Analytics and Knowledge. ACM, pp. 9–17.

Gaudioso, E., Montero, M., Talavera, L., del Olmo, F. H., 2009. Supporting teachers in collaborative student modeling: A framework and an implementation. Expert Systems with Applications 36, 2260–2265.
Heer, J., Shneiderman, B., 2012. Interactive dynamics for visual analysis. Queue 10, 26.
Howard, R. A., Matheson, J. E., 1984. Influence diagrams. In: Howard, R. A., Matheson, J. E. (Eds.), Readings on the Principles and Applications of Decision Analysis. Strategic Decisions Group, Menlo Park, CA, pp. 719–762.

Johnson, D., Johnson, R., 2005. Learning groups. In: The Handbook of Group Research and Practice. pp. 441–461.
Johnson, D. W., Johnson, R., 2004. Cooperation and the use of technology. In: Handbook of research on educational communications and technology. Taylor and Francis Group, pp. 401–424.
Klerkx, J., Verbert, K., Duval, E., 2014. Enhancing learning with visualization techniques. In: Handbook of Research on Educational Communications and Technology. Springer, pp. 791–807.
Knight, S., Buckingham Shum, S., Littleton, K., 2013. Collaborative sensemaking in learning analytics. In: CSCW and Education Workshop: Viewing Education as a Site of Work Practice, co-located with the 16th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW 2013). San Antonio, Texas.
Lacave, C., Luque, M., Díez, F. J., 2007. Explanation of Bayesian networks and influence diagrams in Elvira. IEEE Transactions on Systems, Man and Cybernetics—Part B: Cybernetics 37, 952–965.
Martínez, A., Dimitriadis, Y., Gómez, E., Jorrín, I., Rubia, B., Marcos, J. A., 2006. Studying participation networks in collaboration using mixed methods. International Journal of Computer-Supported Collaborative Learning 1, 383–408.
Martinez-Maldonado, R., Dimitriadis, Y., Martinez-Monés, A., Kay, J., Yacef, K., 2013. Capturing and analyzing verbal and physical collaborative learning interactions at an enriched interactive tabletop. International Journal of Computer-Supported Collaborative Learning 8, 455–485.

Neapolitan, R. E., 2004. Learning Bayesian Networks. Prentice-Hall, Upper Saddle River, NJ.
Nielsen, T. D., Jensen, F. V., 1999. Welldefined decision scenarios. In: Laskey, K., Prade, H. (Eds.), Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI'99). Morgan Kaufmann, San Francisco, CA, pp. 502–511.

Pearl, J., 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.
Perera, D., Kay, J., Yacef, K., Koprinska, I., 2007. Mining learners' traces from an online collaboration tool. In: Proceedings of the 13th International Conference of Artificial Intelligence in Education, Workshop Educational Data Mining. pp. 60–69.

Quinlan, J. R., 1986. Induction of decision trees. Machine Learning 1, 81–106.
Redondo, M. A., Bravo, C., Bravo, J., Ortega, M., 2003. Applying fuzzy logic to analyze collaborative learning experiences in an e-learning environment. USDLA Journal 17, 19–28.
Romero, C., Espejo, P. G., Zafra, A., Romero, J. R., Ventura, S., 2011. Web usage mining for predicting final marks of students that use moodle courses. Computer Applications in Engineering Education 21, 135–146.
Romero, C., González, P., Ventura, S., del Jesus, M. J., Herrera, F., 2009. Evolutionary algorithms for subgroup discovery in e-learning: A practical application using Moodle data. Expert Systems with Applications 36, 1632–1644.
Soller, A., Martinez, A., Jermann, P., Muehlenbrock, M., 2005. From mirroring to guiding: A review of state of the art technology for supporting collaborative learning. International Journal of Artificial Intelligence in Education 15, 261–290.
Steffens, K., 2001. Self-regulation and computer based learning. Anuario de Psicología 32, 77–94.

Strijbos, J.-W., 2011. Assessment of (computer-supported) collaborative learning. IEEE Transactions on Learning Technologies 4, 59–73.

Talavera, L., Gaudioso, E., 2004. Mining student data to characterize similar behavior groups in unstructured collaboration spaces. In: Proceedings of the Workshop on Artificial Intelligence in CSCL, 16th European Conference on Artificial Intelligence (ECAI 2004). pp. 17–23.
Wang, Q., Jin, H., Liu, Y., 2010. Collaboration analytics: Mining work patterns from collaboration activities. In: Proceedings of the 19th ACM International Conference on Information and Knowledge Management. CIKM ’10. ACM, New York, NY, USA, pp. 1861–1864.