Available online at www.sciencedirect.com
ScienceDirect
Procedia Computer Science 126 (2018) 566–575
www.elsevier.com/locate/procedia
22nd International Conference on Knowledge-Based and Intelligent Information & Engineering Systems
An Enhanced xAPI Data Model Supporting Assessment Analytics
Azer Nouira∗, Lilia Cheniti-Belcadhi, Rafik Braham
PRINCE Research Lab ISITCom H-Sousse, Sousse University, Tunisia
Abstract

In the learning analytics field, it is highly significant to track and collect big educational data to improve the learning experience. In fact, one of the e-learning standards for data interoperability that has attracted a remarkable amount of attention in recent years is the Experience API (xAPI). In this paper, we explore the use of xAPI in the learning analytics field. Assessment data represents an important proportion of the educational data generated; when we focus on assessment, we can open up a new source of data that can be analyzed and hence contribute to the improvement of the field of learning analytics. We discuss the suitability of the xAPI standard for tracking assessment data and enhance its data model to effectively support assessment analytics. An ontological model supporting assessment analytics is proposed, based on the weaknesses of the xAPI data model from the assessment point of view. Since our proposed pattern is an ontological model, it gives us the chance to reason about the assessment data by applying logic rules written in SWRL (Semantic Web Rule Language), supporting inference mechanisms that relate the learner level to his assessment performance.

© 2018 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Selection and peer-review under responsibility of KES International.

Keywords: xAPI; Learning Analytics; Assessment Analytics; Ontology; SWRL
1. Introduction

In the learning analytics field, we need to track and collect detailed information about the learner experience and behavior. Learning environments such as Learning Management Systems (LMS) and Massive Open Online Courses (MOOC) have the capability to track and collect huge educational data about the learner experience. In fact, the use of an appropriate e-learning standard which provides services for tracking and collecting educational data is a key step. It plays a major role in deciding the amount of learner data that can be tracked. In the literature we found different e-learning standards for e-learning content interoperability [1]. As an example of those which addressed student performance data interoperability, we can cite the Sharable Content Object Reference Model (SCORM) [2]. Then, we have the IEEE Standard for Learning Technology standards family: the IEEE 1484.11.1 [3], which provides a complex data
∗ Corresponding author. Tel.: +21697888921.
E-mail address: [email protected]

1877-0509 © 2018 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Selection and peer-review under responsibility of KES International.
10.1016/j.procs.2018.07.291
model structure for tracking information on student interactions with learning content, and the IEEE 1484.11.2 [4], which provides an API allowing digital educational content and the LMS to query and share collected information. Finally, we cite the Experience API (xAPI) [5], also known as the Tin Can API, which is a recent e-learning interoperability standard widely adopted by different learning environments. The xAPI presents a flexible data model for logging data about the learner experience and performance. The xAPI specification is suitable for the learning analytics purpose, since it tracks and stores the experience and the performance of the learner (learning traces). This can help in converting educational data into useful actions to improve the learning process, which is the main objective of the learning analytics field.

However, learning environments generate a large amount of learning traces; among these traces, there are assessment traces, collaboration traces, communication traces, etc. Assessment is one of the major steps in the learning process. A successful learning environment must provide effective assessment of learners. The assessment data must be tracked, processed and analyzed in the same way as the learning data. This research area is called assessment analytics. According to Cooper in [6], assessment analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for the purpose of understanding and optimizing assessment and the environments in which it occurs. When we focus on assessment data, we can open up a new source of data, such as individual assessment results, progression results, assessment comments, assessment context, assessment achievement maps, etc., that can be analyzed and give new and different indicators to be interpreted.
For this objective, we need to track and collect a large amount of assessment data, and the choice of the e-learning standard for data interoperability must be considered a key step of the assessment analytics process. The aim of this paper is to investigate the xAPI specification with regard to covering the needs of effectively tracking the different types of assessment data. Hence, the major issues in this current research can be summarized in two major questions:

• Can the xAPI specification effectively support the tracking of the different types of assessment data?
• Can we enhance the existing xAPI data model dedicated to assessment data to support the assessment analytics objective?

This paper is structured as follows: in section 2 we describe the xAPI specification. In section 3 we present the related work, which is divided into two categories. In section 4 we perform a detailed study of xAPI from the assessment point of view in order to identify its weaknesses. In section 5 we present our enhanced xAPI data model, which effectively supports assessment analytics, and finally in section 6 we formalize a set of reasoning rules for assessment analytics.

2. The xAPI Specification

xAPI, also called the Tin Can API, is developed by the Advanced Distributed Learning Initiative (ADL) [5], and it aims at defining a data model for logging data about students' learning paths [7]. The main objective of the Tin Can API standard is to define guidelines to track, express and store statements about the experience and the performance of the learner. The xAPI specification is based on two main parts. The first part is the format of the learning activity statement and the second part is the Learning Record Store (LRS). The LRS is the element responsible for the storage and exchange of learning activity traces, presented as activity statements. The activity statement is a key part of the xAPI data model.
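As a concrete sketch of this statement format, the snippet below assembles the "Daniel answered question 4 with score 80 in 20 seconds" example as a JSON structure in Python. The activity URL and mailbox are invented for illustration; only the actor, verb and object triple is required by the specification.

```python
import json

# A minimal xAPI activity statement: the required actor-verb-object triple,
# extended with the optional "result" property. Identifiers are illustrative.
statement = {
    "actor": {"name": "Daniel", "mbox": "mailto:daniel@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
             "display": {"en-US": "answered"}},
    "object": {"id": "http://example.org/quiz/question-4",
               "objectType": "Activity"},
    # Score of 80% expressed as a scaled score, duration as ISO 8601 (20 s).
    "result": {"score": {"scaled": 0.8}, "duration": "PT20S"},
}

print(json.dumps(statement, indent=2))
```

An LRS would persist such JSON statements and return them on request for later analysis.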
All learning activities are stored as statements such as "I did this", of the form actor, verb and object, which can be extended with some optional properties like result and context. In fact, the xAPI data format can describe an activity statement with the following 11 attributes: Unique Identifier, Actor, Verb, Object, Result, Context, Timestamp, Stored (internal recording timestamp), Authority, Protocol and Attachment. The minimal descriptive information of an xAPI activity statement is the actor, verb and object triple, called the descriptive information, which can be extended for example with result and context. All the other properties are optional and categorized into metadata and complementary data. For example, the unique identifier and the protocol version are metadata. When a learner completes a learning activity, a simple human- and machine-readable activity statement is generated. An example of an activity statement tracked by the xAPI specification is "Daniel answered question 4 with score 80 in 20 seconds". The xAPI specification is flexible: beyond web-based formal learning, xAPI is capable of tracking informal learning, social learning, and real-world experiences. There is a wealth of examples of learning activities that can be tracked, including reading an article, watching a training video, using a
mobile application, or having a conversation with a mentor. As a result, the LRS stores various statements concerning content viewing and sharing, application usage, video consumption, question answering, and quiz and assessment results. It is possible to access and query the data stored in the Learning Record Store (LRS), and therefore we could provide different services such as a statistical service, a reporting service, an assessment service and semantic analysis.

3. Related work

In this section, we mention several research works using the xAPI specification in the learning analytics context. According to our research, these works can be classified into two categories. The first category deals with the limitations of the xAPI specification in covering some specific issues related to the learning context. The second category deals with tracking and analyzing the learning experience using the xAPI specification. Regarding the first category of research works, we quote the work of Chiang et al. in [8], which examines whether the xAPI standard can be a solution for two specific issues, namely the inconsistency of the learning behaviors across platforms and the limitation of learning interaction which prevents effective analytics. Hence, a case study on learning analytics using the Experience API is performed, analyzing and visualizing learning interaction data such as time of accessing the course, frequencies of clicking, etc. As a result of the case study, the authors note that xAPI is not able to solve the problem of inconsistency across platforms and insist that xAPI has the potential to record more attributes to provide more meaningful analytic results. Corbi and Burgos in [9] discuss the suitability of a recommender system with standardized monitoring engines; hence they give a case study showing the suitability of the LIME (Learning, Interaction, Mentoring, and Evaluation) recommender system model with the xAPI e-learning standard for monitoring learning activities.
In addition, they present the required adaptations and modifications that xAPI sentences need in order to build LIME-compatible inputs, and how those can be aggregated and mined in order to feed the system rules and deliver suggestions to students and learners. De Nies et al. in [10] base their contribution on the fact that the xAPI specification lacks an interoperable data model to raise xAPI to its full potential. The authors argue that the missing interoperability especially concerns the provenance of learning process logs. The solution proposed is to use the W3C PROV model, which provides the needed interoperability. Hence, a method that exposes PROV using xAPI statements is described. The second category of research works concerning the xAPI standard can be summarized in the following works. Brouns et al. in [11] propose an integrated learning analytics solution for the existing EMMA platform, which is a MOOC platform offering a huge number of courses in different languages; this way, learners may be overwhelmed by these massive course and language choices. The solution is a personalized component that enables EMMA to provide personalized feedback through dashboard solutions and tracking of the learning process. Data is tracked with the xAPI standard to perform analytics such as participants' progress, accessed materials, actively and passively participated modules, etc. Amrieh et al. in [12] use the xAPI standard to track and store educational data retrieved from an e-learning environment called Kalboard 360. The tracked data is classified into three feature sets: behavioral, demographic and academic background features. After the data collection step, the authors use three different data mining algorithms, Artificial Neural Network, Naïve Bayesian and Decision Tree classifiers, to evaluate the impact of such features on student performance.
The experimental results show that there is a strong relationship between learners' behaviors and their academic achievement. Chakravarthy and Raman in [13] use the xAPI e-learning interoperability standard to track learner activities, generate activity statements and store them in the Learning Record Store (LRS). The learning experience is retrieved from a public LMS (Learning Management System) called SCORM Cloud. The authors present various real-time examples of the learning experience tracked by xAPI, and then they identify the possible inferences that can be derived. From the state of the art, we notice that there have been several attempts by researchers related to learning analytics using the xAPI standard in the two categories. From the first category of related works presented, we can learn that the xAPI specification cannot cover several issues and suffers from some weaknesses related to its suitability for self-regulated learning, the lack of an interoperable data model, its suitability for recommender systems, the limitation of learning interaction, etc. From the second category, we can learn that the xAPI specification tracks and logs the learning behaviors and experiences of the learner during the whole learning process and does not focus, for example, on the assessment process, which is considered one of the major steps in the learning process. During our research we did not find any paper that had focused on tracking assessment data with the xAPI standard
despite the fact that, when we investigated the xAPI data model, we found a property called result that can deal with the assessment result in the context of learning. Is this due to a weakness of the xAPI standard for tracking assessment data? In our case, we will focus essentially on studying the xAPI data model that expresses statements related to assessment activities, that is to say, statements expressing the assessment experience described with the assessment result, and on investigating whether the existing xAPI specification can express and describe assessment statements with rich assessment results for the purpose of assessment analytics. In the next section we study and investigate the xAPI specification from the assessment point of view.

4. xAPI and Assessment Analytics

Assessment is both ubiquitous and very meaningful as far as students and teachers are concerned [14]. New learning environments such as MOOCs generate big assessment data (Big Data), given the massive number of courses proposed and the great number of learners enrolled. These assessment data must be tracked, processed and analyzed like the learning data. When we focus on assessment data, that means we study the various types of assessment data, such as the assessment activities, the assessment results of the learners and the assessment context. This research area is called assessment analytics. According to Ellis in [14], assessment analytics has the potential to make a valuable contribution to the field of learning analytics by extending its scope and increasing its usefulness. He further affirms that the role that assessment analytics could play in the learning process is significant, and yet it is underdeveloped and underexplored. As mentioned above, the xAPI standard provides services which can be used for collecting data describing the experience of the learner in the context of formal learning, informal learning and social learning.
Hence, this leads us to deduce that the xAPI standard is presented as an e-learning standard for data interoperability in the whole learning process; it does not focus particularly on assessment. Our objective is to examine the xAPI data model dedicated to conceiving the assessment data and to show that certain analytics in the assessment process are impossible to achieve when using xAPI as is. When we observe the xAPI data model, we note that the activity statement is composed of a minimum of three properties, namely the actor, the verb and the object. All other properties are optional. An activity statement is presented through an example in figure 1 below.
Fig. 1. Example of an xAPI learning activity statement.
To express statements related to the assessment activities, each statement is described with an optional property, the result property, which stores the assessment results in the context of e-learning. Figure 2 below describes the different metadata of the result property and identifies its characteristics.
Fig. 2. The result elements.
As we can observe from figure 2 above, the result property can be described with different metadata that record information about the assessment result, such as the score, the success, the duration and the completion. All these assessment results are very important later for the analytics purpose. Therefore, with this existing description of the assessment result, an example of a set of assessment activities can be expressed as in figure 3 below.
Fig. 3. A set of assessment activities stored in the learning record store.
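A set of stored statements like those in figure 3 can then be queried over the result elements. The sketch below illustrates this in Python; the statement payloads and scores are invented for illustration, with the result fields flattened for brevity:

```python
# Hypothetical assessment statements as stored in an LRS, each carrying
# (a simplified subset of) the result elements: score, success, completion.
statements = [
    {"actor": "Bill", "verb": "answered", "object": "quiz-1",
     "result": {"score": 0.7, "success": True, "completion": True}},
    {"actor": "Bill", "verb": "answered", "object": "quiz-2",
     "result": {"score": 0.4, "success": False, "completion": True}},
    {"actor": "Anna", "verb": "answered", "object": "quiz-1",
     "result": {"score": 0.9, "success": True, "completion": False}},
]

# All quizzes that Bill answered with full completion.
bill_completed = [s["object"] for s in statements
                  if s["actor"] == "Bill" and s["result"]["completion"]]

# Quizzes in which Bill has a passing score of more than 50%.
bill_passed = [s["object"] for s in statements
               if s["actor"] == "Bill" and s["result"]["score"] > 0.5]

print(bill_completed)  # → ['quiz-1', 'quiz-2']
print(bill_passed)     # → ['quiz-1']
```

Any analytics of this kind is limited to the five recorded result elements, which motivates the extensions discussed next.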
The existing description of the assessment result has five different elements: the score, the success, the duration, the completion and the response. Consequently, the analytic process will be based only on these elements. For example, we can collect all the quizzes that Bill answered with full completion, or the set of quizzes in which Bill has a passing score of more than 50%. But this description seems poor and general. We cannot, for example, obtain more specific information about the assessment result and thus perform further analytics, such as analytics related to the answered and unanswered questions, true and false answers, frequency of errors, the number of attempts for each assessment object, etc. For that reason, in the assessment analytics area we need to track a maximum of assessment results and data related to the assessment object. Another optional property can also be used to express statements about the learning and assessment activities: the context property. Figure 4 below shows the different elements of the context property.
Fig. 4. The context elements.
Observation of the context property of the xAPI data model (figure 4) leads us to deduce, first of all, that these elements are not related to the assessment context. In fact, all of them represent elements of the context of the learning activity, such as the instructor and the team that the statement is related to, the platform used, the language of the recorded statement, etc. No information is recorded about the context of assessment describing the form, the type or the technique of assessment. Therefore, performing analytics using the context of assessment is impossible, such as analyzing, comparing and computing the progression of the learner performance in each form of assessment (diagnostic, formative and summative), or measuring the reliability of peer assessment, etc. The investigation of the result and the context properties shows that the existing xAPI data model dedicated to conceiving assessment data suffers from several weaknesses. Therefore, performing effective assessment analytics will be hard to achieve. From our point of view, the assessment results tracked by the xAPI specification are insufficient and need to be extended and annotated, which can help later with assessment analytics. We suggest and recommend a set of metadata related to the assessment result, namely the number of correct answers of the assessment activity, the number of wrong answers of the assessment activity, the number of unanswered questions of the assessment activity, and the attempt, which means the number of attempts performed by a learner while doing an assessment activity. These elements can further describe the assessment result and hence contribute to effectively supporting assessment analytics. From the context property side, we can add metadata especially about the assessment context, such as the type of assessment, the form
of assessment and the technique of assessment. In the next section, we propose an enhanced xAPI data model that can conceive and track the assessment data more effectively for the purpose of assessment analytics, according to our recommendations. Our contribution is based on the weaknesses of the existing xAPI data model dedicated to assessment data, summarized in two major points:

• Insufficiency of information dedicated to tracking the assessment result.
• Lack of information dedicated to the assessment context.

5. Enhanced xAPI Data Model Supporting Assessment Analytics

Our objective in this section is to present an enhanced xAPI data model taking into account the weaknesses of the existing xAPI data model from the assessment analytics point of view. In order to ensure a consistent representation of our enhanced data model for assessment analytics, it is interesting to develop an ontological model for the representation of the different terms of assessment analytics. The idea behind choosing an ontological model lies in the several advantages it offers, namely the aggregation of scattered data on the web, support for inferences, and the contribution of coherence and consistency, without forgetting reusability and interoperability. For the purpose of developing our ontology, we follow the most important steps detailed in [15]: the first step is to enumerate the most important terms in our ontology through the specification of classes such as assessment type, assessment statement, etc. Then it is necessary to define the classes and their hierarchy. After that, we need to define the class properties and attributes, and finally determine the facets of these attributes. To develop our enhanced xAPI ontological model, several tools are available and can be used, such as SWOOP [16], Protégé [17] and OntoEdit [18]. In our case, we used Protégé, which offers a simple, complete and expressive graphical formalism.
It also facilitates the design activity. The main class of our proposed assessment analytics ontology for enhancing the xAPI data model is the assessment activity class (see figure 5 below). This class is linked through has-a relations with a set of classes describing the assessment activity context metadata, which is one of our contributions to enhance the existing xAPI data model to support assessment analytics. Technically, we can use the owl:ObjectProperty element. Here is an example of implementing the HasAssesForm object property using OWL (Web Ontology Language):
<owl:ObjectProperty rdf:ID="HasAssesForm">
  <rdfs:domain rdf:resource="#Assessment_Activity"/>
  <rdfs:range rdf:resource="#Assessment_Form"/>
</owl:ObjectProperty>
For instance, we cite the assessment environment class and its possible instances such as MOOC (Massive Open Online Course), LMS (Learning Management System) and PLE (Personal Learning Environment). The second class is the assessment session class, with possible attributes such as the activity id and the date of logging; it serves to ensure that no information is duplicated. We also have the assessment form class, which can be diagnostic assessment, formative assessment or summative assessment. Furthermore, we have the assessment technique class and its possible instances, namely MCQ (Multiple Choice Question), MRQ (Multiple Response Question), T/F (True or False question) and Fill in the Blanks question. Then, we have the assessment type class, which can be automated assessment, peer assessment or self-assessment. Here is an example of how to implement the AutoAsses subclass using OWL:
<owl:Class rdf:ID="AutoAsses">
  <rdfs:subClassOf rdf:resource="#Assessment_Type"/>
</owl:Class>
All these assessment context data must be tracked to enrich the existing xAPI data model for assessment analytics, because they are useful later in the analytics stage. Moving now to the most important class and the core of our ontological model, the assessment statement class, which is responsible for representing the assessment activity
Fig. 5. The enhanced ontological xAPI data model supporting assessment analytics.
which describes the learning behaviors and experience of the learner during the assessment process. The assessment statement class of the proposed ontology is able to express and formulate sentences of the form "Peter failed the quiz with a passing score of 20%", "a learner completed the quiz with 7 unanswered questions" and "Daniel attempted the quiz 3 times". This means that we are able to formulate statements of the form assessment actor, assessment verb, assessment object and assessment result. The assessment statements have 4 required properties which conform to this format; for instance, from the example "Peter failed the quiz with a completion false" we can extract four properties: the assessment actor, the assessment verb, the assessment object and the assessment result. Let us begin with the first property, the assessment actor property, which refers to who performed the action, for example Peter, that is, the learner in our context. Each assessment actor class is described by a datatype property named the unsupervised level, which is the level (prior knowledge) declared by the learner before starting the learning process, and an object property named the supervised level, which is the real and concrete level of the learner related to his performance in the assessment process, identified automatically by analyzing the assessment traces of the learner. The assessment actor class can be further described and annotated using some classes of the FOAF ontology [19], an ontology describing individuals, their activities and their relations with other people, defining agents and groups. As examples of classes that can be used, we cite FOAF:Agent, FOAF:Group and FOAF:OnlineAccount; as attributes we can use the Name and the Mbox of the assessment actor. The second property is the verb property, which is a key part of an assessment analytics sentence; it describes the action performed by the learner during the assessment process. As instances of assessment verbs we may cite: answered, completed, attempted and scored.
Figure 5 above shows that the verb class is described by two attributes. The first one is the value of the verb and the second one is the language of the verb.
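Putting the result and context extensions proposed in section 4 together, an enhanced assessment statement could be serialized as sketched below. The field names are our illustrative naming and are not part of the xAPI specification; the values are invented for the example:

```python
# Sketch of an enhanced assessment statement with the proposed metadata.
enhanced_statement = {
    "assessment_actor": {"name": "Peter", "unsupervised_level": "intermediate"},
    "assessment_verb": {"value": "failed", "language": "en-US"},
    "assessment_object": {"id": "quiz-7", "difficulty": "medium"},
    "assessment_result": {
        # Elements of the existing xAPI result property.
        "score": 0.2, "success": False, "completion": True, "duration": "PT300S",
        # Proposed extensions: attempts and answer breakdown.
        "attempt": 3, "nb_correct": 4, "nb_wrong": 9, "nb_unanswered": 7,
    },
    "assessment_context": {
        # Proposed assessment context metadata.
        "form": "summative", "type": "automated", "technique": "MCQ",
        "environment": "MOOC",
    },
}

# The answer breakdown is internally consistent: 4 correct out of
# 4 + 9 + 7 = 20 questions gives the recorded score of 0.2.
total = sum(enhanced_statement["assessment_result"][k]
            for k in ("nb_correct", "nb_wrong", "nb_unanswered"))
print(total)  # → 20
```

Such a payload carries everything the analytics stage needs: the result breakdown for error-frequency analysis and the context fields for per-form or per-technique comparisons.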
The third property is the assessment object, which forms the third part of the assessment statement and refers to what was experienced in the action defined by the verb, e.g. the quiz. The assessment object class can be annotated and described using some attributes of the LOM (Learning Object Metadata) [20] and Dublin Core [21] standards, and can be described more thoroughly by some other attributes like the coefficient, the module of each assessment object and its level of difficulty. Finally, the assessment result class is one of the main classes of our enhanced xAPI ontological model, since it contains several datatype properties that record information about the assessment result. For instance, we have the score, the completion, the duration and the success, as presented in the existing xAPI data model, and our proposed metadata to further describe the assessment result, namely the attempt, the number of correct answers, the number of wrong answers and the number of unanswered questions. Technically, this can be implemented using the owl:DatatypeProperty element. Here is an example of the implementation of the Score datatype property:
<owl:DatatypeProperty rdf:ID="Score">
  <rdfs:domain rdf:resource="#Assessment_Result"/>
  <rdfs:range rdf:resource="http://www.w3.org/2001/XMLSchema#float"/>
</owl:DatatypeProperty>
This is particularly helpful for future assessment analytics. The assessment result class should be described with rich metadata, since our objective is to conceive a complete, correct and flexible ontological assessment analytics model. These bits of information are important and should be recorded for the purpose of assessment analytics. Our ontological model for assessment analytics is a consistent and flexible model. In fact, it aggregates the assessment data scattered on the web, such as the assessment activities, the assessment context and the assessment results. Thus we can describe a logic formalism for assessment analytics, which gives us the chance to reason about the assessment data by applying logic rules supporting inference mechanisms related, for example, to the learner level and the level of difficulty of the assessment object.

6. Assessment analytics reasoning

The inclusion of ontologies for the assessment analytics purpose is an interesting proposal, and it leads us to formalize the concepts and relationships involved. To generate a set of inferences related to assessment analytics, we will define and execute reasoning rules over information distributed essentially among the following concepts: Assessment Actor, Assessment Object, Assessment Result and Assessment Context. Hence, a formal description of assessment analytics can be generated. In our case, we will use SWRL (Semantic Web Rule Language), a combination of the Rule Mark-up Language (RuleML) and OWL (Web Ontology Language) [22], which is intended to be the rule language of the semantic web. SWRL includes a high-level abstract syntax for Horn-like rules. All rules are expressed in terms of OWL concepts (classes, properties and individuals). Rules in SWRL are implication rules of the following form:

    Antecedent ⇒ Consequent
The SWRL rule describes a conjunction of a list of antecedents representing the different assessment aspects responsible for detecting the concrete level of the learner. In fact, one of the most powerful features of SWRL is its ability to support a range of built-ins [23]. A built-in is a predicate that takes one or more arguments and evaluates to true if the arguments satisfy the predicate. For example, the greaterThan built-in accepts two arguments and returns true if the first argument is greater than the second one. Each assessment actor (learner) has a level of two types: the unsupervised level and the supervised level. The unsupervised level is the level declared by the learner before starting the learning process, whereas the supervised level is the concrete level of the learner according to his performance during the assessment process. Indeed, some learners in a learning environment may declare wrong information about their real level. The set of rules presented below generates inferences about the real and concrete level of the learner by analyzing his assessment result traces.
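To make the built-in semantics concrete, here is a minimal Python sketch. The names mirror the swrlb built-ins used in the rules below; the implementation is only illustrative, not a SWRL engine.

```python
# Illustrative sketch of SWRL built-ins as plain predicates: each
# built-in takes arguments and evaluates to true or false.
# Names mirror the swrlb: built-ins used in our rules.
SWRLB = {
    "greaterThan": lambda a, b: a > b,
    "lessThan":    lambda a, b: a < b,
    "equal":       lambda a, b: a == b,
    # booleanNot(x, y) holds when x is the negation of y
    "booleanNot":  lambda a, b: a == (not b),
}

print(SWRLB["greaterThan"](12, 10))       # -> True
print(SWRLB["equal"]("easy", "easy"))     # -> True
print(SWRLB["lessThan"](5, 5))            # -> False (strict comparison)
```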
Deducing that the real level of a learner is not advanced but beginner (Advanced ⇒ Beginner):

    AssessmentActvity(?a) ∧ HasAssesForm(?a,?f) ∧ swrlb:equal(?f,"Formative Assessment") ∧
    HasAssesSttm(?a,?s) ∧ HasAsseActo(?s,?ac) ∧ HasSupervisedLev(?ac,?sp) ∧
    Unsupervised_Level(?ac,?ul) ∧ swrlb:equal(?ul,"advanced") ∧
    HasAssesObj(?s,?ob) ∧ HasOtherAttributes(?ob,?oat) ∧
    LevelOfDifficulty(?oat,?level_difficulty) ∧ swrlb:equal(?level_difficulty,"easy") ∧
    HasAssesRslt(?s,?r) ∧ Completion(?r,?completion) ∧ swrlb:booleanNot(?completion, false) ∧
    Success(?r,?success) ∧ swrlb:booleanNot(?success, false) ∧
    Duration(?r,?duration) ∧ swrlb:greaterThan(?duration, 10) ∧
    attempts(?r,?attempts) ∧ swrlb:greaterThan(?attempts, 1) ∧
    Score(?r,?score) ∧ swrlb:lessThan(?score, 5) ∧
    Nb_wrong_answer(?r,?nb_wrong) ∧ swrlb:greaterThan(?nb_wrong, 15)
    → Beginner(?sp)

This rule states that, for an assessment actor (learner) and an assessment object, a list of conditions must hold in order to conclude that the real level of the learner is beginner and cannot be advanced. These conditions are: the unsupervised level of the learner is advanced, the level of difficulty of the assessment object is easy, the assessment form is formative, and the assessment result is summarized as follows: the completion is false, the success is false, the duration is greater than 10, the number of attempts is greater than 1, the score is less than 5 and the number of wrong answers is greater than 15. Technically, several SWRL built-ins, preceded by the namespace qualifier swrlb, are used in our rule to reason about the assessment results such as the score, the completion, the duration and the attempts. From all these conditions we can conclude that the real and concrete level of the learner is certainly beginner, meaning that the supervised level is beginner. Below, we present further rules which determine the concrete level of the learner for other possible combinations of learner levels: (Beginner ⇒ Advanced) and (Beginner ⇒ Intermediate).
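The antecedent of the rule above can be paraphrased as a plain predicate over a single assessment trace. The following sketch is only an illustration: the field names and the sample trace are hypothetical, while the thresholds are those of the SWRL rule.

```python
# Illustrative paraphrase of the (Advanced => Beginner) rule antecedent
# as a Python predicate over one assessment trace. Field names are
# hypothetical; the thresholds are the ones used in the SWRL rule.
def deduced_beginner(trace: dict) -> bool:
    """True when a learner who declared 'advanced' shows beginner-level
    performance on an easy, formative assessment object."""
    return (
        trace["assessment_form"] == "Formative Assessment"
        and trace["unsupervised_level"] == "advanced"
        and trace["level_of_difficulty"] == "easy"
        and trace["completion"] is False        # did not complete
        and trace["success"] is False           # failed
        and trace["duration"] > 10
        and trace["attempts"] > 1
        and trace["score"] < 5
        and trace["nb_wrong_answers"] > 15
    )

sample = {
    "assessment_form": "Formative Assessment",
    "unsupervised_level": "advanced",
    "level_of_difficulty": "easy",
    "completion": False, "success": False,
    "duration": 12, "attempts": 2, "score": 3, "nb_wrong_answers": 18,
}
print(deduced_beginner(sample))  # -> True: supervised level is Beginner
```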
Deducing that the real level of a learner is not beginner but advanced (Beginner ⇒ Advanced):
    AssessmentActvity(?a) ∧ HasAssesForm(?a,?f) ∧ swrlb:equal(?f,"Formative Assessment") ∧
    HasAssesSttm(?a,?s) ∧ HasAsseActo(?s,?ac) ∧ HasSupervisedLev(?ac,?sp) ∧
    Unsupervised_Level(?ac,?ul) ∧ swrlb:equal(?ul,"Beginner") ∧
    HasAssesObj(?s,?ob) ∧ HasOtherAttributes(?ob,?oat) ∧
    LevelOfDifficulty(?oat,?level_difficulty) ∧ swrlb:equal(?level_difficulty,"Hard") ∧
    HasAssesRslt(?s,?r) ∧ Completion(?r,?completion) ∧ swrlb:booleanNot(?completion, true) ∧
    Success(?r,?success) ∧ swrlb:booleanNot(?success, true) ∧
    Duration(?r,?duration) ∧ swrlb:lessThan(?duration, 5) ∧
    attempts(?r,?attempts) ∧ swrlb:equal(?attempts, 1) ∧
    Score(?r,?score) ∧ swrlb:greaterThan(?score, 15) ∧
    Nb_wrong_answer(?r,?nb_wrong) ∧ swrlb:lessThan(?nb_wrong, 5)
    → Advanced(?sp)

Deducing that the real level of a learner is not beginner but intermediate (Beginner ⇒ Intermediate):
    AssessmentActvity(?a) ∧ HasAssesForm(?a,?f) ∧ swrlb:equal(?f,"Formative Assessment") ∧
    HasAssesSttm(?a,?s) ∧ HasAsseActo(?s,?ac) ∧ HasSupervisedLev(?ac,?sp) ∧
    Unsupervised_Level(?ac,?ul) ∧ swrlb:equal(?ul,"Beginner") ∧
    HasAssesObj(?s,?ob) ∧ HasOtherAttributes(?ob,?oat) ∧
    LevelOfDifficulty(?oat,?level_difficulty) ∧ swrlb:equal(?level_difficulty,"Hard") ∧
    HasAssesRslt(?s,?r) ∧ Completion(?r,?completion) ∧ swrlb:booleanNot(?completion, false) ∧
    Success(?r,?success) ∧ swrlb:booleanNot(?success, true) ∧
    Duration(?r,?duration) ∧ swrlb:lessThan(?duration, 7) ∧
    attempts(?r,?attempts) ∧ swrlb:equal(?attempts, 1) ∧
    Score(?r,?score) ∧ swrlb:greaterThan(?score, 12) ∧
    Nb_wrong_answer(?r,?nb_wrong) ∧ swrlb:lessThan(?nb_wrong, 7)
    → Intermediate(?sp)

Deducing the concrete level of the learner offers the opportunity to deliver exact and appropriate feedback for each learner level. Furthermore, it can be very useful for improving the learning process
by establishing a personalized learning experience, providing recommendations, adapting the learning content and launching interventions by the tutor according to the concrete level of each learner.

7. Conclusions and Future Work

Tracking and analyzing assessment data during the assessment process is fundamental in the educational context. In this paper, we investigated the xAPI data model and studied its capability to track and store assessment data. Specifically, our contribution was to enhance the existing data model by proposing an ontological data model supporting assessment analytics, based on enriching the assessment result and adding the assessment context. Using this model gives the opportunity to perform the process of assessment analytics effectively, since it aggregates and stores several kinds of assessment data. Concerning our future work, we will propose a semantic web based architecture for assessment analytics which uses our proposed ontological model, and we will validate the proposed rules by executing them on real assessment data extracted from a massive learning platform.

References

[1] Friesen, N. (2005) Interoperability and learning objects: An overview of e-learning standardization. Interdisciplinary Journal of E-Learning and Learning Objects, vol. 1, no. 1, pp. 23-31.
[2] Bohl, O., Scheuhase, J., Sengler, R., et al. (2002) The sharable content object reference model (SCORM): a critical review. In: Computers in Education, Proceedings, International Conference on. IEEE, pp. 950-951.
[3] IEEE. (2004) IEEE 1484.11.1, Draft 5 Draft Standard for Learning Technology Data Model for Content Object Communication.
[4] IEEE. (2003) IEEE 1484.11.2/D2 Draft Standard for Learning Technology - ECMAScript Application Programming Interface for Content to Runtime Services Communication.
[5] Advanced Distributed Learning (ADL). (2013) Experience API. Version 1.0.1, http://www.adlnet.gov/wp-content/uploads/2013/10/xAPI_v1.0.1-2013-10-01.pdf.
[6] Cooper, A. (2015) Assessment Analytics. In EUNIS E-learning Task Force Workshop.
[7] Kelly, D. and Thorn, K. (2013) Should Instructional Designers care about the Tin Can API? eLearn, 1.
[8] Chiang, C. F., Tseng, H. C., Chiang, C. C. and Hung, J. L. (2015) A case study on learning analytics using Experience API. In Society for Information Technology and Teacher Education International Conference, pp. 2273-2278. Association for the Advancement of Computing in Education (AACE).
[9] Corbi, A. and Burgos, D. (2014) Review of Current Student-Monitoring Techniques used in eLearning-Focused Recommender Systems and Learning Analytics: The Experience API and LIME Model Case Study. IJIMAI, 2(7), 44-52.
[10] De Nies, T., Magliacane, S., Verborgh, R., Coppens, S., Groth, P., Mannens, E. and Van de Walle, R. (2013) Git2PROV: exposing version control system content as W3C PROV. In Proceedings of the 12th International Semantic Web Conference (Posters and Demonstrations Track), Volume 1035, pp. 125-128. CEUR-WS.org.
[11] Brouns, F., Tammets, K. and Padrón-Nápoles, C. L. (2014) How can the EMMA approach to learning analytics improve employability?
[12] Amrieh, E. A., Hamtini, T. and Aljarah, I. (2015) Preprocessing and analyzing educational data set using X-API for improving students' performance. In Applied Electrical Engineering and Computing Technologies (AEECT), 2015 IEEE Jordan Conference on, pp. 1-5. IEEE.
[13] Chakravarthy, S. S. and Raman, A. C. (2014) Educational Data Mining on Learning Management Systems Using Experience API. In Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp. 424-427. IEEE.
[14] Ellis, C. (2013) Broadening the scope and increasing the usefulness of learning analytics: The case for assessment analytics. British Journal of Educational Technology, 44(4), 662-664.
[15] Noy, N. F. and McGuinness, D. L. (2005) Développement d'une ontologie 101: Guide pour la création de votre première ontologie. Stanford University, Stanford. Translated from English by Anila Angjeli. http://www.bnf.fr/pages/infopro/normes/pdf/no-DevOnto.pdf.
[16] Swoop. (2007) http://semanticweb.org/wiki/Swoop.html.
[17] Protégé. (2016) http://protege.stanford.edu.
[18] Sure, Y., Angele, J. and Staab, S. (2002) OntoEdit: Guiding Ontology Development by Methodology and Inferencing. In Proc. of the International Conference on Ontologies, Databases and Applications of Semantics (ODBASE 2002), University of California, Irvine, USA, Vol. 2519 of LNCS, pp. 1205-1222.
[19] Brickley, D. and Miller, L. (2014) FOAF Vocabulary Specification 0.99, Namespace Document 14 January 2014 - Paddington Edition. Retrieved from http://xmlns.com/foaf/spec.
[20] IEEE Learning Technology Standards Committee. (2002) Draft Standard for Learning Object Metadata.
[21] The Dublin Core Metadata Initiative. (2001) DC-Library Application Profile, Dublin Core.
[22] Horrocks, I., Patel-Schneider, P. F., Boley, H., Tabet, S., Grosof, B., Dean, M., et al. (2004) SWRL: A Semantic Web Rule Language Combining OWL and RuleML. W3C Member Submission, 21, 79.
[23] Built-Ins for SWRL. (2004) http://www.daml.org/2004/04/swrl/builtins.html.