Computers in Human Behavior xxx (2015) 1-9
Full length article
Personalized feedback for self assessment in lifelong learning environments based on semantic web

Lilia Cheniti Belcadhi
PRINCE Research Group, ISITC, Sousse University, Tunisia
E-mail address: [email protected]
Article history: Received 3 April 2015; received in revised form 20 July 2015; accepted 21 July 2015; available online xxx.

Abstract
Feedback constitutes an important component of assessment in learning environments, as it allows learners to evaluate their progress in the learning process and helps tutors personalize learning content according to learners' needs and profiles. In this paper we propose an intelligent personalized feedback framework based on Semantic Web technologies, one that provides personalized feedback for self-assessment and is appropriate for lifelong learning environments. The framework takes into consideration the level of complexity of each question in a self-assessment test in order to decide on the type of individual feedback required. This process provides accurate information about the learner's level based on the results of their own participation in the assessment. The framework is based on semantic models for user information and feedback generation that ensure interoperability and reuse of assessment and learning resources. In our approach, a personalized feedback framework based on web services uses the information contained in the learner's profile to proactively assist them, suggesting personalized feedback and helping them overcome their shortcomings in a particular field of knowledge. © 2015 Published by Elsevier Ltd.
Keywords: Personalized feedback; Web services; Ontology; Web-based assessment; Lifelong learning
1. Introduction

Several critical challenges, opportunities, and movements in learning must be considered in the development and implementation of new learning environments. These include encouraging lifelong learning, valuing both informal and formal learning, addressing the open and social dimensions of learning, and recognizing the different contexts in which learning takes place, as well as the fundamental changes in the perception, technology and use of the Web in recent years. It is also crucial to address what today's learners need. Within lifelong e-learning environments, assessment plays a crucial role, as it helps learners gauge the progress they have made in the learning process. Indeed, the main objective of assessment is to generate information, or feedback, for learners, and this feedback serves several functions: it helps learners and teachers identify which e-learning solutions to improve and how (Peterson & Irving, 2008). Several previous studies have shown that learners find
e-learning systems more engaging when assessment and feedback are inextricably linked. If assessment is an integral part of the learning process, feedback must play a central role in the evaluation process (Mayen & Savoyant, 2002) and in e-learning environments, in terms of expected results and access to information. In ongoing web-based assessment, educational feedback can directly affect what students learn and how they should proceed to be effective (Kerka & Wonacott, 2000). Tutors' support is particularly important to facilitate feedback in online learning environments (Collis, De Boar, & Slotman, 2001). In addition, researchers discuss "the practical implications of feedback in spending time, clear expectations of learners and the efficiency of the management of global communication and feedback processes" (Mayen & Savoyant, 2002). In the debate on the effectiveness of feedback in online distance education, it has been acknowledged that a traditional learning environment cannot provide strategic information at the same level as an online course, even when synchronous information is available in a classroom. These results indicate that electronic feedback in a web-based assessment process can be more effective than a traditional course (Mayen & Savoyant, 2002). Nowadays, the personalization of
feedback is increasingly becoming a crucial research topic in e-learning systems. Indeed, personalized feedback can turn the self-assessment experience into a learning experience. Although implemented in different ways, current assessment management systems share a common core weakness: the assessment process does not focus on feedback and its utility for the personalization of learning and assessment. Besides, while lifelong learning increasingly influences the university and the workplace, some critical issues still have to be worked out for it to achieve its full potential. Assessment in such environments therefore needs to provide feedback appropriate to each learner's profile, to guarantee learning progress and to address learners' assessment expectations. From this standpoint, and recognizing the inadequacy of current assessment systems in higher education in achieving performance visibility, we argue for rethinking the design of self-assessment processes so that they make use of all the information available in the learning environment and provide personalized feedback to every learner. In this paper, we describe a generic personalized feedback framework that starts from the registered user's profile to drive personalized feedback services, through which the user receives positive or negative feedback reflecting their skills. Furthermore, the approach takes into account the level of complexity of each question within a test or an exam. Our feedback framework has been integrated in a personalized web-based assessment tool using Semantic Web technologies. This paper is structured as follows: Section 2 provides insight into feedback classifications and work related to feedback personalization. In Section 3, we describe the personalized feedback framework through a use case scenario. Section 4 presents the user profile and feedback ontologies used to generate personalized feedback. In Section 5, we explain the personalized feedback generation process and its algorithm. The system architecture and web services are described in Section 6. A discussion of future perspectives concludes this paper.

2. Feedback in e-learning systems

Feedback is an important part of the assessment process, as it allows teachers and students to take action to overcome the learning difficulties that assessment tests often reveal. The concept of feedback in assessment processes has several definitions. Feedback has been defined as "information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap in some way" (Walker, 2009). If this information is capable of producing changes in the general method, it can be seen as a learning process; applied to students, the return would be feedback on the learning process with respect to specific predefined objectives. In our research, we propose an approach to customizing feedback for knowledge assessment on the Semantic Web. In this first study we present a process that provides personalized feedback to the learner within a web-based assessment system. A wide range of technologies, both generic and purpose-built, can be used to support almost all aspects of assessment and feedback, including support for the administration, management and design of authentic tasks, as well as the provision of feedback and on-screen tests.
Although there are a few examples of effective practice in this area, many institutions have not yet established sustainable technology-enhanced assessment and personalized feedback practices. Several feedback classifications are presented in the current literature (Dirks, 1997; Hancock, Shen, Forlines, & Ryall, 2005; Mason & Bruning, 2001). Effective feedback provides the learner
with two types of information (Mason & Bruning, 2001): verification, which consists in informing the learner whether their answer is correct; and elaboration, which provides hints and stimulation that guide the learner toward the correct answer. Elaboration may be informational, topic-specific or response-specific. Feedback can be classified according to the level of verification and elaboration (Kulhavy & Stock, 1989):

- No feedback: simply gives the proportion of correct answers.
- Knowledge-of-response: tells the learner whether the answer is correct or not.
- Answer-until-correct: provides verification without elaboration and requires the learner to stay on the test item until a correct answer is given.
- Knowledge-of-right-answer: provides individual verification of the question and gives the learner the correct answer.
- Topic-contingent: provides verification and general elaborative information about the target subject.
- Response-contingent: provides feedback on the answer, explaining why the correct answer is correct and why the wrong answer is wrong.

In other classifications, feedback can be immediate or delayed, and it can be presented as text, graphics, audio or video (Saul, Runardotter, & Wuttke, 2010; Vasilyeva, Puuronen, Pechenizkiy, & Rasanen, 2007). In addition, feedback can be categorized as positive or negative: while negative feedback indicates a deviation from the expected answer, positive feedback indicates that a correct answer has been provided. It is important to ensure that the feedback, whether positive or negative, remains at the level of the task and does not touch the learner's sense of identity. Great vigilance is therefore needed when providing personalized feedback.

The literature features some assessment systems that address feedback personalization. In Silva and Restivo (2012) a Feedback Module is developed that uses two visual representations: a static visualization of the knowledge domain and a dynamic, time-oriented visual representation of student performance, which helps students and teachers better understand knowledge acquisition and change classroom interactions for effective learning. In Gouli, Papanikolaou, and Grigoriadou (2002) the PASS system is described, a web-based personalized assessment system. It comprises a general feedback component that takes into account the question's parameters, such as the initial difficulty level, the assessment parameters, such as the termination criteria, and the weight of each educational material page and each prerequisite concept, denoting their importance for the outcome concept. In Lazarinis, Green, and Pearson (2010) the authors describe the iAdaptTest system, a desktop-based, modularized adaptive testing tool conforming to IMS QTI (IMS QTI), IMS LIP (IMS LIP, 2005) and XML Topic Maps in order to improve the reusability and interoperability of the data. iAdaptTest provides only a few question types, and the implemented feedback and help are rather simple and do not enable personalized support. COMPASS (COncept MaP ASSessment tool), described in Gouli, Gogoulou, Papanikolaou, and Grigoriadou (2004), is an adaptive web-based concept map assessment tool that aims at assessing learners' understanding as well as supporting the learning process. COMPASS provides different informative and tutoring feedback components, tailored to the learner's knowledge level, preferences and interaction behavior.
Besides, both adaptive assessment systems SIETTE (Conejo et al., 2004) and CosyQTI (Lalos, Retalis, & Psaromiligkos, 2005) provide adaptive testing and present the learner with questions adapted to
their current level of knowledge. However, these systems provide only limited support in terms of feedback and help. In Del Mar Sánchez-Vera, Fernández-Breis, Castellanos-Nieves, Frutos-Morales, and Prendes-Espinosa (2012) an approach is suggested for generating feedback on open questions in assessment tests by making use of ontologies and semantic annotations.

The assumption behind lifelong learner modeling is that all learning has value and most of it deserves to be acknowledged through the learner model. A likely option for formal higher education is to have non-formal and informal learning activities represented and assessed, and an open learner model can support self-directed learning and interpret learner information from different tools (Ilahi, Belcadhi, & Braham, 2013). A key challenge lies in the ontologies necessary for understanding information from various sources. Lifelong learning environments have also been addressed in other research works. In Conde, García-Peñalvo, Rodríguez-Conde, Alier, and García-Holgado (2014) a service-based framework is proposed to provide openness in e-learning platforms and respond to lifelong learners' needs. In the TRAILER project (García-Peñalvo et al., 2013) a framework for informal learning is defined, composed of several components and interfaces that make the required interaction possible. The framework comprises tools and environments for enabling learning, as well as a portfolio component in which informal, non-formal and formal learning experiences can be stored and published. Given this framework, it is possible to define a workflow that makes informal learning experiences transparent to learners and institutions in such a way that both benefit. Our work proposes a feedback generation service based on the learner profile that is appropriate for lifelong learning environments. According to the learners' objectives and skills, personalized assessment is generated and tailored to the learner's needs. The use of Semantic Web technologies and e-learning standards to describe learning and assessment resources enables reuse and interoperability in feedback generation.

3. Personalized feedback scenario

3.1. Scenario description

This section illustrates the context of our research through a use case scenario that includes personalized feedback in an assessment system. An employee in a computer applications development company accesses learning resources on software development in order to acquire programming skills. He is registered in a lifelong e-learning environment offering a set of computer science courses and learning resources. The first phase consists of the registration process, in which the student fills in a form by choosing the appropriate skill level proposed by the e-learning system for each topic (see Fig. 1). The skill level can be one of the following: Expert, Average or Beginner. In the second phase, the student is invited to browse a list of lessons related to the desired learning resources (objects), and he selects a lesson, as a group of learning objects, from the e-learning network. Besides the lesson material itself, and to ensure that the learner understands the important concepts of a field of knowledge, for example an object-oriented development language, a set of questions is selected and presented as a personalized assessment test on the chosen lesson, according to the learner's profile.
One question is displayed per page, with multiple choices, and each question has exactly one correct answer (Fig. 2: zones 1 and 2).
Fig. 1. A registration form overview.
The learner then begins to answer the questions. After choosing an answer, they click the next button (Fig. 2: zone 3). Based on the learner's answers, the system allocates a score. When the learner submits an answer, a personalized feedback popup divided into two parts is displayed (Fig. 2: zone 4). The first part (background mechanism) compares the learner's level in a given skill with the level of complexity of the question and, depending on the result, generates positive or negative feedback (a fuller description is provided in Section 5).
Fig. 2. Generating feedback scenario.
The correct answer is also displayed in this part. The second part references other learning resources or related questions to help the learner assimilate the tested topic. Whether the answer is correct or not, the system moves on to the next question once the learner acknowledges the feedback popup.

3.2. Scenario analysis and requirements

According to the proposed scenario, we can identify the following challenges.

3.2.1. Feedback target

In an e-learning context, feedback management can be targeted at an individual, a sub-group or a group of students. In our research the target individual is identified by a user profile. By directing feedback to individuals, teachers improve their ability to assess learners' skills. This is particularly visible in a lifelong learning system. The system is considered more efficient when positive feedback outnumbers negative feedback, an indication that the system is helping to raise the learner's overall level. It is therefore necessary to recommend the appropriate tools for generating personalized feedback, in which the system highlights the effectiveness of the feedback content, be it positive or negative.

3.2.2. Dynamic attribution of resources and resource interoperability

Interoperability is defined by the IEEE as "the ability of two or more systems or components to exchange information and to use the information that has been exchanged" (IEEE Standard Computer Dictionary). Interoperability is of great importance in our approach. The goal is to protect the consistency of the personalized feedback system against redundant data. Interoperability ensures that data are entered only once in our application, allowing more efficient data exchange and defining the rules of interaction among the system's web services.

3.2.3. Reuse of resources

The purpose is not simply to generate personalized feedback based on the learner's profile but to enhance the flexibility of resource administration, in terms of updating, identifying, utilizing, sharing and reusing, which remains a great challenge. Reusability can be defined here as the reuse of e-learning objects to generate feedback appropriate to the learner's skills, by giving them access to other resources (e-learning objects and/or assessment objects) to improve their level.

3.2.4. Semantic annotation of resources

In the proposed scenario, we can identify different categories of relevant entities: learners, concepts of the knowledge domain, learning resources, assessment resources, and feedback. It is thus necessary to retrieve information related to the learner and to learning and assessment resources, and to generate appropriate feedback. The Semantic Web approach enables us to solve the problem of finding information by avoiding polysemy and reducing the number of results. The Semantic Web offers tools and infrastructures for semantic representation by means of ontologies, which foster interoperability at the semantic level because an ontology provides a unique meaning for each concept and relationship. Ontologies can provide precise semantics for the learning domain, the learning activities, the different categories of stakeholders, the collected and produced content, the learning context, and all assessment and feedback components.

4. Semantic models for personalized feedback

In order to address the scenario requirements stated above, we need to maintain a number of semantic models, which not only allow us to handle and manage various types of data, but also enable various techniques of assessment and learning over heterogeneous resources. These semantic models and their features are outlined in the following.

4.1. Models description

4.1.1. User model

The feedback program generally provides several query results, many of which are likely to be inappropriate to the needs of the user. The personalized feedback process must therefore filter those results according to the information characterizing each user, so that appropriate content can be generated for the relevant services. We need timely information on the various users, who may fall into different categories (learners, groups, tutors). Information about a user is gathered in a "user profile", which can be defined as a structure for modeling and storing the information characterizing the user. This data may contain personal information, interests, preferences, skill levels and other information relevant to the system. In the field of e-learning, several models have been proposed for the user profile, including PAPI (PAPI, 2000), IMS LIP (IMS LIP, 2005), and ePortfolio (IMS, 2005). Having looked into these standards, we concluded that the one best adapted to our approach is PAPI (Public and Private Information for Learners). Its data can be classified into the following sections: personal, relations, security, preference, performance, and portfolio information. The personal category contains information such as the user's name, contacts and addresses. The relations category defines relationships between users (classmates, teachers, ...) and the security section specifies access rights. The preference section may contain any information related to the knowledge, skills or interests of the user; based on these elements, we can store data on the user's learning experience. The performance category is used to evaluate the learner's responses based on the results reported by the IMS RR data model.

4.1.2. Metadata model

The first requirement for the metadata model is the possibility to represent information granularity in the personalized feedback. In our approach, an ontology is an ideal basis for such a meta-model because it is a mechanism for representing general-purpose domains. The basic elements of the ontology domain are the ontology itself, data properties, object properties, classes and structural relationships (i.e. specialization, generalization and composition). In our case, typical multiple-choice question attributes include the question text, the number of possible answers, the number of correct answers given by the user, the number of choices students can choose from, and the choices themselves. Structural relations, namely specialization and composition, allow us to create domain models at different levels of abstraction and granularity. We can combine these levels with an intelligent user interface that helps users work on one level of abstraction or granularity and only go into the details (a lower level of abstraction or a finer level of granularity) when these details need to be described as well. Such a user interface would greatly reduce the complexity of the capture process.
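As a minimal illustration of these two models, the sketch below renders a PAPI-style profile and the listed multiple-choice question attributes as Python data classes. It is a sketch only: the field names are illustrative and do not reproduce the actual PAPI or IMS QTI element names.

from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class SkillLevel(Enum):
    BEGINNER = "B"
    AVERAGE = "A"
    EXPERT = "E"

@dataclass
class LearnerProfile:
    # PAPI "personal" section: identity and contact information
    name: str
    email: str
    # PAPI "relations" section: links to other users (classmates, teachers, ...)
    relations: Dict[str, List[str]] = field(default_factory=dict)
    # PAPI "preference" section: knowledge, skills and interests
    interests: List[str] = field(default_factory=list)
    skills: Dict[str, SkillLevel] = field(default_factory=dict)
    # PAPI "performance" section: results reported via the IMS QTI RR model
    scores: Dict[str, int] = field(default_factory=dict)

@dataclass
class MultipleChoiceQuestion:
    # The metadata-model attributes listed above for a multiple-choice question
    text: str
    choices: List[str]
    correct_choice: int        # index of the single correct answer
    level: SkillLevel          # level of complexity of the question (LQi)
    keywords: List[str] = field(default_factory=list)

profile = LearnerProfile(name="A. Learner", email="[email protected]")
profile.skills["object-oriented programming"] = SkillLevel.BEGINNER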
4.1.3. Feedback model

In order to provide personalized feedback in e-learning environments, we need to be able to select the appropriate feedback strategy and recommend appropriate feedback resources in a given context. The feedback system should be able to generate personalized feedback by descending in the information granularity presented by a Learning Object (LO), based on the LOM standard (LOM, 2002). LOM contains an abstract data model of descriptors, or elements, divided into nine categories and used to describe learning objects. In addition, the feedback system should generate item granularity, or Assessment Objects (AO), as feedback content based on the IMS QTI standard (IMS QTI). This standard is composed of two data models: the IMS QTI Assessment-Section-Item (ASI) model and the IMS QTI Results Reporting (RR) model. In the QTI standard, the multiple-choice question class can be identified as a specialization of the question class that requires students to choose one or more correct answers from a number of possible choices. There are several subtypes of the general multiple-choice question class, such as single correct answer, true/false, or multiple correct answers; these subtypes provide different default values for some attributes of the general concept. In our approach, we adopt multiple-choice questions with exactly one correct answer.

4.1.4. Domain model

Using the presented metadata model, we can develop a number of domain models. Domain model development consists of identifying the key resources, class attributes, and relations with other classes in a particular domain. Such domain models can be at different levels of granularity. The possibility of hierarchically structuring different abstraction and granularity levels, provided by the metadata model, makes it highly expressive. Through specialization, generalization and composition, we can provide a mechanism for the automatic processing of ontology domain models.

4.2. User and feedback ontologies

To describe the ontologies, we selected the Protégé open source tool, which is used to construct knowledge-based applications based on ontologies. It provides a platform for creating an ontology and forming the ontology knowledge base, and it can display and edit ontologies in a graphical mode. It also helps in building OWL-DL (Web Ontology Language - Description Logic) ontologies and in using a description logic reasoner to check the consistency of the ontology and automatically compute the ontology class hierarchy. To display the ontology graphs we used OWLViz (a plugin for the Protégé editor) to present the learner ontology and the OntoGraph plugin (which shows classes with their hierarchy and relationships) to present the feedback ontology. The user ontology and the feedback ontology are described in more detail in the following subsections.

4.2.1. User model ontology

The learner profile ontology was created in order to facilitate the extraction of the learner's personal information, needs and interests in the context of feedback personalization. The learner is requested to register and fill in a few forms with personal information such as address, country, e-mail, phone number, name, and personal skills in different fields.
Generating personalized feedback adapted to the needs of the learner requires intelligent reasoning, and it is necessary to build and maintain appropriate knowledge in the system by using
the ontology description. In the user ontology, presented in Fig. 3, we find classes such as Learner, Contact, Interests, Likes, Level and Skills. The Level class defines the learner's level for each proposed skill.

4.2.2. Feedback ontology

The development of the feedback ontology has been guided by an application ontology written in OWL and based on the OeLE platform (Del Mar Sánchez-Vera et al., 2012; Frutos-Morales et al., 2010). This ontology models the necessary concepts and relationships of the domain, such as Course, Teacher, Student, Exam, Question, Answer, question level (expert, average and beginner) and so on (see Fig. 4 below). It is composed of classes modeling the operation of personalized feedback in web-based assessment and of two types of properties, namely object properties and data type properties. We can create a number of instances for each class or concept and assign values to each instance based on its data type properties. For example, the Answer_Closed_Question class, defining the answer to a closed question, has the data type property "choice", designating the answer chosen by the learner. The object property "hasAnnotResponse" defines the relationship between the AnswerAnnotation class and the Answer_Closed_Question class. The ontology focuses, on the one hand, on the relations between Courses and Teachers and between Learner and Exam and, on the other hand, on the relations between Exam, Learner, Question_Exam, Answer_Exam, Feedback, Positive_Feedback and Negative_Feedback. It also provides the assessment perspective, as it shows the different types of questions (open and closed) and their corresponding relations. A closed question has a set of associated choices, whereas an open question has annotations. Fig. 4 also shows that annotations are associated with the answer provided by a student to a question.
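To make the ontology concrete, here is a minimal sketch of how a few of the classes and properties of Fig. 4 could be declared with Python's rdflib; the namespace URI is hypothetical and only a fragment of the model is shown.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

FB = Namespace("http://example.org/feedback#")  # hypothetical namespace
g = Graph()
g.bind("fb", FB)

# A few classes from Fig. 4; Positive_Feedback and Negative_Feedback
# specialize Feedback
for cls in ("Learner", "Exam", "Question_Exam", "Answer_Closed_Question",
            "AnswerAnnotation", "Feedback", "Positive_Feedback",
            "Negative_Feedback"):
    g.add((FB[cls], RDF.type, OWL.Class))
g.add((FB.Positive_Feedback, RDFS.subClassOf, FB.Feedback))
g.add((FB.Negative_Feedback, RDFS.subClassOf, FB.Feedback))

# The object property hasAnnotResponse links AnswerAnnotation to
# Answer_Closed_Question; the data type property "choice" stores the
# option selected by the learner
g.add((FB.hasAnnotResponse, RDF.type, OWL.ObjectProperty))
g.add((FB.choice, RDF.type, OWL.DatatypeProperty))

# An instance: a learner's answer to a closed question
g.add((FB.answer1, RDF.type, FB.Answer_Closed_Question))
g.add((FB.answer1, FB.choice, Literal("b")))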
Fig. 3. Class description of the user's profile.
5. Personalized feedback process

Based on Semantic Web technologies and e-learning standards, we propose a dynamic approach to learner assessment that enables adaptive feedback according to the learner's profile. Our personalized feedback framework provides the learner with two types of feedback: positive and negative. In the following subsections we present the metadata annotations required in our personalized feedback framework, based on e-learning standards. We then describe the process and algorithm for generating learner feedback. Finally, we briefly describe how learners are redirected to Learning Objects and Assessment Objects to access more relevant resources.

5.1. Generating personalized positive and negative feedback

To explain our approach, we use a table matching the question's level of complexity to the learner's level in a given skill. The following assumptions are to be considered:

- The learner's level in a particular skill can be Expert, Average or Beginner, abbreviated E, A or B.
- The question's level in a particular topic can likewise be E, A or B.

The personalized feedback algorithm is based on the following hypotheses:

- The assessment consists of closed (multiple-choice) questions: each question has exactly one correct answer.
- When the learner clicks on the next button (see Fig. 2 above), a popup is displayed showing the correct answer and the appropriate feedback; the system then moves on to the next question whether the answer is correct or not.
- The score associated with each question belongs to the set {0, 1}: 0 for a wrong answer, 1 for a correct answer.
- The following abbreviations are used in the algorithm: LL for the learner's level, LQi for the level of question i, and Tx for a given topic.
Fig. 4. Class description of feedback.
The algorithm behind the results in Tables 1 and 2 is presented below.
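A minimal Python sketch of this decision logic follows, under the stated hypotheses. The comparison of LL with LQi reflects the structure of Tables 1 and 2, but the feedback wording here is illustrative only.

# LL: learner's level in the tested skill; LQi: level of question i;
# both take values in {B, A, E}. The score belongs to {0, 1}.
ORDER = {"B": 0, "A": 1, "E": 2}

def personalized_feedback(ll: str, lqi: str, score: int) -> str:
    """Return a feedback message from LL, LQi and the score."""
    harder = ORDER[lqi] > ORDER[ll]   # question above the learner's level
    if score == 1:                    # Table 1: correct answer
        if harder:
            return ("Positive feedback: you answered a question above your "
                    "declared level; a skill level upgrade may be suggested.")
        return "Positive feedback: your answer is correct."
    if harder:                        # Table 2: wrong answer
        return ("Negative feedback, softened: the question is above your "
                "level; references to further resources are offered.")
    return ("Negative feedback: the question matches your declared level; "
            "review the referenced Learning Objects before continuing.")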
Table 1
Feedback displayed in case of correct answer.

Table 2
Feedback displayed in case of wrong answer.

5.2. Generating references in the feedback

As described in Section 2, a relevant part of the displayed feedback is the reference to further learning resources (Learning Objects), which is a core part of the comment. To describe the reference lookup process, we define a set of keywords for each question within a test. The system matches these keywords against Learning Objects or test items using a query on the knowledge base within the ontology. For example, if the keyword of a question is "class concept" in Java, the system looks for references to the class concept in different subjects or topics (Java, C++, OOD, ...) and for questions or tests related to the same concept. This helps the learner assimilate and understand the particular issue raised by the question. The query is based on rules represented in tuProlog (Piancastelli & Omicini).
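The tuProlog rules themselves are not reproduced here. As an illustration only, an equivalent keyword lookup can be sketched as a SPARQL query over the annotated knowledge base using Python's rdflib; the predicate names fb:hasKeyword and fb:belongsToTopic, and the ontology file name, are assumptions.

from rdflib import Graph

g = Graph()
g.parse("assessment.owl")  # hypothetical ontology file

# Find Learning Objects and test items annotated with a question's keyword,
# e.g. "class concept", across topics (Java, C++, OOD, ...)
query = """
PREFIX fb: <http://example.org/feedback#>
SELECT ?resource ?topic WHERE {
    ?resource fb:hasKeyword ?kw ;
              fb:belongsToTopic ?topic .
    FILTER(CONTAINS(LCASE(STR(?kw)), "class concept"))
}
"""
for resource, topic in g.query(query):
    print(resource, topic)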
6. Personalized feedback framework architecture

In this framework, personalization functionalities are available as Web Services that communicate via Semantic Web technologies without the need for centralized control. The architecture comprises personalized Web Services that generate feedback referring to learning resources according to the domain ontology. Based on the above scenario, we can see the need for a generic feedback framework enabling a personalized feedback process in which the learner is provided with various
functionalities from which they can choose independently (Cheniti-Belcadhi, Nenze, & Braham, 2005). This framework should be able to:

- Provide an infrastructure capable of requesting, searching and selecting learning and assessment resources, in an open network, that are relevant to the student's knowledge.
- Deliver personalized tests according to the learner's performance and preferences.
- Provide an infrastructure to search for and select the evaluation scheme most appropriate to the learner.
- Generate personalized feedback according to the skill level and the question's level of complexity proposed by the assessment service.

In the personalized feedback architecture (Fig. 5), each Web Service encapsulates a specific functionality and communicates with other services or software components via a component interface. We thus designed a flexible architecture by setting up Web Services.
Fig. 5. Architecture for personalized feedback framework.
The feedback generation service has been integrated in SWAP (Semantic Web Assessment Personalization system), already developed and used for several courses to provide personalized assessment tests (Cheniti-Belcadhi, Nenze, & Braham, 2008). The system is designed as an open architecture, which makes it compatible with open standards and protocols and adaptable to different implementation scenarios. It is composed of various services, each dedicated to a specific functionality and addressed separately (Fig. 5). The authentication of learners is completed via the Login Service, which scans the learner's parameters and transfers them to the other services:

User Profile Service: responsible for generating a learner's profile together with their history. It is based on past interactions recorded for the learner and the set of answers provided for the proposed tests. It is a monitoring service that draws on the learner's performance.

Visualization Service: responsible for displaying the resources (learning and assessment) requested by the learner, i.e. either the learning resources or the assessment resources.

Connector Service: the mediator between the Visualization Service, the User Profile Service, the test generation service and the Navigation Service. It retrieves learning and assessment resource descriptions and the Web ontology, and it gathers all the documents and sends them to the personalization services. Its main task is the conversion between the different metadata description formats used by the various service instances and the provision of a generic interface for visualization. A reasoning program applies the rules required to forward a request to the Connector Service; this program contains all the queries elaborated on our semantic models, as stated previously.

Navigation Service: provides personalized navigation maps of resources for the learner, based on the information in the learner's profile. It fetches the learning and assessment resource descriptions and the ontology from the Web and aggregates the documents for the test service. This service offers two kinds of maps: a lesson map, which contains the lessons already visited or recommended for the learner, and a test map, which gives an overview of the selected questions and the status of their evaluation. Approaches based on concept mapping (Gouli, Gogoulou, & Grigoriadou, 2003), which organize learning and assessment concepts hierarchically and form meaningful relationships between them, can be used here.

Feedback Generation Service: the feedback engine automatically produces feedback for learners. Two parts of feedback are generated: the first is based on the positive/negative feedback domain ontology and feedback annotations, while the second is based on the search for Learning Objects and Assessment Objects corresponding, respectively, to the course and test domain ontologies.
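To make the service decomposition concrete, the sketch below shows how the Feedback Generation Service could be exposed over HTTP. The paper does not prescribe an implementation technology, so the Flask framework, the route name and the JSON fields are all assumptions; the two helper functions stand in for the sketches given in Sections 5.1 and 5.2.

from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_verdict(ll, lqi, correct):
    # Part 1: positive/negative verdict (see the sketch in Section 5.1)
    return "positive" if correct else "negative"

def lookup_references(keywords):
    # Part 2: related Learning/Assessment Objects (see the query in Section 5.2)
    return []

@app.route("/feedback", methods=["POST"])
def feedback_service():
    data = request.get_json()
    return jsonify({
        "verdict": generate_verdict(data["learner_level"],
                                    data["question_level"], data["correct"]),
        "references": lookup_references(data.get("keywords", [])),
    })

if __name__ == "__main__":
    app.run()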
The test generation service in this architecture has already been tested with learners. In particular, it has been tested and validated with multiple-choice questions in a course on object-oriented programming. We carried out an experiment on the personalization of assessment; the system was operational and academically useful. A qualitative evaluation of the framework was performed, with a particular focus on usability and effectiveness. Two categories of learners, beginners and advanced, were defined based on their initial knowledge of C++
programming. The learners were observed during the experiment by a lecturer of the course. Verbal communication was initiated to get instant feedback from the learners while they were using the system, and their progress and responses to this new assessment methodology were recorded in a series of questionnaires. The learners used the system services, in particular the test and Visualization Services, which display the learning and assessment resources and evaluate the learners' answers to the proposed questions. The implementation of the feedback service and the queries on the ontological feedback model have also already been tested. The first results that emerged from the experiment with the personalized framework are positive. In particular, they indicate that the framework contributes more positively to student learning and to the assessment of academic material than the methods traditionally used in distributed learning environments.

7. Conclusion and future work

In the development of e-learning systems it is necessary to take into account individual needs and requirements as well as the resources of information technologies. This is particularly important in lifelong e-learning environments, where feedback should be aligned, as much as possible, with the learner's needs. In this paper we presented an approach to personalizing feedback in such environments based on Semantic Web technologies. We first proposed a modeling of feedback as part of the assessment process in a lifelong e-learning environment. It relies on a formal description of feedback, using ontologies, to provide an assessment object customized for a specific lifelong learner, and we described an algorithm for the feedback personalization process. Semantic models based on the IMS QTI standard are used to generate feedback personalized to the learner's specific needs. In addition, we introduced the use of a personal user profile to adapt the feedback to the learner's needs and expectations. The framework is built on open and reusable resources and components, and it can be integrated into different assessment tools to deliver a better e-learning experience to users. We also designed a flexible architecture for personalized feedback generation based on the Semantic Web. In future work we aim to extend the generation of positive and negative feedback to other types of questions, such as open questions, and to focus on the automation of feedback in massive courses.

References

Cheniti-Belcadhi, L., Nenze, H., & Braham, R. (2005). Towards a service based architecture for assessment. In Proceedings Lernen, Wissensentdeckung und Adaptivität (LWA) 2005, Saarbrücken, October 10-12 (pp. 20-25).
Cheniti-Belcadhi, L., Nenze, H., & Braham, R. (2008). Assessment personalization in the semantic web. Journal of Computational Methods in Sciences and Engineering, 8, 163-182.
Collis, B., De Boar, W., & Slotman, K. (2001). Feedback for web-based assignments. Journal of Computer Assisted Learning, 17, 306-313.
Conde, M. Á., García-Peñalvo, F. J., Rodríguez-Conde, M. J., Alier, M., & García-Holgado, A. (2014). Perceived openness of learning management systems by students and teachers in education and technology courses. Computers in Human Behavior, 31, 517-526. http://dx.doi.org/10.1016/j.chb.2013.05.023.
Conejo, R., Guzmán, E., Millán, E., Trella, M., Pérez-De-La-Cruz, J. L., & Ríos, A. (2004). SIETTE: a web-based tool for adaptive testing.
International Journal of Artificial Intelligence in Education, 14(1), 29-61.
Del Mar Sánchez-Vera, M., Fernández-Breis, J. T., Castellanos-Nieves, D., Frutos-Morales, F., & Prendes-Espinosa, M. P. (2012). Semantic web technologies for generating feedback in online assessment environments. Knowledge-Based Systems, 33, 152-165.
Dirks, M. (1997). Developing an appropriate assessment strategy: research and guidance for practice. In Proceedings of the NAU/web.97 Conference.
Frutos-Morales, F., Sánchez-Vera, M. M., Castellanos-Nieves, D., Esteban-Gil, A., Cruz-Corona, C., Prendes-Espinosa, M. P., et al. (2010). An extension of the OeLE platform for generating semantic feedback for students and teachers. Procedia - Social and Behavioral Sciences, 2(2), 527-531.
García-Peñalvo, F. J., Conde-González, M. Á., Zangrando, V., García-Holgado, A., Seoane Pardo, A. M., Alier, M., et al. (2013). TRAILER project (Tagging, recognition, acknowledgment of informal learning experiences). A methodology to make visible learners' informal learning activities to the institutions. Journal of Universal Computer Science, 19(11), 1661-1683. http://dx.doi.org/10.3217/jucs-019-11-1661.
Gouli, E., Gogoulou, A., & Grigoriadou, M. (2003). Ascertaining what the students already know. In Proceedings of ED-MEDIA 2003, World Conference on Educational Multimedia, Hypermedia & Telecommunications, Honolulu, Hawaii, USA (pp. 2377-2380).
Gouli, E., Gogoulou, A., Papanikolaou, K., & Grigoriadou, M. (2004). COMPASS: an adaptive web-based concept map assessment tool. In Proceedings of the First International Conference on Concept Mapping (pp. 295-302).
Gouli, E., Papanikolaou, K., & Grigoriadou, M. (2002). Personalizing assessment in adaptive educational hypermedia systems. In Adaptive hypermedia and adaptive web-based systems (pp. 153-163). Springer Berlin Heidelberg.
Hancock, M., Shen, C., Forlines, C., & Ryall, K. (2005). Exploring non-speech auditory feedback at an interactive multi-user tabletop. In GI '05: Proceedings of Graphics Interface 2005 (pp. 41-50).
Ilahi, M., Belcadhi, L. C., & Braham, R. (2013, November). Competence web-based assessment for lifelong learning. In Proceedings of the First International Conference on Technological Ecosystem for Enhancing Multiculturality (pp. 541-547). ACM.
IMS. (2005). IMS ePortfolio Specification. http://www.imsglobal.org/ep/.
IMS LIP. (2005). Learner information package. http://www.imsglobal.org/profiles.
IMS QTI. Question and Test Interoperability. http://www.imsglobal.org/question.
Kerka, S., & Wonacott, M. E. (2000). Assessing learners online. Practitioner file.
Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: the place of response certitude. Educational Psychology Review, 1(4), 279-308.
Lalos, P., Retalis, S., & Psaromiligkos, Y. (2005). Creating personalised quizzes both to the learner and to the access device characteristics: the case of CosyQTI. In 3rd International Workshop on Authoring of Adaptive and Adaptable Educational Hypermedia (A3EH).
Lazarinis, F., Green, S., & Pearson, E. (2010). Focusing on content reusability and interoperability in a personalized hypermedia assessment tool. Multimedia Tools and Applications, 47(2), 257-278.
LOM. (2002). LTSC learning object metadata standard. Available at http://ltsc.ieee.org.
Mason, B., & Bruning, R. (2001). Providing feedback in computer-based instruction: What the research tells us. Retrieved from http://dwb.unl.edu/Edit/MB/MasonBruning.html.
Mayen, P., & Savoyant, A. (2002). Formation et prescription: une réflexion de didactique professionnelle. In Ergonomie-self. http://www.ergonomie-self.org/self2002/mayen.pdf.
PAPI. (2000). Draft standard for learning technology - Public and private information (PAPI) for learners (PAPI learner).
Peterson, E. R., & Irving, S. E. (2008). Secondary school students' conceptions of assessment and feedback. Learning and Instruction, 18(3), 238-250.
Piancastelli, G., & Omicini, A. tuProlog 2.0: One Step Beyond. ALP Newsletter Digest, 20(1). Association for Logic Programming.
Saul, C., Runardotter, M., & Wuttke, H. D. (2010). Towards feedback personalisation in adaptive assessment. In Proceedings of the Sixth EDEN Research Workshop, Budapest.
Silva, J. F., & Restivo, F. J. (2012). An adaptive assessment system with knowledge representation and visual feedback. In Interactive Collaborative Learning (ICL), 2012 15th IEEE International Conference (pp. 1-4).
Vasilyeva, E., Puuronen, S., Pechenizkiy, M., & Rasanen, P. (2007). Feedback adaptation in web-based learning systems. International Journal of Continuing Engineering Education and Life Long Learning, 17(4-5), 337-357.
Walker, M. (2009). An investigation into written comments on assignments: do students find them usable? Assessment and Evaluation in Higher Education, 34(1), 67-78.