High-level brokerage services for the e-learning domain


Computer Standards & Interfaces 25 (2003) 303–327. www.elsevier.com/locate/csi

L. Anido *, J. Rodríguez, M. Caeiro, J.M. Santos

ETSI Telecomunicación, Campus Universitario s/n, E-36200 Vigo, Spain

Abstract

A sizeable percentage of OMG activity is focused on standardizing services and facilities in specific vertical markets through Domain Task Forces. This paper presents a contribution to this process in the e-learning domain, where an important standardization effort is currently being carried out. We describe the different stages involved in the definition of software services for specific application domains using an MDA-oriented methodology proposed by the authors. The outcome of the application of this methodology is four specifications for distributed and interoperable high-level brokerage environments where learning objects can be publicized, located and retrieved.
© 2003 Elsevier Science B.V. All rights reserved.

Keywords: Domain CORBA facility; MDA; E-learning; Standardization; Educational brokerage

1. Introduction The Object Management Group (OMG) is an open membership, not-for-profit consortium that produces and maintains computer industry specifications for interoperable enterprise applications. OMG’s own middleware platform is CORBA. Nowadays, one of the hot topics at the OMG is the specification of services and facilities in specific vertical markets through Domain CORBA Facilities. The application of CORBA technologies improves standardization in a specific domain through the definition of those interfaces that must be provided by the core software components used to build applications in the domain. This paper describes a draft proposal for a Domain

* Corresponding author. E-mail addresses: [email protected] (L. Anido), [email protected] (J. Rodríguez), [email protected] (M. Caeiro), [email protected] (J.M. Santos).

CORBA Facility for educational brokerage that defines the software services needed in an intermediation framework for learning objects. E-learning standardization (Section 2) is currently a very active process involving major actors. Domain CORBA Facilities are defined using the OMG's Interface Definition Language (IDL). IDL itself is not tied to any specific language environment or platform; this is what made it possible for ISO to adopt IDL as a standard without any specific reference to CORBA. A set of IDL modules containing specifications of IDL interfaces, valuetypes and other datatypes is a declarative, syntactic model of a system. Such a model can be used to reason about the validity of relationships among the specified entities, using the rules that govern relationships among IDL-declared entities (containment, inheritance, etc.). An IDL specification is an object model that can be implemented on a CORBA platform, which will implicitly verify the syntactic validity of any attempt to use any part of the system.

0920-5489/03/$ - see front matter © 2003 Elsevier Science B.V. All rights reserved. doi:10.1016/S0920-5489(03)00005-9


L. Anido et al. / Computer Standards & Interfaces 25 (2003) 303–327

Nevertheless, such a specification does not contain much formal information about the meaning of the operations of its interfaces or of the elements of the declared datatypes, nor about the constraints that apply to them. In traditional CORBA specifications, such information has been included as a normative but informal description in English. A well-conceived facility is based on an underlying semantic model that is independent of the target platform. Nevertheless, that model may never be made explicit, and this is the case with OMG domain specifications: for most of them, the model is not expressed separately from the IDL interfaces. Because their models are hidden, these facilities have received neither the recognition nor the widespread implementation and use that they deserve outside the CORBA world. The OMG's Model Driven Architecture (MDA) [18] is a multiplatform specification approach that tries to overcome these drawbacks. This paper proposes (Section 3) an MDA-oriented methodology to produce standardized vertical facilities with a visible and separate underlying conceptual model. The different stages involved in this process yield a rich semantic model useful both to developers of implementations compliant with the facility and to users of the defined services. The whole process defined by the proposed methodology is developed through Sections 4–8. The outcome is a set of IDL specifications that define the behavior of distributed, standards-driven and interoperable brokerage systems for learning objects. Finally, drawing on our experience developing a product compliant with these specifications, a set of best-practice guidelines is included in Section 9.
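The contract role that IDL plays, declaring operation signatures whose use a platform can check syntactically while saying nothing about their meaning, can be illustrated outside CORBA. The following Python sketch uses abstract base classes as a rough analogy only; every name in it is ours and does not come from any OMG specification:

```python
from abc import ABC, abstractmethod

# Illustrative analogy only: like an IDL interface, the abstract class
# declares operation signatures but says nothing about their semantics.
class LearningObjectBroker(ABC):
    @abstractmethod
    def publish(self, metadata: dict) -> str:
        """Register a learning object description; returns its identifier."""

    @abstractmethod
    def search(self, query: dict) -> list:
        """Return identifiers of objects whose metadata match the query."""

# A concrete class must supply every declared operation, or instantiation
# fails -- the in-language counterpart of the platform's syntactic check.
class InMemoryBroker(LearningObjectBroker):
    def __init__(self):
        self._store = {}

    def publish(self, metadata):
        oid = f"lo-{len(self._store) + 1}"
        self._store[oid] = metadata
        return oid

    def search(self, query):
        return [oid for oid, md in self._store.items()
                if all(md.get(k) == v for k, v in query.items())]
```

A subclass that omits `publish` or `search` cannot be instantiated, but what `publish` actually means still has to be stated in prose; that semantic gap is exactly what the MDA approach described above addresses.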

2. Standardization in the e-learning domain

The e-learning standardization process is an active, continuously evolving process that will last for years to come, until a clear, precise, and generally accepted set of standards for education-related systems is developed. Among the main contributors to this effort are the IEEE's Learning Technology Standardization Committee (LTSC), the IMS Global Learning Consortium, the Aviation Industry CBT Committee (AICC), the US Department of Defense's Advanced Distributed Learning (ADL) initiative, and the projects Alliance of Remote Instructional Authoring and Distribution Networks for Europe (ARIADNE), Getting Educational Systems Talking Across Leading Edge Technologies (GESTALT), PROmoting Multimedia access to Education and Training in EUropean Society (PROMETEUS), European Committee for Standardization, Information Society Standardization System, Learning Technologies Workshop (CEN/ISSS/LT), Gateway to Educational Materials (GEM), and Education Network Australia (EdNA). The IEEE's LTSC is the institution that is actually gathering recommendations and proposals from the other learning standardization institutions and projects. Specifications approved by the IEEE go through a more rigorous process to become ANSI or ISO standards; in fact, an ISO/IEC JTC1 Standards Committee for Learning Technologies, SC36, was approved in November 1999. Further information and references for all these institutions can be found in [1]. The outcomes of these standardization efforts can be grouped into two levels:

1. Specification of the information models involved. Several proposals specify the format, syntax and semantics of the data to be transferred among heterogeneous platforms (e.g. courses, learner profiles, evaluation objects, etc.).
2. Specification of the architectures, software components and provided interfaces. So far, results have been scarce. In any case, some proposals have already been made for the software components responsible for managing the information models of the first level.

2.1. Standards at two levels

2.1.1. First level of standardization: data and information models

The most mature results correspond to this first level. In most cases, XML is used to define the supporting information models enabling interoperability in WWW settings. Standards at this level can be seen as common specifications that must be used by different vendors to produce learning objects. Fig. 1 (left side) illustrates this concept using an example from the car manufacturing environment. In that setting, common specifications define standards to make, for


Fig. 1. Standardization in car manufacturing. First (left) and second (right) levels.

example, tires for car wheels. As shown in the figure, the standard allows different vendors to produce tires that can be used by different cars. In the same way, common specifications for learning objects would allow their use by different educational software tools. Relevant specifications at this level address: (1) metadata: information used to describe, as precisely as possible, educational contents; (2) learner profiles and records: information that characterizes learners, their knowledge and preferences; (3) educational content organization: data models to describe static and dynamic course structure; and (4) other standards addressing content packaging to facilitate course transfer among institutions, question and test interoperability, learner tracking, competency definitions, and many other aspects that are still in their early definition stages. A comprehensive survey of the e-learning standardization process can be found in Ref. [1].
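To give a feel for the kind of record a first-level metadata standard prescribes, the sketch below encodes a handful of LOM-like attributes. The field names are simplified stand-ins of our own invention, not the normative LOM element names:

```python
# Hypothetical, simplified metadata record loosely inspired by LOM;
# real LOM defines a normative element hierarchy, vocabularies and
# field lengths that this sketch does not reproduce.
course_metadata = {
    "general": {
        "title": "Introduction to Databases",
        "language": "en",
        "keywords": ["SQL", "relational model"],
    },
    "lifecycle": {"author": "J. Doe", "version": "1.0"},
    "rights": {"cost": True, "terms_of_distribution": "site licence"},
    "educational": {                     # pedagogical attributes
        "interaction_style": "expositive",
        "typical_age_range": "18-25",
    },
    "technical": {"format": "text/html"},
}

def matches(metadata, **criteria):
    """Flat exact-match search over the nested record (illustrative only)."""
    flat = {k: v for section in metadata.values() for k, v in section.items()}
    return all(flat.get(k) == v for k, v in criteria.items())
```

A record like this is what makes vendor-neutral searching possible: any tool that agrees on the field vocabulary can evaluate `matches(course_metadata, language="en")` without knowing who produced the resource.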

2.1.2. Second level of standardization: common software components and open architectures

At this level, standards define the expected behavior of the software components responsible for managing learning objects in online environments. Revisiting the car manufacturing example above, second-level standards would specify the interface of the robots that assemble tires on a production line. These specifications let different manufacturers produce robots with the same behavior (cf. right side of Fig. 1). Performance may vary, but the expected behavior as defined by the specification (top of the figure) must be implemented by the robots of different manufacturers (bottom of the figure). Therefore, a production line could be composed of robots developed by different manufacturers, provided they are compliant with the defined specification. In the same way, software interfaces for educational components would allow us to build new online learning systems without developing them from scratch, and would also provide interoperability among heterogeneous systems at runtime. To date, only some institutions have developed architectures that contain common components for a generic learning environment. Available proposals either have not defined interfaces for the proposed architecture components [11] or do not cover the whole functionality needed in a complete e-learning environment [8].

2.2. Metadata and brokerage in the e-learning domain

2.2.1. Educational metadata

Metadata is one of the most prolific fields in the first level of e-learning standardization. Many of the organizations and projects presented in the previous section have made proposals in this area, and a consensus that will serve as the basis for new, generally accepted recommendations and standards is expected in the near future.


Basically, metadata are data about data: descriptions, properties and other information that characterize objects in order to simplify their use and management. In our case, educational metadata provide information about educational resources. An educational resource is any entity that can be used or referred to along a learning process; multimedia contents, books, manuals, programs, tests, software applications, tools, people and organizations are all examples of educational resources. As the number of available educational resources keeps growing, the need for metadata becomes apparent: the lack of information about the properties, location or availability of a resource can make it unusable. This situation is even more patent in an open, unstructured environment like the Internet. Metadata contribute to solving this problem by providing a standard and efficient way to characterize resource properties. In this way, resource location, sharing, construction and customization are made simpler.

One of the main contributors to the definition of educational metadata is the IEEE's LTSC. The IEEE Learning Object Metadata (LOM) specification [13] defines the syntax and semantics of learning object metadata: the attributes required to fully and adequately describe a learning object, including element names, definitions, data types, taxonomies, vocabularies, and field lengths. LOM focuses on the minimal set of attributes needed to allow learning objects to be managed, located and evaluated. Relevant attributes include the type of object, author, owner, terms of distribution and format. Where applicable, learning object metadata may also include pedagogical attributes such as teaching or interaction style, grade level, mastery level and prerequisites. Other available proposals for educational metadata are based on LOM or extend its base schema. Fig. 2 shows the relationships among LOM and other proposals by the above-referenced institutions. The interested reader can find an exhaustive study of educational metadata in Ref. [4].

Fig. 2. Relationships among educational metadata proposals.

2.2.2. Brokerage for educational systems

Efficient searching and location services for educational contents are a key aspect of distributed, open computer-based educational systems. For this to be possible, we need an efficient description of the resources to be located. The previous section introduced the main trends in the description of e-learning resources through standardized metadata. In this section, we show how different institutions use these metadata to offer services for searching and locating educational resources.

In the present-day Internet environment, classical search engines like Yahoo!, AltaVista or Excite do not provide adequate support for computer-based, distributed education. As discussed below, a new approach to resource searching, based on the brokerage concept, provides a better service. Brokerage enriches traditional searching services with resource acquisition, distribution and billing, and adapts naturally to a scenario with multiple, independent content providers. It permits clients to locate, select and access content providers in a rapid and efficient way. Furthermore, brokerage also benefits content providers, because it offers support for marketing, customer service, content delivery, and even accounting and billing.

In the next paragraphs, we discuss the most outstanding contributions to educational resource brokerage from the institutions and organizations interested in the standardization of computer-based education. Brokerage, as defined here, is implemented only by GESTALT. Other systems, like GEM, ARIADNE or EdNA, offer a front end to a relational database where resources are maintained. For GESTALT, searches imply a three-step procedure where the broker role is clearly identified. The broker


receives student queries and tries to find the corresponding resources in other modules; when resources are located, the broker returns them to the final user. This intermediation step is clearly missing in the other approaches. The basic difference between GEM and ARIADNE is the distributed nature of ARIADNE. The relationship between brokerage and metadata definitions is obvious: all these systems have their own educational metadata schemes or have contributed to the definition of external specifications.

Within the GESTALT initiative there is a brokerage service for educational resources, the Resource Discovery Service (RDS), which helps users select courses and the preferred institution from which to access them. GESTALT's RDS allows users to explore which courses and modules are available from every registered institution. Following the concepts and architecture developed within the GAIA project, RDS is composed of a CORBA-compliant broker and some additional components to support course location. However, GESTALT does not support actions related to course ordering or delivery. The RDS architecture is outlined on the left side of Fig. 3; the right side of Fig. 3 shows the relationship between RDS and other system components. RDS can be contacted through a Web gateway. Learners use the available broker services to discover the existence of educational resources. It is also possible for the learner to restrict the search by providing a personal profile. User profile information is exchanged in the IEEE LTSC's PAPI [9] format through the LDAP directory service. RDS exchanges metadata information with the Asset Management Service (AMS), which provides access to valued educational resources and publicizes educational resources to the RDS. This service is accessed through the Asset Metadata Server, a database that contains metadata descriptions of educational resources.

2.3. Brokerage in the e-learning domain revisited: a critical analysis

The projects introduced above are directly related to the work proposed as the main contribution of this paper. However, they do not cover the stated objectives of the second level of e-learning standardization. On the one hand, ARIADNE, GEM and EdNA only provide a front end to catalogue learning objects together with a Web-based searching environment; there is no actual intermediation, so they cannot be considered real brokerage systems. GESTALT, for its part, offers a brokerage framework. However, as the components of its functional architecture are the result of previous ACTS projects, its authors explicitly state that internal interfaces are not among the objectives of their work.

Fig. 3. RDS architecture (left side). Relationship among RDS and other system components (right side).


Nevertheless, these internal interfaces are extremely important from a software reuse point of view, because they are needed to build new systems in a scalable way. In addition, after the project deadline only part of the external interfaces was published, and the published documentation [11] does not completely define the behavior that must be provided by components compliant with the specification. Besides, in our opinion the interfaces are underspecified and not adequately documented.

From this section we can conclude that the first level of standardization has already produced mature specifications, and some proposals are considered de facto standards (e.g. the LOM metadata specification). However, results at the second level have been scarce so far; there is still a lot of work to be done to provide suitable standards for the business logic tier. Our purpose is to contribute to this second level of standardization through the identification of common services for e-learning brokerage, supported by a Reference Architecture. This architecture will be composed of reusable software components with open and clearly identified interfaces. All interfaces in the architecture define their externally visible properties to facilitate the development of distributed, interoperable, and standards-driven online educational brokerage systems. The proposed implementation/deployment environment is CORBA, so our work can be considered a draft proposal for a Domain CORBA Facility for educational brokerage (CORBAlearn).
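The intermediation that distinguishes a broker from a plain search front end, namely receive the query, consult the registered providers, and return the consolidated hits, can be sketched in a few lines. `Provider` and `Broker` are our own illustrative classes, not CORBAlearn interfaces:

```python
class Provider:
    """A content provider the broker consults on behalf of the client."""
    def __init__(self, name, catalogue):
        self.name = name
        self.catalogue = catalogue  # list of course-title strings

    def query(self, keyword):
        return [c for c in self.catalogue if keyword.lower() in c.lower()]

class Broker:
    """Step 1: receive the query. Step 2: forward it to every registered
    provider. Step 3: return the consolidated hits to the final user."""
    def __init__(self):
        self.providers = []

    def register(self, provider):
        self.providers.append(provider)

    def search(self, keyword):
        hits = []
        for p in self.providers:
            hits.extend((p.name, course) for course in p.query(keyword))
        return hits
```

The client never contacts a provider directly; it is the broker that fans the query out and consolidates the answers, which is the step missing from systems that merely front a single database.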

3. Methodology

In order to identify the most suitable services to be included in a draft proposal for a Domain CORBA Facility for educational brokerage, we defined a strict and systematic methodology for domain-specific service specification. The proposed methodology follows the guidelines established by the OMG's Model Driven Architecture (MDA) [18] and is based on the Unified Software Development Process [15] together with the recommendations of Bass et al. [6]. Nevertheless, the Unified Process guidelines are oriented toward obtaining a (possibly large) final product, whereas we need a methodology to define standardized service architectures upon which such final applications can be built.

On the one hand, the Unified Process identifies a set of workflows in the software development process: requirements, analysis, design, implementation and test. It is an iterative and incremental process, centered on the architecture and driven by use cases. Designers begin by capturing customer requirements as use cases in the Use Case Model. The next stages consist of analyzing and designing the system to meet the use cases, creating first an Analysis Model, then a Design Model and a Deployment Model. The final development phases are modeled through an Implementation Model. Finally, developers prepare a Test Model that enables them to verify that the system provides the functionality described in the use cases. On the other hand, Bass states that a Reference Model and a Reference Architecture are previous steps toward an eventual software architecture. The Reference Model is a division of the functionality, together with the corresponding data flow between the identified components. Whereas a Reference Model divides the functionality, a Reference Architecture maps that functionality onto a system decomposition. Our proposed methodology (cf. Fig. 4) includes a Reference Model as the initial stage of the development process, to improve the capture of requirements, and a Reference Architecture as an intermediate stage with a visible, platform-independent model of the service architecture. This underlying platform-independent semantic model is used as the starting point for the definition of particular software services. We are therefore following the guidelines proposed in the MDA: the MDA separates certain key models of a system and brings a consistent structure to them, dividing the models of a system explicitly into Platform Independent Models (PIMs) and Platform Specific Models (PSMs).
Our PIM is Bass' Reference Architecture, which is obtained, in turn, from Unified Process PIM artifacts (the Use Case Model and the service packages from the Analysis Model). PSMs correspond to subsequent Unified Process phases. How the functionality specified in a PIM is realized is specified in a platform-specific way in the PSM, which is derived from the PIM via some transformation. PIM artifacts are modeled using the Unified Modeling Language (UML) [14], the ubiquitous modeling notation used and supported by every major company in the software industry. PSM artifacts are modeled using the UML profile for the target platform; in our case, we used the UML profile for CORBA [20].

Fig. 4. Development methodology.
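To make the PIM-to-PSM step concrete, the toy generator below turns a platform-independent description of one interface into an IDL-flavoured fragment. The mapping shown is a deliberate oversimplification of what the UML profile for CORBA actually specifies, and every name in it is hypothetical:

```python
# Toy PIM: a platform-independent description of one interface.
pim = {
    "interface": "SearchEngine",
    "operations": [
        {"name": "search", "returns": "ResultList",
         "params": [("in", "Query", "query")]},
        {"name": "cancel", "returns": "void",
         "params": [("in", "RequestId", "request")]},
    ],
}

def to_idl(model):
    """Emit an IDL-flavoured PSM fragment from the PIM (simplified mapping:
    no modules, exceptions, or type-mapping rules)."""
    lines = [f"interface {model['interface']} {{"]
    for op in model["operations"]:
        params = ", ".join(f"{d} {t} {n}" for d, t, n in op["params"])
        lines.append(f"  {op['returns']} {op['name']}({params});")
    lines.append("};")
    return "\n".join(lines)
```

The point is that the PIM carries no CORBA vocabulary at all; a different `to_*` transformation could target another platform from the same model, which is what keeps the conceptual model visible and separate.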

4. Reference model

As stated before, the first stage in the methodology is the identification of a Reference Model (step 1 in Fig. 4). This is a standard decomposition of a known problem into parts that cooperatively solve the target problem from the clients' viewpoint, i.e., a business model identifying the key agents participating in the domain under study and the data flows among them. For this identification we used an exhaustive analysis of the most outstanding e-learning systems (taking comparative studies by other authors as a reference [16,10]), the specifications proposed by the main contributors to e-learning standardization [5], and the authors' previous experience [2,3,12]. We have identified three elements in a Reference Model for e-learning (cf. Fig. 5):

1. Educational Content Providers (ECPs) are responsible for developing learning objects. They are equivalent to publishers in conventional education.
2. Educational Service Providers (ESPs) are the schools, universities, or academic institutions of these virtual environments. ESPs use ECPs to access, possibly under payment, online courses to be offered at their institutions. This can be done directly, or indirectly through a Broker, which is the third party involved and the stakeholder this paper focuses on.
3. Brokers are responsible for helping both learners and ESPs to find suitable learning resources: learners use Broker services to find and locate the institutions (ESPs) that offer courses they are interested in, while ESPs use Broker services to find learning resources offered by the ECPs to enrich their catalogues. To offer these services, Brokers


Fig. 5. Reference model for e-learning.

must maintain metadata information about courses offered by associated ESPs and ECPs.
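The three stakeholders and the metadata flow between them can be condensed into a short sketch. The class and method names below mirror the Reference Model roles but are our own invention, not interfaces from the CORBAlearn proposal:

```python
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    title: str
    metadata: dict

@dataclass
class ContentProvider:          # ECP: develops learning objects
    name: str
    objects: list = field(default_factory=list)

@dataclass
class ServiceProvider:          # ESP: offers courses to learners
    name: str
    catalogue: list = field(default_factory=list)

class BrokerIndex:
    """Broker: maintains metadata about the offerings of associated
    ESPs and ECPs, so Searchers never query the providers directly."""
    def __init__(self):
        self.entries = []       # (owner_name, LearningObject) pairs

    def advertise(self, owner, obj):
        self.entries.append((owner.name, obj))

    def find(self, keyword):
        return [(owner, o.title) for owner, o in self.entries
                if keyword.lower() in o.title.lower()]
```

Note that the broker stores only metadata entries, not the learning objects themselves; retrieval of the actual content remains a transaction between the Searcher and the ECP or ESP.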

5. Use case model

Once the high-level components of the system have been established in the Reference Model, their functional requirements must be captured and documented from the clients' viewpoint. First, it is necessary to model the clients of the domain under consideration. We decided to use the ERILE recommendation [17], the pioneer specification identifying the actors to be considered in standardized e-learning environments, which is widely recognized in the e-learning community. ERILE identifies six actors: Tutor, Teacher, Reviewer, Learner, Author and ESP Administrator. For the purposes of this paper, the last three are relevant, since they interact with the Broker. They perform two different roles (cf. Fig. 6), Searcher and Advertiser, depending on whether they look for learning objects in the Broker or publicize them. The Author can publish learning objects developed in ECPs using the Broker services. The ESP Administrator looks for new contents developed in ECPs (Searcher) and, at the same time, publishes (Advertiser) the courses and educational services offered at his or her institution. These educational services will be located by Learners acting as Searchers through the Broker.

Second, a requirements analysis for these actors is needed. This is mainly modeled using use case diagrams, which depict the interaction of a particular actor with the elements of the Reference Model. Only the common functional aspects of the target domain must be considered, because the eventually defined software services should be as general as possible, and therefore suitable for most final e-learning applications. The Use Case Model (step 2 in Fig. 4) was derived through a successive refinement process. As the use cases mature and are refined and specified in more detail, more of the functional requirements are discovered. This, in turn, leads to the maturation of further use cases, and even new actors may show up as generalizations or specializations of others. Use cases are not selected in isolation: they drive the system architecture, and the system architecture influences the selection of the use cases. Therefore, the system architecture and the use cases mature together.

Fig. 6. Actors that interact with the Broker.


Fig. 7. Example of use case diagram.


For the Searcher we have identified use cases related to: (1) registering with the Broker: customized services require Searchers to provide individual data and register each time they access the Broker; (2) profile management: searches can be improved using the Searcher profile, which must be introduced and updated periodically; (3) course searching: several use cases capture Searcher requirements to search, locate and retrieve the learning objects they are interested in; and (4) subscription to the notification facility, or asynchronous search: this group captures requirements related to notification requests; whenever new learning objects matching the Searcher's preferences are located by the Broker, this facility sends a message to the subscribed Searchers.

The Advertiser's requirements are grouped into two use case scenarios. The first encapsulates the requirements related to Advertiser registration and data management; the second encapsulates course cataloguing and publication. Fig. 7 presents the use cases for this second scenario. The whole use case model [1] is described in a twofold way: (1) an English text description is provided as a first approach to the user requirements; and (2) a more formal description follows the table-based format proposed by Cockburn [7].
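The asynchronous-search requirement above (Searcher use case group 4) can be made concrete with a small sketch: Searchers subscribe with a preference predicate, and each newly publicized object is matched against the stored subscriptions. This is an in-process toy model under our own naming, not the facility's actual interface:

```python
class NotificationFacility:
    """Minimal sketch: Searchers subscribe with a preference predicate;
    each newly publicized object is checked against every subscription."""
    def __init__(self):
        self.subscriptions = []   # (searcher_id, predicate) pairs
        self.outbox = []          # (searcher_id, object_title) notifications

    def subscribe(self, searcher_id, predicate):
        self.subscriptions.append((searcher_id, predicate))

    def publicize(self, title, metadata):
        # Called when an Advertiser publishes a new learning object:
        # every matching subscriber gets a notification message.
        for searcher_id, predicate in self.subscriptions:
            if predicate(metadata):
                self.outbox.append((searcher_id, title))
```

The inversion is the point: in groups (1)-(3) the Searcher pulls results from the Broker, whereas here the Broker pushes a message the moment a matching object is publicized.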

6. Analysis model

The Use Case Model is the starting point for defining an Analysis Model (step 3 in Fig. 4). In analysis, use cases are studied further from the designer's point of view, and a deeper knowledge of the system is thereby achieved. From an initial analysis of the use cases, a set of analysis packages is defined for each element of the Reference Model. An analysis package groups those use cases that are functionally related. Every package is then analyzed separately, possibly by different teams, in order to identify its analysis classes: the classes that are able to cooperatively realize the functionality captured in the use cases. The identification of analysis classes for every package is an iterative process. First, obvious classes are identified directly from the domain analysis. Then, new classes come up from the realization of every use case. Whenever possible, however, existing analysis classes must be reused, perhaps with added responsibilities. Three types of classifiers are used at this stage:

Boundary classes model the interaction between the system and its actors and other external systems. Control classes represent the coordination and sequencing among analysis classes needed to realize the use cases. Entity classes are used to model persistent data; basic access (read/write) operations are allowed from a boundary class, while more complex operations need the intermediation of at least one control class.

6.1. Analysis packages

Analysis work has been divided into two packages (cf. Fig. 8):

1. Searching Environment. Cataloguing and searching requirements are analyzed in this package.
2. Value-Added Service (VAS) Environment. The configuration of the notification facility, the management of Searchers' profiles, and the advanced customization of searches are analyzed in this package.

Fig. 8. Analysis packages for the Broker.

6.2. Realization of use cases

In each iteration of the proposed process, all identified use cases must be realized through collaborations among analysis classes. Analysis classes may come from the analysis of the domain (e.g. Course Description), or they may be a consequence of a use case realization, whenever a need cannot be covered by previously identified classes. Designers must ensure that every use case is realized through collaborations among analysis classes. As an example, Fig. 9 summarizes the collaborations needed for the use cases Publicize Course, Provide Course Description Data, Provide Course Services and Notify Subscriber, which were identified in the use case diagram shown in Fig. 7.

During analysis, the project team detected the need to allow different Brokers to collaborate in order to improve search results. In this way, the brokerage facility can use data stored at different locations, and the probability of obtaining better and more relevant results is therefore higher. In the Unified Process this situation is called analysis of special requirements. To allow a Broker to forward searches to external Brokers in a federated network, the project team identified the following analysis classes (cf. Fig. 10): (1) Federation Data, an entity class that encapsulates all the data needed to forward a search (e.g. federation topology, federation policies, etc.); (2) Federation Admin UI, a boundary class through which these data are entered and updated, used by a new actor that appears as a consequence of these special requirements, the Federation Administrator; and (3) Federation Handler, a control class that acts as the engine for federated searches, managing and deciding to which Brokers a search should be forwarded according to the data stored in Federation Data. This scheme is open enough to allow for different propagation policies. For example, a federated search can be forwarded to external Search Engines in other Brokers with no further propagation (cf. Fig. 11). An additional possibility (cf. Fig. 12) is to forward the search to an external Federation Handler (in Broker C), which, in turn, propagates the search to Federation Handlers or Search Engines in Brokers D and E, with no further propagation, depending upon its policy (the Federation Data in Broker C).
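The two propagation scenarios of Figs. 11 and 12 can be sketched in a few lines, with a dictionary standing in for Federation Data and a `federated_search` method playing the Federation Handler role. All names and the policy encoding are our own simplifications of the analysis classes, not a published API:

```python
class Broker:
    def __init__(self, name, objects, federation_data=None):
        self.name = name
        self.objects = objects                  # locally catalogued titles
        self.federation_data = federation_data  # entity: peers + policy

    def local_search(self, keyword):            # the Search Engine role
        return [(self.name, t) for t in self.objects
                if keyword.lower() in t.lower()]

    def federated_search(self, keyword):        # the Federation Handler role
        results = self.local_search(keyword)
        if self.federation_data:
            for peer in self.federation_data["peers"]:
                if self.federation_data["propagate"]:
                    # Scenario 2: the peer's Federation Handler may forward on.
                    results += peer.federated_search(keyword)
                else:
                    # Scenario 1: query the peer's Search Engine only.
                    results += peer.local_search(keyword)
        return results
```

This toy policy has no cycle detection; a real Federation Data entity would have to encode the topology so that loops in the federation graph cannot cause a search to be forwarded forever.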

Fig. 9. Example of use case realization.


Fig. 10. Analysis classes involved in federated searches.

6.3. Service packages

A final step at this stage is the identification of service packages. A service package groups those analysis classes that offer basic common services useful for the development of distributed and interoperable systems. Classes in a service package have a strong relationship among them, manage the same underlying information models and, therefore, tend to change together. They offer the basic functionality required to build final systems and allow those responsible for developing new systems to reuse service packages. In short, a service package includes classes that can be easily reused to build a wide range of different applications for the domain in question. We propose to include in service packages all those analysis classes that model and manage standardized data models from the first level of standardization, that represent basic services, and whose services are general enough to be considered system-independent.

Fig. 11. Federated search. First scenario.


Fig. 12. Federated search. Second scenario.

Classes representing application-specific functionality, or dealing with data whose underlying model has not yet been standardized, are kept out of service packages and included in the wrapper software layer (cf. Fig. 4). Fig. 13 presents the service packages, and the classes assigned to them, for the Searching Environment analysis package:

1. Broker Core. This package offers the basic functionality of a brokerage facility. It includes the entity classes needed to encapsulate information about the courses being offered (Course Description, using standardized educational metadata) plus the services (Service) that the advertiser (Advertiser Information) who published the learning object offers for it. In this model, an Advertiser offers a set of services and a set of learning objects. However, not every service has to be provided for each learning object offered (see the composition and aggregation relationships in the figure). Finally, the Search Engine control class accesses the stored Course Information to retrieve relevant results for Searchers.
2. Federation Management. This service package allows a Broker to forward searches through a federated network. Only the boundary class involved in the federation facility is kept out of the service package. In general, all boundary classes representing user interfaces or gateways to external systems were assigned to the wrapper.

Fig. 13. Broker service packages (I).

Two additional service packages were identified for the VAS Environment analysis package (cf. Fig. 14):

1. Profile Management. Searches performed on the Broker can be adapted to the particular profile (e.g. educational preferences) of each Searcher. Whereas the Profile entity class encapsulates this data, using information models coming from the first level of standardization, the Profile Query Adapter control class is responsible for query customization. Educational Metadata and Learner Profiles are the two information models from the first level of standardization that should be managed by this package.
2. Notification Management. This package offers the basic services needed to build a notification facility. The Notification Finder control class uses the Notification Data, which encapsulates all the information needed to decide to which Searchers notifications should be sent. Additional classes, namely the Notification Handler (responsible for implementing the particular notification policy in each Broker) and the Notifier (responsible for sending the actual notifications, e.g. by e-mail or SMS), are kept out of the service package as they are considered too system-dependent.

Fig. 14. Broker service packages (II).

7. Reference architecture

Service packages from the Analysis Model are the foundation for the Reference Architecture (step 4 in Fig. 4), which is the Platform Independent Model in our proposed methodology. It is a decomposition of the Reference Model into a set of components that provide the functionality identified in the previous stages. Note that, in a general sense, the objective is the definition of a set of services to facilitate and speed up the development of standardized, distributed and interoperable final systems. To build such a final system, a possible approach is to acquire, or develop, an implementation compliant with the defined services and compatible with its development platform. Over it, the wrapper layer (cf. Fig. 4) provides value-added system-specific features to meet the particular needs of its customers. The basic properties of a well-constructed Reference Architecture are:

o Service architecture based on reusable subsystems. Architecture subsystems correspond to reusable service packages. The only additional elements necessary for building final systems are boundary classes representing user interfaces and control classes encapsulating value-added system-specific services.


Fig. 15. Broker reference architecture.

o Standard-driven architecture. Adopting appropriate industry standards is a vital decision. Whenever possible, subsystems in the architecture must correspond to components implementing the business logic and the management of information models identified by standardization bodies and institutions. However, no dependency on specific proposals must be established. Actual bindings to existing information models are left for the implementation stages. Therefore, the subsystems should offer introspection mechanisms to find out the particular models supported.
o Architecture oriented to local changes. Most modifications and updates should affect just one component. Dependencies in the existing information models and user requirements must be thoroughly analyzed prior to component decomposition. Changes in either of them should be reflected as updates of as few components (ideally one) as possible.
o Scalability. Final systems can be constructed incrementally through the successive introduction of new elements from the architecture. Architecture subsystems must be loosely coupled, and dependencies have to be directed from more complex to more basic components.
o Adaptability. The clear identification of the role of each subsystem, through use case realizations, decomposition and dependency analysis, must support the replacement of specific subsystems by functionally equivalent ones.
o Interoperability among heterogeneous systems. Open software interfaces and a common Reference Architecture support interoperation, provided final systems conform to the architecture. Value-added features are then implemented through the elaboration and improvement of wrapper classes. Full interoperability requires defining component interfaces using a concrete interface definition language, usually tied to a particular implementation environment. From here, platform-independent semantics are used as the supporting framework for the subsequent phases in the methodology.

Fig. 15 shows the Reference Architecture identified for educational brokerage systems. Its subsystems follow a one-to-one correspondence with the service packages identified in the previous section:

o Broker Core. It provides basic services to implement the searching and retrieval of educational resources.
o Profile Management. It supports customized searching according to Searchers' profiles. Its responsibilities include profile management and the construction of search specifications to be passed to the Broker Core.
o Notification Management. It provides notification services for the Broker.
o Federation Management. It supports the construction of brokerage services based on federation topologies to perform collaborative searches among federated Brokers.

The figure shows the only dependency among subsystems in the Reference Architecture: the Federation Handler class (Federation Management subsystem) needs to access the services of the Search Engine control class (Broker Core subsystem) to forward searches through the federated network. As the Broker Core will be present whenever we incorporate Federation Management into our system, this dependency is compliant with the Scalability property as defined above.

Fig. 16. BrokerComponent and BrokerComponentAdmin interfaces.
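The Scalability and Adaptability properties can be pictured with a small sketch. Everything below is illustrative and of our own devising (the class, method and subsystem names are not taken from the specification): a final system assembles subsystems one at a time, the single Federation Management dependency on the Broker Core is enforced, and a subsystem can later be swapped for a functionally equivalent one.

```python
class BrokerAssembly:
    """Illustrative sketch of incremental subsystem assembly."""

    SUBSYSTEMS = ("Broker Core", "Profile Management",
                  "Notification Management", "Federation Management")

    def __init__(self):
        self._impls = {}                 # subsystem name -> implementation

    def add_subsystem(self, name, impl):
        # Scalability: subsystems are introduced incrementally.
        if name not in self.SUBSYSTEMS:
            raise ValueError("unknown subsystem: " + name)
        if name == "Federation Management" and "Broker Core" not in self._impls:
            # The only inter-subsystem dependency: the Federation Handler
            # needs the Broker Core's Search Engine to forward searches.
            raise ValueError("Federation Management requires Broker Core")
        self._impls[name] = impl

    def replace_subsystem(self, name, impl):
        # Adaptability: swap a subsystem for a functionally equivalent one.
        if name not in self._impls:
            raise KeyError(name)
        self._impls[name] = impl

    def supported(self):
        # Partial implementations need not cover all four subsystems.
        return sorted(self._impls)
```

Dependencies point from the more complex subsystem (Federation Management) to the more basic one (Broker Core), never the other way around, which is what keeps incremental construction possible.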

8. Design model

As the Reference Architecture includes only classes from the Analysis Model, it is implementation-independent and purely conceptual. The next step lies in the elaboration of a Design Model (step 5 in Fig. 4) using the constructs and concepts of a specific development environment for building distributed applications. As our final objective is the definition of a new Domain CORBA Facility for e-learning brokerage, we chose CORBA as the implementation/deployment environment.

Fig. 17. CL_BrokerFederationManagement (left) and CL_BrokerProfileManagement (right) specifications.


The Reference Architecture, based on the Analysis Model's service packages, allows the same model specifying system functionality to be realized on multiple platforms, either through auxiliary mapping standards or through point mappings to specific platforms. It allows different applications to be integrated by explicitly relating their models, enabling integration and interoperability, and supporting system evolution as platform technologies come and go. At this stage, the Reference Architecture subsystems derive into a set of design service subsystems that materialize the responsibilities of the analysis classes in a concrete implementation environment. Unlike previous phases, where the generic UML metamodel was used, at this stage we have used the UML profile for CORBA to model the resultant artifacts.

Since new systems can be developed incrementally, implementations of Brokers compliant with the defined interfaces do not need to cover all four subsystems defined by the Reference Architecture. Clients accessing the Broker would use the navigation mechanism provided by the BrokerComponent interface (cf. Fig. 16) to find out which subsystems are actually implemented. This introspection mechanism follows the pattern used in the Domain CORBA Facility for Healthcare (Lexicon Query Service Specification [19]). The figure also shows the BrokerComponentAdmin interface, which defines services to add or replace subsystems in a particular implementation of a Broker-compliant system.

Figs. 17, 18 and 19 show the CORBA IDL interfaces specified for the Reference Architecture subsystems. These interfaces define the concrete software services provided by CORBA-based components compliant with the Reference Architecture and analysis service packages presented above. Table 1 presents the correspondence between the IDL interfaces and the analysis classes assigned to service packages.

Fig. 18. CL_BrokerCore specification.

Fig. 19. CL_BrokerNotificationManagement specification.

Table 1
Analysis model vs. design model

Specification                      Interface               Analysis class                     Reference architecture subsystem
CL_BrokerFederationManagement      Federation Searcher     Federation Handler                 Federation Management
                                   Federation Management   Federation Data
CL_BrokerCore                      Searcher                Search Engine                      Broker Core
                                   Advertiser Management   Advertiser Information + Service
                                   Cataloguer              Course Information + Description
CL_BrokerNotificationManagement    Notification Manager    Notification Data                  Notification Management
                                   Notification Finder     Notification Finder
CL_BrokerProfileManagement         Profile Manager         Profile                            Profile Management
                                   Profile Translator      Profile Query Adapter

Implementations compliant with these interfaces need to manage data models coming from the first level of standardization (e.g. educational metadata and learner profiles). The Reference Architecture imposes that no dependency on concrete information models be established; actual bindings to existing information models are left for the implementation stages. The defined interfaces offer introspection methods (getSupported...Schemas()-like) to find out the particular models supported by compliant implementations. An XML-DOM-like [21] specification is used to encapsulate DOM-like information models regardless of which concrete proposal from the first level of standardization they comply with.

CORBAlearn specifications include a total of 16 interfaces with 184 IDL methods defined. All services and data structures are a consequence of the strict and academic application of the methodology presented in Section 3. At this stage, interfaces and data structures are modeled using the UML profile for CORBA, and an English-text description is provided for each method. In addition, a set of sequence diagrams offers further insight into the service descriptions. These modeling artifacts have a twofold objective: on the one hand, to help developers of compliant implementations fully understand what must be offered to their customers; on the other hand, to help users of CORBAlearn-compliant products understand what they can expect from them and how they should be used.
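The getSupported...Schemas()-style introspection described above can be approximated as follows. This is a sketch only: the class name, method names and schema identifier are illustrative assumptions, not CORBAlearn IDL.

```python
class MetadataCatalogue:
    """Illustrative component that declares which standardized information
    models (e.g. educational metadata schemas) it can manage."""

    def __init__(self, supported_schemas):
        self._schemas = set(supported_schemas)

    def get_supported_schemas(self):
        # Introspection: clients discover the supported bindings at run
        # time, so the interface does not depend on one concrete proposal.
        return sorted(self._schemas)

    def store(self, schema, dom_like_document):
        # Records travel as DOM-like documents (cf. the XMLDOM mapping [21]),
        # keeping the interface independent of any single information model.
        if schema not in self._schemas:
            raise ValueError("unsupported schema: " + schema)
        return {"schema": schema, "record": dom_like_document}

# Example with a hypothetical schema identifier for IEEE LOM metadata.
cat = MetadataCatalogue(["IEEE-LOM-6.4"])
assert cat.get_supported_schemas() == ["IEEE-LOM-6.4"]
```

The point of the pattern is that adding support for a new metadata proposal changes only the set of declared schemas, not the interface itself.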

As an example, Fig. 20 presents the description of the method getSubscriberInfo, which provides the whole stored information about a Searcher subscribed to the notification facility. The parameters, exceptions and purpose of each method are fully described. These method descriptions are complemented with models of the data structures involved. Fig. 21 presents the UML profile for CORBA model of the SubscriberInfo structure returned by the above-mentioned method.

Fig. 20. NotificationManager interface and getSubscriberInfo method signature.

Fig. 21. UML Profile for the SubscriberInfo structure.

The SubscriberInfo structure is composed of two main parts:

1. A sequence of NotificationMethodInfos. This structure maintains the notification methods (e.g. e-mail, SMS, etc.) selected by this subscriber (notification_method), together with additional configuration properties (notification_properties, e.g. e-mail address, phone number, etc.).
2. A sequence of Triggers. The Trigger structure includes: (1) an identifier (trigger_id); (2) a predefined query that defines the characteristics of those learning objects to be notified by the Broker to this subscriber (this structure is defined in another part of the CORBAlearn specification); (3) additional configuration properties (trigger_properties), e.g. the maximum number of notifications raised by this trigger in a given period of time; and (4) the particular notification method (notification_methods) to be used for this trigger, chosen from those selected by this subscriber (notification_method_templates, see above).
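Read as a data structure, the SubscriberInfo of Fig. 21 might be rendered as follows. The field names follow the text above, but the types are our approximation of the IDL, and the query is reduced to a plain string since its structure is defined elsewhere in the specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NotificationMethodInfo:
    notification_method: str                    # e.g. "e-mail", "SMS"
    notification_properties: Dict[str, str] = field(default_factory=dict)
    # e.g. {"address": "..."} or {"phone": "..."}

@dataclass
class Trigger:
    trigger_id: str
    query: str                                  # predefined query; its real
                                                # structure is defined elsewhere
    trigger_properties: Dict[str, str] = field(default_factory=dict)
    notification_methods: List[str] = field(default_factory=list)
    # subset of the methods declared in notification_method_templates

@dataclass
class SubscriberInfo:
    notification_method_templates: List[NotificationMethodInfo]
    triggers: List[Trigger]
```

A subscriber with one e-mail template and one trigger would then be built as:

```python
info = SubscriberInfo(
    notification_method_templates=[
        NotificationMethodInfo("e-mail", {"address": "searcher@example.org"})],
    triggers=[
        Trigger("t1", "predefined query", {"max_notifications": "10"},
                ["e-mail"])],
)
```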

9. Prototype

Different vendors claiming compliance with the CORBAlearn specifications will offer products that provide the services defined in these interfaces. Over these basic services, developers of final applications build the wrapper, where user interfaces are embedded and additional value-added services may be included. There is no restriction in terms of wrapper technology. The only assumption is that it is able to use CORBAlearn services through a CORBA software bus, which may be accessed from most WWW browsers. Also, the wrapper may be distributed between the server and the client, or it may be totally embedded in the client. Fig. 22 shows two different scenarios for a final system implemented in a WWW setting using CORBAlearn facilities.

Fig. 22. Two alternative implementation approaches.

On the right side of the figure, we assume that the wrapper is distributed between server and client. The client side of the wrapper provides the presentation tier (i.e. user interfaces), from where commands are sent to the server side of the wrapper. The latter uses CORBAlearn facilities to implement basic commands and may also encapsulate some other additional features. On the left side of the figure, the whole wrapper is downloaded to the client: both the presentation tier and the business logic implemented by the wrapper. HTTP is used for downloading, while the Internet Inter-ORB Protocol (IIOP) is used in subsequent accesses to CORBAlearn services through CORBA objects. The whole set of CORBAlearn specifications has been implemented and tested [1] using a Web-based prototype wrapper following the approach shown on the right side of Fig. 22. As part of this work, a set of best-practice guidelines [1] for implementations compliant with the CORBAlearn specifications is provided. These include proposals for the underlying relational databases of each subsystem. Fig. 23 shows a UML representation of the proposed database for the NotificationManagement subsystem.

Fig. 23. Proposed database for the BrokerNotificationManagement subsystem.

10. Summary

Nowadays, one of the hot topics at the OMG is the specification of services and facilities in specific vertical markets through Domain CORBA Facilities. These specifications consist of interfaces written in OMG IDL with an accompanying semantic description in English text. A well-conceived facility is based on an underlying semantic model that is independent of the target platform. Nevertheless, this model may not be distilled explicitly, and this is the case with OMG domain specifications, because the model for most of them is not expressed separately from their IDL interfaces. As their models are hidden, these facilities have received neither the recognition nor the widespread implementation and use that they deserve outside of the CORBA world.

This paper proposes (Section 3) an MDA-oriented methodology to produce standardized vertical facilities with a visible and separate underlying conceptual model. The proposed methodology has been applied to the development (Sections 4-8) of a draft proposal for a Domain CORBA Facility for educational brokerage: CORBAlearn. CORBAlearn includes the definition of data structures to encapsulate outcomes from the first level of standardization, exceptions and service interfaces, which are distributed among four different sub-specifications that cover the whole spectrum of functionality included in full-fledged e-learning brokering environments. Open publication of these interfaces within the OMG framework will assure interoperability among products developed by different vendors, provided they are built over CORBAlearn-compliant products. Also, software reuse is promoted, as new systems can be built over components offering the basic and fundamental services defined by the CORBAlearn specifications. CORBAlearn specifications are compatible with currently available standards for the first level of standardization (cf. Section 2). Information models defined by standards at that level are properly managed and transferred using CORBAlearn-compliant products, with no tie to any specific proposal. This work is an example of how the use of CORBA technologies contributes to the standardization of a specific domain.

References

[1] L. Anido, Contribución a la Definición de Arquitecturas Distribuidas para Sistemas de Aprendizaje Basados en Ordenador utilizando CORBA, PhD Thesis, Departamento de Ingeniería Telemática, Universidad de Vigo, 2001.
[2] L. Anido, M. Llamas, M.J. Fernández, Internet-based learning by doing, IEEE Transactions on Education 44 (2) (2001) (accompanying CD-ROM).
[3] L. Anido, M. Llamas, M.J. Fernández, Labware for the Internet, Computer Applications in Engineering Education 8 (3-4) (2000) 201-208.
[4] L. Anido, M. Fernández, M. Caeiro, J. Santos, J. Rodríguez, M. Llamas, Educational metadata and brokerage for learning resources, Computers and Education 38 (4) (2002) 351-374.
[5] L. Anido-Rifón, M.J. Fernández-Iglesias, M. Caeiro-Rodríguez, J.M. Santos-Gago, J. Rodríguez-Estévez, M. Llamas-Nistal, Virtual Learning and Computer Based Training Standardization. Issues and Trends, submitted for publication, ACM Computing Surveys. Electronic version available at http://www-gist.det.uvigo.es/~lanido/congresos/anidoSurveys.zip.
[6] L. Bass, P. Clements, R. Kazman, Software Architecture in Practice, Addison-Wesley, Reading, MA, USA, 1999.
[7] A. Cockburn, Basic Use Case Template (Humans and Technology), 1998. Electronic version available at http://members.aol.com/acockburn/papers/uctempla.htm.
[8] P. Dodds, Sharable Content Object Reference Model (SCORM), Technical Report, Version 1.2, ADL Initiative, 2001.
[9] F. Farance, Draft Standard for Learning Technology-Public and Private Information (PAPI) for Learners (PAPI Learner), Technical Report, IEEE LTSC, 2000.
[10] S. Fredrickson, Untangling a tangled web: an overview of web based instruction programs, T.H.E. Journal 26 (1999) 67-77.
[11] Getting Educational Systems Talking Across Leading edge Technologies (GESTALT) project. Web site at http://www.fdgroup.co.uk/gestalt.
[12] F.J. González-Castaño, L. Anido, J. Vales, M.J. Fernández, M. Llamas, P. Rodríguez, J.M. Pousada, Internet access to real equipment at computer architecture laboratories using the Java/CORBA paradigm, Computers and Education 36 (2001) 151-170.
[13] W. Hodgins, Draft Standard for Learning Object Metadata (LOM), Proposed Draft 6.4, Technical Report, IEEE LTSC, 2002.
[14] I. Jacobson, G. Booch, J. Rumbaugh, The Unified Modeling Language User Guide, Addison-Wesley, Boston, MA, USA, 1999.
[15] I. Jacobson, G. Booch, J. Rumbaugh, The Unified Software Development Process, Addison-Wesley, Boston, MA, USA, 1999.
[16] S. Landon, Comparative Analysis of On-line Educational Delivery Applications, Technical Report, 2001. Electronic version available at http://www.ctt.bc.ca/landonline.
[17] R. Lindner, Expertise and Role Identification for Learning Environments (ERILE), ISO/IEC JTC1 SC36 Technical Report, 2001.
[18] J. Miller, J. Mukerji, Model Driven Architecture (MDA), Technical Report, OMG, 2001.
[19] OMG, Lexicon Query Service Specification, Technical Report, OMG, 2000.
[20] OMG, UML Profile for CORBA Specification, Technical Report, OMG, 2000.
[21] OMG, XMLDOM: DOM/Value Mapping Specification, Technical Report, OMG, 2001.

Luis Anido received the Telecommunication Engineering degree (1997, with Honors awarded by the Spanish Department of Science and Education and by the Galician Regional Government) and the PhD in Telecommunication Engineering (2001) from the University of Vigo. He has also received an award from the Galician Royal Academy of Sciences. Currently, he is an Associate Professor at the Telematics Engineering Department of the University of Vigo. His main interests are in the field of new information technologies applied to distance learning and in the area of object-oriented distributed systems. He has been hired by the European Committee for Standardization (CEN) to develop two CEN Workshop Agreements (CWAs) related to learning technologies.

Judith Rodríguez received her Telecommunication Engineering degree (1999) from the University of Vigo. In addition to teaching, her main interests are in educational metadata and information retrieval. Currently, she is pursuing her PhD in the area of learning object brokerage.


Manuel Caeiro received the Telecommunication Engineering degree (1999) from the University of Vigo. Currently, he teaches Computer Architecture and Software Engineering at the Telecommunication School of the University of Vigo, where he is working towards his PhD in the area of workflow management in e-learning environments.

Juan M. Santos received his Telecommunication Engineering degree (1998). He has been involved in several projects related to distance learning and e-commerce. He is now an Assistant Professor at the University of Vigo. His PhD work is related to the application of Semantic Web concepts to improve personalized learning environments.