Comput. Educ. Vol. 18, No. 1-3, pp. 101-107, 1992
Printed in Great Britain
0360-1315/92 $5.00 + 0.00
Pergamon Press plc

THE ARTIFICIAL INTELLIGENCE APPLICATIONS TO LEARNING PROGRAMME

NOEL WILLIAMS

Communication and Information Research Group, Sheffield City Polytechnic, 36 Collegiate Crescent, Sheffield S10 2BP, England
Abstract-The Artificial Intelligence Applications to Learning Programme has been funded since 1987 by what is now known as the Training, Enterprise and Education Directorate (TEED). The Programme aimed to explore and accelerate the use of AI technologies in learning, in both the educational and industrial sectors. The ten demonstrator projects were evaluated for their impact on industry and on further and higher education, both while the Programme was in progress and later during its dissemination phase. The most useful outcomes of evaluation emerged during the latter phase, when the innovations had had an opportunity to become established and be brought to market. Issues arose around technology-driven development and the need, through group collaboration, to find problems for which solutions already existed. One profitable line of development was to use AI to enrich training systems. An ideal training system must feel “good” to the client, should go beyond existing adaptive training systems, offer a high rate of training, be cost effective, have visual appeal, give sophisticated feedback, fit closely with the user’s current practice, and have a shelf life of more than 3 years.
BACKGROUND TO THE SEMINAR
The Artificial Intelligence Applications to Learning Programme has been funded since 1987 by the Training, Enterprise and Education Directorate (TEED, previously the Manpower Services Commission, the Training Commission and the Training Agency). It was run by the Learning Technology Unit (LTU) of TEED and funded in three separate phases over 3 years. The Programme aimed to explore and accelerate the use of AI technologies in learning and involved projects in both the education and industrial sectors. Finance was curtailed by the Treasury towards the end of the Programme, so not all planned activities have taken place. In particular, the dissemination phase of the Programme has been substantially delayed.

The Programme was conceived as a number of related projects. Ten demonstrator projects were commissioned to illustrate different applications of artificial intelligence to training. Two further projects ran in parallel, evaluating the impact of the Programme on industry and on further and higher education. A number of smaller activities were also funded to disseminate information about AI in training. The outcomes of the Programme will be integrated into the future work of the LTU. Outlines of all these projects and the evaluations can be found in Refs [1,2].

EVALUATION OF THE PROGRAMME
The Programme evaluations ended in March 1990. However, few dissemination activities had taken place by this date, so necessarily little evidence of awareness was found. Subsequently several activities have taken place which have raised awareness, but these could not formally be taken account of in the evaluation.

Impact on further and higher education: the findings

Evaluation of the impact of the Programme on further education (FE) and higher education (HE) was carried out by a census covering all FE institutions and HE departments; interviews with project teams at the beginning, end and 6 months after completion of projects; and follow-up surveys for three categories of institution, namely those using AI at the start of the Programme, those planning to use AI and those without AI. The evaluation showed that there was more AI in HE (47%) than in FE (28%), but these figures were probably inflated owing to the difficulties of defining AI. It suggested that most AI in HE was in research but in FE most was in teaching, that the main area of interest was expert systems and that projects were usually funded internally. Follow-up surveys showed continued positive attitudes to AI even though there was a decreasing
use of AI over the period of evaluation. Responses showed less use, fewer projects, and that many institutions which had said they planned to use AI did not. The most common reasons given for this decline were lack of funds, lack of expertise and turnover of staff.

In terms of the impact of the Programme on individual projects, the evaluators found increased knowledge and awareness of AI, as well as exploration of new and unique techniques and improvements in learning and teaching. They found that projects provided a stimulus to technological development and increased the marketability of staff. However, against these strengths the evaluators identified several problems. For example, there was little use of project expertise outside the project: applications tended to stay with the project team. There were also problems of marketing and dissemination. Colleges and HE institutions had little expertise in marketing, so did not know what to do once they had a product. Evaluation also suggested that few new links with industry had been developed and that evaluation of the effectiveness of individual projects was not as complete as it might have been.

Thus the FE and HE evaluation team had identified several issues to be considered in future similar activities. In the first place, reactions to the Programme had been disappointing and there was little evidence of raised awareness or perceived impact, largely because dissemination activities had not taken place. Much of the demonstrable awareness had, in fact, come through the evaluation activities themselves. As a related problem, the evaluators believed that defining AI itself was problematic. They had found that AI always needs further definition. Hence there was a problem in marketing AI applications, tools, techniques and expertise.
The evaluators also believed the Programme had experienced severe problems with continuity, due to difficulties with Treasury funding which had complicated the planning and execution of individual projects. Consequently the timescale appeared too optimistic for all the activities the projects were supposed to carry out. Finally, evaluation suggested that there were problems in determining the balance of development and dissemination within the available time, and with the balance of development and evaluation.

The evaluators derived three general issues for education from these observations. First were the problems of focusing an initiative, as it was not clear where the best place would be to concentrate a new initiative in AI and learning. Second were the problems of developing expertise. Generally, academic staff acquired AI (and IT) expertise in ad hoc and haphazard ways, which led to a lack of strategy and coherence in the building of AI learning applications. Third were the problems of infrastructure. It appeared that FE and HE did not have the right structures to pick up new technology in staff development and support, in marketing, in dissemination or in links with industry.

Impact on industry and commerce: the findings
The National Council for Educational Technology (NCET) evaluated the impact of the Programme on the industrial sector. This evaluation, like the educational one, ended in March 1990, though the final report had been able to take some account of the potential impact of subsequent activities. Initially a baseline of awareness was established, using a variety of sources. The evaluation concentrated on key sectors and conducted random surveys through lead bodies within those sectors, supplemented by media surveys, analysis of exhibitions and informal interviews. All these sources conveyed similar messages.

As a result of the Programme many technologies had been explored, and much material had been produced which could be valuable to industrial developers. There appeared to be a need to adapt the AI technologies for them and to present appropriate development advice. However, there was also a need for more trials of the demonstrator projects to establish robustness in the workplace. The Programme also aimed to accelerate appropriate application of AI technologies to learning, but, due to lack of dissemination, the evaluators believed two more years would have to elapse before signs of such acceleration might be seen. Nor was the evaluation able to find evidence from the demonstrators of cost effectiveness in training. In any case, evidence suggested that industry would not make training decisions based on comparative costs of learning outcomes; industry’s decision-making appeared much more pragmatic. The evaluators also assessed advances in the application of AI technologies in training in industry. High awareness was found in training supply firms but not in demand firms. As might
be expected, the highest demand awareness was in large firms. The two major factors determining this awareness of AI in training were a firm’s experience in AI applications to business and its prior use of computer-based training. NCET also estimated the growth of the applications of AI technologies in industry and commerce after the Programme. They believed such growth would depend on applications that can show hard evidence of business benefits, such as cost saving, flexibility, new training and improved business performance.

The evaluation raised a number of issues for further discussion, as possible future developments from the Programme. These included the need to cross the boundary between system trials and true industry application trials, the need to identify the criteria industry used in taking on and using new technology, the need to develop practical guides for industry and the need to investigate business and commercial applications of AI systems that could also be used for training.

Both evaluations showed that the Programme had been significantly affected by the way it was funded. Both also identified a strong relationship between computer-assisted learning (CAL) or CBT and AI: typically, trainers and educators would not consider AI if they did not already have CAL or CBT expertise.

Discussion of the evaluations
The industrial evaluation had shown a considerable lack of awareness on the demand side, with many firms believing AI actually was CBT. Rediffusion, for example, had disseminated information extensively through its sales force and, after 2 years, still found a general ignorance of CBT. Part of the difficulty seems to be with marketing training innovation generally. For example, System Applied Technology does not use the terms “AI” or “ITS” to sell training systems to clients. Instead they market by demonstrating the reduction in training time, the consequent staff savings and the comparative costings for conventional CBT. So, in presenting AI to trainers, it has to be explained in the context of existing CBT, not as something separate, and in most contexts there is no value in making a distinction between them.

Expecting positive outcomes from evaluations conducted concurrently with the Programme itself was unrealistic: meaningful evaluation depends on a realistic sense of what is possible. In particular, dissemination, marketing and further development only took place after the Programme had ended. The most useful evaluation could therefore not take place until between 6 and 18 months after the Programme. As a model for evaluating future initiatives in learning technologies, concurrent evaluation may only be worthwhile as a formative influence on individual projects. However, because different objectives can be served by evaluation, a clearer definition of the intended evaluation and its outcomes in each particular project will give everyone concerned a clearer sense of how the evaluation is to work in each case.
The relationship between overall Programme evaluation and individual project evaluations also needs clearer specification, and project evaluations can be more effective if they address specific characteristics and techniques of the work on which developers need feedback, rather than projects as a whole.

Exploration of dual-use systems
NCET’s industrial evaluation had seen a growth in the use of business AI applications, especially expert systems. In some cases the business expert diagnostic system is also now used for training (e.g. by using fictional case study material), backed by paper-based training. Expert system marketeers are moving in on training applications, using expert systems for training because they can be shown to reduce training time.

Meeting industry adoption criteria for training
Some of the problems addressed by training can be solved by AI directly, removing the need for training. Solutions should be sought for performance problems generally, not just training problems, addressing real business objectives, e.g. by hiding AI within otherwise conventional training or performance support. Dissemination which promotes the surface effects of AI, rather than its underlying mechanisms, addresses general issues of performance and so is more likely to open up budgets.
Dissemination activities

Since the end of the evaluations, dissemination has continued. Recent and continuing dissemination activities have included the Intelligent Learning Technology Initiative, which has established five user clubs to develop training applications using AI; the Adaptive Training open learning awareness package, developed by System Applied Technology; AI in Training Awareness Seminars, held jointly by System Applied Technology and Rediffusion; and the production of an AI in Training video.

Dissemination proved to be important. The topic encompasses how the ideas, outcomes and techniques developed by the Programme might become more widely available, and how specific products could be marketed. The LTU intends to include the outcomes of the Artificial Intelligence Applications to Learning Programme in any future seminars and general dissemination activities and will integrate any future AI work into its other activities. Separate dissemination of the outcomes of AI projects is not intended, however. Nor does the LTU have a marketing role, for it brings projects to prototype stage but leaves marketing to project institutions and other interested organisations. The LTU is not able to support total market-place dissemination but rather dissemination at conferences, workshops and other public events.

There is some concern about the lack of specific dissemination. Project teams indicated a lack of awareness of the activities and outcomes of other projects. Representatives of other interests wished to know what had taken place and how they could find out about it. There is also some concern that the many positive outcomes of the Programme will be lost if not targeted at specific audiences. In some cases, the Programme has resulted in products, such as the DUBS small business advisor, which are now being sold. The LTU supported any such dissemination activity by project teams, whether as publication or product.
Whilst TEED has to sanction any such activity which exploits or employs the results of the Programme, such agreement is not normally withheld. In this context several people have proposed independent publication of edited versions of the project reports, e.g. as a book documenting successes of technology-based training, focusing perhaps on the tools, techniques, educational principles and the nature of the teams needed. However, the LTU as yet has no plan for a synthesis of individual project reports.

Marketing

Although the LTU can influence marketing, it cannot implement general marketing strategies itself. Currently the key concepts to market are neither AI nor CBT on their own, but open and flexible learning (OFL), which includes the use of technology-based training (TBT). Small institutions may not be able to market without external funding, as they lack the skills, management and facilities to do so. For example, at Castle College, despite highly successful trialling of the CUSTOM package (a package for teaching customer care skills to catering students), it cannot be taken into the market-place without external support. The LTU sees consortia as the best way to achieve this, with TEED owning little of each consortium. Others suggest that if there is value in the Programme then entrepreneurs will exploit it, but that it cannot simply be left to the market-place. For example, System Applied Technology would create a market by raising awareness among clients, which means taking technological solutions to industry and looking for appropriate problems to solve. Consequently the best activity for the LTU might be to create a climate in which individual entrepreneurs and project teams can promote the products, skills and knowledge which they now have as a result of the Programme, through awareness-raising activities.
The LTU might indeed be able to create a climate conducive to successful marketing on the project firms’ behalf, especially for applications in the electronics and finance areas. However, this climate might not particularly serve the interests of educational institutions.

Ownership of information

Educationalists are also interested in access to the information which came out of the Programme. For projects which were jointly funded by public and private interests, the ownership of information is a barrier to dissemination through, for example, shareware, because of the private element. Public investment means the information is freely available, but source code remains proprietary. However, educationalists interested in obtaining the ideas developed by LTU funding should have no difficulty, as the prototypes are Crown Copyright, even where intellectual
property rights belong to the creator. Many collaborating firms (such as Rediffusion) in any case operate an open-door policy and are keen to provide information generally. With further development, prototypes can be sold, provided the Crown agrees (e.g. by arranging to recover its costs), so it is likely that some additional products will appear.

Consortia

Although consortia might not work for education, such an approach would produce cost benefit in industry, which is the LTU’s primary measure of success. Perhaps industrial consortia would not release information created by such projects, causing other projects to go over already-explored ground. Against this, such explorations would usually be needed in any case, producing significant variations in each particular context. Educational institutions could benefit from consortia if they were willing to back marketable products (e.g. Bradford University’s ELF system). Perhaps there should be different consortia with different ranges of expertise, with a view to exploitation. For example, by building explicit exploitation points of different kinds into projects, different ends could be served. Academics might be more interested in the generic features of interface design or in the techniques, which are the benefits meaningful to them, but not a directly marketable product. However, academic institutions typically could not join consortia as heavily investing clients, so consortia would not generally feed academic needs.

Whilst the project developers believe that AI technology did produce worthwhile results, there are still only a few applications around. This is probably because companies take a high risk: training is seen as an absorber, not a producer, of resources, and trainers know nothing of AI and distrust it. These barriers must be overcome before applications can become widespread. The problem is perhaps even wider than this, as trainers generally do not use CBT.
LTU research suggests that OFL is used in only about 9% of U.K. firms, CBT in 10% of those, and AI in only a small percentage of those. In contrast, however, 33% of large firms are exploring OFL, because they have training professionals and can provide cost-effective training. Small and medium firms cannot do so without specialist help. Furthermore, advisors such as small business counsellors are not recommending technology-based training solutions to small and medium firms because of their own lack of knowledge. So a priority is to convince such intermediaries of the value of OFL and TBT. Convincing intermediaries is difficult because they typically rely on their early training, which belongs to an era before OFL and TBT. Nor can small firms command the budgets needed for solutions of the sophistication of the Iccarus type. They need simpler and cheaper tools.

LESSONS LEARNED
Technology-led development

In principle, development should be led by finding solutions to problems, but there is a wide belief that a substantial line of innovative development has to be technology led. Otherwise new techniques would not be tried, discoveries could not be made, and no “visionary” or exploratory work would be likely to take place. There is a need for “collective envisaging”, allowing projects to pool expertise and skills in a collective view of what might be possible. The LTU could perhaps fund a forward-looking group to promote the results of the Programme for precisely this purpose. A development route which identifies a solution, then looks for a problem, sometimes works. Once it has worked, and a match between solution and problem can be shown, then others are able to find similar problems.

Richer training systems

One profitable line for development is to build on the U.S. systems approach to training (the conventional sequence of tutorial, practice and test) by enhancing such systems with some simulation and logical decision making to give richer training systems. Some new developments, such as hypertext, have also caused trainers to reflect on the nature and quality of training in new ways and to discover both inadequacies of conventional provision and new forms of training provision. Such developments do pay for themselves because they are reusable. For example, different learner needs and experience can be satisfied by such a “rich” system in ways which are beyond the scope of conventional CBT.
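The kind of enrichment described above can be illustrated with a minimal sketch: the conventional tutorial-practice-test sequence, plus a simple rule-based branch that adapts to the learner's score. The module names, pass marks and thresholds here are invented for illustration and are not taken from any Programme demonstrator.

```python
# Sketch: conventional CBT sequencing enriched with a simple
# adaptive branch driven by the learner's test score.
from dataclasses import dataclass


@dataclass
class Module:
    name: str
    pass_mark: float  # fraction of test items answered correctly


def next_step(module: Module, score: float) -> str:
    """Decide the next step from a test score in [0.0, 1.0]."""
    if score >= module.pass_mark:
        return "advance"            # conventional CBT behaviour
    elif score >= module.pass_mark - 0.2:
        return "extra_practice"     # adaptive branch: targeted drill
    else:
        return "repeat_tutorial"    # adaptive branch: re-teach


course = [Module("boiler_basics", 0.8), Module("fault_signs", 0.7)]

# A learner scoring 0.65 on "boiler_basics" gets extra practice
# rather than a bare pass/fail, which a fixed sequence cannot offer.
print(next_step(course[0], 0.65))  # → extra_practice
```

Even this small amount of decision making lets one system serve learners with different needs and experience, which is the reusability benefit claimed above.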
An ideal training system which uses AI must feel “good” to the client, should go beyond existing adaptive training systems, should offer a high rate of training, should address an application area where it is cost effective, should possess only some state-of-the-art technology, should have visual appeal, should give sophisticated feedback, should fit closely with the user’s current practice, should be able to reason a little more competently than a user and should have a shelf life of more than 3 years.

AI can also be useful in restricting the problem space in complex models of machines for fault diagnosis. Rather than representing human expertise, AI can be used to prune the search space in, for example, a model of a faulty machine.
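Search-space pruning of this kind can be sketched very simply: instead of encoding an expert's diagnostic rules, a model maps each candidate fault to the symptoms it would produce, and observations eliminate inconsistent candidates. The component names and symptoms below are invented for illustration.

```python
# Sketch: model-based pruning of a fault-diagnosis search space.
# Each candidate fault is mapped to the symptoms it would produce.
fault_model = {
    "blocked_valve":   {"low_flow", "high_pressure"},
    "worn_pump":       {"low_flow", "vibration"},
    "sensor_drift":    {"high_pressure"},
    "cracked_housing": {"vibration", "leak"},
}


def prune(observed: set) -> list:
    """Keep only faults whose predicted symptoms cover every observation."""
    return sorted(fault for fault, symptoms in fault_model.items()
                  if observed <= symptoms)


# Observing low flow and vibration rules out everything but the worn
# pump, without any hand-coded expert rules.
print(prune({"low_flow", "vibration"}))  # → ['worn_pump']
```

The design point is that the knowledge is declarative (the fault-to-symptom map), so updating the model as the machine changes does not require reprogramming the diagnostic procedure.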
There is a need for high-level authoring tools which experienced teachers can use to develop teaching materials which include an AI component, enabling people to work with a familiar educational vocabulary and with educational objectives and prerequisites. Educational software development may only expand when more tools, such as HyperCard, are available. However, even HyperCard uses a language of “cards” and “buttons”, not a vocabulary familiar to educationalists. AI tools should perhaps be given to trainers and domain experts, rather than software specialists, but suitable tools are not likely to appear for some time.

AI might also be used to provide a set of tools for developing instructional materials, e.g. by using existing expert system shells. However, existing expert systems are either inadequate or require additional programming to create appropriate interfaces. Most tools are far too difficult for the average trainer. Creating such tools is a genuine AI research problem, particularly tools that will enable people to express their expertise comfortably. The complexity of mapping high-level educational concepts onto robust AI routines suggests that such tools will not be available for several years.

There is also a need for constant underpinning research, or the whole development of AI in training will decline. Academics are not generally motivated to take material into the market-place, but are intellectually fired by such research, and so need that support. If this passion is not capitalised upon to create many “vignettes” of what might be done with the technologies, U.S. and European AI project consortia will simply fill the need, as they are already beginning to do.
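One small piece of the authoring-tool idea described above can be sketched: the trainer declares lessons in educational vocabulary (names and prerequisites), and the tool derives a teaching order, hiding the programming. The lesson names are hypothetical and the example covers only sequencing, not content authoring.

```python
# Sketch: an authoring layer where a trainer declares lessons and
# prerequisites, and the tool derives a valid teaching order.
from graphlib import TopologicalSorter

lessons = {
    # lesson: set of prerequisite lessons (trainer's vocabulary)
    "reading_gauges":  set(),
    "boiler_startup":  {"reading_gauges"},
    "fault_diagnosis": {"boiler_startup", "reading_gauges"},
}


def teaching_order(lessons: dict) -> list:
    """Order lessons so every prerequisite is taught first."""
    return list(TopologicalSorter(lessons).static_order())


order = teaching_order(lessons)
print(order)  # every prerequisite appears before the lesson needing it
```

A real authoring tool would of course need far more (objectives, test items, remediation paths), but even this shows how a declarative, trainer-facing description can be turned into behaviour without the trainer writing code.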
It is now realistic to expect cost benefits of AI technology. Robust examples demonstrating those benefits are available both from the Programme and elsewhere. One such benefit is that AI allows the building of better training, and is no more expensive than conventional CBT and demonstrably more cost effective than other means. Most AI training activities cost the same as conventional CBT. AI is also able to show benefit in specific areas, notably in simulation and fault diagnosis. For example, in the Rediffusion Tate & Lyle training system, expert assistance during the training event proved to have many fewer bugs than conventional authored CBT. Building and updating declarative components in training software (as in the IKBS component) is also much easier than the equivalent procedural tasks (e.g. in building the user interface). One area for improvement is in building good quality text for the interface. This can be more expensive than building the knowledge base, as the correct, most appropriate wording for the audience is expensive to achieve. Knowledge for IKBS is relatively easy to get, but the communication is much harder. Thus, implementing AI is faster than conventional programming, but sophisticated dialogue takes longer. Cost savings can be clearly demonstrated, as is shown in the data in Table 1.

Table 1. Projects run by System Applied Technology (all data in pounds sterling)

Application        Costs     Savings in year 1   Cost of comparable training   Saving on training
Running boilers    160,000   450,000             196,000                       36,000
Disk training      94,500    430,000             105,750                       26,250
Assurance          116,000   85,000              147,000                       31,000
Building society   150,000   250,000             155,000                       5,000
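The cost case can be put in simple arithmetic terms. Using the figures as transcribed from Table 1 (some values are hard to read in the original, so treat them as indicative), first-year payback is year-1 savings divided by development cost.

```python
# Worked check of the Table 1 cost case (all figures in pounds
# sterling, as transcribed; treat them as indicative).
projects = {
    # application: (development cost, savings in year 1)
    "Running boilers":  (160_000, 450_000),
    "Disk training":    (94_500, 430_000),
    "Assurance":        (116_000, 85_000),
    "Building society": (150_000, 250_000),
}

for name, (cost, savings) in projects.items():
    # Payback ratio > 1 means the system recovered its cost in year 1.
    print(f"{name}: year-1 payback ratio {savings / cost:.2f}")
```

On these figures, three of the four applications more than recover their development cost within the first year; only the Assurance system does not.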
A further identified benefit of AI in training is an increase in effectiveness. In the SAT boiler system, Waterfront, training is given when trainees need it, i.e. when they have a problem. An expert system at the elbow of the operator can act as an adviser and diagnostic tool in normal use, until its output is not understood by the operator. At that point training can bring the operator’s understanding to the necessary level. So in the Waterfront expert system there is a resident tutor which, at any time during consultation with the expert, can give definitions and can offer tutorials on the terms used (including 2.5 h of embedded CBT). This has the function of training at the moment of greatest need.

The same system can be used differently for the induction of people who begin with no knowledge. Training Needs Analysis (TNA) identifies the start point (in the Waterfront system, by using another expert system which knows about training in the area of boiler technology), giving the learner a profile scored on different components of the training. With that knowledge the tutoring system can then select the training needed, and refine the model by monitoring the learner during the course of training, giving an individualised training route. System Applied Technology believes there are savings of 35% on training time because all redundant information is omitted from the training. Effectively this saves three staff per year, and it could not have been done as cost-effectively without AI.

OUTCOMES

There is some worry that the positive outcomes of the Programme, of which many have been identified, will be lost. Perhaps the LTU could disseminate existing information more widely, improving access to project and Programme outcome information and creating a climate of awareness in which existing successes could be developed and marketed, especially by disseminating the training and cost benefits of AI.
Funding strategies for any future Programmes might also be reviewed, along with projected timescales; the role of project evaluations could be clarified at the start of projects, whilst summative Programme evaluations might be delayed until after the Programme. The LTU will investigate ways of incorporating AI training in other applications, of building AI into future projects in different contexts, and of developing AI training tools for non-specialists. Whilst AI still proves an exciting area for many involved in learning technology, it will only succeed by appropriate integration with other technologies, such as intelligent simulation.

REFERENCES

1. Williams N., The Training Agency Conference for Contractors on the Artificial Intelligence Applications to Learning Programme. Learning Technology Unit, TEED, Moorfoot, Sheffield (1989).
2. Goodman L. M., Evaluation of the Further and Higher Education (FHE) Section of the Training Agency’s “AI Applications to Learning Programme”. Educ. Train. Technol. Int. 26, 322-334 (1989).