Directions for open source software over the next decade

John C. Nash *

Telfer School of Management, University of Ottawa, Ottawa, ON K1N 9B5, Canada
A R T I C L E  I N F O

Article history: Available online 16 December 2009

A B S T R A C T

Open source software lets users study, modify and redistribute the source code. It has shown a surprisingly robust level of activity and importance in the computing world despite extreme dominance of Microsoft operating and office software in the workstation marketplace and the strength of commercial players in the server and industrial sectors. Possible evolutionary drivers are presented for open source software for the next decade, looking at the nature as well as level of use, with preliminary discussion of how the open source approach might be applied to other idea-based technologies, including foresight methods.

© 2009 Elsevier Ltd. All rights reserved.
1. Introduction and definitions

The generation and application of knowledge is central to modern society, giving importance to the mechanisms by which we carry it out and govern it. In this paper, we consider the open source approach to knowledge creation and management, using the most familiar sphere in which it has been applied, namely open source software. Some workers use the acronym FLOSS, for Free/Libre and Open Source Software.

At the time of writing, the information technology sector as a whole is just beginning to realize the importance of open source software and the methods used to generate it. We will argue that it could, over the next decade, take quite a prominent role in the industry. Space limits the detail that can be provided of both the forecasts and how they were generated, but hopefully the key elements are presented along with arguments that foresight methods, including scenario planning, may benefit from open source approaches.

For this discussion, we will use "open source" to mean mechanisms used to create knowledge that also grant anyone the right to:

- access the ideas and content, e.g., source code of software;
- modify such material; and
- redistribute the material or ideas.
Typically, while redistribution is encouraged, there are conditions, usually requiring that all new materials be redistributed under the same or similar conditions of use and redistribution. This is the essence of the Gnu General Public License (GPL, see [4]), which is the archetypal open source license. The Free Software Foundation and Richard Stallman, its core activist, have played a lead role in establishing such licensing frameworks (see http://www.fsf.org). Given the strong-minded nature of the people involved, there are alternative viewpoints, such as that of Bruce Perens and the Open Source Initiative, who provide their own definition and a list of alternative licenses for open source software [14].
* Tel.: +1 613 236 6108. E-mail address: [email protected].

doi:10.1016/j.futures.2009.11.027
While software development has been the main arena for open source mechanisms, other knowledge-based activities may employ the same general approach. Examples include the Open Law project (http://cyber.law.harvard.edu/openlaw/), wherein lawyers are providing legal arguments in an open source environment; Wikibooks (http://en.wikibooks.org/wiki/Main_Page), a collection of open content textbooks (note that the Free Software Foundation also lists a number of "free" as in free speech (rather than no cost) books at http://www.gnu.org/doc/other-free-books.html); and Wikipedia (https://www.wikipedia.org). For brevity in this paper, open source software will serve as a surrogate for these ventures upon the sea of ideas.

2. History and motivations

The antecedents of the open source movement are ancient, founded on the recognition that contributing to a shared effort often delivers benefits greater than the contribution. Even among those fiercely independent settlers of the USA in the 19th century, there was a strong tradition of barn raising, where competitors helped each other build the infrastructure of their industry. Ignoring for the moment purely altruistic motives, barn raising illustrates the sophistication and subtlety of the decision-making that leads people to participate in open source ventures. That is, there must be a forecast or estimation of the returns from what is "given away" in resources to generate the content of the venture.

Some of the "returns" verge on the intangible. In my own case, as a developer as well as a user of software, I find I have benefited enormously from many exchanges of ideas with members of the open source community, largely in getting timely help and advice with software and hardware problems. While this is fine for an individual, and even for small companies that have become large, such as Red Hat, it is clear that multinational companies such as IBM have made a much more careful analysis, for example, in putting the Eclipse project (http://www.eclipse.org) into an open source framework [16].

Despite the open source activity of IBM and others, commentaries continue to appear that imply open source is essentially a field of amateurs and hobbyists. This may be a view that is advantageous to, and even promoted by, some proprietary software interests, but many of the significant open source software projects have paid, well-trained core workers, and many of the rest have such people contributing on a part-time basis. Indeed, the motivations for contributing to and using open source projects have begun to be better understood, with business or enterprise models and economic arguments to back them up [3,5,11–13,15,17–20,25,26].

As with most human endeavours, there are a multitude of "small" projects involving only a few people and less than a few thousand lines of code, and a handful of large projects where many workers are involved with significant collections of code and documentation. Some of the small projects are or become important even if they remain small, and these may fit the "dedicated amateur" stereotype. Most of the large, well-resourced projects tend to provide what we may call infrastructure for an industrial sector.
Thus we see:

- the Linux and BSD operating-system kernels and their related distributions that package the kernel with other software such as the Gnu tools and compilers (what people commonly view as "Linux" is typically a particular packaging or "distribution" of these components, such as those of Red Hat, SUSE, or Ubuntu);
- various Internet applications (Apache, Firefox, ftp clients and servers, the PHP scripting language);
- programming environments (Eclipse, FPC/Lazarus), databases (PostgreSQL, MySQL) and similar facilities where shared effort pays more dividends than individual struggle.
Sourceforge (www.sourceforge.net) hosts a great many open source projects. Few become large or even viable [29]; most – including my own – serve to document initiatives that either are truly dead ends or are absorbed by other projects, whether by appropriation of code, of ideas, or even of the project members. In this way, open source allows others to see approaches that were less successful. Proprietary failures that do not impact users are generally unseen.

We have about half a century of experience with a form of open source software that began with the IBM 704 SHARE library. In the introduction to the manual [6] we find:

"The mutual respect that the participants in these discussions had for the programming competence of the others soon brought the realization that an 'isolationist' attitude no longer existed, and almost all professed themselves as quite willing to accept the ideas of others, even to the extent of obsoleting things already done within their own installations. It was unanimously agreed that a full-scale attempt should be made to bring SHARE into being."

Clearly many users of open source content such as software or documentation do not contribute, but simply receive and use the content. However, even these passive users may benefit the contributor by enlarging the pool of those familiar with the venture and having a stake in its success, much as a large number of citations of a research paper is considered a measure of the "impact" of that paper.

3. Possible influencing factors

Let us list some of the issues that are likely to promote or inhibit the evolution of open source. These issues will underlie the scenarios we can propose.
3.1. Greater appreciation of the models by which enterprises may benefit from open source processes

This has been dealt with in the previous section.

3.2. Managing volunteer resources to create content

Open source projects are decentralized volunteer organizations, and volunteers can walk away. Unlike paid employees, they can ignore instructions, and sometimes they can be a big nuisance. From the point of view of the project, the key is to maximize the value of community while minimizing the distractions. Better understanding of how to marshal disparate volunteer resources to create coherent and high-quality software or similar content is critical to large open source projects, and it is my opinion that this understanding is still evolving. There is a very dynamic but very delicate balancing act that must go on between the need for management of the "project" and openness to volunteer input. Let us look at two examples from my own background and perspective.

Gnumeric is the spreadsheet tool of the GNOME desktop initiative (http://www.gnome.org/projects/gnumeric/) that nevertheless runs on other operating systems. My experience in trying to contribute (by invitation of the lead developer, Jody Goldberg) has been that the size and complexity of the GNOME project, with considerable interaction between different components, makes it very difficult to learn enough about the project to make useful contributions quickly. The existing "management team" is reluctant to allow input from all but a few trusted participants because the entire project might be compromised. The cost is a loss of the resources of volunteers, simply because there is no good mechanism to welcome them appropriately and channel their energy productively. There are ways this situation could be remedied, but that discussion is outside the present focus.

R, a statistical language and package (http://www.r-project.org), came about as a substitute for the (expensive) S language/statistical package that was started by an imaginative team at Bell Labs in the 1970s. Two academics at the University of Auckland (Robert Gentleman and Ross Ihaka) wanted their students to have some idea of how S worked. Their "toy" grew so much that some members of the S team are now part of R [8]. R has in many ways been more successful than GNOME in harnessing volunteers. I believe this is because the core of R is quite small and can be developed and maintained by a modest-sized team. What gives R its "punch" is literally hundreds of contributed packages, as well as contributed documentation, which are relatively easy to build and submit. Some of these are neither very useful nor very well done. Indeed there are often two or three contenders for any particular job. Users vote with their mouse/pointer, or sometimes will modify and improve things.

This does not mean that everything goes smoothly in R. There are plenty of "glitchy" problems, and it does not yet have a really good graphical interface for users who hate the keyboard. Building packages is only relatively easy, and could be a lot more friendly to non-developers who nonetheless can program R scripts. However, R does seem to have a structure that is more conducive to allowing volunteer contributions.
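As a minimal sketch of the low entry barrier behind this flow of contributions (my illustration here, not part of the formal argument), the following R session fetches and tries out a contributed package from the CRAN archive; the package name "zoo" is merely one example among the hundreds mentioned above.

    ## Fetch and install a contributed package from a CRAN mirror;
    ## "zoo" (irregular time-series tools) stands in for any of the
    ## hundreds of contributed packages.
    install.packages("zoo")     # one-line fetch and install
    library(zoo)                # load the package into the session
    ## A tiny trial: five random values indexed by date.
    z <- zoo(rnorm(5), order.by = Sys.Date() + 0:4)
    print(z)

The same install.packages() mechanism is how users "vote with their mouse/pointer": a package that does its job gets installed, loaded and kept, while a weaker contender is simply removed.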
3.3. Movements to extend "intellectual property" rights

There are sporadic campaigns by the entertainment and commercial software industries to extend copyright, trademark and patent laws or to use existing ones in an aggressive manner. Here we are dealing with "Intellectual Property" fog. The choice of the word "fog" is deliberate, as are the quotation marks for "IP". Let us be clear: "Intellectual Property" is an artificial construct, much like sqrt(-1). Thoughts and ideas cannot be owned, despite the best efforts of tyrants and villains. While the purpose of IP is ostensibly to reward those who generate ideas or expressions of ideas, there are immense volumes of discussion and argument showing that most benefits go to those people or companies who are adept at manipulating the governmental machinery that controls how copyrights, trademarks and patents are run. A few good examples are found in the Wikipedia list (http://en.wikipedia.org/wiki/List_of_software_patents#Business_methods) of software patents and the links therein.

From the open source projects' point of view, there are some serious negatives in the current IP machinery. First, many smaller open source projects are almost entirely volunteer-supported, while IP holders (who are generally not the creators) are centralized corporations. This means that threats of legal action may be enough to inhibit development of software or similar output even if there is no substantive case to answer. The troubles of VirtualDub (http://en.wikipedia.org/wiki/VirtualDub), first in withdrawing support for importing the Microsoft ASF format and then with an apparently fraudulent attempt to seize the name as a word mark in Germany, highlight this sort of trouble. Of course, even strong companies have such problems, as in the $612.5 million payout by RIM to NTP for "violation" of a patent that was already invalidated at the time of payment [http://www.eweek.com/article2/0,1895,1931062,00.asp].

On the other side of the argument, we now find many very large organizations, of which IBM is possibly the leader, whose businesses are increasingly nourished by open source outputs. This does not mean that their revenues are derived entirely from open source initiatives, but rather that those initiatives are important drivers of revenues [16].

More troubling than real legal proceedings is the silent avoidance of the use of open source software "in case" of possible trouble. Surprisingly, it is rarely mentioned that most proprietary software is at least as vulnerable. Note that Microsoft lost to Stac Electronics, had to pay $120 million and recall many copies of MS DOS 6.2 [http://www.vaxxine.com/lawyers/articles/stac.html].
A related concern is that use of components licensed under the Gnu General Public License (GPL, see [4]) may oblige a developer to release his own work under the same license. That, in essence, was one of the goals of the GPL. Largely, this is an issue of knowing your sources, but many companies may prefer non-GPL licenses. Moreover, there are periodic "warnings" about the use of "viral" licenses. These are possibly attempts by proprietary tool and library vendors to scare up business that would otherwise be lost to open source. The amplification of concern related to such matters is commonly termed FUD, for "fear, uncertainty and doubt" [9].

On balance, it appears that the open source movement will likely continue to be harassed by issues relating to "intellectual property". These are unlikely to be fatal, but the process of their resolution may seriously alter the rate of growth of open source activity and use.

3.4. Prosecutions of monopolist players for abuse of market dominance

The various and ongoing legal actions against Microsoft in the USA and Europe are too complex to be discussed here. The following points, which summarize my own opinion, are just a part of the ongoing controversy:

- So far Microsoft has escaped penalties that will substantially hurt its operations, even including the recent fine of approximately 0.5 billion Euros in Europe [24], and seems likely to continue to prevail in foot-dragging over allowing a level playing field for non-Microsoft developers.
- Despite periodic announcements by manufacturers, the only laptop I have been able to verify as delivered with Linux is the Asus Eee PC, which is packaged with Linux as the default. Eurocom and some smaller manufacturers have web-ordering pages with Linux as a choice, but Dell and Lenovo sales people have expressed ignorance of how to offer their machines with Linux when confronted with the press releases. Dell has asked to present what it plans to offer to the Ottawa Canada Linux Users Group in February 2008. In Europe, however, links provided by a referee suggest that a Dell machine may be purchased with Ubuntu installed.
- While providing only anecdotal evidence, complaints made by Windows users about the high-handedness of Microsoft and similar players seem likely to foster a subtle perception on the part of the public in favour of alternatives such as open source offerings, but only if those alternatives exist, are accessible and are deemed credible.

3.5. Adoption of open source software on the desktop by ordinary users

We will consider any user who does not develop software as ordinary. Such users are beginning to use applications such as the Firefox browser and the OpenOffice.org office suite. A very few folk have become disenchanted with the paternalism of Windows updates and the difficulty of maintaining a stable desktop through updates, and have switched to Linux or Macintosh. This is, however, a minority at the moment, and growth seems to depend on knowing someone who is already using a system and can "help out". Thus numbers may increase greatly in percentage terms without appreciably changing the absolute levels of participation. The packaging and support provided by Ubuntu appear to be having some influence in getting people to try Linux, and this is likely to be accelerated by the increasing prices of Windows Vista and the push by Microsoft to have users update applications such as Microsoft Office. We return to this in the next section.
3.6. Impact of government and institutional choices to use or promote open source

There have been a number of initiatives by governments and institutions to use open source software in their own operations, of which but one example is the South African venture http://www.oss.gov.za/. In my experience, these have so far had limited impact, as the public servants who must implement the policies are not conversant with either the processes or the products of open source. Risking the many dangers of generalization:

- Those who choose public employment are often risk averse.
- Using something unfamiliar is perceived as risky.
- "If we develop it ourselves, we cannot blame (name your proprietary vendor)."

Indeed, we prepared a study for a client (subsequently published in summary form as [1]) pointing out the need for good learning tools so public servants could acquire the necessary understanding to allow them to recognize appropriate situations for the use of open source software in government applications.

A special impetus for open source comes from the developing world. Its main attractions in those areas of the world that are technology-poor are that its acquisition costs are low and that it may work well with limited computing or similar assets. Whereas Microsoft has little interest in supporting MS DOS or Windows 98, which will work quite well on computers considered "too small" by any North American or European teenager, there are several variants of Linux that perform admirably on such machines. The One Laptop Per Child (OLPC, http://www.laptop.org) project clearly envisages the use of an open source operating system. Nevertheless, there are rumours that some countries, Brazil in particular, may wish to equip their machines with Microsoft Windows in some form. Whether this interest stems from a feeling that the "real" world uses Windows or is simply a fear that third-world children are getting a less-than-modern operating system is unclear. Since the OLPC software
is being developed alongside the hardware, and the design is being specialized for children to use, including programming its software, this very much smells of pensioners prescribing playground games. The high cost structure for using the Windows software "stack" has been amply detailed by Deugo [2], where the principal cost components arise from the server software needed to support collaboration of the client components. The cost per user of such a choice is well in excess of the overall cost of the OLPC laptop. A possible counter-current may be the recent agreements between Microsoft and China [10] that offer Microsoft software to Chinese students and others at very favourable prices, though China then voted "no" to Microsoft's proposed OOXML standard [7].

3.7. Software for internal use

An often overlooked aspect of software is that the vast majority of code is written for internal use. That is, code is prepared to run websites, manipulate private databases, link certain equipment together, and so on. Despite considerable effort, I have been unable to obtain data I consider reliable on the proportion of programmers who work on such internal and user-maintenance tasks, as opposed to preparing software packages or repairing them. As a developer myself, I estimate from a brief review of the code I have written on my laptop that over 90% of my software work is for local or internal use. Even my most used public open source project (etutor.sf.net) was prepared for my own institution's benefit.

It makes little sense for workers trying to prepare "quick fixes" to write software from scratch if there are suitable building blocks of sufficient quality available. There is, moreover, a large and readily available collection of open source tools. For the most part, installation from the Internet is a one-line command (for example, using the apt tool in Debian-family distributions of Linux such as Xandros or Ubuntu). The time cost and delay to "try things out" is small. In my experience, there are generally two or three reasonable choices, and decisions are likely to be made on the ease with which the chosen package fits with local requirements and style of usage. Proprietary software must also be tried out and tested, and while it may offer more support (my experience is totally to the contrary), there is very little that can be done to avoid the learning costs. Moreover, in most institutional environments, software acquisition that involves a money outlay needs documentation and approvals, with delays of days or weeks. Here the clear advantage is with open source. Furthermore, any open source package with fairly widespread applicability will have commentaries or reviews as well as forum contributions. For packages that give difficulties, there will be more extensive forum entries detailing problems and possible solutions. One also generally learns who wrote the software and what else they have done.

Note that in this paper, and particularly this section, I have not dealt with embedded software such as that designed to run a security camera or a mobile telephone. Both proprietary and open source options exist in the embedded software domain, but because the choice of software to install is not made by the device owner, I will not treat this topic here.
4. Open source concepts beyond software

The bulk of applications of open source methodologies, that is, approaches to knowledge generation that are cooperative and encourage reuse and distribution of the ideas, have been in software. Outside this domain, "open source" is still largely experimental, seeking to exploit an altered cost or organizational structure brought about by information technology, possibly along with an element of infrastructure building where an individual gains more by contributing to a community than by acting alone. Thus the Open Law and Wikipedia projects build useful tools that no individual could afford to create. In academic domains, open journals exploit the vastly reduced costs of electronic editorial and distribution processes [27]. Open processes and standards have, under different names, existed for a long time in the form of professional accreditation mechanisms and communities of practice, though viewing them through an open source lens is unusual. Of course, some of these ventures are privatized and rendered proprietary via trademarks or similar constraints (e.g., law societies, colleges of medicine). Open hardware projects, or more likely open designs for real objects, are beginning to appear, as in the OpenMoko mobile phone [28]. However, as we move away from "tools" and infrastructure, there is a greater need for well-planned business models for extracting either revenues or cost savings.

When versions of this paper have been presented, the non-software applications of open source ideas have dominated the questions and discussion. This interest has been mirrored in the referees' suggestions. Readers should note, however, that these applications are very much in their formative stage, as evidenced by the dates of the references to them in this paper, many of which post-date the conference where the original presentation of the ideas was made.

5. Foresight methods and open source ideas

Foresight and forecasting methods are, of course, quintessentially about ideas, and as such lend themselves to open source development. Traditional surveys, such as the overview of scenario methods in current use [23], must try to take a snapshot of what is available and attempt to interpolate between several similar approaches by different authors. Viewing methods as software, and in particular as open source projects, allows for their evolution and refinement, though at the moment academic credit for contributions, even to proprietary software, is seldom commensurate with either its cost in effort or its value to other workers. The issue of attribution and credit for work on open projects is the subject of our ongoing investigations at the time of writing.
Technology foresight in particular is about the decisions to adopt a particular type of technology, and the decision-making does not follow straight-line paths. Typical approaches to modeling trends do not apply. Open source software can be installed in parallel to proprietary software, but it may be difficult to learn which packages are actually being used, so that market penetration curves (sigmoid curves) may be unsuited to the bumpy data that result from users switching back and forth. Moreover, what we really want to look at is the trend in the unobservable inclination of the population to choose an open source approach, rather than the package that they are using at one particular moment. We want to know which way the control lever is being moved, rather than simply the result of that movement. People and organizations can make sudden switches from one approach to another, leading to the possibility of a market collapse for a particular software product. This is akin to pressures building within a volcano: not much happens for a very long while, followed by an eruption. Predicting the time of the eruption is a difficult but important task.

There are parallels here with Nelson's [21] strategic foresight ideas concerning "change, evolution and transformation of human consciousness and culture". Furthermore, following Tevis' [22] ideas, we may consider open source software as a low-entry-cost technology open to a wide spectrum of the population. This allows us to enact the future, not react to it.
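To make the difficulty with sigmoid curves concrete, the following minimal sketch (my own illustration, with invented numbers) simulates bumpy adoption data in which users switch back and forth around a hidden logistic inclination, then fits a standard logistic penetration curve using R's nls() function.

    ## Simulate a hidden, smoothly rising inclination to adopt, observed
    ## only through bumpy market-share data (users switching back and
    ## forth), then fit an ordinary logistic penetration curve to it.
    set.seed(1)
    t <- 1:40
    inclination <- 1 / (1 + exp(-(t - 20) / 4))    # unobservable trend
    share <- pmin(pmax(inclination + 0.15 * sin(t) +
                       rnorm(40, 0, 0.05), 0), 1)  # bumpy observations
    fit <- nls(share ~ 1 / (1 + exp(-(t - m) / s)),
               start = list(m = 15, s = 2))
    summary(fit)  # fitted midpoint m and scale s need not match 20 and 4

The point is not the particular numbers, but that the smooth fitted curve summarizes only the result of the "lever" movements; the switching behaviour we would most like to track remains hidden.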
6. Out on a Limb: one worker's forecast

It is tempting to try to prepare a 10-year forecast by using numbers of packages on Sourceforge, the proportion of users choosing Firefox, shares of desktop or laptop operating systems and similar measures of activity. As someone heavily involved with the open source community, but who also observes the more general panorama of the use of software and similar idea-based content, I believe that what we really want to measure, should such a measure be feasible, is the level of general understanding of the possibilities of community-based development of ideas such as software.

Open source methods are highly suited to situations where collaboration and consensus are key to success. They may, however, be unsuited where the content creation is particular to an individual or coherent team, or where the economics of the market segment favour a proprietary model. At the time of writing, entertainment software such as video games appears to be such a sector, though open source tools exist to create such products. Despite a wide variety of open source software, we still appear to be in the early phase of understanding the relevant choices between open source and other modes of development.

In the last few years, more software has become platform-independent. Many of the major open source packages have versions for all the main hardware and operating-system platforms. By contrast, many proprietary packages seem to be available "only on" a single platform. While corporate IT departments sometimes try to enforce a "standard" architecture, these departments do not control the actions of the shareholders, who will almost certainly force two incompatible IT departments to adapt to a merger or acquisition. An obviously better approach is to standardize on portable file formats (e.g., the ODF used by OpenOffice.org) and methods of access, such as web-based collaborative applications (for example Google Docs, docs.google.com), where the operating system used by workers on their workstations is largely irrelevant. The power of the personal computer is that it can be personalized to our needs, not that it forces us to accept an inconvenient work method defined by someone who has no knowledge of our situation. The advantage here is with open source, and my forecast is that platform-independent, flexible systems will see considerable growth, with a good proportion, possibly half or more, based on open source components by 2020. This proportion, for information, is partly based on an informal poll of my own departmental academic colleagues, of whom only about half were using Microsoft Word for document preparation in early 2007.

It is difficult to predict the ebb and flow of different operating systems. For the server community, Linux and BSD seem likely to continue running a large proportion of machines. Unfortunately, there will be many articles such as [19] that shout "Windows bumps Unix as top server OS" when they are only considering sales rather than installations. I personally run six Linux servers, one of which is a network storage unit with Linux embedded, and have only purchased one Linux distribution package, the rest being downloads. Similarly, laptops are almost all pre-loaded with Windows whether it is wanted or not, so a Windows "sale" will be counted for three out of four of my laptops even though Linux is installed instead, or as well. http://www.sqlspace.com/viewtopic.php?p=162561#162561, posted April 26, 2007, provides an interesting, though potentially unreliable, snippet suggesting that Linux is used twice as frequently as Windows Vista as the operating system in a sample of 125,000 unique machines that accessed a number of web sites related to general sports and political subjects. The same item reported similar results from another attempt (http://www.w3schools.com/browsers/browsers_os.asp) to assess which operating systems were used. Both reports, however, continue to show the market share for the whole Windows family of operating systems at about 90%. Thus we are forecasting from a very low percentage for both Vista and Linux.

The potential now exists for Linux and/or BSD to take significant portions of the market from the Windows family, given the technical equality of the offerings. There remains, however, the emotional decision to switch from what is perceived to be the usual or common choice, even if that choice is very expensive. My own guess is that enough people are becoming aware of the marketing ploys, and are willing to try a change, that we could see a large-scale shift similar to that witnessed in the collapse of IBM's market dominance in the 1980s. Whether this will happen is an open question.

In summary, my forecast is mainly for growth in platform-independent applications, with potential (but possibly unrealized) growth in the operating software area. There will be more use of open source software, some well-considered and some not. I anticipate that by 2020 roughly 30% of any segment of activity that can be called "infrastructure" will be open source, such as server and client tools, office applications, utilities and operating systems.
This 30% figure is based on the current success of the Firefox web browser and of several of the open source programming environments such as Eclipse. Clearly, there will be a great deal of variation around this figure.
This paper is, of course, but one view. To follow up the concepts raised in this paper, I have established a wiki at http://nash.management.uottawa.ca/ffwiki/ to allow for the ongoing discussion, redirection and refinement of this exercise in forecasting. Access is open, but contributing edits or additions requires a username and password, which will be supplied on request to the author (or, in fact, to any registered user). The wiki is, of course, hosted using open source software, namely Debian Testing, Apache2, PHP5, MySQL and MediaWiki.

Acknowledgements

My participation in Foresight 07 was supported by the Telfer School of Management Travel Funds. The School also supplied the (virtual) server nash.management.uottawa.ca that hosts the ff-wiki.

References

[1] J. Calof, J.C. Nash, Learning experiences—open source, in: D. Remenyi (Ed.), Proceedings of the 2005 International Conference on e-Government, Academic Conferences Ltd., Reading, UK, 2005, pp. 69–76.
[2] D. Deugo, Open source software development: alchemy of open source, in: OCRI Partnership Conference, April 19, 2007, http://www.ocri.ca/events/presentations/partnership/April1907/DwightDeugo.pdf (accessed 9 May 2007).
[3] R.A. Ghosh, et al., Economic impact of open source software on innovation and the competitiveness of the Information and Communication Technologies (ICT) sector in the EU, United Nations University, Maastricht, NL, 2006, ec.europa.eu/enterprise/ict/policy/doc/2006-11-20-flossimpact.pdf (accessed 22 January 2007).
[4] Gnu.org, Gnu General Public License, 2006, http://www.gnu.org/copyleft/gpl.html (accessed 22 January 2007).
[5] B. Golden, Succeeding with Open Source, Addison-Wesley, Boston, MA, 2005.
[6] J. Greenstadt, et al., SHARE Reference Manual for the IBM 704, 1959. PDF form of scan at http://www.piercefuller.com/library/share59.html (accessed 7 May 2007).
[7] Groklaw, The results of the ISO voting: Office Open XML is disapproved—updated: it's official, September 4, 2007, http://www.groklaw.net/articlebasic.php?story=20070904082606181 (accessed 12 September 2007).
[8] R. Ihaka, R: past and future history, Computing Science and Statistics 30 (1998) 392–396. See also http://cran.r-project.org/doc/html/interface98paper/paper.html (accessed 24 January 2007).
[9] R. Irwin, What is FUD?, 1998, http://www.cavcomp.demon.co.uk/halloween/fuddef.html (accessed 24 January 2007).
[10] D. Kirkpatrick, How Microsoft conquered China, http://money.cnn.com/magazines/fortune/fortune_archive/2007/07/23/100134488/ (accessed 12 September 2007).
[11] J. Koenig, Seven open source business strategies for competitive advantage, IT Manager's Journal, May 13, 2004, http://www.itmanagersjournal.com/feature/314 (accessed 7 May 2007).
[12] J.C. Nash, Spreadsheets in statistical practice—another look, The American Statistician 60 (3) (2006) 287–289.
[13] J.C. Nash, J. Calof, Open Source Enterprise Models: Motivations, Mechanisms and Myths, Discussion Paper, School of Management, University of Ottawa, 2007 (Working Paper 07-06).
[14] Open Source Initiative, The Approved Licenses, 2007, http://www.opensource.org/licenses/ (accessed 22 January 2007).
[15] B. Perens, The Emerging Economic Paradigm of Open Source, 2005, http://perens.com/Articles/Economic.html (accessed 24 January 2007).
[16] P. Samuelson, IBM's pragmatic embrace of open source, Communications of the ACM 49 (10) (2006) 21–25.
[17] T. Schadler, Open Source Moves into the Mainstream, Forrester Research Inc., Cambridge, MA, 2004.
[18] S. Shankland, IBM: Linux investment nearly recouped, January 29, 2002, http://news.com.com/2100-1001-825723.html (accessed 24 January 2007).
[19] S. Shankland, Windows bumps Unix as top server OS, February 22, 2006, http://news.com.com/2102-1016_3-6041804.html?tag=st.util.print (accessed 24 January 2007).
[20] S. Weber, The Success of Open Source, Harvard University Press, Cambridge, MA, 2004.
[21] R. Nelson, Extending Foresight to Include Long-Term Change, Evolution and Transformation of Human Consciousness and Cultures, August 16, 2007, http://www.gsb.strath.ac.uk/foresight/papers2007/Extending%20Foresight%20Nelson%20-%20Foresight%20Conference%20Presentation%202007.pdf (accessed 9 September 2007).
[22] R.E. Tevis, Creating the Future, August 15, 2007, http://www.gsb.strath.ac.uk/foresight/papers2007/Creating%20the%20Future%20Tevis%20for%20Foresight%20Conference%20Presentation%202007.pdf (accessed 9 September 2007).
[23] P. Bishop, A. Hines, T. Collins, The current state of scenario development: an overview of techniques, Foresight 9 (1) (2007) 5–25.
[24] C. Williams, Microsoft will not appeal EU monopoly fine, October 22, 2007, http://www.channelregister.co.uk/2007/10/22/microsoft_europe_agreement/ (accessed 20 January 2008).
[25] J.P. Johnson, Collaboration, peer review and open source software, Information Economics and Policy 18 (4) (2006) 477–497.
[26] M. D'Antoni, M.A. Rossi, Copyleft licensing and software development, 2007, https://www.econ-pol.unisi.it/dipartimento/files/GPLfinal-version.pdf (accessed 20 January 2008).
[27] P.G. Haschak, The 'platinum route' to open access: a case study of E-JASL, The Electronic Journal of Academic and Special Librarianship, Information Research 12 (4) (2007) (accessed 20 January 2008).
[28] R. Paul, OpenMoko FreeRunner: a win for phone freedom, Ars Technica, January 7, 2008, http://arstechnica.com/news.ars/post/20080107-openmoko-freerunner-a-win-for-phone-freedom.html (accessed 20 January 2008).
[29] P.V. Singh, M. Fan, Y. Tan, An empirical investigation of code contribution, communication participation, and release strategy in open source software development: a conditional hazard model approach, 2007, http://opensource.mit.edu/papers/singh_fan_tan.pdf.