Richard Wallis, who blogs as Panlibus for Talis (the Solihull-based UK library automation specialist, which seems to be going places after 40 years of quietly honing its LMS in the Black Country), has an interesting post on the way that library suppliers are moving into the cloud. Of course that is going to happen, and it will be interesting to see how OCLC, the "500 lb gorilla", impacts the traditional library automation market, especially with Google, the "15 million book gorilla", hobbled by obvious metadata shortcomings, lurking in the background. Following on from Richard's post, I started to wonder how the content aggregators are going to react to the opportunity and challenge of selling content services within a cloud computing framework.
A recent survey paper from Berkeley summarises the hardware innovations of Cloud Computing as follows:
1. The illusion of infinite computing resources available on demand, thereby eliminating the need for Cloud Computing users to plan far ahead for provisioning.
2. The elimination of an up-front commitment by Cloud users, thereby allowing companies to start small and increase hardware resources only when there is an increase in their needs.
3. The ability to pay for use of computing resources on a short-term basis as needed (e.g., processors by the hour and storage by the day) and release them as needed, thereby rewarding conservation by letting machines and storage go when they are no longer useful.

(Above the Clouds: A Berkeley View of Cloud Computing, p. 1)
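To make the third point concrete, here is a back-of-envelope sketch (in Python, with entirely invented prices and workload figures) of why pay-as-you-go beats up-front provisioning for a bursty demand pattern: machines handed back when idle simply stop costing anything.

    # Back-of-envelope sketch of aspect 3 (pay-as-you-go versus owned hardware).
    # All prices and workload figures are hypothetical, purely to illustrate
    # why releasing idle machines rewards conservation.

    HOURLY_RATE = 0.10          # $ per rented server-hour (assumed)
    OWNED_SERVER_COST = 2000.0  # $ up-front per server, written off over 3 years (assumed)

    # Bursty workload: 50 servers needed for 200 hours a year, 2 the rest of the time.
    peak_hours = 200
    base_hours = 365 * 24 - peak_hours

    pay_as_you_go = 3 * HOURLY_RATE * (50 * peak_hours + 2 * base_hours)
    own_for_peak = 50 * OWNED_SERVER_COST   # must provision for the peak up front

    print(f"pay-as-you-go over 3 years: ${pay_as_you_go:,.0f}")
    print(f"owning peak capacity:       ${own_for_peak:,.0f}")

On these made-up numbers renting costs a small fraction of owning, and the whole saving comes from handing capacity back when it is not in use.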
A service model and a charging system of this kind would be very attractive to content users if it could achieve the radical cost savings typical of cloud computing. Service subscribers would jump at the promise of a service which gives them the 'illusion' of access to infinite information (this is where the Google library of at least 10 million books comes in) and which eliminates the need for upfront commitments. The first and second capabilities are straightforward, but there does not appear to be a rationale for the third. The problem is that information is not rivalrous: from the supplier's point of view, whether as creator or as intermediary, nothing is saved when users do not use a resource. The intrinsic value of copyright resources does not increase because less use is made of them. Paradoxically, the value of a scientific resource such as Science Direct actually increases as its usage becomes more widespread. From the information supplier's standpoint the 'trick' is to maintain the illusion that a resource is effectively available wherever it is needed, even though a great deal is being charged for it and the barriers to entry and easy use are high. Paying for the resource on a short-term or intermittent basis is unlikely to appeal to the rights holder. I suspect that the Book Rights Registry will be slow to sanction Google's introduction of an hourly 'pay as you go' access model to its main collection.
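The asymmetry can be put crudely in code. The figures below are invented, but the point is structural: a compute supplier avoids real costs when customers release idle machines, whereas a rights holder avoids essentially nothing when a licensed page goes unread, so there is no saving to pass back through hourly pricing.

    # Toy model of supplier-side marginal cost per unit of use (figures invented).
    # For compute, every server-hour a customer releases is one the supplier can
    # power down or resell; for licensed content, one fewer page view costs the
    # rights holder essentially nothing.

    def supplier_saving(units_released: int, marginal_cost_per_unit: float) -> float:
        """Cost the supplier avoids when customers hand back unused units."""
        return units_released * marginal_cost_per_unit

    compute_saving = supplier_saving(1000, 0.04)  # 1,000 server-hours at ~4 cents each (assumed)
    content_saving = supplier_saving(1000, 0.0)   # 1,000 unread pages: marginal cost ~ zero

    print(compute_saving, content_saving)  # 40.0 vs 0.0 -> no reward for 'conserving' content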
A solution to this conundrum will emerge, and I suspect that it will evolve in the direction that publishers and rights holders want: their information will be accessible, searchable, citeable, and to a limited extent viewable, for free; but they will not give it all away. The necessity and the attraction of charging for data and content will be limited to services which are to some extent premium, whether by virtue of extreme topicality, of outstanding readable quality, or of added-value services. The Exact Editions service already deploys a cloud-based content management system, and it will be interesting to see how our partner publishers evolve solutions for end-user pricing and public access (all our public-facing services are now searchable without need for a subscription or controlled access). In fact every page can be viewed at thumbnail size without the need to register an account. Perhaps we need to evolve towards a stage where every citation delivers at least a thumbnail view, even of 'closed' pages.
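Purely as an illustration (this is not the actual Exact Editions implementation), a tiered-access rule of the kind described above might look something like this: thumbnails and search are open to everyone, full-size pages require a subscription, and a citation to a 'closed' page degrades to a thumbnail rather than a blank refusal.

    # Hypothetical sketch of a tiered-access decision; names and page ids are invented.

    from dataclasses import dataclass

    @dataclass
    class Request:
        page_id: str
        size: str              # "thumbnail" or "full"
        has_subscription: bool

    def resolve_view(req: Request) -> str:
        if req.size == "thumbnail":
            return f"serve thumbnail of {req.page_id}"      # open to all, no login
        if req.has_subscription:
            return f"serve full-size {req.page_id}"
        return f"serve thumbnail of {req.page_id} + subscribe prompt"  # degrade, don't block

    print(resolve_view(Request("vol3/p42", "full", has_subscription=False)))

The design choice is that a citation link never dead-ends: the worst case for an unsubscribed reader is a thumbnail plus an invitation to subscribe, which keeps the content searchable and citeable without giving it all away.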