Bibliographic control

Efficient collaboration between libraries and other data providers relies upon standardisation. One key issue in this context is bibliographic control (also known as information organization or bibliographic organization). 

Universal Bibliographic Control is grounded on sharing the effort of resource description, eliminating redundancy by encouraging sharing and re-use of bibliographic data.

“A national bibliographic agency (NBA) has the responsibility for providing the authoritative bibliographic data for publications of its own country and for making that data available to other NBAs, libraries, and other communities (for instance archives and museums) through appropriate and timely services with the goal of increasing open access to the bibliographic data;

NBAs, as a part of the creation of authoritative bibliographic data, also have the responsibility for documenting authorized access points for persons, families, corporate bodies, names of places, and authoritative citations for works related to its own country and for making that authority data available to other NBAs, libraries, and other communities (for instance archives and museums).” (IFLA Professional Statement on Universal Bibliographic Control, 2012)

The International Congress on National Bibliographies (ICNBS) recommended that bibliographic records included in a national bibliography should be based on internationally recognised standards.

National bibliographic agencies should adopt national and international standards and principles for cataloguing, identification systems such as ISBN and ISSN, character encoding, authority control, classification schemes, metadata and persistent naming of digital objects;

National bibliographic agencies should encourage work on the harmonization of bibliographic standards established in respect of all forms of publications.
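The identification systems these recommendations refer to are machine-verifiable. As a brief illustrative sketch, not part of the ICNBS text itself, the ISBN-13 check digit can be validated as follows:

```python
# Sketch: validating an ISBN-13, one of the identification systems the
# recommendations mention. Digits are weighted alternately 1 and 3;
# the check digit brings the weighted sum to a multiple of 10.

def isbn13_check_digit(first12: str) -> int:
    """Compute the check digit for the first 12 digits of an ISBN-13."""
    total = sum((1 if i % 2 == 0 else 3) * int(d)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_isbn13(isbn: str) -> bool:
    """Check length, digit content and check digit of a hyphenated ISBN."""
    digits = isbn.replace("-", "")
    return (len(digits) == 13 and digits.isdigit()
            and isbn13_check_digit(digits[:12]) == int(digits[12]))

print(is_valid_isbn13("978-0-306-40615-7"))  # prints True
```

The same kind of deterministic rule underlies ISSN check digits, which is what makes these identifiers suitable for automated bibliographic exchange.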

Encyclopedia Britannica

Universal Bibliographic Control and International MARC


National bibliographies.


The program, called Universal Bibliographic Control and International MARC, aims to encourage national libraries, or groups of libraries, to institute methods of recording their national publications in a standard format and, wherever possible, of entering them into computer files. This program is accompanied by two additional programs, the…


Bibliographic Control


Bibliographic information organization: a view from now into the past

Mirna Willer, Gordon Dunsire, in Bibliographic Information Organization in the Semantic Web, 2013

Universal Bibliographic Control – the traditional view

‘The UBC programme is the ultimate expression of IFLA’s newly discovered maturity’, wrote Herman Liebaers, President of the International Federation of Library Associations and Institutions (IFLA), in 1974 in the foreword to Dorothy Anderson’s book Universal Bibliographic Control: A long term policy – A plan for action. He continued: ‘There is no need to repeat here the main points set out in the historical introduction which follows, but one conclusion is obvious: many parts of the UBC system existed long before that name was invented.’1 He further underlined the continuity and evolution of efforts in the field of bibliographic control with the observation: ‘The total UBC programme can be considered as an intellectual construction, yet practical, aimed at realities, directed at known problems: and at the same time imaginative, seeking out future areas of need which have yet to be satisfied.’2

The concept of UBC is based on the objective of ‘promotion of a world-wide system for control and exchange of bibliographic information. The purpose of the system is to make universally and promptly available, in a form which is internationally acceptable, basic bibliographic data on all publications in all countries.’3 In this system national bibliographic agencies, as well as IFLA as the international bibliographic standards body, each have to take some responsibility.

Introduction to cataloguing and classification

Fotis Lazarinis, in Cataloguing and Classification, 2015

1.6 Review questions

What is bibliographic control and what is a bibliographic record?

What is a library catalogue? Describe some of the functions which you consider fundamental and timeless for catalogues.

What are the functions of catalogues, according to IFLA’s 2009 statement?

Briefly describe the card catalogue: how is information organized on the cards, what information is entered on a card and approximately how many cards are needed for one item?

What is an access point and what is the main access point? How are access points handled in a card catalogue?

You have the following information about a book that you must catalogue:

Stand up, Mr. Dickens : a Dickens anthology

Presented by Edward Blishen

Illustrated by Jill Bennett

English literature.

Dickens, Charles, 1812-1870 -- Juvenile literature.

How many cards would you need in a card catalogue and which heading would you use in each case? (Authorized forms of headings are not important for this question).

Compare card catalogues and OPACs.

What is a dictionary catalogue and what is a systematic card catalogue?

What is an authority file?

Describe briefly the main divisions and subdivisions of cataloguing.

What standards are used in descriptive cataloguing?

What is copy cataloguing and what is cooperative cataloguing?

What are the main reasons for using standardized encoding schemes for cataloguing and classification?

Describe briefly the two main classification standards.

What do you know about the MARC 21 format?

Publishing bibliographic element sets and value vocabularies

Multilingual environment.

The ‘universal’ aspect of UBC requires it to cover metadata created in languages other than English, using scripts other than the Roman alphabet. There is in-built accommodation for this in RDF, based on the XML language attribute noted previously in this chapter. This attribute can be attached to any literal string used as the object of a triple by adding the indicator ‘@’ followed by a code specified by the Internet Engineering Task Force in RFC 5646.64 The codes cover scripts as well as dialects and other variants of languages, and are likely to meet all the current needs of UBC to identify the languages and scripts of metadata content. There is no need, of course, to indicate the ‘language’ of a URI, because it is not intended for human understanding. However, the W3C has recently developed the Internationalized Resource Identifier (IRI), which allows identifiers to include characters beyond the English alphabet, European numerals and the limited set of symbols previously allowed in URIs.65 This reinforces the idea that URIs should be opaque; transparency is only perceptible to humans who can read the language and script of the IRI. With language information attached to literals only, there is no need to attach labels and definitions in different languages to different URIs for the same thing. Instead, one URI is sufficient, as shown in Figure 3.9.

Figure 3.9. RDF graph of ISBD Content form term using literals in different languages

Figure 3.9 shows part of the RDF graph for the ISBD Content form ‘music’. The SKOS preferred label is given in English and Croatian using the language codes ‘en’ and ‘hr’ respectively. SKOS assumes that there is one preferred label for a concept, but the constraint only applies within a language and not between different languages, so multiple preferred labels are accommodated provided there is only one per language. The graph also includes SKOS definitions in English and Spanish, the latter using the code ‘es’. The important feature of the graph is that the concept itself is identified by a single URI, isbd:T1004. When this is used to link data, its multilingual aspects are also linked. This functionality is of vital importance for IFLA, as it operates in a multinational, multilingual and multi-script environment with seven official languages. The many additional languages into which IFLA bibliographic standards are currently being translated should also be given the opportunity to be published in this new environment and provide equivalent services in the language of the user. This is in line with the vision of the current IFLA strategic plan, which is that ‘IFLA is the trusted global voice of the library and information community, and drives equitable access to information and knowledge for all’.66
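The pattern of Figure 3.9, one concept URI carrying several language-tagged literals, can be sketched as N-Triples. This is an illustrative rendering rather than IFLA’s actual tooling; the full namespace URI, the Croatian label string and the definition text are assumptions, while isbd:T1004, skos:prefLabel and the language codes are taken from the text:

```python
# Illustrative sketch: serializing the Figure 3.9 pattern as N-Triples.
# Each literal carries an RFC 5646 language tag after '@'; the concept
# itself is identified by a single URI.

ISBD = "http://iflastandards.info/ns/isbd/terms/contentform/"  # assumed base URI
SKOS = "http://www.w3.org/2004/02/skos/core#"

def literal_triple(subject, predicate, text, lang):
    """Render one N-Triples line whose object is a language-tagged literal."""
    return f'<{subject}> <{predicate}> "{text}"@{lang} .'

concept = ISBD + "T1004"  # the ISBD Content form concept 'music'

triples = [
    literal_triple(concept, SKOS + "prefLabel", "music", "en"),
    literal_triple(concept, SKOS + "prefLabel", "glazba", "hr"),  # assumed Croatian label
    literal_triple(concept, SKOS + "definition",
                   "Content expressed through tones", "en"),      # placeholder wording
]

for t in triples:
    print(t)
```

Because the language tag lives on the literal, adding a Spanish definition is just one more triple against the same URI; no second identifier for ‘music’ is ever minted.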

UBC itself had encountered language issues in the terminology used in its standards and translations, with semantic and linguistic confusion arising from multilingual homonyms and synonyms. IFLA’s Cataloguing Section developed the Multilingual Dictionary of Cataloguing (MulDiCat) during the first decade of the new millennium to alleviate some of the problems, resulting in the publication of a text version in 2011, rapidly followed by its representation as an RDF value vocabulary in the OMR.67 The MulDiCat namespace provides preferred labels and definitions in more than 25 languages for 40 cataloguing terms, including those used in the latest UBC standards such as the FR family and ICP. MulDiCat is intended ‘to be used for authoritative translations of IFLA cataloguing standards and related documents’, so the future linking of URIs for its concepts to the URIs for related classes and properties in the IFLA element sets will provide significant benefits for the multilingual development and application of the underlying bibliographic models and schemas as well as the namespaces themselves.

During the discussions and analyses of representing ISBD in RDF, the ISBD/XML Study Group had already identified the issue of translations of ISBD by its second meeting in Gothenburg, Sweden, in August 2010, when ‘the issues of translations of ISBD elements and the dangers of possible semantic drifts were preliminarily discussed within two scenarios: all translations of the element held within the same URI vs. each language its [own] URI’.68 It was decided that within the scope of the current project, tests would be carried out on the value vocabularies of area 0 Content form and Media type using available translations of the vocabularies in Chinese, Spanish, Russian, Croatian, Italian and French. It was also agreed to liaise with, monitor and report on development and implementation of MulDiCat in SKOS/RDF. The ISBD/XML Study Group met in February the following year in Edinburgh and further discussed the treatment of translations in the OMR. The meeting agreed to the following recommendations:

Spanish translation of registered elements raised various issues, some of which can be applied as general recommendations: the translation should be based on the registered elements’ labels, definitions and scope notes; partial translations are allowed, but should be applied in phases, (1) labels, (2) definitions, and (3) scope notes, in this order, because the OMR provides good version control and independent publication statuses for translations.69
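The phasing rule in this recommendation can be sketched as a small check: a translation is publishable only up to the last completed phase, in the fixed order labels, then definitions, then scope notes. The field names below are illustrative, not the OMR’s actual schema:

```python
# Sketch of the phased-translation recommendation: a translation may be
# published only for the longest unbroken prefix of completed phases,
# in the mandated order. Field names are illustrative.

PHASES = ("label", "definition", "scopeNote")

def publishable_phases(translation: dict) -> list:
    """Return the phases that may be published for a partial translation."""
    done = []
    for field in PHASES:
        if translation.get(field):
            done.append(field)
        else:
            break  # a gap blocks every later phase
    return done

# A Spanish translation with only the label completed (the example string
# is the ISBD 'has title proper' label quoted elsewhere in this chapter):
es = {"label": "tiene título propiamente dicho"}
print(publishable_phases(es))  # prints ['label']
```

The point of the ordering is that the OMR can publish each phase independently under version control, so an incomplete translation still delivers usable labels.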

This work coincided with a project initiated by the National Library of Spain to publish its catalogue as linked open data in RDF. The Library contracted the Universidad Politécnica de Madrid Grupo de Ingeniería Ontológica (Polytechnic University of Madrid Ontology Engineering Group), which was studying the language issues that affect other scientific fields and generalizing them to global issues affecting the multilingualism of the Semantic Web. The subsequent collaboration of Spanish experts with representatives from the ISBD/XML Study Group and IFLA Namespaces Task Group, based in part on the work on the Spanish translation of the RDF representations of ISBD and FRBR,70 resulted in the publication of the article by Elena Montiel-Ponsoda and others, Style guidelines for naming and labeling ontologies in the multilingual Web.71 In its introduction the authors position the issue of multilingualism in the Semantic Web thus:

In the context of the Semantic Web, interoperability has become a major issue, not only because of the diversity of formats in which knowledge resources are expressed, or the differences in granularity or coverage of models, but also because of the linguistic descriptions associated with semantic representations. The labels assigned to classes and properties have proven to be of unquestionable assistance for human understanding, supporting ontology developers and adopters in checking consistency and avoiding inaccuracies. They have also been shown to be of great assistance in tasks such as ontology mapping (Svab and Svatek, 2008), information extraction (Müller et al., 2004), or natural language generation (Boncheva, 2005), to mention a few.

Although the pilot translations work was carried out formally by the ISBD/XML Study Group as one of its tasks, and by the end of 2011 it had decided that it was ‘necessary to develop the guidelines for translators’,72 it became apparent that publishing guidelines for translating the RDF representation of ISBD properties alone was not an efficient solution, and that the issues of translation should be generalized to cover RDF representations of other IFLA documentation. Draft guidelines titled Translations of RDF representations of IFLA standards were accepted in principle at the ISBD Review Group’s meeting in Helsinki in 2012, but it was decided to transfer the work to the Namespaces Task Group.73 This group continues to work towards a final version.

Figures 3.10.1, 3.10.2 and 3.10.3 show OMR screens with examples of translated literals. Figure 3.10.1 shows the FRBR entity Corporate body with a Spanish translation of its label. Figure 3.10.2 shows the ISBD element Title proper with Spanish translations of its label, definition and scope note. Figure 3.10.3 shows the ISBD area 0 Media type term ‘unmediated’ with Bulgarian, Croatian, Italian and Spanish translations of its preferred label, and Italian and Spanish translations of its definition.


Figure 3.10.1. OMR screenshot showing the Spanish translation of the FRBR class label ‘Corporate Body’: http://metadataregistry.org/schemapropel/list/schema_property_id/1568.html


Figure 3.10.2. OMR screenshot showing Spanish translations of the ISBD property label ‘has title proper’ and its definition and scope note: http://metadataregistry.org/schemapropel/list/schema_property_id/1945.html


Figure 3.10.3. OMR screenshot showing translations of the ISBD area 0 Media type term ‘unmediated’: http://metadataregistry.org/conceptprop/list/concept_id/3372.html

What are the multilingual issues? IFLA’s decision to use opaque local parts forestalls any issues of linguistic bias and ambiguity in its URIs, with the possible exception of the use of the ‘English’ alphabet and numerals. This is a minor issue, since the de facto lingua franca of the Internet has been English since its inception. The main issues are therefore associated with element set and value vocabulary literals: the labels of classes, concepts and properties, and definitions and scope notes. In the FR family and ISBD namespaces, these have been based on the documentation of the original underlying standards, which are written in English. As we have seen, not all element names, definitions or scope notes can be transcribed directly from the documentation, so there can be significant differences in the text of the standard and the text of the namespace. Minor differences also arise from the labelling conventions used in some Semantic Web communities, often based on English idiom. The fundamental issue for a translation of a namespace, then, is the choice of source text: is it the standard itself, or is it the namespace?

If the standard is translated, then rules for deriving namespace literals from the standard, similar to those used with the original language version, should be applied to ensure linguistic consistency. Simply translating the namespace literals directly may result in impaired readability for native speakers, including ungrammatical and awkward phrases. The draft Guidelines suggest that both sources need to be consulted. In the case of the Spanish translations of the FRBR and ISBD element sets, the primary source was the English namespace, cross-checked with a published translation of FRBR and an unpublished translation of ISBD. As Figure 3.10.1 shows, Spanish prefers to capitalize only the first word of the class label, whereas English follows the ad-hoc convention used by some Semantic Web communities of capitalizing each word: ‘Corporate Body’ vs. ‘Entidad corporativa’. This is one of the minor issues identified by IFLA, where the Guidelines suggest taking into account language and cultural norms. There was no problem translating the verbalized property names into Spanish, as shown in Figure 3.10.2: ‘tiene título propiamente dicho’ is an exact translation of ‘has title proper’. Other languages may need to add a preposition, ‘has a title proper’, or will be unable to use the English method of verbalizing attribute and relationship labels. In many cases, the concept of verbalization may not be accommodated, or required, at all. The disambiguation of labels creates similar minor issues. It is good practice to use distinct labels within element sets, not least for vocabulary management purposes where the label serves as a human-readable identifier. In some cases, standards documentation uses the same label for different elements, relying on the context to resolve ambiguity. For example, FRBR’s ‘has a successor’ relationship links a work to a work, an expression to an expression, or an expression to a work. The relationship is represented by three different RDF properties with URIs frbrer:P2043, frbrer:P2067 and frbrer:P2095 respectively. Each property has a different combination of domain and range, and a different definition. Their labels are differentiated using the technique already encountered in Figure 3.4.2 of adding the domain and range class labels to the relationship label, giving ‘has a successor (work) (from work)’, ‘has a successor (expression) (from expression)’ and ‘has a successor (work) (from expression)’ respectively. The result is shown in the OMR screenshot in Figure 3.10.4. Although this technique yields acceptable results in English, Spanish translations such as ‘continuado por (obra) (de obra)’ are less user-friendly, although tolerable. The impact of these labelling issues is minor with respect to RDF element sets, where labels are intended for consumption by application programmers rather than end-users. For the same reason, however, it is very important that translations of definitions and scope notes are linguistically comprehensible and accurately reflect the meaning of the originals.
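The disambiguation technique just described can be sketched directly: the shared label ‘has a successor’ is qualified with the range and domain class labels. The URIs and class pairings below are those given in the text; the helper function itself is only an illustration:

```python
# Sketch: qualifying a shared FRBR relationship label with its range and
# domain class labels, as described in the text.

def disambiguate(base_label, range_class, domain_class):
    """Append '(range) (from domain)' to a shared relationship label."""
    return f"{base_label} ({range_class}) (from {domain_class})"

# (range, domain) pairs for the three 'has a successor' properties
successor_properties = {
    "frbrer:P2043": ("work", "work"),
    "frbrer:P2067": ("expression", "expression"),
    "frbrer:P2095": ("work", "expression"),
}

labels = {uri: disambiguate("has a successor", rng, dom)
          for uri, (rng, dom) in successor_properties.items()}

print(labels["frbrer:P2095"])  # prints: has a successor (work) (from expression)
```

Generating the qualified labels mechanically like this is what keeps them consistent across dozens of similarly overloaded relationship names.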


Figure 3.10.4. OMR screenshot showing disambiguated labels in the FRBR element set: http://metadataregistry.org/schemaprop/list/sort/label/type/asc/schema_id/5.html

Value vocabularies in UBC are usually aimed at application users such as cataloguers and catalogue users, so all aspects of translations become critical, including the choice of preferred label and the definition. A particular problem was identified when translating the namespaces for the ISBD area 0 Content form qualifiers into inflected languages such as Spanish and Croatian. The terms used for the qualifiers are adjectives, such as ‘olfactory’ shown in Figure 3.6.3, which qualify the noun terms of the Content form vocabulary, such as ‘object’. In gender-inflected languages, the gender of an adjective must agree with the gender of the noun it qualifies, so adjectives tend to have at least two different forms. But SKOS only permits one preferred label in any specific language, so which form is it to be? The issue is not yet resolved, although the Spanish research suggests it may require using a lexical namespace which can describe variant forms and how they are linked. For the time being, IFLA has used the ‘alternative label’ property skos:altLabel, as shown in Figure 3.10.5.
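The SKOS constraint and the altLabel workaround described above can be sketched as a simple rule: the first form supplied in a language becomes the preferred label, and any further forms fall back to alternative labels. The Spanish adjective forms below are illustrative, not taken from the published vocabulary:

```python
# Sketch: enforcing one skos:prefLabel per language, demoting additional
# forms (e.g. gender-inflected adjective variants) to skos:altLabel.

def add_label(concept: dict, lang: str, text: str) -> None:
    """Store text as the language's prefLabel if none exists yet,
    otherwise as an altLabel for that language."""
    pref = concept.setdefault("prefLabel", {})
    if lang not in pref:
        pref[lang] = text
    else:
        concept.setdefault("altLabel", {}).setdefault(lang, []).append(text)

olfactory = {}
add_label(olfactory, "en", "olfactory")
add_label(olfactory, "es", "olfativo")  # masculine form becomes the prefLabel
add_label(olfactory, "es", "olfativa")  # feminine form falls back to altLabel
```

The choice of which inflected form wins the preferred slot is exactly the unresolved editorial question the text describes; the code only enforces the SKOS cardinality, it cannot decide the linguistics.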


Figure 3.10.5. OMR screenshot showing alternate labels for translations of the concept ‘olfactory’ in the ISBD Content Qualification of Sensory Specification value vocabulary: http://metadataregistry.org/conceptprop/list/concept_id/1229.html

Promotion of IOD Activities

Amitabha Chatterjee, in Elements of Information Organization and Dissemination, 2017

W.4.2.4 Core Activities of IFLA

IFLA has played a leadership role through a number of core programs: Universal Bibliographic Control (UBC), Universal Availability of Publications (UAP), Preservation and Conservation (PAC), the Advancement of Librarianship in the Third World Programme (ALP), and Universal Dataflow and Telecommunications (UDT). The core programs, most of which were hosted and supported by national libraries, provided professional leadership and coordination of international work in strategic areas of the profession. During the 1990s the core programs were re-conceptualized and renamed core activities [22]. New core activities were taken up, while some of the older ones were renamed or modified (ALP, ICABS, and UBCIM) or were phased out (UAP, UBC, and UDT). The earlier core programs were:

Universal Bibliographic Control and International MARC: In 1973 the IFLA General Conference in Grenoble made Universal Bibliographic Control (UBC) a core program, initially hosted by the British Library. The search for guidelines for machine-readable cataloguing, which started in the 1970s, led in 1983 to the establishment of a universal MARC format and its inclusion in the UBC Core Programme. In 1988 the title of the Programme changed to Universal Bibliographic Control and International MARC (UBCIM) and the headquarters moved to the Deutsche Bibliothek in Frankfurt, Germany [4]. The UBCIM Programme achieved a great deal over the thirty years of its existence: it was responsible for the creation of the ISBDs as well as UNIMARC, and maintained a full publishing and seminar program. It was closed on March 1, 2003 [23].

Universal Availability of Publications: The Core Programme for the Universal Availability of Publications (UAP) was started in the late 1970s. The objective of UAP was the widest possible availability of published material (that is, recorded knowledge issued for public use) to intending users, wherever and whenever they needed it and in the format required. Published materials included not only printed materials, including so-called “gray literature,” but also audio-visual materials and publications recorded in electronic (digital or analogue) form. To work toward this objective, the program aimed to improve availability at all levels, from the local to the international, and at all stages, from the publication of new material to the retention of last copies, both by positive action and by the removal of barriers. UAP aimed to ensure that improved access to information on publications was matched by improved access to the publications themselves. Under this project several valuable seminars and workshops were organized in different countries and a good number of publications were brought out, including a Model National Inter-library Loan Code. The program was closed on March 31, 2003, and the coordination of bibliographic standardization was moved to the IFLA-CDNL Alliance for Bibliographic Standards (ICABS), which was later changed to the IFLA-CDNL Alliance for Digital Strategies (ICADS). In 2011 it was decided to discontinue ICADS [23–25].

Universal Dataflow and Telecommunications: A Core Programme under the title Transborder Dataflow was established at the National Library of Canada, Ottawa in 1986; in 1988 it became Universal Dataflow and Telecommunications (UDT) [4]. UDT worked toward the establishment of an electronic information infrastructure for IFLA that would permit enhanced communications and information exchange. UDT’s well-regarded web service, which provided information about IFLA and about trends and issues of concern to the library community as a whole, was the starting point for this effort. When IFLANET was first proposed by the UDT Core Programme in 1993, one of the possibilities was to distribute electronic newsletters and electronic journals, which later became a reality. IFLANET used a combination of networking and communications technologies to provide IFLA with an unprecedented opportunity to deliver information services to members and non-members alike. The UDT Core Programme initiated a series of electronic Occasional Papers to address current technological developments, and created LIBJOBS, a mailing list providing a moderated employment listing service for library and information professionals around the world [26]. The program was closed in 2003 and some of its functions were taken over by ICABS [27].

ICABS, and its successor ICADS, focused on three areas of work:

Creating and building digital collections;

Managing digital collections; and

Accessing digital collections.

ICADS, and its predecessor ICABS, made a significant contribution to promoting and embedding digital awareness within the IFLA community. However, in December 2011 the ICADS Advisory Board decided to close ICADS and cease its activities [28] .

The present core activities of IFLA are [21] :

Action for Development through Libraries Programme (ALP): The IFLA Action for Development through Libraries Programme (IFLA ALP), launched in 1984, works in collaboration with libraries, library associations, partner organizations and library professionals in developing and emerging countries to deliver relevant, sustainable activities for equitable access to information and better library communities. IFLA ALP delivers community-led change through its training programs, online learning activities, and other opportunities, and access to IFLA’s international network. IFLA ALP is based on a platform of policies and standards developed and endorsed by IFLA at the international level, and local priorities at the grassroots level. IFLA ALP’s two main programs are the Building Strong Library Associations program, and IFLA ALP Small Projects. The centerpiece of ALP is the Building Strong Library Associations program. This is a comprehensive program offering a strategic and coordinated approach to capacity building and sustainability of library associations.

Preservation and Conservation (PAC): The PAC core activity is founded on the principles that:

Preservation is essential to the survival and development of culture and scholarship;

International cooperation is a key principle; and

Each country must accept responsibility for the preservation of its own publications.

The activities undertaken under the PAC program are [29]:

Raising awareness among library professionals, the public and the authorities, of the urgent need to preserve our endangered documentary heritage;

Publishing and translating preservation literature in order to make it accessible to a larger professional audience around the world;

Disseminating information through printed and online publications;

Organizing training courses, workshops, seminars, etc.;

Promoting research on best preservation practices; and

Fund raising.

IFLA UNIMARC: As the successor to the IFLA UBCIM Core Activity, the IFLA UNIMARC Core Activity (UCA) was started in 2003 with responsibility for the maintenance and development of the Universal MARC format (UNIMARC), originally created by IFLA to facilitate the international exchange of bibliographic data. The purpose of UCA is to coordinate activities aimed at the development, maintenance and promotion of the UNIMARC format (now a set of four formats: Bibliographic, Authorities, Classification and Holdings) and related documentation, through the Permanent UNIMARC Committee (PUC). By agreement with IFLA, the UCA has been hosted by the National Library of Portugal since 2003 [30].

Copyright and other Legal Matters (CLM): This is the only IFLA core activity which currently does not have a permanent office or staff, but relies entirely on voluntary efforts for its extensive and very successful work of international advocacy on intellectual property and related matters. This involves current awareness, policy analysis, awareness raising in the profession, and representation and interventions at international meetings of bodies such as the World Intellectual Property Organization (WIPO) and UNESCO. An important element of its success is partnerships with other bodies such as the European Bureau of Library, Information and Documentation Associations (EBLIDA) and Electronic Information for Libraries (eIFL) [22].

Free Access to Information and Freedom of Expression (FAIFE): Its activities include:

Publishing reports, participating in national and international conferences and organizing workshops;

Monitoring the state of intellectual freedom within the library community world-wide and publishing newsletters and online news;

Responding to violations of free access to information and freedom of expression and issuing press releases; and

Supporting IFLA policy development and cooperating with other international human rights organizations.

FAIFE supports and co-operates with relevant international bodies, organizations and campaigns such as UNESCO, PEN International and Amnesty International.

Library 3.0

Tom Kwanya , ... Peter G. Underwood , in Library 3.0 , 2015

3.1.2 The library is organised

The information explosion (infobesity), the computer revolution, proliferation of new media and the push towards universal bibliographic control have jolted the foundations of conventional information management ( Svenonius, 2000 ). The situation has been exacerbated by the growing ubiquity of the Internet and related communication technologies, which are placing more information management responsibilities on users. Whilst the Internet facilitates users in publishing and accessing more information, it also burdens them with the need to organise it if the full benefits of its availability are to be realised. Emerging information and communication technologies are acknowledged as valuable tools of information management, but they can also lead to an unprecedented information overload as thoughts are spread thin and scattered, in many respects. Herbert Simon, a pioneer advocate of attention economy, warned of information overload as early as 1971, when he stated that a wealth of information creates poverty of attention ( Simon, 1971 ). His prediction appears to be fulfilled daily because of the information revolution. Typically, an individual has to deal with multiple information streams simultaneously. It is not strange to encounter someone chatting with more than two people on Skype, Instant Messenger or Google Chat, while also talking on a mobile phone and reading Twitter feeds at the same time. This information overload affects people’s ability to discriminate and process available data into useful information. This is largely because information is currently presented to its potential users in a scattered and overwhelming manner. Consequently users have to consume diverse pieces of information delivered via myriad platforms and devices to be able to make decisions. It is not unusual that one has to shuttle back and forth between applications, browsers or feeds to complete simple tasks. 
As Levasseur (2013) points out, this information consumption pattern is costly and unsustainable because, as information continues to grow in quantity, scope and complexity, the pressure to manage it effectively also builds up (Robu, 2008). Kelly (2008) explains that findability is a key element of effective information management in the modern era and emphasises that when there are millions of books, millions of songs, millions of films, millions of applications, millions of everything requesting one’s attention – and most of it free – findability is a major determinant of whether the information will be used or not. The pressure to manage the vast information pathways effectively creates the need for innovative content management strategies that can cope with the prevailing ambiguity, heterogeneity and differences in perspectives (Morville and Rosenfeld, 2006) of the current information environment.

Effective information organisation involves classification. However, most information resources today exist within shared, complex and uncertain boundaries that change rapidly, making their permanent classification difficult. Effective information organisation also requires explicit description of the information to enhance how it is understood. Another important facet of information organisation, as pointed out earlier, is findability, the quality of being locatable or navigable. Information is organised to enable users to find the right answers to their questions easily. This is becoming more difficult, since information management is increasingly decentralised, with the role of librarians in labelling, organising and providing access to information being reduced markedly as more and more users strive to manage their own information.

The Library 3.0 model is designed to turn the unorganised web of information into a systematic and usable body of knowledge by describing and linking every piece of data to enable ease of access. This approach also removes the need to duplicate data. The Library 3.0 model creates an information platform on which users, experts and librarians collaborate to create, sift and share credible information (Schultz, 2006). Library 3.0 information organisation strategies provide a way of unifying scattered information and accessing even the Invisible Web. Library 3.0 uses information organisation approaches that facilitate user participation, collaboration, usability, remixability and standardisation (Blyberg, 2007). It goes beyond mere keyword searches to knowledge-based information retrieval strategies that rely on relationships, connections and associations to draw conclusions. This is achieved through ontology-rich semantic systems that facilitate intelligent and targeted information searching and discovery.

The foundation of information organisation in Library 3.0 is ontology, which represents knowledge as a set of concepts within a domain bound together in a web of relationships. Although ontologies are sometimes confused with taxonomies, the former are broader in scope than the latter; indeed, in some circumstances taxonomies can be considered subsets of ontologies. Ontologies are based on defined and controlled sets of vocabulary and relationships. Common components of ontologies include individuals, classes, attributes, relations, function terms, axioms and events. Ontologies enable information managers to specify meaning and leave no room for guessing. Thus ontologies provide a defined vocabulary to describe a domain as well as an explicit specification of its intended meaning. They capture a shared meaning of the domain and provide a formal, machine-manipulable model. They facilitate a shared understanding of the structure of information among human beings and software agents; enable reuse of domain knowledge; make domain assumptions explicit; distinguish domain knowledge from operational knowledge; and support analysis of domain knowledge (Sudarsun, 2007). Taxonomies, on the other hand, are hierarchical representations of concepts in terms of parent/child, class/subclass or broad/narrow relationships. A taxonomy is a classification, while an ontology is a system of description that goes beyond mere classification.

Ontologies facilitate semantic interoperability, in which people create content in a format that is open and reusable by others, who can add to it, reassemble it and ultimately build something new out of the pieces they are given (McDonnell, 2012). Ontologies are developed using the Web Ontology Language (OWL), which gives explicit meaning to information, making it easier for machines to automatically process and integrate it. OWL provides a well-defined syntax and semantics, efficient reasoning support, expressive power and convenience of expression. These enable humans and machines to classify and interpret knowledge objectively and precisely. OWL builds on the Resource Description Framework (RDF) and the RDF Schema (RDFS), which generally describe the structure of information rather than its semantic relationships and meaning (Antoniou and van Harmelen, 2003). Ontologies can enhance information organisation by binding items of content to relevant metadata, which enable the content to be findable, portable and adaptive to different platforms. Effective metadata accurately reflect the content substance, have attributes that organise content in an intuitive way, and are consistent across content types and topics (Halvorson and Rach, 2012).
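The triple model underlying RDF, and the kind of inference an ontology supports, can be sketched in a few lines. The following is a minimal, illustrative example in plain Python; the class names (ex:Monograph, ex:Book, etc.) are invented for the sketch, and a real system would use OWL/RDF tooling rather than raw tuples.

```python
# Each statement is a (subject, predicate, object) triple.
triples = {
    ("ex:Monograph", "rdfs:subClassOf", "ex:Book"),
    ("ex:Book",      "rdfs:subClassOf", "ex:Resource"),
    ("ex:Serial",    "rdfs:subClassOf", "ex:Resource"),
    ("ex:MobyDick",  "rdf:type",        "ex:Monograph"),
    ("ex:MobyDick",  "dc:creator",      "Herman Melville"),
}

def superclasses(cls, triples):
    """All classes reachable via rdfs:subClassOf (transitive closure)."""
    found, frontier = set(), {cls}
    while frontier:
        nxt = {o for (s, p, o) in triples
               if p == "rdfs:subClassOf" and s in frontier}
        frontier = nxt - found
        found |= nxt
    return found

def instances_of(cls, triples):
    """Instances of cls, including instances of its subclasses."""
    return {s for (s, p, o) in triples
            if p == "rdf:type" and (o == cls or cls in superclasses(o, triples))}

print(superclasses("ex:Monograph", triples))  # {'ex:Book', 'ex:Resource'}
print(instances_of("ex:Resource", triples))   # {'ex:MobyDick'}
```

The point of the sketch is that a query for instances of ex:Resource finds ex:MobyDick even though it was only typed as an ex:Monograph: the relationships, not keywords, drive retrieval.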

Other information organisation concepts which can be useful in research and academic 3.0 environments are content curation and content aggregation. Content curation is the process of sorting through the vast quantities of information on the web and presenting it in a meaningful and organised manner. It involves sifting, sorting, arranging and publishing information in a way that best meets the interests and context of the users. Content aggregation, on the other hand, is the process of collecting content automatically from diverse sources on the web. This can be done through specialised software, such as Really Simple Syndication (RSS) feed readers, or tailored algorithms that pull content based on specific keywords or phrases (Halvorson and Rach, 2012). Research and academic libraries may use the curation and aggregation tools listed in Table 3.1 below.
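A keyword-driven aggregator of the kind described above can be sketched with Python’s standard library alone. The feed below is a made-up example embedded as a string so the sketch is self-contained; a real aggregator would fetch feed URLs over HTTP on a schedule.

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed (titles and links are invented).
FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Library Blog</title>
  <item><title>New OWL ontologies for serials</title>
        <link>http://example.org/owl-serials</link></item>
  <item><title>Reading room hours</title>
        <link>http://example.org/hours</link></item>
</channel></rss>"""

def aggregate(feed_xml, keywords):
    """Pull items whose titles mention any of the given keywords."""
    root = ET.fromstring(feed_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        if any(k.lower() in title.lower() for k in keywords):
            hits.append((title, item.findtext("link")))
    return hits

print(aggregate(FEED, ["OWL"]))
# [('New OWL ontologies for serials', 'http://example.org/owl-serials')]
```

The same loop, run over many feeds and merged, is essentially what dedicated aggregation tools do behind their interfaces.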

Table 3.1. Curation and aggregation tools for research and academic libraries

Research and academic library users working in 3.0 environments can also utilise a number of social networking solutions to organise the information they generate or use. These may include bookmarking solutions, such as BlinkList, Delicious and StumbleUpon; highlighters, such as Clipmarks, Diigo and iLighter, which enable users to create digital clippings or highlight web content; and Hooeey and Success Life Share, which can help users to organise their surfing history for research purposes. Other social networking solutions include Evernote, which enables users to capture and share moments or ideas; Instapaper, which can be used to save web pages for later reference; 280Daily, which enables users to summarise their daily activities in 280 text characters; Thoughtboxes, which can be used to organise and store thoughts on issues of interest; Skloog, which enables research and academic library users to create shortcuts to bookmarks and favourite websites; and Netvibes, which users can apply to personalise their web experience.

Research and academic libraries can also use Quick Response (QR) codes – two-dimensional barcodes introduced in 1994 by Denso Wave, a Japanese company – to direct users to library resources such as Uniform Resource Locators (URLs) of electronic data, instructional videos or useful websites, as well as applications or contact information, from their mobile phones (Rouillard, 2008; Walsh, 2009). QR codes can also be used to provide virtual reference services through the Short Message Service (SMS), directions to a physical library or virtual library tours, context-appropriate information resources, supplementary information, or to store information for future reference, as well as other forms of user support at the point of need (Walsh, 2010). QR codes can be placed on library posters, bulletin boards, catalogues, staff directory pages, study-room doors, receipts, magazines or business cards. The use of QR codes removes the need for the user to memorise or type the URL of a resource. The fact that QR codes are scanned using mobile devices, which are steadily becoming ubiquitous in research and academic environments, also makes them handy for library users. QR codes are also decoded quickly, saving time in obtaining the information or help needed. Further, QR codes are low-cost, easy to implement and use simple technology (Ashford, 2010). Walsh (2009) identifies a lack of appropriate knowledge and hardware devices (smartphones) to encode and decode QR codes effectively, lack of awareness of QR codes amongst librarians and users, and potentially prohibitive data charges on users’ mobile phones as some of the challenges of applying QR codes in most research and academic libraries.

Tasks, skills, and attributes

John Azzolini, in Law Firm Librarianship, 2013

Information management

Most of the skills I emphasize are intimately related to each other. The successful application of one depends on the simultaneous performance of one or more of the others. Like all professional skill sets, they interact and feed into each other in hidden but significant ways. An understanding of the various frameworks for organizing and displaying information is as important to carrying out expert research as the first two skills are. These frameworks are among the librarian’s most reliable finding aids: bibliographic classification schemes, structured search fields, indexes, and citators.

If the world of potentially relevant knowledge sources is strikingly large, the domain of information ‒ that almost chaotic stratum taken to be the building blocks of knowledge ‒ can be said to be vast. This vastness is never truly tamed. As a whole, it is always expanding. Its creators and consumers interface in untold ways. The exchange of data and information is constant. What is amended or transformed (and when and how) and what stays the same (and retains its original value) are perpetual dilemmas. Librarians, of all people, are keenly aware of this fact. They also know that the proliferation of knowledge is not hopelessly beyond the reach of dedicated rational efforts. It can be beneficially structured. Its scope and accessibility can be managed in the service of professional aims if one holds the proper tools of organization and retrieval.

Laying one’s hands on actionable knowledge is a matter of reducing information access points to a governable number so that relevant sources can be readily found, evaluated, and utilized. This distillation process is implemented through systems both well-established in traditional librarianship and those found more commonly in law library practice.

Classification schemes and related arrangements (e.g., taxonomies, ontologies, and thesauri) have long been fixtures of all types of libraries. Law firm libraries in the United States commonly organize their physical collections according to the Library of Congress classification system ( http://www.loc.gov/catdir/cpso/lcco/ ) and its subject headings ( http://id.loc.gov/authorities/subjects.html ). Firm libraries also turn to the Anglo-American Cataloguing Rules ( http://www.aacr2.org/ ) for arranging how information is displayed in their online catalog records.

Evolving industry thinking on metadata creation and discovery stresses the importance of collaborative standards. Because of these concerted efforts, contemporary bibliographic control methods are sometimes drawn from heterogeneous sources. This has implications for firm librarians. They must be aware of the systems used to structure their own collections, for obvious day-to-day research and collection development purposes. They must also recognize the important metadata fields found in outside institutional sources, such as large university and national library catalogs and major bibliographic utilities like WorldCat (OCLC) ( http://www.worldcat.org/ ), AMICUS (Canadian National Catalogue) ( http://www.collectionscanada.gc.ca/amicus/index-e.html ), and Copac (Research Libraries UK) ( http://copac.ac.uk/ ). These sources are regularly queried for knowledge-prospecting objectives and inter-library loan requests.

One need not master the rules of catalog construction like a technical services librarian. But it is valuable to know that resource discovery can be facilitated by a moderate understanding of an information item’s uniformly structured metadata. One can leverage discrete parts of a bibliographic record (e.g., call numbers, title and author fields, scope notes, and subject headings) to execute a more comprehensive search for topics both frequently asked about and those never encountered before.

One of the more common methods of leveraging these bibliographic elements is the information-seeking practice known as pearl growing. This is done when a researcher needs to find material on a certain topic but is not certain which subject headings or even title words would be attached to that topic. The aim is to find one on-point resource on the topic and then extract the main headings from its catalog record for a subsequent search of any items also classified under those headings. To find subject-related books, the first few letters and digits of a known item’s call number can be searched on. The underlying principle is that conceptually similar resources are assigned similar classification codes, and this intellectual linking can be exploited for more thorough knowledge retrieval. Relevant books whose titles and authors are not known in advance can be discovered by focusing on this single shared characteristic.
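The pearl-growing loop is simple enough to sketch. Below is a toy illustration in Python; the catalogue records, titles, call numbers, and subject headings are invented for the example, and a real search would run against a library catalog rather than a list of dictionaries.

```python
# A toy catalogue: each record carries a title, a call number,
# and a set of subject headings (all invented for illustration).
catalogue = [
    {"title": "Mergers and Acquisitions Law", "call": "KF1477",
     "subjects": {"Consolidation and merger of corporations", "Corporation law"}},
    {"title": "Corporate Finance Deskbook", "call": "KF1428",
     "subjects": {"Corporations--Finance"}},
    {"title": "Takeover Defenses", "call": "KF1477.5",
     "subjects": {"Consolidation and merger of corporations"}},
]

def pearl_grow(seed_title, catalogue):
    """From one known on-point record, harvest its subject headings
    and retrieve every other record sharing at least one of them."""
    seed = next(r for r in catalogue if r["title"] == seed_title)
    return [r["title"] for r in catalogue
            if r["title"] != seed_title and r["subjects"] & seed["subjects"]]

print(pearl_grow("Mergers and Acquisitions Law", catalogue))
# ['Takeover Defenses']
```

The same idea applies to call numbers: matching on a shared prefix (here, "KF1477") would surface the same neighbour on the shelf.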

Cataloged items, in print and electronic formats, claim only a modest span of a firm librarian’s information universe. Incessantly produced journal articles, industry and company reports, news stories, cases, laws, regulatory filings, and transcripts tend to dwarf the compass of standard bibliographic records. Some of this material attains findability through formal indexes and aggregated databases, but much remains scattered across websites, its retrieval dependent upon a few good terms entered into a search engine or the possession of the right URL.

Although locating internet material is often the result of sheer perseverance, librarians who rely on the Web as an essential tool in their daily work inevitably acquire search fluency. More than the average Web surfer, these reference librarians take a savvy approach toward finding publicly accessible information. They use advanced search screens and operators, quickly gauge a site’s credibility, and have an evolving grasp of a topic’s best internet sources. These skills, however, are not beyond the reach of many people who habitually turn to the Web for their information, news, and entertainment needs. What sets firm librarians apart is their skill at utilizing sophisticated tools to pinpoint highly relevant information. These tools are not usually found in the typical person’s search kit.

Specialized indexes and high-end databases, while not exclusive to firm librarians, are essential finding aids in the pursuit of legal and business information. Each one isolates a manageable subset of resources from a much larger set, allowing the searcher to find more relevant possibilities in a time-saving manner. They parse information items into usable access points (such as author, title, publication, or topic) and allow the searcher to readily view or manipulate these points to determine if retrieval is desirable.

Most indexes that originated in print are now in electronic form, and this is the format of choice for most seekers. The wholesale migration to online platforms has blurred the functional distinction between indexes and full-text databases. Often the only real difference between them is the immediate availability of the entire digitized item in databases, although many indexes contain full-text material. An electronic database has always been able to be exploited as a type of index. Conversely, a print index loaded online can be searched and displayed as if it were a database. However, the formal definitions of either are moot when you are working intently on a project and your information-gathering tools only need to be effective, ethical, and cost-efficient.

For firm librarians (at least in the United States), Westlaw ( http://www.westlaw.com/ ) and Lexis ( www.lexis.com ) are the leviathans of the research terrain, outstripping all others in scope of content, search sophistication, staff support, and marketing machinery. Due to their large size and topical diversity, these two database aggregators are sometimes referred to as integrated legal research platforms. One or the other, or both, are obligatory subscriptions for American firm librarians. I will more fully discuss them and other systems in Chapters 5 (The legal publishing world) and 6 (Research sources and systems).

The key to being an expert user of an advanced database is knowing the extent of its resources and the search language that will best retrieve what you are looking for once you determine a database might contain it. Westlaw and Lexis are illustrative of this skill’s significance because they offer the most wide-ranging content and robust search options. Being aware of the options and when to advantageously use them can be challenging when confronting such complex, source-rich research platforms.

Familiarizing yourself with the ever-growing contents of Westlaw or Lexis is a daunting but necessary initial step. The most straightforward path to this end is to take time to explore their many databases by browsing their directories, which incurs no cost. Both platforms offer open Web-based directories of sources that can be searched without entering a user name and password. Spending time with these directories, visually registering their titles, and reading each database’s explanatory notes go a long way towards remembering what a system has to offer. Another useful practice is to regularly read their user guides and service announcements. Both vendors consistently disseminate tip sheets, overviews, and lists of newly added sources. Subscribing to these email updates, especially those published with librarians in mind, is highly recommended.

An accomplished Westlaw or Lexis user will appreciate the significance of structured fields and Boolean operators for his or her search outcomes. Each database document, whether a review article, judicial decision, or treatise section, is divided into distinct fields, sometimes called sections. Each one is searchable alone or in combination with several others. The fields vary in number and precision with each content type, but a few prominent ones are title, author, and publication date. Combining field searching with a proficient application of query syntax (i.e., proximity operators, wildcards, truncation, parenthetical nesting, and quotation marks) allows one to delve into voluminous resources and recover on-point material in a timely manner.
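Fielded Boolean searching can be illustrated abstractly without tying the sketch to any vendor’s query syntax. The records, field names, and predicates below are invented for the example; the point is only that each document exposes distinct searchable fields and that restrictions on them are combined with AND.

```python
# Toy "documents" with a few searchable fields (invented data).
docs = [
    {"title": "Securities Regulation Update", "author": "Lee",   "date": 2021},
    {"title": "Securities Litigation Guide",  "author": "Ortiz", "date": 2015},
    {"title": "Antitrust Primer",             "author": "Lee",   "date": 2021},
]

def search(docs, *predicates):
    """AND together any number of per-field predicates."""
    return [d for d in docs if all(p(d) for p in predicates)]

# A fielded query: a title-word restriction ANDed with a date restriction.
hits = search(
    docs,
    lambda d: "securities" in d["title"].lower(),  # title field
    lambda d: d["date"] >= 2020,                   # date field
)
print([d["title"] for d in hits])  # ['Securities Regulation Update']
```

Proximity operators, truncation, and nesting are refinements of the same principle: they narrow which documents satisfy each predicate before the predicates are combined.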

Other than separately published indexes to hardcopy titles such as multi-volume treatises, legal encyclopedias, and digests, firm libraries will rarely have print indexes on their shelves. Those most instrumental for legal research are the periodical indexes. Major examples are Current Index to Legal Periodicals, Index to Foreign Legal Periodicals, Legal Resource Index ( LegalTrac ), Legal Journals Index, and Index to Canadian Legal Literature. Many are available on Westlaw or Lexis (or the versions of these vendors licensed in a particular country) or offered as a subscription choice from among an independent aggregator’s selection of databases, as offered on Hein Online ( http://home.heinonline.org/ ).

In practice, however, librarians frequently bypass these formal indexes to search directly in the various full-text secondary source databases on Westlaw and Lexis. The largest of these databases can encompass practice guides, treatises, and forms books as well as journals, so the opportunity to gain such comprehensiveness is not easily resisted. But such a turn to prefabricated secondary source aggregations does not lessen the importance of indexes to legal research. Not everything can be found on Westlaw or Lexis. Some indexes, not law-related but nonetheless salient to the practice of law in an interdisciplinary society, are freely accessible on the Web or via one’s local public library.

Citators are systems or tools that retrieve all mentions of an identifiable information item and gather them in a single place for convenient analysis. You turn to them when you have a case or statutory section and want to find out if any other subsequently published sources have cited your document. A cited document is usually one that has been commented upon, criticized, interpreted, or used as part of a supporting or dissenting argument. Therefore, reading those citing references is an invaluable way of learning the strength of a judicial decision’s or legislative act’s legal standing. Citators are prized by librarians and lawyers alike because of their ability to indicate whether a law is still “good law.” That is, they collect instances of any other case or statute that threaten the status of your document by amending it, overruling it, or calling it into question. If all the citing references analyze it positively or simply mention it without assessment, it is likely still “good law”.
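The core bookkeeping of a citator, gathering every citing reference for a document and flagging negative treatment, can be sketched in a few lines. The case names, citing relationships, and treatment labels below are invented for illustration; commercial citators add editorial analysis far beyond this.

```python
# (citing document, cited document, editorial treatment) - all invented.
citations = [
    ("Smith v. Jones", "State v. Roe",   "followed"),
    ("Doe v. Acme",    "State v. Roe",   "distinguished"),
    ("In re Bell",     "State v. Roe",   "overruled"),
    ("Doe v. Acme",    "Smith v. Jones", "cited"),
]

# Treatments that threaten a document's status as "good law".
NEGATIVE = {"overruled", "criticized", "superseded"}

def check_citations(document, citations):
    """Return (citing references, whether any treatment is negative)."""
    citing = [(src, t) for (src, target, t) in citations if target == document]
    return citing, any(t in NEGATIVE for (_, t) in citing)

refs, flagged = check_citations("State v. Roe", citations)
print(refs)     # the three citing references, with their treatments
print(flagged)  # True: an "overruled" treatment was found
```

If no citing reference carried a negative treatment, the flag would stay False and the document would likely still be "good law" in the sense described above.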

Geeks and Luddites: library culture and communication

Dan Gall, Donna Hirst, in An Overview of the Changing Role of the Systems Librarian, 2010

Technical services

Technical services departments are experiencing shifts in what they do day-to-day, along with significant shifts in the resources used to accomplish their tasks. Technical services staff increasingly struggle with providing bibliographic control for electronic resources, e-books, and even relocated print materials as space shortages define availability. Technology assists in providing control through data loads and the export of data to various vendors and consortia; electronic templates organize incoming work. New metadata standards like Dublin Core and METS require additional staff education. Shelf-ready processing reduces the role staff play in preparing books for shelving. Budget cuts and outsourcing often result in reductions in technical services staff.

The collection

Jean Dartnall, in A Most Delicate Monster, 1998

Cataloguing and classification

So far this chapter has considered choosing and acquiring library materials and the physical handling of them. The best collection is of no use unless clients can identify the items they need, unless there is bibliographic control. This is perhaps the area where special libraries vary most from each other and from larger libraries. In a small collection it is possible to have arrangements that are eclectic and idiosyncratic and still splendidly successful. A small special library can choose a classification system and a style and level of cataloguing that is appropriate to the use that will be made of that collection. Here are a few things to consider before making a choice.

Do you need a catalogue at all? Some materials lend themselves to self-indexing arrangements, for example alphabetical order or arrangement by jurisdiction for legal materials, or to the use of their publisher’s catalogue. (The publications of the Australian Bureau of Statistics provide a good example of the latter.) If all or most of your collection is like this, maybe you need nothing more. Your decision will depend on who will need access to the collection, how they will use it and how expert they are in the use of that material.

Even if a catalogue is needed, perhaps so that it can be printed or networked to remote users, this still does not commit you to having a classification system. This decision will depend on the potential use of the collection. Do not be hijacked by other people’s expectations if you have been hired ‘to catalogue the collection’. Non-librarians may use that phrase as a substitute for ‘fix up the library’ or ‘get something useful out of the library’. Compel them to think about the use of the collection and what they really want before you borrow a copy of the cataloguing rules. The topic of arranging an unattended library is taken up in chapter 6.

There may be a connection between the physical arrangement of the collection and its bibliographic control. Will materials be shelved in a number of sequences and, if so, do all the types of materials need the same cataloguing and classification treatments? The never-ending debate on whether to classify serials or shelve them alphabetically can be extended in the pragmatic special library environment to consider whether manuals need a separate sequence, or internal publications or patents or other special materials. How much material needs conventional shelving and what is better kept in, for example, pamphlet boxes or filing cabinets? What can be achieved by labelling, colour coding and other visual clues? These decisions need to be made in conjunction with the decisions about the physical layout of the library discussed in chapter 5. Whatever decision you make initially may need to be reassessed in the light of further experience or as circumstances change. Flexibility is the only rule in special libraries.

In choosing a classification system, more options can be considered than are usually available to a larger library. If the subject field covered is broad and/or if you aim to purchase catalogue records, you probably need to use the Dewey Decimal Classification. This and the Library of Congress Classification are the only classification systems regularly used on machine-readable records that can be purchased, for example, through the Australian Bibliographic Network. However, as will be the case in most small special libraries, if the collection is focussed and is catalogued in-house, you can consider subject-specific classification schemes. Look at the literature of relevant subjects and see what is available. Consider how current it is, how much a copy of the schedules will cost, and how easy it will be to use. A few other things need considering too.

Will your choice of systems be acceptable to librarians who come after you? I once classified a collection by Universal Decimal Classification because I considered (and still consider) this much the most suitable classification scheme for the material in that library, only to have a librarian who took over the job a few years later spend time and effort reclassifying it all to Dewey. There may be value in adhering to commonly used systems.

Will the system you choose be easy for the clients of the library to use? Again there is value in using something they are familiar with unless they are quite unhappy with existing systems. There are still special libraries that create and operate their own classification systems. There are perhaps some subject areas where this is needed but I would not recommend it to anyone who has not carefully estimated the time and effort required for this task.

Will the system you choose be similar to that used by other special libraries in your subject field? Law, medical and health libraries are probably the most developed in their use of recognised standards. Adopting such systems has the advantages of developing portable skills, both for yourself and your clients, and of making the exchange of ideas and information with other special librarians easier. The final decision is individual and must reflect an assessment of the needs and capacity of your library and its clients. History is also important. If there is an existing operating system, the benefits of change to a new system must be extensive and clear to justify the expense and inconvenience of change.

It is important to keep systems simple. The special librarian must not be seen spending time on sophisticated cataloguing. Everything you are seen doing must be of obvious benefit to the clients. Librarians know that good cataloguing can be invaluable to the clients, but the clients are not always convinced. If your cataloguing is part of a union catalogue, you will need to use the standards and authorities specified by the parent body. In a stand-alone catalogue, you can make your own decisions, and I would recommend leaning towards simplicity.

If you are going to have a catalogue, will it adhere rigidly to standard cataloguing rules? Typically small special libraries adopt a fairly low level of physical description of materials coupled with a more intense level of subject description. Matthews describes a way to enhance subject access by using relatively unskilled labour to enter contents tables into catalogue records. [11] Analytical entries for the contents of conference proceedings or edited compilations may be useful. Consider including nicknames and whatever descriptions of material are familiar to your clients as added entries.

Are you going to use Library of Congress or some other standard subject headings? Perhaps there is a standard thesaurus in your subject area? If so it might offer more relevant subject headings and some transparency of use between systems for both yourself and the clients. Think too about how you will handle authorities for authors. Perhaps this is something that will have little relevance to your catalogue and can be dealt with on an ad hoc basis if problems arise.

What will be included in the catalogue? If the library has an old or patchy collection, it may not be appropriate to include it all in the catalogue. If you can't or don't wish to weed the collection, at least be selective in what you promote. Will the catalogue include records for electronic resources which are not locally owned but are accessed remotely? There are arguments in favour of the catalogue as a single finding tool for all library resources but inclusion of non-owned resources will probably involve a lot of catalogue maintenance as resources change and develop. You will need to establish a procedure for regularly checking that the electronic sources you catalogue are still available. This might be easy for paid subscriptions but more trying for free Internet sites. However, if these resources are not listed in the catalogue, you may need to produce some other list of them. The maintenance problems are the same but with the added disadvantage of an extra finding tool.
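The procedure for regularly checking catalogued electronic sources can be sketched as a small routine. In this illustrative sketch the availability test is injected as a function, so the routine itself needs no network access; in practice it might issue an HTTP request per URL. All the URLs below are invented.

```python
def find_dead_links(catalogued_urls, is_available):
    """Return the catalogued URLs that fail the availability check."""
    return [url for url in catalogued_urls if not is_available(url)]

# Illustrative run with a stand-in checker (URLs are hypothetical).
live = {"http://example.org/stats", "http://example.org/journal"}
urls = ["http://example.org/stats",
        "http://example.org/journal",
        "http://example.org/defunct-database"]

print(find_dead_links(urls, lambda u: u in live))
# ['http://example.org/defunct-database']
```

Run on a schedule, the resulting list becomes a maintenance work queue: each dead link is either repaired, redirected, or withdrawn from the catalogue.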

Almost every successful special librarian has methods that they attempt to hide from visiting librarians because they are, in standard terms, inadequate. Do not be ashamed of these bits. These are the useful and valuable short cuts that have been created by the individual librarian working for an individual clientele. Be proud of them and share them with colleagues. Special librarians are at their most creative when they are minimalist.

Classification education

Rajendra Kumbhar, in Library Classification Trends in the 21st Century, 2012

Classification education in different countries

Harvey and Reynolds (2005) presented an overview of the status of education for cataloguing and classification in Australia (considered broadly and encompassing descriptive cataloguing, subject access, classification, metadata, knowledge organisation, bibliographic control and other related areas for all formats of library resources). The overview is based on data collected through websites, printed handbooks and informal discussions with practising cataloguers and classifiers as well as library and information science educators. Britain is rightly credited with propagating library and information science education in the Commonwealth countries in general and classification research (e.g. through the Classification Research Group) in particular. However, after the introduction of automation in libraries, the basic skills of classification and subject indexing have been little taught in the UK, observed Broughton and Lane (2000). This observation is further confirmed by Bowman (2005a), who, based on a survey conducted in 2003, concluded that cataloguing and classification have become largely invisible in professional education, although most courses still appear to include something about them, though not always as a compulsory module and usually without much practical work. Both recent graduates and chief cataloguers complained that what is taught about cataloguing and classification is inadequate. These observations are based on a survey of postgraduate education and training for cataloguing and classification in the UK, for which data was gathered by web content analysis and e-mail requests. Considering the critical status of cataloguing and classification education in the UK, Bowman explored the possibilities of training through commercial providers.

The status quo of cataloguing and classification education in China is discussed by Si (2005), covering programmes, curricula, degrees offered, textbooks, etc. Problems with cataloguing and classification education in China are discussed and solutions recommended, and the author projects the improvements expected over the next five to ten years. Education in cataloguing and classification in China spans university education, continuing education and professional training, and is provided at basic-training, junior-college, undergraduate and graduate levels. Cataloguing, classification and subject analysis are generally core courses in university curricula, offered alongside other required courses (Ma, 2005). Recent changes in cataloguing and classification education in China include the application of computer technology, increased practical work, updated course contents and improved teaching methods. Like Si, Ma hoped for constant improvement in the methods used to teach cataloguing and classification.

As in China, the classification syllabus in the Department of Information Science, University of Zagreb, Croatia, is updated from time to time; in 2001 it included, apart from the routine contents, topics on the classification of Internet information sources and the application of classification to information retrieval and discovery (Slavic, 2002). The wider objective of the department's classification curriculum was to teach content analysis and classification as a process and to demonstrate how to adapt and use classification for different purposes and environments. Unlike the situation in the UK (Bowman, 2005a), classification holds an important place in the curriculum at this Croatian library school because classification is the most important indexing language in Croatian libraries, documentation centres and services, and its role has not been undermined by automation as it has been elsewhere (Slavic, 2002).

In Egypt, too, all library and information science courses involve the cataloguing and classification of library materials. Abdel Hady and Shaker (2005) analysed the cataloguing and classification curricula in Egypt, gathered the views of LIS faculty through a questionnaire and interviews, and summarised the changes made in cataloguing and classification education. Based on the survey, the authors also recorded the changes expected in Egypt over the next five years. They stressed that cataloguing and classification education in Egypt must include more practical work and that facilities should be made available for the continuing education of the faculty.

As far as India is concerned, two documents (the LIS Curriculum Development Committee's reports, one issued in 1965 and another in 2001) have influenced university-level courses in knowledge organisation (Raghavan, 2005). These CDC reports formed the basis of LIS curricula in India.

The status of cataloguing and classification education in Iran can be seen in Kokabi's (2005) article, which is useful for learning about the cataloguing and classification curricula taught in Iran; the classification schemes, subject heading lists and cataloguing codes used in practical work; the number of faculties teaching cataloguing and classification courses; the number of students taking related coursework; and so on.

As in many other countries, library and information science programmes in Japan are offered by library and information science schools (i.e. departments in traditional universities) and by colleges and other universities, and cataloguing and classification education there has its own issues and future. What distinguishes Japan in this context is the availability of on-the-job training and continuing education in cataloguing and classification (Taniguchi, 2005). A very important observation of Taniguchi's is that the practices of librarianship influence cataloguing and classification education; in effect, this is advice that existing practices should be considered when designing cataloguing and classification curricula.

Continuing education is essential for updating the skills of working LIS professionals and is provided through on-the-job training and self-training. Kwak (2005) surveyed the current status of on-the-job training and self-training for cataloguing and classification librarians in 98 Korean academic libraries. All respondents emphasised the need for on-the-job training, and 64.3 per cent had received it; the training was delivered mainly through print media. The survey also indicated that most academic libraries provided financial support for staff training. Observing the benefits of on-the-job training and self-training for classification and cataloguing staff, Kwak suggested that more such programmes be organised and that more support be given for cataloguers' and classifiers' self-training.

Mexican library and information science schools have also traditionally given considerable importance to cataloguing and classification, and this continues. As recent technological developments have influenced every walk of life, they have also influenced cataloguing and classification education in Mexico (Martinez Arellano, 2005). The author compared the curricula of different Mexican LIS schools and depicted general trends in education for cataloguing and classification.

The curricula and teaching methods used for cataloguing and classification in Pakistan are dominated by the practices of the 1960s and 1970s. Cataloguing and classification education in Pakistan is also characterised by a lack of new technology, a shortage of competent teachers and poor laboratory facilities. According to Haider (2006), the solutions include: (i) revising the curricula immediately; (ii) arranging training for cataloguing and classification teachers, with help sought from developed countries; (iii) updating cataloguing and classification laboratories; and (iv) providing facilities for the continuing education of cataloguing and classification teachers.

What can be observed from the above review is that classification education is losing attention relative to other subjects such as information technology applications. The philosophy, psychology and practice of education are continuously developing; these developments should be tracked and adopted to provide quality classification education. The review also brings forth many new concepts and topics that may be incorporated into curricula. Innovative teaching methods, such as those suggested by Prescott (2001) and Hider (2004), should be developed or adapted and tested for their suitability for teaching classification, and innovations should likewise be introduced into the evaluation system. Research along these lines will make the teaching and learning of classification more interesting and fruitful. A further observation is that most of the literature on classification education treats 'cataloguing and classification' together. This joint treatment confirms the strong symbiosis between cataloguing and classification, which should be exploited beneficially in all aspects of education.

In Too Deep

Crystal Fulton, Claire McGuinness, in Digital Detectives, 2016

7.5 Traditional Publication Process for Academic Books

In the beginning, an author might have an idea for a book and approach a selected publisher themselves, or, alternatively, they may be actively recruited by a publisher for their expertise on a subject or reputation in a field. Publishers often send out targeted emails, which aim to encourage prospective authors to publish with them. This, in a way, is the first line of quality control to ensure that reputable authors are recruited. Once an author has been recruited, a contract is negotiated between author and publisher, outlining the terms and conditions under which the manuscript will be prepared and submitted, as well as the royalties that will be due from any sales, and other formal issues that need to be established. The author submits a completed manuscript, although it is common practice to send individual chapters to the publisher so that quality can be monitored on an ongoing basis. The publisher then facilitates the production of the physical book, including copyediting, proofreading, printing the physical copy or developing the e-book, distribution, marketing, and publicity. Strong editorial guidance is provided throughout the process to ensure that the book meets the required standards. Finally, the book is given a unique ISBN (international standard book number) and placed on the market. Book reviews in the appropriate channels (e.g., scholarly journals) serve as an additional indicator of quality following publication, as well as sometimes at the prepublication stage.

7.5.1 What Are ISBNs?

An ISBN is a "13-digit number that uniquely identifies books and book-like products published internationally" (ISBN.org, 2014). The purpose of an ISBN is to support effective universal bibliographic control by identifying one title, or one edition of a title, from one specific publisher; it is a unique identifier that prevents confusion with other books that may have similar titles or the same author. ISBNs are assigned to publishers by an ISBN agency; a publisher receives a publisher prefix and an associated block of numbers, which it can then assign to publications for which it holds the publishing rights. Eligible publishers are not just publishers of traditional print books, but also e-book publishers, audio cassette and video producers, software producers, and museums and associations with publishing programs. When a publisher assigns an ISBN to a publication, the assignment should then be reported to R. R. Bowker, the database of record for the ISBN agency.
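The 13-digit form also carries a built-in check digit: the first twelve digits are weighted alternately by 1 and 3, and the final digit brings the weighted sum to a multiple of 10. A minimal validation sketch in Python (the function name is illustrative, not from any ISBN agency's tooling):

```python
def is_valid_isbn13(isbn: str) -> bool:
    """Validate an ISBN-13 using its check digit.

    The 13 digits are weighted 1, 3, 1, 3, ...;
    the weighted sum must be divisible by 10.
    """
    digits = [c for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits))
    return total % 10 == 0

# Hyphenation does not affect validity; only the digits matter.
print(is_valid_isbn13("978-0-306-40615-7"))  # True (valid check digit)
print(is_valid_isbn13("978-0-306-40615-0"))  # False (check digit wrong)
```

The check digit catches most single-digit transcription errors, which is part of why the identifier works reliably across the whole supply chain.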

Universal Bibliographic Control

From Wikipedia, the free encyclopedia.

Universal Bibliographic Control (UBC) was a concept championed by the International Federation of Library Associations and Institutions (IFLA). Under the theoretical UBC, any document would only be cataloged once in its country of origin, and that record would then be available for the use of any library in the world.

During the 1970s, IFLA established an office for Universal Bibliographic Control. [1]

Dunsire, Hillman, Phipps, and Willer have suggested that Semantic Web technologies, including BIBFRAME, may make UBC possible. [2] [3]

Association for Library Collections & Technical Services


ALCTS is now part of Core: Leadership, Infrastructure, Futures.

5. Bibliographic Control and Access

Chapter 5 of Managing Microforms in the Digital Age

Because microforms cannot be browsed in the same way as printed materials, bibliographic support in the form of finding aids or catalog records is vital to browsing and accessing microform collections. Collection managers are encouraged to plan for appropriate bibliographic control at the point of selection and acquisition.

Microforms, like other library collections, should be made accessible via comprehensive bibliographic control. Detailed bibliographic description helps clarify the content of large microform sets, which in turn helps increase the visibility and subsequent use of these specialized research collections.

Providing bibliographic control for microform collections is important because it facilitates access to specialized research materials and reduces duplicative purchases. Unfortunately, prior to the 1980s, microform collections were not given the same level of attention as their print counterparts. Several factors contributed to the inadequate bibliographic control of microform collections. First, many microform sets contained hundreds of titles. Many of these titles had not been cataloged as hard copy; therefore, each microfilm title required original cataloging. Institutions acquiring these sets simply did not have the human resources or, in some cases, the special subject expertise to properly address these unique materials. Nor did they have the time to use readers to painstakingly review each title. Many libraries had no choice but to set aside this costly task. This lack of bibliographic control sorely handicapped access to microform materials. As a result, many valuable and expensive sets were unfortunately underutilized, since their existence was not readily apparent to the public or even to library staff. The problems inherent in these hidden collections are myriad and include the costly duplication of materials, the unnecessary processing of interlibrary loan requests and, ultimately, the devaluation of an institution’s role in support of scholarly research.

Types of Bibliographic Access

At one time, access to microform collections was limited to the use of commercial microfilm lists, consortial holdings lists, printed guides, or finding aids. With the emergence of online catalogs, the behavior and expectations of researchers have changed over the years. Researchers prefer easy access and quick results. They appreciate what the online environment has to offer. It has become clear that the use of printed microform guides, which at one time played a vital role in microform usage, has become outdated. Researchers now consult them only as a last resort.

The minimum expectation today is that a unique bibliographic record for each title in a microform set or series resides in the online catalog. As the divide between the local online catalog and the World Wide Web diminishes, patrons increasingly expect to locate titles through the Internet. Bibliographic control continues to evolve to meet these research expectations.

The following subsections illustrate various methods that have been used by libraries over the years to provide access to microform collections.

Microform Title Lists from Micropublishers

Commercial microform lists prepared by micropublishers may provide individual titles within a set, or specific serial title/issue runs with or without reel or fiche numbers for each title. One drawback is that the quality of these lists varies considerably. Some guides may have good descriptions for each title and may provide corresponding reel or fiche numbers. Others may contain less-than-full information or employ a poor organizational system. In some cases, useful access points (e.g., author, subject indexes) are limited or not provided. Few institutions have the time or human resources to perform extensive verification upon receipt of a microform set with accompanying guide; therefore, the quality of these lists is rarely assessed.

Commercial Microform Guides

Many microform guides are regularly published by commercial indexing companies. This type of finding aid usually covers a large range of microform publications made available during a specific time period. Similar to other index resources, libraries that have acquired the guides might not necessarily own all the sets listed in the guides. Since publishers of this type of guide are reputable professional indexers, their products in general contain information of good quality.

Institutional Microform Guides

In the past, it was popular practice for libraries to prepare institutional guides to microform sets owned by the library to serve their local users. These locally compiled finding aids serve as inventories providing information such as set publisher, date of publication as well as a brief description of each set. Some inventories provide more detail, listing individual titles with a title description and general subject terms.

Compiling such guides could be very time-consuming. Knowing this, some libraries make their guides available to other institutions. This mechanism started a trend of interlibrary cooperation, which improved microform access.

Consortial Holdings Lists

Consortial holdings lists are typically compiled and used by libraries in the same geographical area. Like serials union lists, microform union lists serve interlibrary loan and collection development purposes. They are usually regularly updated although the information provided in union lists is usually brief. Individual titles in microform sets are not typically provided.

Individual Bibliographic Records in Online Catalogs

Unquestionably, providing analytic bibliographic records has proved the most effective approach to microform access; the resulting ease of access outweighs the labor that such an approach requires. Recognizing these merits, collaboration among libraries to create analytic records for large microform sets, as described below, has become the current trend.

Historic Bibliographic Control

The importance of bibliographic control of microform materials has been recognized and acknowledged since the mid-twentieth century. Effective action was not taken until the early 1980s, when several institutions and agencies started to steadily and systematically catalog microform sets of hundreds of items.

A major early effort toward improving microform access was the establishment of the National Register of Microform Masters (NRMM). The NRMM, undertaken by the Library of Congress, was published annually between 1965 and 1984 and served as a clearinghouse for the bibliographic control of microform masters. It can be considered the starting point of intensive progress in the bibliographic control of microforms in the United States.

In 1984, the Association of Research Libraries (ARL) conducted a survey to identify microform sets that the survey respondents wanted to see cataloged. The survey results indicated that machine-readable bibliographic records were not available for many important microform sets. Based on the survey results, some libraries received grants to develop a cooperative cataloging project. Some libraries used their own resources and contributed to regional cooperative projects. Many micropublishers (like UMI) also joined this movement by providing cataloging records either through OCLC or contracted out for other libraries to catalog microform titles. This trend significantly changed and unquestionably enhanced the world of microform bibliographic control. It was then that comprehensive bibliographic control for microform materials started to become a reality.

In 1986, through grants from The Andrew W. Mellon Foundation and NEH, the Library of Congress and ARL launched the NRMM Retrospective Conversion Microform Project. In 1990, OCLC started to convert records in the NRMM into machine-readable format for the international community to access. The project was completed in December 1997. Approximately 579,000 records, representing monographs, serials and music scores, were converted. They are now available on OCLC.

In addition, OCLC also undertook its own Major Microform Project as part of its WorldCat Collection Sets initiative. This was an effort to catalog institutional microform collections on OCLC on a large scale in order to make worldwide access to these records possible. This continuing endeavor has made hundreds of microform set records available on OCLC.

Current Trend of Bibliographic Control

The current trend is to acquire MARC records of individual titles in microform sets and download them into institutions’ local catalogs. Besides OCLC set records, many major micropublishers or vendors also offer MARC record sets for purchase. Some vendors’ records are available on OCLC, while others are only available in the vendors’ internal databases. Typically, these vendors do not allow libraries to upload the proprietary records to OCLC.

The benefits of using vendors’ set records are numerous. Instead of spending an immeasurable amount of time searching, identifying, selecting, and downloading hundreds of thousands of records, one by one, from the utilities into local databases, today’s technology allows batch receiving and downloading collection set records remotely via file transfer protocol (FTP), thus making bibliographic records of microform sets available for public access within days or even hours.

Since the quality of these bibliographic records can vary greatly by vendor, it is best to evaluate a group of sample records before purchase. Some customization may be needed before the records can be loaded, depending on local integrated library system conventions. Sometimes the vendor will agree to do this, if the changes are minor; otherwise the library will need to do it itself, either manually (for small numbers of records) or by having a programmer write a script that automatically customizes the records to fit local needs. Recently, open-source MARC editing tools, such as MarcEdit, have become popular for performing batch changes to MARC records. For identification, batch-change or deletion purposes, it is also a good idea to flag purchased records in some way within the local catalog, such as by using a separate location code, adding a local note field identifying the vendor, or adding a prefix to the record control number.
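The kind of batch customization such a script performs can be sketched as follows. This is an illustration using a simplified in-memory stand-in for MARC records, not MarcEdit or any vendor tool; the tags 001 (control number) and 590 (local note) are standard MARC 21, but the record structure and function name are hypothetical:

```python
def flag_vendor_records(records, vendor_note, control_prefix):
    """Add a local note (590) and prefix the control number (001)
    so purchased records can be identified, batch-changed, or
    deleted later in the local catalog."""
    for record in records:
        # 590 is the MARC tag conventionally used for local notes.
        record.setdefault("590", []).append(vendor_note)
        # Prefix the record control number (001) for easy retrieval.
        record["001"] = [control_prefix + record["001"][0]]
    return records

# A tiny sample "record set" (titles and numbers are invented).
batch = [
    {"001": ["ocm12345678"], "245": ["Early English newspapers [microform]"]},
    {"001": ["ocm87654321"], "245": ["Colonial records [microform]"]},
]
flag_vendor_records(batch, "Purchased set: Vendor X microform records.", "vendorx-")
print(batch[0]["001"][0])  # vendorx-ocm12345678
```

In practice a library would run the equivalent transformation over thousands of records at once, which is exactly why scripted or MarcEdit-style batch editing beats record-by-record work.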

Microform Cataloging

Since improvements in technology in the late 1990s made batch processing of record sets possible, many libraries have adopted this mechanism and load hundreds of records at once into their local online catalogs. Libraries rarely check such large quantities of records before batch processing, so guidelines are needed to maintain the quality and consistency of record sets created by individual libraries, commercial publishers, or vendors. In 2005, ALCTS published Guidelines for Cataloging of Record Sets: Reproductions (Microform and Electronic) and Original Sets . 1 This publication instructs cataloging institutions to follow the Library of Congress Rule Interpretations (LCRI) and the MARC 21 format for bibliographic data standards. Cataloging records for reproductions based on existing records should be created in a consistent manner, and title-access-level records, rather than set-access-level records, should always be provided for easy retrieval of reproduction sets.

Cataloging microforms, like the cataloging of other types of materials, entails bibliographic description and providing access to resources. In general, a bibliographic record contains descriptive data and access data. Descriptive data consists of descriptions transcribed or supplied by catalogers, such as author, title, imprint information, physical description (pagination, illustrated matter, dimensions) and other related information that will help researchers find, identify, select and obtain the resource. Access data, such as names and subjects, is indexed data that is normally assigned by catalogers and used for retrieval purposes.

Catalogers who create bibliographic records are strongly encouraged to follow established cataloging standards, namely, contents standards and data standards. The purpose of using standards is twofold. First, standards ensure a level of consistency so that bibliographic descriptions can be understood by users in different libraries. Second, such cataloging records can also be shared electronically among institutions over different libraries’ integrated systems. This is important, as acquiring vendor-supplied records for large microform sets has been, and will continue to be, the primary method of providing bibliographic access to library microform materials.

Cataloging Rules and Practice

Although catalogers in the United States follow the Anglo-American Cataloguing Rules (AACR), for many years the cataloging of microform reproductions has been done in two substantially different ways.

Cataloging Based on the Reproduction

The main difference lies in the standards that are followed in creating bibliographic descriptions. The second edition of AACR, AACR2, asks that bibliographic descriptions of microform be based on the reproduction in hand, a practice that is at odds with the rules in the first edition. Libraries that follow AACR2 argue that the bibliographic description should always be based on the item in hand because this reflects the bona fide manifestation. Some libraries disagree, arguing that original-item information is more helpful to library users than its reproduction. Researchers look for certain editions of publications. A lack of specific original-publication information will handicap user access and scholarly research. Also, it is more cost-effective to copy an existing bibliographic record of the original publication and provide additional reproduction description.

Cataloging Based on the Original

The latter practice, begun most prominently by the Library of Congress and marking a departure from the AACR2 rules, was later officially set out in the Library of Congress Rule Interpretations (LCRI) and adopted by many libraries in the United States. In recent years, several guidelines on the cataloging of microform reproductions have been based on the LCRI, namely Guidelines for Cataloging Microform Sets ; Guidelines for Bibliographic Records for Preservation Microform Masters ; and Guidelines for Cataloging of Record Sets: Reproductions (Microform and Electronic) and Original Sets . 2 All of these guidelines ask catalogers to follow the LCRI for descriptive data and AACR2 for the choice and form of headings.

Resource Description and Access

Resource Description and Access (RDA) is the new cataloging content standard developed under the stewardship of the Joint Steering Committee for Development of RDA. RDA will gradually replace AACR2 as libraries across the world begin to implement the new standard in 2013.

RDA was developed within the framework of the Functional Requirements for Bibliographic Records (FRBR) in an effort to provide a more structured and user-friendly library catalog. FRBR, a conceptual model developed by the International Federation of Library Associations and Institutions (IFLA) in the 1990s, identified the basic bibliographic elements that a record should contain in order to fulfill the user tasks (that is, to find, identify, select, and obtain library materials). The FRBR model uses entity relationships to present the four levels of entities in a bibliographic record: work, expression, manifestation, and item. The purpose of adopting the FRBR model was to make bibliographic records more logical and intuitive, with distinct entity-level bibliographic elements, thus facilitating the user tasks. Under the FRBR framework, a microform reproduction is considered a reproduction of another manifestation and is cataloged as such, as described in RDA chapter 27.
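The four FRBR entity levels, and the way a microform reproduction sits beside a print edition as a second manifestation of the same expression, can be sketched as a simple data model. FRBR itself defines entities and relationships, not code, so the class and attribute names here are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Item:                      # a single physical copy
    identifier: str

@dataclass
class Manifestation:             # the physical embodiment (format, publisher)
    carrier: str
    items: list = field(default_factory=list)

@dataclass
class Expression:                # the realization (e.g., a specific text)
    language: str
    manifestations: list = field(default_factory=list)

@dataclass
class Work:                      # the distinct intellectual creation
    title: str
    expressions: list = field(default_factory=list)

# A print run and its microfilm reproduction are two manifestations
# of the same expression of the same work.
work = Work("Moby Dick")
expr = Expression("English")
print_ed = Manifestation("volume", [Item("copy-1")])
microfilm = Manifestation("microfilm reel", [Item("reel-1")])
expr.manifestations = [print_ed, microfilm]
work.expressions = [expr]
print(len(work.expressions[0].manifestations))  # 2
```

Modeling the reproduction as a sibling manifestation, rather than as a note on the print record, is precisely the choice RDA chapter 27 formalizes.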

RDA continues the AACR2 practice of instructing catalogers to base bibliographic descriptions on the reproduction, the piece currently in hand, rather than on the original item. It also asks catalogers to provide bibliographic links to the various versions (formats or manifestations) of the same expression of a work. During the RDA draft commenting period, according to the RDA online discussion list, some catalogers asked for the option of adding bibliographic information for the original manifestation to the record for the reproduction in hand. Others asked for the option of providing both the "reproduction in hand" information and the "original manifestation" information in parallel fields within one bibliographic record, reflecting the single-record practice for publications in multiple formats.

The Library of Congress undertook a comprehensive reassessment of its policy for cataloging microform reproductions under RDA during the RDA testing period. A white paper on the treatment of reproductions under RDA was issued and its conclusions outlined in the Library of Congress Policy Statements. 3 Final decisions are yet to be made.

RDA and General Material Designation. Unlike AACR2, RDA discontinues the practice of using the general material designation (GMD) to denote the format of a non-print material and replaces it with three elements: content, media and carrier. Three MARC fields (336, 337, 338) have been defined for these elements. Providing the GMD in the title proper field has never been an ideal practice, especially in an online environment, because it interferes with title indexing and access. According to the MARC standards, the GMD is placed between the main title and the subtitle, and the integrated library systems currently on the market index the GMD as part of the title, which impedes title-phrase access when an item has both a main title and a subtitle. Furthermore, the GMD cannot fully describe the content and carrier aspects of an item, creating confusion when users try to access materials in multiple formats. Using separate elements to describe the content, media and carrier aspects should eliminate the access issue and provide more information to facilitate the FRBR user tasks.
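For a text reproduced on microfilm, the three fields might carry values like the following. This is a sketch: the tags are standard MARC 21, and "text", "microform", and "microfilm reel" with codes txt, h, and hd are, to the best of my knowledge, the corresponding terms from the RDA content, media, and carrier vocabularies; verify against the current RDA term lists before relying on the codes:

```python
# MARC 21 fields replacing the GMD for a text on microfilm.
# Subfield $a holds the term, $b the code, $2 the source vocabulary.
fields = {
    "336": {"a": "text", "b": "txt", "2": "rdacontent"},           # content type
    "337": {"a": "microform", "b": "h", "2": "rdamedia"},          # media type
    "338": {"a": "microfilm reel", "b": "hd", "2": "rdacarrier"},  # carrier type
}
for tag, sub in fields.items():
    print(f'{tag} $a {sub["a"]} $b {sub["b"]} $2 {sub["2"]}')
```

Where the single GMD "[microform]" conflated all three aspects, this triple makes clear that the intellectual content is text even though the medium requires a microform reader and the carrier is a reel.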

Single-Record Approach

The single-record approach, practiced for many years by many libraries, involves using one bibliographic record to represent multiple versions of a publication of library holdings. In the case of microform reproduction, a record can have descriptions of the original print publication and use notes or a holdings record to indicate the reproduction aspect of its microform counterpart. In addition to saving catalogers time in creating bibliographic records for titles in both formats, this approach also consolidates holdings. Library users only need to consult one record for all the holdings of different formats of the same publication. It is especially helpful in dealing with periodical collections where holdings information is especially important and heavily used.

Multiple-Record Approach

Contrary to the single-record approach, the multiple-record approach set forth in AACR2 and RDA is also practiced by libraries. Under these rules, each manifestation should be represented by its own bibliographic record in the library catalog: if a library holds a title in both print and microform, two bibliographic records (one for the print, one for the microform) are provided. Although creating multiple full-level bibliographic records is time-consuming, users get complete bibliographic descriptions of both manifestations, and the separate records each supply the essential access points, enhancing retrieval. This practice has become more popular since vendors began offering electronic batch processing of the microform record sets they supply; libraries can easily load large record sets into their online catalogs for public access. Because the records in such sets describe the microform publications, the approach aligns with the separate-record practice. Recognizing the benefits of batch processing, libraries that previously followed the single-record practice have also started using this type of service, resulting in dual practices in their local databases.

Microform Use and Access

During the same period in which microform use was on the rise, the literature reported patrons' dissatisfaction with, and resistance to, the various microformats, although microfiche seemed less unwieldy than microfilm. 4 The difficulty of handling the physical pieces and manipulating the equipment led to resistance on the part of users. Long-term reading on dimly lit screens was also a deterrent, though this was eased by the development of reader/printers. The advent of interfaces between microform readers and desktop computers has helped reduce user avoidance of microforms. By digitally scanning microforms on demand, this technology allows users to walk away with digital images stored on portable devices, not just printouts, and allows library staff to distribute the information through electronic reserves and courseware. The improved quality of the images also helps address user complaints about the difficulty of reading microforms.

Despite these advances in microform reading technology, as recently as 2007 librarians were reporting that users considered microforms a resource of last resort. This is consistent with published studies: both the 2003 OCLC Environmental Scan and a 2007 Oregon State University study of user behavior found that researchers often select the most convenient way to obtain information, even when that information is less useful than other resources. 5 One question in an online survey conducted by the authors in October 2008 asked participants to characterize the use of microforms at their institutions. The single most frequent response indicated that patrons want to make paper printouts from microforms; the next most frequent was that undergraduates almost never use microforms and always prefer an online version, followed closely by patrons’ desire to make digital copies from microforms. Just over a third of the responses indicated that graduate students, faculty, historians, and genealogists use microforms frequently and are tolerably comfortable with them, while undergraduates use them only sometimes. More than half of the responses indicated that libraries permit self-service of microforms, and more than a third lend microforms to other libraries.

As the general populace becomes increasingly dependent on easy Internet access to resources, the requirement to visit a physical place and use a dedicated machine can be expected to continue to depress the use of microforms by all but the most intrepid researchers. Including MARC records in online catalogs will assist discovery of these materials, but conversion to a digital format is needed to make them truly accessible.



Universal Bibliographic Control and Universal Availability of Publications (UBC & UAP)

Dr. Anjaiah Mothukuri

These slides are intended for library science students preparing for competitive examinations such as UGC-NET and SLET.


Cataloging & Classification Quarterly

Universal Bibliographic Control and the Quest for a Universally Acceptable Subject Arrangement


Achieving widespread agreement on subject organization is a complex task, and a challenge greater than that of creating a standard bibliographic description for international exchange—the goal of Universal Bibliographic Control (UBC). This article traces the history of the Universal Decimal Classification (UDC), its relationship with other schemes, and opportunities for further collaboration.



