Lernfreiheit und die Khoisan-Sprachen

In an earlier post, we had a look at some of the characters used for clicks in Khoesan languages; to review, they are ǂ, ǃ, ǁ, ǀ, and ʘ.
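
For reference, these letters live at the Unicode code points U+01C0 through U+01C3 and U+0298.  If you ever need to check which character a record actually contains, as opposed to a look-alike such as ‘|’ or ‘!’, a few lines of Python will show you; this is just an illustrative snippet, not part of any cataloging toolchain:

    # Print the code point of each click letter, so look-alikes such as
    # '|' (U+007C) and '!' (U+0021) are easy to tell apart from the real letters.
    for ch in "ǀǁǂǃʘ":
        print(f"U+{ord(ch):04X}  {ch}")

    # U+01C0  ǀ
    # U+01C1  ǁ
    # U+01C2  ǂ
    # U+01C3  ǃ
    # U+0298  ʘ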

Where do you find these on the keyboard?  We’ll get to that.  First, let’s take a look at why these characters exist, how they have been used in print and digital materials, how print technology has changed, and the state of the bibliographic data that uses them.

A German colleague, Dr. Hartmut Bergenthum at the Frankfurt University Library, wrote in response to my last post on this, raising points about the current ‘state of the art’.  But the story begins in Germany much earlier, when it was still Prussia, with an Egyptologist by the name of Karl Lepsius.

Around 1855, Lepsius had the idea that there would need to be a way to transcribe spoken languages into written form, and that this could not always be done accurately and consistently when limited strictly to the Latin alphabet.  Extensions to the alphabet were proposed for linguists and missionaries working in the field, and although they changed a little over time, the set remained fairly stable as it evolved through successive standards, essentially into what is now the International Phonetic Alphabet and the extended Latin ranges of the Universal Character Set.  The characters it included found their way into common, practical usage in print traditions.

There was a bit of an interlude in print technology, with a shift toward mass production, in the postwar and late colonial period.  Although the orthographies remained fairly stable, the technology available to produce them had shifted, even as African countries were gaining their independence (while largely keeping mass print activity limited to official, that is, ‘colonial’ languages).  Typewriters and linotype machines were scarce on the continent, even more so the kind that could handle extended Latin, and literacy rates were low enough that there was no real push to implement technological support for local languages as industrial standards.  In the 1960s and 1970s, much of the print material produced in African languages was typescript, often with handwritten modifications to produce the extended characters.  In 1977, Lucia Rather at the Library of Congress wrote an article mentioning that, after support for Asian languages had been broadened, characters for African languages were among the next slated for technical and policy support.

Back to Germany.  In 1979, an international standard, ISO 6438, was proposed with the assistance of the Deutsches Institut für Normung for the bibliographic information interchange of characters used in African languages.  If I’m following this correctly, it became a formally adopted ISO standard in 1983, was revised in 1996, and its characters were brought into Unicode between 1991 and 1998.

Back to Washington.  Library of Congress Rule Interpretation 1.0E, in the form it took in the mid-1980s, attempted to cover the issue by providing a rule whereby approximate Latin-alphabet equivalents would be input, each marked with a double underscore.  Although perhaps the only real solution available at the time, it is one that has been perpetuated by the much more recent Library of Congress Policy Statement 1.4.

Back to Germany.  This is where Dr. Bergenthum notes the cataloging rule in place there:

“Nichtlateinische Schriftzeichen, die in Sprachen vorkommen, die lateinische Schrift verwenden, werden nach Möglichkeit vorlagegemäß wiedergegeben. Andernfalls werden sie gemäß der für sie in § 803,5 festgelegten Ordnung umgeschrieben.”  (Roughly: non-Latin characters occurring in languages that use the Latin script are reproduced as given in the source where possible; otherwise they are transcribed according to the arrangement specified for them in § 803,5.)

Under that rule, the extended Latin characters for Khoesan clicks would all be rendered as ‘zz’.

Now to Dublin (Ohio), where the palatal click (ǂ) has been enshrined in the Connexion client as a field delimiter, precluding its use, at least until a future upgrade, in the exchange of bibliographic information, be it in print or electronic format.  (Wouldn’t the double dagger (‡) be an acceptable substitute, one may ask?  No, and even if it were, it faces the same difficulty by virtue of being the character used as the field delimiter in the Voyager cataloging client software.)
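
To illustrate the problem with an invented example (this is not how Connexion is actually implemented, just a sketch of the collision): if a client treats the visible character ǂ as its subfield-delimiter marker, then a ǂ typed as a letter is read as a new subfield boundary.

    # Hypothetical sketch: the field string and the title it contains are invented.
    field_as_typed = "245 10 ǂa ǂKx'aoba ǁxam ǂxanisi"

    # Splitting on the delimiter character also splits on the click letter,
    # so the title can no longer round-trip intact.
    subfields = field_as_typed.split("ǂ")[1:]
    print(subfields)
    # ['a ', "Kx'aoba ǁxam ", 'xanisi']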

The upshot of all of this is that, if you are a researcher, you may only find the title of the book you are looking for if you remember that, when it hasn’t been entered as, for example:

“ǀXoa nǃanga o nǁoaqǃ’ae ga”

it may have been entered as:

“IXoa n!anga o nIIoaq!’ae ga”

or:

“zzXoa nzzanga o nzzoaqzz’ae ga”.

At least, until such time as library systems vendors and cataloging policy are able to catch up.
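
If you would rather generate those fallback forms than memorize them, the substitutions are easy to script.  Here is a minimal sketch, using only the mappings that appear in the examples above (so the look-alike table covers ǀ, ǁ, and ǃ, and the ‘zz’ rule covers all five letters); it is not any catalog’s actual normalization routine:

    # Produce the degraded search forms from the correctly encoded title.
    # The substitution tables reflect the examples in this post only.
    LATIN_LOOKALIKES = {"ǀ": "I", "ǁ": "II", "ǃ": "!"}
    RAK_ZZ = {c: "zz" for c in "ǀǁǂǃʘ"}

    def degrade(title, table):
        return "".join(table.get(ch, ch) for ch in title)

    title = "ǀXoa nǃanga o nǁoaqǃ’ae ga"
    print(degrade(title, LATIN_LOOKALIKES))   # IXoa n!anga o nIIoaq!’ae ga
    print(degrade(title, RAK_ZZ))             # zzXoa nzzanga o nzzoaqzz’ae ga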

To 1855.

Oh, the input method?  You can find one here.


4 thoughts on “Lernfreiheit und die Khoisan-Sprachen”

  1. This is really a thrilling piece on cataloguing rules, thanks for this elegant coaching!

    Germany receives historic honor and contemporary blame… May I take the liberty of an additional word of rehabilitation?

    At least, the central Hessian Union Catalogue HeBIS (in Germany we have six major union catalogues) is a PICA catalogue and “speaks” Unicode – so researchers might also use the adequate click-character representation, e.g. http://bit.ly/yboaUt

    Still, a lot of limitations remain:
    1.) the outdated cataloguing rules (‘zz’)
    2.) the local subsystems are not able to “talk” Unicode
    3.) during the data exchange to WorldCat, characters are changed or deleted, e.g. http://bit.ly/wrBZGt in Frankfurt catalogue vs. http://bit.ly/y2D7Yr in WorldCat.

    Hartmut Bergenthum, snoring a zz zz zz …😉

  2. The Hessians are not snoring then! They have got it right, they must have been listening to DIN. It is just the RAK-WB, the local subsystems, and WorldCat that are still snoring.

  3. HeBIS is sleepier than it first appears, I’m afraid… the example you link to is:

    “!Qamtee |aa ||Xanya”

    where they’ve gone with punctuation marks, rather than the letters:

    “ǃQamtee ǀaa ǁXanya”

    It’s still better than my example linked above, where it’s harder to tell; in that case, we’ve used the Latin capital letter ‘I’ instead of the pipe ‘|’ or the actual letter for the dental click, ‘ǀ’.

  4. Taking your points in random order, Hartmut,
    (2.) If the subsystems you mention are capable of handling MARC-8, they should be able to employ the technique of lossless conversion that is available: http://www.loc.gov/marc/specifications/speccharconversion.html#lossless (a sketch of the idea follows at the end of this comment);
    (3.) OCLC has demonstrated that they are aware of and actively practice this technique, and would have to be using it to support Bengali, Tamil, Thai and Devanagari in the way that they do; and
    (1.) Proposals for crafting updated cataloging policy should be made with the above in mind.
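
    For what it’s worth, the ‘lossless’ technique in that spec amounts, as I read it, to escaping any character that has no MARC-8 mapping as a numeric character reference (&#xXXXX;) and unescaping it on conversion back.  A rough sketch of the idea, where MARC8_SAFE is only a stand-in for the real MARC-8 repertoire:

        import re

        # Stand-in for the MARC-8 repertoire; the real set is much larger.
        MARC8_SAFE = {chr(c) for c in range(0x20, 0x7F)}

        def to_marc8_lossless(text):
            # Escape anything outside the (assumed) repertoire as an NCR.
            return "".join(ch if ch in MARC8_SAFE else "&#x%04X;" % ord(ch) for ch in text)

        def from_marc8_lossless(text):
            # Restore NCR-escaped characters to Unicode.
            return re.sub(r"&#x([0-9A-Fa-f]+);", lambda m: chr(int(m.group(1), 16)), text)

        title = "ǀXoa nǃanga o nǁoaqǃ’ae ga"
        assert from_marc8_lossless(to_marc8_lossless(title)) == title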
