These days I am in Vilamoura, Portugal, for the On The Move Federated Conferences. I thought it would be the right moment to start blogging again, but the nearly absent Internet connection offered by the conference is not helping.
As you requested we have moved your feeds…
I just updated to the new FeedBurner platform. The new feeds can be found at:
http://feeds2.feedburner.com/EsperimentoTre (esperimento tre)
http://feeds2.feedburner.com/ExperimentThree (experiment, three)
We woke up with some snow today… check the image on top of this blog!
It has been a long time since I last wrote here, and a few things have happened in the meantime. The occasion (or excuse) to start writing again comes from Sound Diaries, a curious project from the Sonic Art Research Unit at Oxford Brookes University:
The Sound Diaries initiative is focused around sound-recordings and sound-texts and the ways in which we can use sound as a document of our lives (from the project’s website).
We hear “sounds” every day, hour and minute, but only seldom do we listen to them. Our minds are full of images and thoughts, yet we often lose memory of the sounds that crossed our lives.
I like this comment, because it explains the motivations behind the idea of a “Sound Diary”:
…If my sounds are taken by you and then remixed to form tracks, I think there is a danger of the sounds becoming completely decontextualised.
The purpose of keeping a sound diary or creating one is to document life in sound…
I think recording your own sounds is almost the most important aspect of developing a sound diary project…
The act of listening to the ever-unfolding soundscape around us … (just imagine all the sounds that are happening in the world right now as I type this!!!) is an essential element within the process of creating a sound diary… (by Felicity)
I did a little experiment with “my” sounds, recorded at home while working. I made a three-minute recording, then edited the file and shrank it to less than a minute. It’s like listening to yourself from the outside.
(I would add my soundscape, if only WordPress allowed me to do so)
This post has been deliberately backdated.
Last week I attended a training course on Unicode by Jukka K. Korpela. It was interesting, though the subject is… “tough”! 😉
Following are a few (absolutely non-exhaustive) notes I took during the course:
Computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters: for example, the European Union alone requires several different encodings to cover all its languages. Even for a single language like English no single encoding was adequate for all the letters, punctuation, and technical symbols in common use. (from What is Unicode)
Unicode is an international standard that, due to its complexity, is still not fully accepted. However, it is the default in several applications (e.g., XML applications).
Unicode is a “unified coding system” that contains more than 100,000 characters. It is dynamic, because new characters are continually added in order to include “all” possible human characters. From a theoretical point of view, we could say that it tries to preserve cultural diversity while giving a universal interpretation of all human languages (it is arguable whether it succeeds: for example, some Chinese characters are still not part of Unicode).
It’s about encoding, not fonts
There is an important difference between a font and the underlying encoding. Unicode is about encoding characters: given a sign used in a human language, Unicode identifies it unambiguously. Fonts, on the other hand, provide glyphs, the visual representations (renderings) of those characters.
A font usually supports a small subset of Unicode. Fonts for Western languages, for example, do not support Chinese characters.
In general, only written signs can be encoded, not abstract ideas. This simple concept has been, and still is, a matter of discussion whenever a new character needs to be included.
- Unicode code points live in a 32-bit space (in practice limited to the range U+0000–U+10FFFF). Each character has only one encoding, with some exceptions (kept for compatibility with older encoding systems)
- Some characters can be obtained as a composition of other characters. The accented letter “è”, for example, can be composed from two characters: è = e + ` (see here for more details)
- The name of a character is its identifier: it contains letters, numbers, spaces, hyphens.
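The composition of “è” mentioned above is easy to check with Python’s standard unicodedata module; a minimal sketch:

```python
import unicodedata

# "è" exists both as a single precomposed character (U+00E8) and as a
# composition of "e" (U+0065) followed by a combining grave accent (U+0300).
precomposed = "\u00e8"
composed = "e\u0300"

print(hex(ord(precomposed)))            # 0xe8
print([hex(ord(c)) for c in composed])  # ['0x65', '0x300']

# The two strings render identically but compare as different;
# canonical normalisation (NFC) makes them equal.
print(precomposed == composed)                                # False
print(unicodedata.normalize("NFC", composed) == precomposed)  # True
```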
A few definitions:
- Code point. A value in the 32-bit space. Every character has a code point, but not all code points are assigned to characters. This is the numeric representation (usually written in hexadecimal) of a character.
- Blocks. Blocks are groups of characters. The assignment of characters to blocks, however, seems a bit confusing: for example, there is a block called “Greek and Coptic”, but it does not include all Greek characters.
- Categories. Each character has a set of properties which can be used for classification. The Letter category, for example, covers anything used to write words in any language; other properties distinguish the script to which a character belongs; there is a category for mathematical symbols.
- Normalisation. A technique to translate a complex character into two or more simpler characters.
- In Western languages it is typically used to remove diacritics (accents, etc.), substituting them with apostrophes
- Used for compatibility purposes (e.g., to translate to ASCII)
- It might create problems if the process has to be reversed
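The compatibility use of normalisation, and its lossiness, can be demonstrated in Python; a small sketch (the to_ascii helper is my own, not a standard function):

```python
import unicodedata

def to_ascii(text):
    # Decompose each character (NFKD splits "é" into "e" + combining accent),
    # then drop everything that does not fit in ASCII.
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(to_ascii("café"))   # cafe
print(to_ascii("naïve"))  # naive
# The process is lossy: there is no way back from "cafe" to "café".
```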
Unicode in real life and a bit of IDNs
Using Unicode can lead to a lot of confusion, and extra care should be taken when dealing with it:
- ASCII punctuation is different from Unicode punctuation, for example with quotes (“ ‘ ’ ” versus " and '), but the difference is not clear to many
- Certain characters are repeated in different scripts
- The Latin character A and the Cyrillic character А, for example, look identical but are distinct characters.
- Different scripts have their own sets of digits
- Sometimes the same punctuation characters appear in different scripts with different logical meanings (this is the case with mathematical symbols)
- Compatibility characters are used to make Unicode compatible with older encodings. It is a rather vague concept that easily causes confusion. For example, K (the Kelvin sign) is a different character from K (the letter), yet the two are identical in appearance.
- To make things worse, characters do not carry a property that identifies them as compatibility characters. People “know” which they are only from reading the big books containing the standard.
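The Kelvin example above is easy to reproduce with Python’s standard unicodedata module; a minimal sketch:

```python
import unicodedata

kelvin = "\u212a"  # the compatibility character KELVIN SIGN
letter = "K"       # LATIN CAPITAL LETTER K

print(unicodedata.name(kelvin))  # KELVIN SIGN
print(kelvin == letter)          # False: two code points, one appearance

# Compatibility normalisation (NFKC) folds the Kelvin sign into the letter.
print(unicodedata.normalize("NFKC", kelvin) == letter)  # True
```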
We discussed a bit the problem of Internationalised Domain Names (IDNs), which open the door to typosquatting and phishing. One policy might be to disallow mixing different scripts when registering IDNs; however, in certain languages it is common practice to use characters or words from the Latin alphabet within a sentence, so such a solution would be a big limitation.
A partial solution, which might work for the most common cases, is to allow mixing any script with the “common” Latin script.
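A mixed-script check of the kind such a policy would need can be sketched in Python. The scripts helper below is my own rough heuristic (it takes the first word of each character’s Unicode name; a real implementation would use the Unicode Script property):

```python
import unicodedata

def scripts(label):
    # Crude heuristic: LATIN, CYRILLIC, ... taken from the character names.
    return {unicodedata.name(ch).split()[0] for ch in label}

genuine = "paypal"
spoofed = "p\u0430yp\u0430l"  # Cyrillic а (U+0430) in place of Latin a

print(scripts(genuine))  # {'LATIN'}
print(scripts(spoofed))  # {'CYRILLIC', 'LATIN'} -- mixed scripts, reject it
```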