Tag Archives: typosquatting

A long day with Unicode

Last week I attended a training course on Unicode by Jukka K. Korpela. It was interesting, though the subject is… “tough”! 😉

Following are a few (absolutely non-exhaustive) notes I took during the course:


Computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters: for example, the European Union alone requires several different encodings to cover all its languages. Even for a single language like English no single encoding was adequate for all the letters, punctuation, and technical symbols in common use. (from What is Unicode)

Unicode is an international standard that, due to its complexity, is still not fully accepted. However, it is the default in several applications (e.g., XML applications).

Unicode is a “unified coding system” that contains more than 100,000 characters. It is dynamic: new characters are continually added in order to include “all” possible human characters. From a theoretical point of view, we could say that it tries to preserve cultural diversity while giving a universal representation of all human writing (it is arguable whether it succeeds at this: for example, some Chinese characters are still not part of Unicode).

It’s about encoding, not fonts
There is an important difference between a font and the underlying encoding. Unicode is about “encoding glyphs”: given a sign representing a character in a human language, Unicode identifies it unambiguously. Fonts, on the other hand, are a visual representation (a rendering) of those glyphs.

A font usually supports only a small subset of Unicode. Western-language fonts, for example, do not support Chinese characters.

In general, only glyphs can be encoded, not abstract ideas. This simple concept has been and still is a matter of discussion whenever a new character needs to be included.

About characters

  • Unicode is a 32-bit character set. Each character has only one encoding, with some exceptions (kept for compatibility with older encoding systems)
  • Some characters can be obtained as a composition of other characters. The accented letter “è”, for example, is a composition of two characters: è = e + ` (see here for more details)
  • The name of a character is its identifier: it contains letters, numbers, spaces and hyphens.
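The composition point is easy to see in practice. Here is a small sketch using Python’s standard unicodedata module: the precomposed è (U+00E8) and the sequence e + combining grave accent (U+0300) render identically, but they are different code point sequences and only compare equal after normalisation.

```python
import unicodedata

precomposed = "\u00E8"   # è as a single code point (LATIN SMALL LETTER E WITH GRAVE)
decomposed = "e\u0300"   # e followed by COMBINING GRAVE ACCENT

# The two strings render identically but are different code point sequences.
print(precomposed == decomposed)                  # False
print([hex(ord(c)) for c in decomposed])          # ['0x65', '0x300']

# NFC (canonical composition) makes them comparable.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
```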

A few definitions:

  • Code point. A value in the 32-bit space. Each character has a code point, but not all code points are assigned to characters. This is the numeric representation (usually hexadecimal) of a character.
  • Blocks. Blocks are groups of characters. The assignment of characters to blocks, however, seems a bit confusing: for example, there is a block called “Greek and Coptic”, but it does not include all Greek characters.
  • Categories. Each character has a set of properties which can be used for classification. For example, the letter category covers anything used to write words in any language. There are properties which distinguish the script a character belongs to. There is a math symbols category.
  • Normalisation. A technique to translate a complex character into two or more simpler characters.
    • In Western languages it is usually used to remove diacritics (accents, etc.) by substituting them with apostrophes
    • Used for compatibility purposes (e.g., to translate to ASCII)
    • It might create problems if the process has to be reversed
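A rough sketch of diacritic removal with Python’s standard unicodedata module (this variant simply drops the combining marks after decomposition, rather than substituting anything for them):

```python
import unicodedata

def strip_diacritics(text):
    """Decompose characters (NFD), then drop the combining marks."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(strip_diacritics("è caffè già"))   # "e caffe gia"
# Reversing this is impossible: a bare "e" does not say which accent was removed.
```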

Unicode in real life and a bit of IDNs

Using Unicode might lead to lots of confusion and extra care should be used when dealing with it:

  • ASCII punctuation is different from Unicode punctuation, for example in the case of quotes (“ ‘ ’ ”), but to many people the difference is not clear
  • Certain characters are repeated in different scripts
    • The Latin letter A and the Cyrillic letter А, for example, look identical but are distinct characters.
    • Different sets of numbers are present in different scripts
  • Sometimes the same punctuation characters can be found in different scripts with different logical meanings (this is the case of math symbols)
  • Compatibility characters: they are used to make Unicode compatible with older encodings. It is a very vague concept that can easily cause confusion. For example, K (the Kelvin sign) is a different character from K (the letter), but the two are identical in their representation.
  • To make things worse, characters do not have a property that identifies compatibility characters. People “know” which they are only from reading the big books containing the standards.
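Both problems above can be poked at from Python’s standard unicodedata module: the character names reveal the script duplication, and the NFKC compatibility mapping folds the Kelvin sign into the plain letter.

```python
import unicodedata

latin_a, cyrillic_a = "\u0041", "\u0410"
print(unicodedata.name(latin_a))     # LATIN CAPITAL LETTER A
print(unicodedata.name(cyrillic_a))  # CYRILLIC CAPITAL LETTER A
print(latin_a == cyrillic_a)         # False, despite identical rendering

kelvin, letter_k = "\u212A", "K"
print(kelvin == letter_k)                                 # False
print(unicodedata.normalize("NFKC", kelvin) == letter_k)  # True (compatibility mapping)
```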

We discussed a bit the problem of Internationalised Domain Names (IDNs), which open the door to typosquatting and phishing. One policy might be to disallow mixing different scripts when registering IDNs. However, in certain languages it is common practice to use characters or words from the Latin alphabet within a sentence, and such a solution would be a big limitation for them.

A partial solution, which might work for the most common cases, is to allow mixing any script with the “common” Latin script.
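That policy could be sketched roughly as follows. This is only an illustrative heuristic, not real IDN software: it uses the first word of each character’s Unicode name (LATIN, CYRILLIC, GREEK, …) as a stand-in for the real Script property, which the Python stdlib does not expose.

```python
import unicodedata

def scripts_used(label):
    """Rough script detection via the first word of each character's Unicode name."""
    scripts = set()
    for ch in label:
        if ch.isdigit() or ch == "-":
            continue  # digits and hyphens are common to all domain labels
        scripts.add(unicodedata.name(ch).split()[0])
    return scripts

def acceptable_label(label):
    """Allow at most one script besides Latin, which is always permitted."""
    return len(scripts_used(label) - {"LATIN"}) <= 1

print(acceptable_label("пример"))        # True: a single non-Latin script
print(acceptable_label("wiki-пример"))   # True: Latin mixed with one other script
```

Note the weak spot this policy accepts by design: a Latin word with a single Cyrillic homoglyph swapped in (e.g. “pаypal” with Cyrillic а) also passes, which is exactly why it only works for the most common cases.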



Italian TLD and malicious web sites

Mapping the Mal Web, Revisited (McAfee, June 4).

A new security report from McAfee has just been released on the spread of malicious web sites among the different TLDs. Very informative and detailed, the report builds on last year's edition. Some of the key findings:

  • .ro (Romania) and .ru (Russia) are the most risky European TLDs, i.e., the probability of finding a malicious web site is higher if surfing one of those TLDs.
  • Risk related to .biz (business) and .cn (China) is also increasing (if compared to last year)
  • .it (Italy) has worsened, but is still “a safe place”
  • .hk (Hong Kong) is the riskiest TLD

The “Hong Kong” case, in particular, is worth closer attention:

Bonnie Chun, an official [from the .hk] TLD, acknowledged that they had made some decisions that inadvertently encouraged the scammers:
1. “We enhanced our domain registration online process thus making it more user-friendly. Instances include the capability for registering several domains at one time, auto-copying of administrative contact to technical contact and billing contact, etc. Phishers usually registered eight or more domains at one time.
2. We offered great domain registration discounts, such as buy-one, get-two domains.
3. Our overseas service partners promoted .hk domains in overseas markets.”

In a previous post I talked about the recent increase in phishing activity in the .uk registry, which, in that particular case, took advantage of Nominet's automatic registration process.

Different country, different problem: the .it registry will implement automatic registration procedures by the end of the year, and I read a couple of weeks ago on Swartzy's blog that the IIT/CNR is also launching an advertising campaign for .it domains.

I am curious to see if, in analogy with what happened in Hong Kong, we will see an increase in malicious activity in the .it TLD.

DNS Ops Workshop

As promised, here is a report of the DNS Ops workshop I attended last week. The workshop was very interesting, though a few talks were a bit too technical for me, as I only have partial knowledge of DNS operations. What follows, then, is a non-comprehensive list of “impressions” rather than a detailed report.

A Statistical Approach to Typosquatting
Of course 😉 I will start with my talk, which reports the preliminary results of the research on typosquatting I have been conducting recently. The slides can be found here (and here as well, as I gave the same talk at the Centr technical meeting in May).

The talk seems to have generated a bit of interest in the audience, though I think it suffered a bit from the fact that these are “early results” and much work still needs to be done before we can claim we really understand what typosquatting is (at least from a technical point of view). The talk also raised some questions about Nominet's involvement in typosquatting. Just to be clear: at the moment Nominet is interested in my work only from a research point of view, and is not taking any position for or against any registrar, registrant or other party that might consider itself the object of my work.

DNS monitoring, use and misuse
According to Sebastian Castro (CAIDA), in 2007 only 510 unique IP addresses generated 30% of the traffic at the root servers; 144 of them (called Heavy Hitters) sent more than 10 queries/sec, and 11 of those sent more than 40 queries/sec.

These are impressive numbers, which tell us something about the kind of traffic that takes place daily on the Internet.

Later on, Shintaro Nakagami from NTT Communications, one of the major ISPs in Japan, reported that only 15% of the queries hitting their name servers were legitimate. This doesn't mean that the others are necessarily malicious: many of them, for example, are simply malformed queries or are generated by misconfigured web servers. However…

Finally, Young Sun La (NIDA, Korea) showed an impressive tool that they use at NIDA for monitoring queries to the .kr name servers in real time. It even sends SMS messages to sysadmins if an urgent problem arises. Have a look at the slides for an idea of how it works. I might have heard that the software will be released for download, but I might have misunderstood.

How do you conveniently represent the IPv4 space? With a Hilbert curve, for example, or, as Roy Arends (Nominet) suggests, with a Z-order curve. The resulting graph is more intuitive to read and can easily be extended to work in 3D.

Check out his interactive tool (from Nominet's website) and his slides. In particular, go to slide number 9 and watch the heatmap of… women below 30 earning more than $100,000/year in Manhattan!!
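For a feel of how a Z-order mapping works (this is the generic bit-interleaving technique, not Roy's actual code), here is a minimal Python sketch: de-interleaving the 32 address bits yields a 16-bit (x, y) position, so numerically close addresses end up spatially close on a 65536×65536 grid.

```python
def zorder_xy(addr):
    """De-interleave the 32 bits of an IPv4 address into a 16-bit (x, y)
    pair: even bits go to x, odd bits to y (one common convention)."""
    x = y = 0
    for i in range(16):
        x |= ((addr >> (2 * i)) & 1) << i
        y |= ((addr >> (2 * i + 1)) & 1) << i
    return x, y

def ip_to_int(ip):
    """Pack dotted-quad notation into a 32-bit integer."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# Numerically adjacent addresses land on adjacent cells of the plane:
print([zorder_xy(n) for n in range(4)])   # [(0, 0), (1, 0), (0, 1), (1, 1)]
print(zorder_xy(ip_to_int("192.168.0.1")))
```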

Privacy issues in DNS
Karsten Nohl (University of Virginia) talked about the privacy issues related to the use of DNS caches. When users query the DNS they leave pieces of information in many caches, and they have to trust several entities (ISPs, registries, backbone operators, etc.) not to release, sell, or otherwise disclose that information.

DNS operators cache the results of user queries, i.e., the IP addresses corresponding to certain domain names, in order to retrieve them more efficiently. This information is anonymous, i.e., in theory the cache does not record the IP of the client who made the query, but in practice certain names are used by only one person (or a small group). At present, it is relatively easy for a malicious party to trace the online behaviour of a user by querying specific DNS servers and checking whether a specific name is present in their cache.

Such an attack can be used to identify the individuals that access a specific web site: knowing the IP gives the geographic localisation of a user, but knowing his/her online behaviour might disclose much more personal information. Alternatively, it might be possible to track a specific user.
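The mechanism can be sketched with a toy model. A real cache-snooping probe would send a non-recursive DNS query (one with the RD bit cleared) to the target resolver; here the resolver, its zone data and the domain names are all made up for illustration.

```python
class ToyResolver:
    """Minimal model of a caching resolver: it remembers every name it has resolved."""

    def __init__(self, zone):
        self.zone = zone   # authoritative data: name -> IP
        self.cache = {}

    def query(self, name, recursive=True):
        if name in self.cache:
            return self.cache[name]
        if not recursive:
            return None    # non-recursive query: answer only if cached
        self.cache[name] = self.zone[name]
        return self.cache[name]

resolver = ToyResolver({"rare-forum.example": "203.0.113.7"})
resolver.query("rare-forum.example")   # some user behind this resolver visits the site

# The snooper asks non-recursively: a cache hit reveals that *someone*
# behind this resolver looked up the name recently.
print(resolver.query("rare-forum.example", recursive=False) is not None)  # True
```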

This scenario might become even more critical with the large-scale deployment of RFIDs. RFID tags have unique identifiers but are too small to store data (e.g., product information, price, etc.), so they will use the DNS to look this data up. Those unique identifiers will then be indexed by the DNS, and it will become easy to identify single users.