david on 16 Nov 2000 15:14:43 -0000
[Nettime-bold] Re: <nettime> Asia and domain names, etc. (@)
At 22:50 +0900 00.11.15, Benjamin Geer wrote:
> On Wed, Nov 15, 2000 at 03:34:05AM +0900, david@2dk.net wrote:
> > The question of 'chinese' is much more complicated, and certainly
> > chauvinistic. [...] In all cases they are double-byte characters,
> > and need to be encoded first to be sent over the single-byte (roman
> > character) based networks of the 'wired' network world. The encoding
> > systems are also diverse. (There are debates within each of these
> > nations about uniform encoding, much less the kind of problems that
> > show up when databasing across cultures.) Of course the Japanese
> > encoding methods can not be imposed upon the Koreans, PRC and ROC
> > any more than the PRC's abbreviated characters can be used in
> > Taiwan, Korea and Japan.
>
> I would hope that any international solution would involve Unicode
> (http://www.unicode.org), which, after all, follows an international
> standard, ISO 10646 (http://anubis.dkuug.dk/jtc1/sc2/wg2/), and neatly
> supports all the languages you mention. It seems to be nearly
> universally supported now on computer systems made in the West; I'd be
> interested to know to what extent it's been adopted in Asia.

Language is so difficult. Benjamin, I think that we're talking about completely different things.

Unicode, as I understand it, is a project to develop a global localisation standard -- a way to learn how to write one (=uni) source code that will be expedient to localise for any market. It is a belated recognition that the world has double-byte character culture spheres, and doesn't always represent its on-screen information left to right. This is a technical issue for software manufacturers who wish to become multinationals, not one of finding universal ways of integrating living languages onto 'the' net.

Having said as much, I do not know why you 'would hope that any international solution would involve' questions of multinational software gazillionaires becoming treble-gazillionaires. I think that's off topic. ISO 10646 is an international standard in the sense that somebody recognises there is an issue here; it isn't a functioning initiative that has actually been adopted globally. But I do not know your work, or your affiliation to these initiatives. If you are a Unicode or ISO 10646 programmer or researcher, I wish you the best of luck.

The problem I meant to indicate was that, despite various massive character databases which may or may not include all of the relevant character sets, and due to various idiosyncratic input methods in 'the far east' (that pathetic phrase) and various national encoding initiatives, I, with my Japanese system, have immense problems sending the exact same 'chinese' characters (though I also have a PRC chinese-character OS which I can reboot into) to my friends in Korea or Taiwan. This is not a Unicode problem, nor anything that it will solve in the foreseeable future. Unicode means that all of us in these various countries may be attempting to send these files in various localised versions of MSWord which all function well in our own markets. (You should see what a nettime post sent from someone with a French character set looks like when received on a double-byte OS. It's a mess!!)

These are complex languages, fascinating languages, and there is a lot of delicious *culture* attending each language's datafication which is worth respecting and looking into.

Here are, to my mind, the more interesting questions asked by this new local domain name issue.
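(An aside, to make the encoding problem above concrete. The sketch below is my own illustration, not anything from this exchange: it assumes the character U+4E2D as a stand-in for a 'chinese' character shared by all four writing traditions, a Python interpreter's standard shift_jis, euc_kr, gb2312 and big5 codecs as stand-ins for the Japanese, Korean, PRC and Taiwanese national encodings, and a Latin-1 string playing the part of the French nettime post.)

    # -*- coding: utf-8 -*-
    # Rough sketch (my own example): the same Han character comes out as
    # different bytes under each national encoding, so text that displays
    # fine on a Japanese system can be unreadable on a Korean or Taiwanese
    # one unless both sides agree on the encoding -- or share a common one
    # such as UTF-8, the usual byte encoding of Unicode / ISO 10646.

    han = "\u4e2d"  # U+4E2D 'middle', present in the JP, KR, PRC and ROC sets

    for codec in ("shift_jis", "euc_kr", "gb2312", "big5", "utf-8"):
        try:
            print(f"{codec:>9}: {han.encode(codec).hex()}")
        except UnicodeEncodeError:
            print(f"{codec:>9}: not representable in this encoding")

    # And the mess I mention above: bytes written under a Western 8-bit
    # encoding, then read back on a system expecting Shift-JIS.
    french = "réseau numérique"  # ISO 8859-1 (Latin-1) text
    print(french.encode("latin-1").decode("shift_jis", errors="replace"))

Running something like this shows each legacy codec with its own idea of the bytes for the 'same' character, and bytes written under one encoding turning to garbage under another; until sender and receiver agree on a common encoding, the characters simply do not travel cleanly between Tokyo, Seoul and Taipei.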
I could go into further detail, but it seems like we're discussing at cross purposes, so I'll leave the discussion here. Any comments from the PRC, ROC, Korean or Japanese members on the list?

Yours,
David d'Heilly

_______________________________________________
Nettime-bold mailing list
Nettime-bold@nettime.org
http://www.nettime.org/cgi-bin/mailman/listinfo/nettime-bold