ASCII only works well for English, since even other Western European languages
have additional characters (e.g., á, ü).
There is an eight-bit extension of ASCII (ISO 8859-1, or Latin-1) with 256
characters that does a pretty good job of covering the languages of the Americas,
Western Europe, Oceania, and much of Africa.
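To make the difference concrete, here is a minimal sketch (assuming a Java
environment; the class name is just for illustration) showing that an accented
character fits in one Latin-1 byte but has no 7-bit ASCII representation:

    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class Latin1Demo {
        public static void main(String[] args) {
            String s = "ü";  // present in Latin-1, absent from 7-bit ASCII
            // Latin-1 stores it as the single byte 0xFC (-4 as a signed Java byte)
            System.out.println(Arrays.toString(s.getBytes(StandardCharsets.ISO_8859_1)));
            // ASCII cannot represent it; Java substitutes '?' (byte 63)
            System.out.println(Arrays.toString(s.getBytes(StandardCharsets.US_ASCII)));
        }
    }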
But what about the rest of the world?
Unicode is an ambitious standard whose stated goal is:
"Unicode provides a unique number for every character, no matter
what the platform, no matter what the program, no matter what the language."
Java uses Unicode, and the W3C has adopted Unicode as the character set standard for the web.
Unicode has several encodings (UTF-8, UTF-16, UTF-32) that represent each
character with one to four bytes, two or four bytes, or a fixed four bytes,
respectively.
Unicode originally allowed for 2**16 characters (~65K), but it has since grown
to 17 planes of 65,536 code points each (1,114,112 in all), so it can support
every human language; you do still need to know which encoding a given text uses.
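To make the encodings concrete, here is a sketch (again assuming Java; the
class and method names are illustrative) that prints how many bytes the same
characters occupy in each encoding:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class UtfDemo {
        static void sizes(String s) {
            // Big-endian variants avoid a byte-order mark in the output
            System.out.printf("%s  UTF-8: %d  UTF-16: %d  UTF-32: %d%n",
                    s,
                    s.getBytes(StandardCharsets.UTF_8).length,
                    s.getBytes(StandardCharsets.UTF_16BE).length,
                    s.getBytes(Charset.forName("UTF-32BE")).length);
        }

        public static void main(String[] args) {
            sizes("A");   // ASCII letter:   1 / 2 / 4 bytes
            sizes("é");   // Latin-1 accent: 2 / 2 / 4 bytes
            sizes("中");  // CJK character:  3 / 2 / 4 bytes
            sizes("\uD83D\uDE00");  // emoji U+1F600: 4 / 4 / 4 bytes
        }
    }

Note the emoji: its code point is above 65,535, which is exactly why Unicode
outgrew the original 16-bit design.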
Unicode is big and complex and full of compromises, but it's probably
the best way forward to a standard that supports all human languages
on computers.
You can expect to see and use more Unicode in the future.