Japanese has simply never needed to evolve. It's probably no more than a 2nd- or 3rd-generation natural language, compared to English, which is about 12 generations removed from Latin, itself probably a 5th- or 6th-generation language.
Japanese is "elegant" because it's linear (or close to it). Virtually anything can be expressed in one or two symbols, tops. As such, it wouldn't surprise me at all if a Japanese version of "Gone With The Wind" fit comfortably on one side of A4, in 18pt double-spacing.
You could say much the same about hieroglyphics. The Rosetta Stone shows just how much less is needed to carry the same text in symbology than in an alphabetic notation.
However, IMHO, that is the wrong sort of "elegant" to aim for. The idea is right, but because the system is linear, the memorization required grows in direct proportion to the number of ideas you wish to express. You can't easily express anything outside that linear structure.
More "complex" languages, such as English, have far fewer symbols, and many more combinations, to an arbitrary depth. This gives you an effectively infinite range of "constructed symbols". From there, you can further combine symbols to produce ever more elaborate and/or specific symbolic descriptions. Infinity times infinity is big. And you only have to memorize a relatively small number of symbols to master the entire system.
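To put a rough number on that "small alphabet, huge combination space" point, here's a back-of-the-envelope sketch in Python. The figures are purely illustrative (real words aren't arbitrary letter strings), but they show how a 26-symbol inventory covers an enormous space combinatorially, where a one-symbol-per-idea system needs one memorized symbol per idea:

```python
# Back-of-the-envelope: how many distinct "constructed symbols" (strings)
# a small alphabet yields, versus a flat one-symbol-per-idea inventory.

ALPHABET = 26  # symbols you must memorize in an alphabetic system


def strings_up_to(length: int, alphabet: int = ALPHABET) -> int:
    """Count distinct non-empty strings of length <= `length`."""
    return sum(alphabet ** n for n in range(1, length + 1))


# A linear system needs one memorized symbol per expressible idea.
# An alphabetic system reaches the same ideas by combination:
for n in (3, 5, 8):
    print(f"strings of length <= {n}: {strings_up_to(n):,}")
# Even at length 5, the space runs to millions of combinations,
# all reachable from just 26 memorized symbols.
```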
However, "word-based" systems, which could be considered "2nd Stage Text" (as opposed to pure pictograms, which are "1st Stage Text"), are by no means "ideal". They're cluttered, and they lose the artistry of pure symbology. Furthermore, you really only replace a large alphabet with an even larger dictionary.
The "3rd Stage", I believe, is a blending of the two approaches. Instead of "words" being built out of "characters", you could develop a system which constructs pictograms out of a limited set of "units", where each unit represents one character, and each pictogram represents one word.
In other words, you'd have many, many more pictograms than in Japanese, but each would require only a handful of basic building blocks - I expect between 10 and 20 would be more than sufficient. So, instead of learning a dictionary of 2,000+ symbols and then struggling to find the closest one to the thing you wish to express, you compose a symbol, out of a collection of base units, which describes =EXACTLY= what you want to describe, WITHOUT the requirement of a large dictionary OR a large symbol set.
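A toy model of that compose-from-units idea, as a Python sketch. Everything here is invented for illustration (the unit names, the grid, the example "words"); the point is just the structure: a small fixed inventory of units, placed in two dimensions, yields distinct pictograms without any dictionary:

```python
# Sketch of the "3rd Stage" idea: pictograms composed from a small fixed
# set of base units placed on a 2-D grid. All names are hypothetical --
# this illustrates the structure, not any real script.

# A small inventory of base units (10-20, per the text above); these
# stand in for whatever visual primitives the real system would use.
UNITS = {"dot", "bar", "hook", "loop", "cross"}


def compose(*placements):
    """Build a pictogram from (unit, column, row) placements,
    rejecting any unit outside the base inventory."""
    for unit, _, _ in placements:
        if unit not in UNITS:
            raise ValueError(f"unknown base unit: {unit!r}")
    return frozenset(placements)


# Two distinct "words" built from the same tiny inventory:
water = compose(("bar", 0, 0), ("loop", 1, 0))
river = compose(("bar", 0, 0), ("loop", 1, 0), ("hook", 1, 1))

assert water != river   # 2-D position carries meaning
assert len(UNITS) == 5  # you memorize the units, not a dictionary
```

The design point: the reader only ever learns the five units and the placement convention; every "word" is decodable from those alone.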
This, to me, is elegance. A language so natural, so flowing, that mis-communication is almost impossible. A language which exploits the fact that you place each symbol in TWO dimensions, not one (or even zero), and uses that to encapsulate the entire word and definition in a single construct, in a way that can be QUICKLY AND EASILY interpreted by anyone, without decades of intensive training.
Some day, I'll sit down and try to figure out what such a language would be like. It would need to be so powerful and flexible that every thought, feeling, mood, etc., could be readily written down, even though existing languages are hopelessly inadequate to such tasks. Yet it would also need to be so simple, at its most fundamental level, that ANY 6-year-old could read PERFECTLY any text of any complexity, without difficulty, be it science, philosophy, or fiction, and understand in excess of 50% of the material.
I believe that is the next "form" language will take. The existing natural languages are proving difficult to maintain, are impossible to translate with any accuracy, and simply take far too long to master.
The fact that the computer has been used to worsen the problem does NOT help matters. With a 26- or 27-character alphabet, it can take a child months just to master the character set. When we're looking at keyboards which may soon be FIVE TIMES as complex, we're looking at a year, plus, just to learn how to operate the device.
That is NOT OK.
Natural languages are also horribly ambiguous. Part of this is usually "blamed" on people's brains not working well with rigid concepts. IMHO, this is blaming the messenger. The brain doesn't =get= rigid data, so why =should= it deliver it? It has no reason to. So, how do you devise a language that's not ambiguous? Simple. You include meta-data, which defines a context. The more context the language can describe, WITHOUT significant overhead, the less ambiguous the language will be.
(Sure, it'll still only represent a person's viewpoint, but if you encapsulate in the language the information which makes up that viewpoint, then the reader will be presented with the writer's brain's view of the data, not merely the reader's interpretation of the language's interpretation of the writer's interpretation of their perceptions.)
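The meta-data idea above can be sketched in a few lines of Python. The domain tags and word senses are invented for illustration; the point is that when the writer ships an explicit context record alongside the text, an ambiguous token resolves deterministically instead of by the reader's guesswork:

```python
# Minimal sketch of context-as-meta-data: each utterance travels with an
# explicit context record supplied by the writer, so an ambiguous token
# has exactly one resolution. Tags and senses here are hypothetical.

SENSES = {
    ("bank", "finance"):   "institution that holds money",
    ("bank", "geography"): "land along a river",
}


def interpret(token: str, context: dict) -> str:
    """Resolve `token` using the writer-supplied context, not guesswork."""
    key = (token, context.get("domain"))
    if key not in SENSES:
        raise KeyError(f"no sense for {token!r} in domain "
                       f"{context.get('domain')!r}")
    return SENSES[key]


# Same token, different writer-declared contexts, different readings:
print(interpret("bank", {"domain": "finance"}))
print(interpret("bank", {"domain": "geography"}))
```

The overhead is one small record per utterance; in return, the reader receives the writer's intended sense directly rather than reconstructing it.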