While we share information in thousands of ways on Earth, no language has ever been designed from the ground up to optimize information transfer. More than just another outlandish concept, a pre-compressed language (low or no redundancy, maximum information density) built for much faster transmission and reception of knowledge would offer the following advantages:
- improved efficiency for all who used the new language
- simple pronunciation and learning at a young age
- learning designed to match how the brain naturally acquires and processes language
- we can read jumbled words simply by recognizing the first and last letters, so the language could capitalize on our brain's natural unscrambling power
- shorter, simpler, and more easily discernible syllables and tones for the most commonly shared information
- separate optimization for various subtopics, since the most commonly shared information in a physics lab may differ from that in a cooking class
- these subtopics could be grouped into general areas
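The idea of giving the shortest syllables to the most commonly shared information is essentially frequency-based coding. Here is a minimal sketch of what that assignment could look like; the consonant/vowel inventory and the word-frequency table are invented for illustration, not taken from the post:

```python
from itertools import product

# Invented phoneme inventory for this illustration
CONSONANTS = "ptkmns"
VOWELS = "aiou"

def syllables():
    """Yield consonant-vowel syllables, then two-syllable words, and so on,
    so codes are generated in order of increasing length."""
    units = ["".join(pair) for pair in product(CONSONANTS, VOWELS)]
    length = 1
    while True:
        for combo in product(units, repeat=length):
            yield "".join(combo)
        length += 1

def assign_codes(word_frequencies):
    """Map words to syllables: the most frequent word gets the shortest code."""
    ranked = sorted(word_frequencies, key=word_frequencies.get, reverse=True)
    gen = syllables()
    return {word: next(gen) for word in ranked}

# Example frequencies (made up for illustration)
freqs = {"the": 500, "energy": 120, "centrifuge": 3}
codes = assign_codes(freqs)  # {'the': 'pa', 'energy': 'pi', 'centrifuge': 'po'}
```

With a real frequency table the rare words would eventually receive multi-syllable codes, while the few hundred most common concepts stay one syllable long.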
I began theorycrafting an optimal language in an earlier post, "10 (Far Out) Methods to Creating Effective Web Content". Since then, the potential for a more efficient language structure has been an ongoing curiosity of mine. I revisited the hypothetical language idea during a short chat with my lovely fiancée Michelle at the beach about prime factorization, followed by a conversation with longtime friend and mathematician Eli. I decided that using prime numbers to represent non-primes would be an effective form of data compression (and information compression), even if it was going to be a "hard problem".
Using Prime Numbers Will Be Hard
I pondered the advantages of using prime numbers (for their inherent high information density) as the fundamental numerical representatives of this new language (mapping words to prime factors). Beyond the first problem of mapping an easy-to-use verbal and written language onto primes (e.g. single, easy-to-distinguish syllables for high-probability concepts represented by smaller primes), there are other difficulties. Factoring a large number into its primes is an accepted "hard problem": the decision version sits in NP ∩ co-NP, yet no efficient general-purpose algorithm is known, and it is widely believed to be outside P. What this means is that even though some simple patterns have been found in limited subsets, working with large prime factorizations is computationally extremely challenging.
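The word-to-prime mapping could work roughly like this: small primes stand for high-probability base concepts, and a compound concept is the product of its parts. The tiny vocabulary below is invented purely to illustrate the scheme:

```python
# Hypothetical vocabulary: small primes for high-probability base concepts
PRIME_OF = {"water": 2, "hot": 3, "container": 5, "food": 7}
WORD_OF = {prime: word for word, prime in PRIME_OF.items()}

def encode(words):
    """Multiply the primes for each base concept into one composite number."""
    n = 1
    for w in words:
        n *= PRIME_OF[w]
    return n

def decode(n):
    """Recover the base concepts by trial-division factorization."""
    words, p = [], 2
    while n > 1:
        while n % p == 0:
            words.append(WORD_OF[p])
            n //= p
        p += 1
    return words

code = encode(["hot", "water", "container"])  # 3 * 2 * 5 = 30
```

Note one property of this encoding: because multiplication is commutative, decoding returns the concepts in prime order ("water", "hot", "container"), not the order spoken, so word order itself carries no information here.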
I'm aware enough to know I'm in way over my head when it comes to computing prime factors and equivalent forms. We'd also need prime arithmetic and new math to support computer operations and efficient storage of prime numbers. Some techniques take advantage of known factors, but general-purpose algorithms are understandably computationally expensive (Wolfram MathWorld).
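To give a rough feel for that expense, the sketch below counts the divisibility tests trial division performs. The cost grows on the order of √n, and a number with no small factors (the semiprime here is chosen just for demonstration) forces thousands of tests even at modest sizes:

```python
def factor_with_count(n):
    """Trial-division factorization that also counts divisibility tests,
    to illustrate why general-purpose factoring gets expensive as n grows."""
    steps, factors, p = 0, [], 2
    while p * p <= n:
        steps += 1
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)  # leftover value greater than 1 is itself prime
    return factors, steps

# A semiprime with no small factors forces roughly sqrt(n) tests
factors, steps = factor_with_count(10007 * 10009)  # about 10,000 tests
```

Smarter algorithms (Pollard's rho, the quadratic sieve, the general number field sieve) beat trial division substantially, but all known general-purpose methods still blow up as the numbers reach cryptographic sizes.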
(More to come; I'll make time to write up additional ideas on this topic. For now, the recording has a good splash of thoughts.)