The Grand Daddy of Unorthodox Optimization: The Lisp Machine
For a quick background on Lisp Machines, here's the Wikipedia page. Here's a relevant portion:
The power requirements of AI research were exacerbated by the Lisp symbolic programming language, when commercial hardware was designed and optimized for assembly- and Fortran-like programming languages. At first, the cost of such computer hardware meant that it had to be shared among many users. But as integrated circuit technology shrank the size and cost of computers in the 1960s and early 1970s, and the memory requirements of AI programs started to exceed the address space of the most common research computer, the DEC PDP-10, researchers considered a new approach: a computer designed specifically to develop and run large artificial intelligence programs, and tailored to the semantics of the Lisp programming language. To keep the operating system (relatively) simple, these machines would not be shared, but would be dedicated to a single user.
After reading up on Lisp machines, the idea of hardware optimized for our favorite software or development language doesn't sound so crazy. Sure, in the end assembly and Fortran/C architectures won out due to infighting, delays, cost, speed, and consumer-funded development (my favorite flavor of R&D). But with the dawn of hardware like programmable logic arrays and their descendants, the idea of hardware optimized to your personal preference of design language is not just possible, but inevitable. A convergence of optimization will occur "under the hood" while hackers and designers continue to create marvels that capture our attention.
Tyler (CHW* of Victus Media) mentioned yesterday that he sticks with what works for his build utilities. It's unlikely he'll adopt a replacement unless it's superior on many counts. His reasoning relates to Turing completeness: the various alternatives all yield analogous results, so why waste time switching? Now take the idea of Turing completeness and apply it to optimization. If a language is capable of performing any calculation, every execution shares an optimal form specific to the data and the architecture it runs on. Our current interpreters, languages, and instruction sets aren't quite sharp enough to find that optimal runtime, but I can imagine more potent translators converging on identical execution speed regardless of our language choice. Maybe I'm overestimating the intelligence of future compilers and interpreters; I humbly request that experts pipe in with their perspectives and correct me.
In the long run it doesn't matter what form you choose to express your ideas and algorithms. Find an architecture that resonates with your most creative energy and go crazy with it.
Optimize for the quality of your creations
There's a sharp rocket scientist at my work who swears by Matlab. He has literally gigabytes of libraries floating around the office. Even though Matlab is painfully slow for certain implementations today (don't write nested for loops, vectorize!), if it survives long enough as a language folks use and love, it could someday run as fast as any other solution. In the short term, if you're like me and like seeing results fast as an iterative design tool, C++ works pretty well, but it can be a syntactically heavy language.
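To make the vectorize-don't-loop point concrete, here's a minimal sketch in Python with NumPy standing in for Matlab; the same tradeoff applies in both, since an element-by-element loop runs in the slow interpreter while a vectorized expression drops into optimized native code. The sizes and the toy formula are just illustrative:

```python
import time

import numpy as np

n = 1_000_000
rng = np.random.default_rng(0)
a = rng.random(n)
b = rng.random(n)

# Loop version: each element handled by the interpreter, one at a time.
t0 = time.perf_counter()
out_loop = np.empty(n)
for i in range(n):
    out_loop[i] = a[i] * b[i] + 1.0
loop_time = time.perf_counter() - t0

# Vectorized version: the whole array goes through compiled code in one call.
t0 = time.perf_counter()
out_vec = a * b + 1.0
vec_time = time.perf_counter() - t0

assert np.allclose(out_loop, out_vec)  # same numbers, very different speed
print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.3f}s")
```

On a typical machine the vectorized line wins by an order of magnitude or more, which is exactly why the "vectorize!" advice gets repeated so often to Matlab (and NumPy) users.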
Note* CHW = chief hacknical wizard