Victus Spiritus

Is there a Moore's Law for Machine Intelligence?

03 Jul 2010

High-Level Language Implementations Are Speeding Up

As I've read about the development and evolution of Rubinius and JRuby (and Duby/Surinx), it's become clear that these language implementations are getting faster. The pinnacle of modern computational speed remains C and the assembly it compiles down to, with Java bytecode a close second (memory permitting). But the raw speed of C comes packaged with additional code complexity and verbosity. With advanced techniques like the Low Level Virtual Machine (LLVM) and just-in-time (or ahead-of-time) compilation, sharp developers are converging on a balance between optimal performance, ease of use, and readability.

Will it ever be fast enough to think?

Extrapolating the advance of high-level languages hits a barrier at the realm of interpretation and analogy, an ability thus far unique to sentient beings. There's no clear path forward for machine intelligence to mimic human problem-solving ability. Software's dependence on carefully described, perfectly matched interfaces is where processing and intelligence diverge. It's easy to forget how much outside (relevant) information we bring to bear on novel problems and data translations. Moving these often tedious transformations into the realm of machine operations would be a huge breakthrough in productivity.

Probe, let's slow down the code even more

The motivation behind Probe is a language that allows interfaces to be sloppy or ill-defined, yet continue to function. Perhaps the interfaces wouldn't operate quite as we'd expect, or would function sub-optimally at first, but they'd be refined through iterations. The language* will work out similarities in data structures and look for patterns it can use to construct bridges (adapters) between new data and existing algorithms. Developers may guide the language instead of coding every specific contingency instruction. The language will be powerful enough to abstract a common minimal cross-section between two similar data collections and functional code. Instead of relying on absolute descriptions for variables and methods (identical names, taxonomies), the language will leverage probabilistic pattern matching of data features, both before and after processing. The statistical features of input and output data can be checked for consistency with training data, just as we run sanity checks on processed results.
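To make that concrete, here's a minimal sketch in Python of what interface guessing and a statistical sanity check might look like. Probe doesn't exist, so everything here is my own illustration; guess_adapter and stats_consistent are invented names, and plain string similarity stands in for the richer pattern matching described above.

```python
# A toy sketch of Probe-style interface guessing; all names are hypothetical.
import difflib
import inspect

def guess_adapter(func, record, threshold=0.6):
    """Bridge a data record to a function by matching each parameter
    name to the most similar record key, instead of requiring identical names."""
    adapter = {}
    for param in inspect.signature(func).parameters:
        best = max(record, key=lambda k: difflib.SequenceMatcher(
            None, param, k.lower()).ratio())
        if difflib.SequenceMatcher(None, param, best.lower()).ratio() >= threshold:
            adapter[param] = best
    return adapter

def stats_consistent(outputs, train_mean, train_std, k=3.0):
    """Sanity check: flag results whose mean drifts far from the training data."""
    mean = sum(outputs) / len(outputs)
    return abs(mean - train_mean) <= k * train_std

# An existing algorithm written against one naming convention...
def shipping_cost(weight_kg, distance_km):
    return 0.5 * weight_kg + 0.1 * distance_km

# ...fed data that uses a similar but non-identical convention.
order = {"WeightKg": 12.0, "DistanceKm": 340.0}
adapter = guess_adapter(shipping_cost, order)
# adapter == {'weight_kg': 'WeightKg', 'distance_km': 'DistanceKm'}
cost = shipping_cost(**{p: order[k] for p, k in adapter.items()})
print(cost, stats_consistent([cost], train_mean=45.0, train_std=10.0))
```

A real system would of course compare value types and data distributions on top of name similarity, but the shape of the idea is there: the bridge is guessed, then checked, rather than hand-coded.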

In order to achieve this level of utility, a large set of semantic data will be required in addition to sufficient training data. We humans bring a breadth of external experience to the problems we face. The language will require a notion of objects and their relations to each other, as well as the human language used to describe features. It may be more practical to train custom versions of the language to cope with specialized disciplines and industries.
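As a toy example of what such discipline-specific semantic data could buy (again my illustration, not anything from an existing system), a domain synonym map lets a specialized matcher resolve field names that raw string similarity would miss:

```python
# Hypothetical vocabulary for a finance-trained variant; the synonym sets
# stand in for the learned "semantic data" described above.
FINANCE_SYNONYMS = {
    "principal": {"loan_amount", "notional", "amt"},
    "rate": {"apr", "interest_rate"},
}

def resolve(param, keys, synonyms):
    """Match a parameter to a key directly or through domain synonyms."""
    for key in keys:
        if key == param or key in synonyms.get(param, set()):
            return key
    return None

record = {"notional": 10_000.0, "apr": 0.05}
print(resolve("principal", record, FINANCE_SYNONYMS))  # -> notional
print(resolve("rate", record, FINANCE_SYNONYMS))       # -> apr
```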

Notes:

*= Probe as I envision it is really different from formal programming languages. It's a language with a built-in machine intelligence application, or smart interface-guessing engine.