The intentional interconnection of technology happens at surfaces that are matched*. No matter how well thought out, interface assumptions must iteratively adapt to a dynamic data environment. Information and data rarely keep the same formats, rates, and sizes over time. As much as I'd prefer to never write another data reformatter, it is silly to expect other technologies to stand still. What happens when an interface changes, but all of the data needed to satisfy that new interface is readily available? A developer has to go back into the code and manually write a new interface, or retrofit the existing one to maintain backwards compatibility. Over time this produces hard-to-maintain, hard-to-read code. Old code has a way of evolving into a thing out of nightmares.
Smarter Mapping Utilities in probe
A couple of days ago I described the beginning of a top-down-designed programming language, probe. The design effort is driven first by what I want as an application builder, with only a hint of implementation details ruminating in my subconscious.
One of the primary design goals I have specified for probe is an advanced interface mapping framework. I'd like to automate as much data reshaping as possible in order to minimize developer time spent on these tasks. One way interfaces are handled for dynamic or duck-typed objects is by referencing method names. If an object has a quack method, it maps easily onto any interface that calls quack. What happens if a needed method is missing?
There are a variety of ways to handle the absence of a called method:
- a new nominal (in-name-only) method can be appended to objects without quack
- a functioning method can be created on the fly^, constructed from other available methods: the object is scanned for a minimum set of builder methods, and quack is synthesized and appended to the object
- an error can be raised when a method is required but there aren't sufficient builder methods to construct quack
The act of passing an object to a method can therefore mutate the object: methods are able to dynamically modify objects**.
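Since probe exists only on paper so far, here is a minimal sketch of the three strategies in Python. Everything below (the ensure_method helper, the Duck class, the builder names) is hypothetical illustration, not probe syntax.

```python
class MissingMethodError(Exception):
    """Raised when a required method can't be found or synthesized."""

def ensure_method(obj, name, builders=(), synthesize=None, nominal=False):
    """Guarantee obj has a method `name`, mutating obj if necessary."""
    if hasattr(obj, name):
        return obj  # interface already satisfied
    if synthesize is not None and all(hasattr(obj, b) for b in builders):
        # Scan for the builder methods, construct the missing method
        # from them, and append it to the object on the fly.
        parts = [getattr(obj, b) for b in builders]
        setattr(obj, name, lambda *args: synthesize(parts, *args))
        return obj
    if nominal:
        # Fall back to a nominal, do-nothing method.
        setattr(obj, name, lambda *args: None)
        return obj
    raise MissingMethodError(f"can't satisfy {name!r} on {obj!r}")

class Duck:
    def flap(self): return "flap"
    def honk(self): return "honk"

bird = Duck()  # has no quack, but has the raw builders
ensure_method(bird, "quack",
              builders=("flap", "honk"),
              synthesize=lambda parts: " + ".join(p() for p in parts))
print(bird.quack())  # "flap + honk" -- quack was synthesized and appended
```

Note that the sketch prefers synthesis over the nominal stub, so callers get a working method whenever the builders allow it. The setattr calls are the mutation described above: the object leaves the function with a method it didn't arrive with.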
Data I/O blues
A problem occurs when input arrives in a format the software hasn't encountered before. Formats like XML, JSON, and RDF attempt to generalize the description of information to grease the wheels for self-describing data. A strategy for probe to handle new data is to rely on common descriptions within namespaces. In this way probe can mimic, for new data formats, the way it handles dynamic methods: objects or data are "probed" for building blocks to synthesize required interfaces. New object types can be described in data file formats by leveraging library-specified core structures.
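Here's a hedged sketch of what that probing could look like, again in Python. The CORE_POINT structure, its alias table, and probe_record are all invented for illustration, standing in for a library-specified core structure and its namespace descriptions.

```python
import json

# A library-specified core structure: a 2D point needs x and y,
# and each field lists the aliases it will accept from incoming data.
CORE_POINT = {
    "x": ("x", "lon", "longitude"),
    "y": ("y", "lat", "latitude"),
}

def probe_record(record, core):
    """Map a raw dict onto a core structure by scanning for known aliases."""
    shaped = {}
    for field, aliases in core.items():
        match = next((a for a in aliases if a in record), None)
        if match is None:
            raise KeyError(f"no building block found for {field!r}")
        shaped[field] = record[match]
    return shaped

# An unfamiliar record still "probes" cleanly onto the core structure.
raw = json.loads('{"latitude": 45.5, "longitude": -122.6, "name": "PDX"}')
print(probe_record(raw, CORE_POINT))  # {'x': -122.6, 'y': 45.5}
```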
A couple of open questions/ideas:
- How dangerous is it to commingle one's source with data?
- Is there an "organic" way novel structures described in data can move into libraries for wider usage & optimization?
- Nearly identical data objects could converge into a common library object
- The core language interpreter and default libraries will be adapted by their usage, with specialized dialects spun off (Lisp-like)
- Can the language benefit, through translation tools, from the variety of source code already developed in other languages?
Notes:
* & ^ = Unintended matching surfaces of technology and information can yield wonderful breakthroughs. probe aims to enable this kind of serendipity within a programming language. The trick is an application of the generalized hive mind idea Kevin Marshall inspired me to think more about after we discussed it over burgers earlier in the month: expose two or more unmatched sets to look for revealing commonality or missing features. In the case of missing methods, we're looking for raw builder methods that can satisfy an interface by synthesis.
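To make the set comparison concrete, here's the idea in plain Python; the method names are the same illustrative ones from the sketch above, not probe syntax.

```python
# Expose two unmatched sets -- the methods an interface requires and the
# methods an object actually has -- and let set algebra reveal the
# commonality and the gaps.
required = {"quack", "walk", "swim"}          # what the interface needs
available = {"walk", "swim", "flap", "honk"}  # what the object exposes

commonality = required & available  # already-matched surfaces
missing = required - available      # candidates for synthesis
spare = available - required        # raw builders to synthesize from

print(sorted(commonality))  # ['swim', 'walk']
print(sorted(missing))      # ['quack']
print(sorted(spare))        # ['flap', 'honk']
```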
** = Handling dynamic objects quickly is a tough problem (optimizing common structures is much easier than optimizing for malleable objects). Luckily I'm designing this language from an application developer's perspective ;).