
Log book

For various reasons this site has come into being prior to any proper investigation of blogging software, hosting, et al. As such it is going to double as a blog for the time being. At some point this may actually become something a little more advanced.

Automating Genius

posted 17 Apr 2015, 20:29 by George Hicken   [ updated 11 Jun 2015, 14:46 ]

Just in case I publish this instead of saving it as a draft... these are notes in very rough draft! I'm not even sure the concept is worth pursuing at this point in time.

This discussion is limited to creativity within a scientific/material domain. I suspect that it could be rewritten so that the core concepts are equally applicable to artistic endeavour; however, I'm not convinced I could do so within the same text. Not only does the terminology change, but so do the meanings of critical words such as value, and there are additional cultural memes that are likely to inhibit discussion when not enforcing separation.

In the following discussion I, tentatively, ascribe to the term value the meaning of widening future possibilities.

Intuition/inspiration - recognition of value in a seemingly unconnected or inapplicable transform/action as it applies to a base concept.

Genius - chaining multiple non-obvious steps (leaps of intuition) to reach a goal that lies beyond the bounds of what is considered possible, or making a single intuitive leap so disconnected from the base concept that, even given the start and end points, another person can't see how you got from one to the other.

Assuming my definitions of intuition and genius are remotely correct, this allows me to state that genius is the ability to free associate beyond what is normal, coupled with a superior ability to judge the value of the resulting concept in a given domain. It is said that there is a fine line between genius and insanity; I say that line is the ability to judge value. It may be that in time those we thought insane will be proven otherwise, that they simply judged value from a perspective that allowed them to see beyond our far horizons.

Now we get to the why: why have I tried to break down the concepts of intuition and genius into constituent parts, and why discuss this topic at all? The first part is a desire to figure out how to improve the scope and range of my intuition, how to do so without simply polluting my perceived solution space with useless ideas, and how to retain a (mostly) contemporary value judgement. The last is most important in the modern world, as the ability of a single person to effect change is essentially non-existent, and while a label of genius provides a fulcrum, a label of insanity is significantly limiting in most cases. The second part is just as speculative and relates to computers.

A computer could be considered the perfect free association mechanism, to the point that it is entirely random. The problem, of course, is that a random walk is rarely a good way to find something; knowledge management systems are in their infancy and knowledge manipulation is barely gestating. The closest I've come across is the reprehensible practice of using a computer to scan existing patents, map the nomenclature to a close but not colourably similar domain, and resubmit them if the resulting patent meets various sensibility criteria.

--Continue discussion about various ways to construct an automated value judgement--

References:
Free association - http://www.nature.com/news/why-great-ideas-come-when-you-aren-t-trying-1.10678

Abusing nested paging for non-virtualized performance

posted 18 Sep 2012, 15:53 by George Hicken

It occurs to me, having just had a quick scan of the literature, that some of the hardware virtualization extensions that have been introduced in the past several years may also be leveraged to provide improved performance of non-virtualized operating systems.

The feature in particular that's caught my eye today is nested paging: essentially the introduction of an additional layer of hardware indirection between a virtual memory address and its destination. While this doesn't have the fine-grained control I've been lusting after for many years, what it does provide is a wonderfully high-performance page-level mechanism for the following:
  • thread local storage
  • function pointer tables
The first is a much more appealing proposition in many ways, as it's a much better mapping for the semantics of nested paging; however, it's the second that would likely offer more of a performance benefit, not because a function pointer access is inherently slower than a TLS access, but because it's so much more common. It does, however, require a lot more work to implement and is likely to be more wasteful of memory, but then that's almost always the tradeoff you make for performance.

Obviously this functionality would in many ways be better implemented in the operating system. However, a common address space generally sits at the core of the concept of a thread, and as such there is significant resistance to allowing per-thread page mappings, both because of the technical changes required and because of a reluctance to turn the binary process/thread distinction into a spectrum, whether from an unwillingness to make the execution model more complex or from an attachment to existing models.

The trials and tribulations of advanced make

posted 8 Aug 2011, 05:28 by George Hicken   [ updated 17 Apr 2015, 18:30 ]

This post comes about after spending a lot of time, and not a few curses, trying to get a nice generic build system set up with make while avoiding the deferred pain of recursive makefiles. Why am I using make? For the same reason I use vi, but that's another post.

I know, I know: ask pretty much anyone in the industry these days and they'll either tell you that using make directly for complex projects is old school and you should be on one of the various toolchains that wrap it, such as autoconf/configure, or they'll tell you that they've never written a build file of any description because, well, the IDE takes care of it. The first stance has some merit, I admit; proponents of the second will seriously contemplate suicide the first time they're asked to work on a code base that's not tied to an IDE.

Personally I think that experience of varied build systems should be a critical item in the skill set when hiring if you're a software shop with any degree of environmental heterogeneity. Even if you're not, you should consider it, as even IDEs shift over time and I've never come across an established environment with no legacy systems of any kind.

So, taking as a given that, for some reason, you want to produce a complex build environment in make, what would you like to have supporting you?

Good documentation? I've found the online edition of O'Reilly's Managing Projects with GNU Make utterly invaluable.

A template build system? The problem with these is that they generally impose a very particular organisational structure on your project, which may not be viable, and the tiniest tweaking leads to mayhem as you try to figure out how the various includes, defines, implicit rules and shell scripts interact. The O'Reilly book contains various templates in its examples.
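
To make the pattern concrete, here's a minimal sketch of the non-recursive structure most such templates are built around. The file and variable names (module.mk, modules, sources) are my own inventions for this example rather than anything taken from a particular template:

    # Makefile (top level): a single make instance reads every fragment
    modules := src/module.mk
    sources :=

    include $(modules)

    objects := $(sources:.c=.o)

    app: $(objects)
    	$(CC) -o $@ $^    # recipe lines must begin with a tab

    # --- src/module.mk: each directory's fragment appends its contribution ---
    sources += src/main.c

The appeal is that all dependency information ends up in a single make instance, so make can see across directory boundaries; the mayhem starts when a template layers generated includes and implicit rules on top of this skeleton.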

Debugging capabilities? Well, there is the -d flag, which becomes more useful if you use --debug and are more selective about what debug data you want. However, I found nothing that would let me debug variable expansion, deferral and evaluation except for the age-old court of last resort: adding print statements. As with printf, $(warning myvar: $(myvar)) has been an amazingly useful construct and, as with printf, its use demonstrates a lack of support tooling.
While it's not enough, what is there? (A worked example combining these follows the list.)
  • dry run (-n): useful, but only covers the eventual commands invoked, not how you get there
  • inspect make's database (-p): useful for control flow, but doesn't show the values expanded into variables. If only this were closer to bash's set -x behaviour
  • pretend a file is new (-W): useful with -n for targeted testing
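
Here's that worked example: the printf-style scaffolding I end up with, combined with the flags above. The variable and file names are invented for the example:

    # trace a variable at parse time and again when a recipe is expanded
    CFLAGS ?= -O2
    $(warning parse time, CFLAGS is: $(CFLAGS))

    %.o: %.c
    	$(warning expanding recipe for $@, CFLAGS is: $(CFLAGS))
    	$(CC) $(CFLAGS) -c -o $@ $<

    # pairing this with the flags above:
    #   make -n main.o            # print the commands without running them
    #   make -qp | grep CFLAGS    # dump the database without building, then search it
    #   make -n -W main.c main.o  # pretend main.c changed; see what would rebuild

Note that the recipe-level $(warning) fires when the recipe is expanded, just before execution, which is often the only way to see the value a deferred variable finally takes.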

The tool I've wanted ever since attempting to port some very complex software with very, very complex build systems is a full stack debugger. Many of you at this point are either wondering what I mean, or thinking that such things already exist; if you still think the latter once I've elaborated, then tell me, please, and put me out of my misery!

A full stack debugger - a tool capable of traversing control and data flow through the intricate stacks of varied shells, interpreters and native programs that get involved in a build system. Something that can deal with a call hierarchy of bash->perl->bash->make->sh->make->perl->[my custom binary 4GL language generator]->bash->make->awk->et al.
I want to be able to SEE where a given value came from; if it came from the shell, then where that shell was created and where the environment for that shell was constructed. I want to be able to single-step through these scripts and programs and look at the values of data at any given point.

Maybe you start to see why I grudgingly grant some merit to those who play in the utopian sandbox of an IDE instead of venturing into the post-apocalyptic wasteland called legacy.

This particular problem of full stack debugging actually has bearing on future as well as legacy development headaches. Think about a cloud environment: not any given cloud, but the concept of a cloud; a heterogeneous collection of hardware with varying capabilities and potentially transient execution environments, as we provision and dispose of our hosts on demand.
Now think about how you are going to debug exactly what it was that went wrong, somewhere down the line on an unknown machine with an OS that's since been erased, that caused your application to think that little Tommy really did have the privileges to access his father's keyring and purchase that Russian bride suggested by some rather confused targeted advertising.

The upshot of this rant is that I've added another project to my wish list (not my todo list). Not the full stack debugger, for while I'd love to have it the time required is prohibitive, but an interactive make debugger: something that allows you to step, line by line, through your build, inspecting your variables and seeing precisely why libsecurity.so hasn't been rebuilt for four years despite all those patches you've applied to the source files.
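
While I wait for either tool to exist, the closest I can get to that libsecurity.so question with stock GNU make is triage along these lines (the flags are real; libsecurity.so is the hypothetical stale target from above):

    #   make -q libsecurity.so                       # exit status 1 means make does think it needs rebuilding
    #   make --debug=b libsecurity.so                # basic trace of why, or why not
    #   make -d libsecurity.so 2>&1 | grep -i newer  # the full firehose, filtered to timestamp comparisons

This tells you whether make considers the target stale, but not the question I actually care about: where the variables and prerequisite lists behind that decision came from.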


Incorporation

posted 22 Jul 2011, 06:08 by George Hicken

Incorporation, the first step on the road to legitimacy and taxes. It turns out that this is both remarkably simple, an online process that can be accomplished in under ten minutes, and also rather involved if you don't already understand the rules behind the concepts. For example, every signatory of the Memorandum of Association must own at least one share (makes sense) and the online application allows you to specify how many shares a person holds and how much of the nominal share value is paid or unpaid. Simple in concept but there are some questions that the help pages just don't touch:

  • How can a company have a value prior to incorporation? The corporate entity doesn't exist, so how can it possibly hold assets at this point?
  • How can a signatory have paid any of the nominal share price prior to incorporation; to whom would they have paid it?
Normally I'd just shrug at questions like these and assume that they're mostly unimportant, but for some reason I feel wary when I'm signing legal documents and don't understand how some of the possible responses can apply. In the interests of not requiring a degree in corporation law before starting this business, however, I've crossed my fingers, hit submit, and am now waiting for Acme Inc. to deposit an anvil in my immediate vicinity via their patented orbital drop delivery service.
