Eye And Hand

An older article, HandVsPointer, describes the possibility of modeling pointers as 'hands' that can carry objects. I've been pursuing that model recently, extending it with the notion that hands can explicitly carry tools that modify the options available in a context, and that users (players, developers, end-users, etc.) can switch between or cycle through multiple hands.
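
To make that concrete, here is a minimal TypeScript sketch of the hand model; Context, Tool, Hand, and HandSet are all hypothetical names of mine, not anything from HandVsPointer:

 // A hand optionally holds a tool; a tool reshapes the options
 // available in the current context.
 type Context = { options: string[] };

 interface Tool {
   name: string;
   modifyOptions(ctx: Context): string[];
 }

 class Hand {
   constructor(public held: Tool | null = null) {}
   optionsIn(ctx: Context): string[] {
     return this.held ? this.held.modifyOptions(ctx) : ctx.options;
   }
 }

 // A user owns several hands and cycles which one is active.
 class HandSet {
   private active = 0;
   constructor(private hands: Hand[]) {}
   current(): Hand { return this.hands[this.active]; }
   cycle(): Hand {
     this.active = (this.active + 1) % this.hands.length;
     return this.current();
   }
 }

 // Usage: a 'brush' tool narrows the context to paint actions.
 const brush: Tool = {
   name: "brush",
   modifyOptions: (ctx) => ctx.options.filter((o) => o.startsWith("paint")),
 };
 const hands = new HandSet([new Hand(brush), new Hand()]);
 const ctx: Context = { options: ["paintStroke", "paintFill", "select"] };
 console.log(hands.current().optionsIn(ctx)); // ["paintStroke", "paintFill"]
 console.log(hands.cycle().optionsIn(ctx));   // all three (empty hand)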

When I got to thinking about 'heads-up displays', 'stylesheets', and the like, it occurred to me that it might be a good idea to explicitly model the user's eyes/lenses/view-transformers as a stateful object separate from the hand. For example, if a user wants a debug view, or a different layout, they can attach it as a lens to their logical 'eye'.
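
A matching sketch of the eye, again using hypothetical names of my own; the key assumption here is that a lens is just a view-to-view function, so lenses stack:

 // The eye is a stateful stack of lenses (view transformers).
 type View = { layout: string; annotations: string[] };

 type Lens = (v: View) => View;

 class Eye {
   private lenses: Lens[] = [];
   attach(lens: Lens): void { this.lenses.push(lens); }
   detach(): Lens | undefined { return this.lenses.pop(); }
   // Render by threading the raw view through each attached lens.
   see(raw: View): View {
     return this.lenses.reduce((v, lens) => lens(v), raw);
   }
 }

 // Example lenses: a debug overlay and an alternate layout.
 const debugLens: Lens = (v) => ({
   ...v,
   annotations: [...v.annotations, "debug overlay"],
 });
 const compactLayout: Lens = (v) => ({ ...v, layout: "compact" });

 const eye = new Eye();
 eye.attach(debugLens);
 eye.attach(compactLayout);
 console.log(eye.see({ layout: "default", annotations: [] }));
 // -> { layout: "compact", annotations: ["debug overlay"] }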

So this gives us an 'EyeAndHand' UI model.
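
Continuing the hypothetical sketches above, the combined model is just a pair of independent, stateful parts:

 // A user's interface state: an eye (what they see) plus a set
 // of hands (what they can do), each switchable on its own.
 interface UserAgent {
   eye: Eye;
   hands: HandSet;
 }

The payoff of the separation is that view state (attached lenses) and manipulation state (held tools) can be changed, saved, or swapped independently of each other.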


Would this be a functional part of the developer's interface, or would it be included as part of the application, independent of the developer's interface?

We could experiment with this idea in specific applications, and even IDEs, of course. But when I speak of a 'UI model', I'm imagining something larger, like an ObjectBrowser or the UI component of an OperatingSystem, such that it applies across all applications. I dream of 'programmable' UserInterface designs that blur the lines between application, IDE, and social collaboration.


Related: HandVsPointer