What are wheel menus? They're context menus. But not any kind of lame-assed context menus. These context menus have exactly 8+1 slots for menu items, no more and no less. The 8 slots are arranged in a wheel, like the points of a compass, and the 9th, reserved slot, is in the center. Each of these slots corresponds to a predetermined position on the keyboard, so that whatever menu the user is in, they always know which key to press. The map is simple and never changes from menu to menu.
(We will not here consider MouseGestures.)
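A rough sketch of the 8+1 structure, assuming the slots map onto the numeric keypad (an assumption for illustration; any fixed 3x3 block of keys would do), with hypothetical WheelSlot/WheelMenu names:

```typescript
// Hypothetical WheelSlot/WheelMenu shapes; the numeric-keypad key map is an
// assumption for illustration, not a requirement of the scheme.

type Direction = "NW" | "N" | "NE" | "W" | "CENTER" | "E" | "SW" | "S" | "SE";

interface WheelSlot {
  direction: Direction;     // compass position (or the reserved center)
  key: string;              // the fixed keyboard key for this position
  label: string;            // text label shown in the slot
  action?: () => void;      // what triggering the slot does
  submenu?: WheelMenu;      // or: descend into another wheel menu
}

interface WheelMenu {
  slots: WheelSlot[];       // always exactly 8 outer slots + 1 central slot
}

// The key-to-position map never changes between menus, so the user always
// knows which key to press for a given compass position.
const KEY_FOR: Record<Direction, string> = {
  NW: "7", N: "8", NE: "9",
  W:  "4", CENTER: "5", E: "6",
  SW: "1", S: "2", SE: "3",
};
```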
The scheme is both extremely learnable and usable, thanks to the fixed slot positions and the constant keyboard mapping.
The scheme is trivially obvious when you consider DougEngelbart's general design with a chording keyboard for the left hand. In fact, it is a chording scheme, since actions not in the top-level menu require a sequence of keypresses.
Okay, so far it seemed even to me that WheelMenus were standard OctoRadialMenus and that the only significant thing was that I came up with them independently. But I was wrong about that.
One of the problems with the standard mouse idioms is the asymmetry. My system is totally symmetric.
Off of the left mouse button, you access the navigation actions such as homing in on an object, autolocking the Hand, selecting/deselecting an object, dropping the selection of objects, popping the selection stack, creating a new selection, resorting an aggregate node by creation date, by last modification date, by last read date, and so on.
Off of the right mouse button, you access the transformation actions such as deleting an object, duplicating an object, making it stand on its hind legs and dance; whatever, basically.
So there are two menus and two mouse buttons; a perfectly symmetric situation.
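A sketch of that symmetry, reusing the hypothetical WheelSlot/WheelMenu types from the earlier sketch; the particular slot assignments are placeholders, not a fixed layout:

```typescript
// Placeholder slot assignments; the point is only the symmetry of the binding.

const navigationMenu: WheelMenu = {
  slots: [
    { direction: "N", key: "8", label: "Home in on object",   action: () => { /* ... */ } },
    { direction: "E", key: "6", label: "Autolock the Hand",   action: () => { /* ... */ } },
    { direction: "S", key: "2", label: "Select / deselect",   action: () => { /* ... */ } },
    { direction: "W", key: "4", label: "Pop selection stack", action: () => { /* ... */ } },
    // ... new selection, resort by creation/modification/read date, etc.
  ],
};

const transformationMenu: WheelMenu = {
  slots: [
    { direction: "N", key: "8", label: "Delete object",    action: () => { /* ... */ } },
    { direction: "E", key: "6", label: "Duplicate object", action: () => { /* ... */ } },
    // ... whatever else the object type supports
  ],
};

// One menu per button: perfectly symmetric.
const mouseBindings = {
  left: navigationMenu,       // navigation actions
  right: transformationMenu,  // transformation actions
};
```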
Meanwhile, the standard mouse behaviour is that a left mouse click triggers an action in the navigation menu, and a right mouse click triggers display of the transformation menu. Left clicks select an object or "push" a button (and what those have to do with each other is beyond me) and right clicks cause reflection upon the object. Well, since ButtonsAreEvil and we no longer care about "pushing" buttons, the standard mouse behaviour can just go straight to hell.
Since selections are an action, they're a left-button mouse gesture. A very simple mouse gesture, mind you: a short line down. Drawing a short line down on an object is trivial and doesn't significantly add to the ergonomic cost of selecting an object using a mouse. Since selection is now an action, it can even be triggered by pressing a key on the keyboard, entirely eliminating mouse clicks. In effect, the mouse gains 9+1 buttons, and the two that come standard with it are simply no longer required.
Meanwhile, what a null left mouse button gesture does is bring up the navigation menu, in line with the standard null right mouse button gesture which brings up the transformation menu.
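A sketch of how that left-button dispatch might work; the thresholds are guesses and the helper functions (showNavigationMenu, toggleSelection) are hypothetical stubs:

```typescript
// Self-contained sketch of the left-button gesture dispatch described above.

interface SceneObject { id: string }

declare function showNavigationMenu(): void;        // pops up the left-button wheel menu
declare function toggleSelection(obj: SceneObject): void;

interface Stroke { dx: number; dy: number }         // displacement from press to release

const NULL_THRESHOLD = 5;   // pixels: anything shorter counts as a null gesture
const LINE_DOWN_MIN = 15;   // pixels: minimum length of the "short line down"

function onLeftButtonGesture(stroke: Stroke, target: SceneObject | null): void {
  const length = Math.hypot(stroke.dx, stroke.dy);
  if (length < NULL_THRESHOLD) {
    showNavigationMenu();    // null gesture: bring up the navigation wheel menu
  } else if (target && stroke.dy >= LINE_DOWN_MIN && Math.abs(stroke.dx) < stroke.dy / 2) {
    toggleSelection(target); // short, mostly vertical line down over an object: select it
  }
  // anything else falls through to the other mouse gestures
}
```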
One advantage of this scheme is that the short-line-down gesture can't be used to click on buttons, because buttons are typically smaller than the gesture; this makes buttons pretty much unusable.
The other advantage is far more practical. Now that wheelmenu slots can no longer be "pushed" as buttons, we can treat the actions in these slots as objects in their own right. Objects that can be picked up and manipulated. This fulfills my implicit promise that objects and functions would be treated separately but equally. (Either that or it reifies functions as objects, condemning me straight to FP hell. :)
In particular, it becomes possible to pick an action up out of its slot and manipulate it like any other object.
Finally, I don't think the manual affordances provided by visibly raised buttons, which you push down and which respond by being depressed, are significant. This one idiom is so easy to learn that the button metaphor gains nothing. And since the button metaphor would lead users to expect to be able to click on an action to "push" it, and that doesn't happen, it's best to do away with it entirely.
As a consequence, actions in slots will not look like buttons, and triggering an action will not result in its seeming depressed. The most obvious alternative is colour changes of the action slot. For example, from white to pink (on button down) to red (on button release), exactly like virtual keyboards in typing programs. Which, surprise surprise, is the role of wheel menus.
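A sketch of that feedback, with the white/pink/red colours from the example above; the state names are hypothetical:

```typescript
// Hypothetical slot feedback states in place of a pressed-button look.

type SlotState = "idle" | "armed" | "triggered";

const SLOT_COLOURS: Record<SlotState, string> = {
  idle: "#ffffff",      // white: nothing happening
  armed: "#ffc0cb",     // pink: key or button is down on this slot
  triggered: "#ff0000", // red: released, the action has fired
};

// A renderer would recolour the slot background on each state change
// instead of drawing it as a raised, depressible button.
function slotColour(state: SlotState): string {
  return SLOT_COLOURS[state];
}
```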
Lessons learned:
Having used RadialMenu?s (which are a superset of your idea) in FireFox extensively, all I can say is that in the end, I didn't like them. When you start loading all your interface into them, it makes the interface very unstable. Things can move and jump around, and you have to be very aware of which menu level you're in at any given point. Multi-level wheel menus tend to obscure each other, which is a major weakness compared to standard rectangular menus.
You may think ButtonsAreEvil and MenusAreEvil, but I have to say that a WheelMenu doesn't sufficiently correct for them to be considered a replacement for even a context menu. And, WheelMenus are typically based around Icons because of space constraints.
But WheelMenus can be nifty where there is a very simple set of operations and you'd like to provide them without any keyboard access at all. Good examples of this are applications for children where there are only 4-5 possible actions at any given moment. The minute you spill out into more, things get complex.
MouseGestures are probably a better solution in the long run.
-- DaveFayram
Without keyboard access? The entire reason to have a wheel (as opposed to disc or circle) is to have a fixed set of slots so you can always use keyboard access to navigate them. Why do you want to do that? Because targeting an itty bitty button is hard work.
As for mouse gestures, they complement the keyboard and in no way replace it. Mouse gestures are a great way to access actions that are nested 2 or 3 levels deep in the wheel menu, but why would you want to waste the simple mouse gestures on actions that can be accessed with a single key combination?
(Note that I have my doubts as to whether >2 levels is ever justified. And I'm certain >3 levels can't be.)
(Note also that what you call rectangular menus are linear menus. A rectangle is just as much a surface as a disc or a wheel; the numeric keypad is rectangular.)
You can get around space constraints by prioritizing menu items. The names of some are bound to be more important than others. The names of the more common ones matter most at the beginning, before the user has encountered them 20 thousand times. After that point, the action names will be totally useless because the user will have memorized their positions down cold.
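One way that prioritization could be expressed; the cut-off is an arbitrary illustrative number, not anything prescribed by the scheme:

```typescript
// Full names matter early on and can shrink once the position is memorized.

const FAMILIARITY_CUTOFF = 50; // invocations before the label stops mattering

function labelFor(label: string, usageCount: number): string {
  // Novice: show the whole name. Expert: an abbreviation will do,
  // freeing up space for the less familiar items.
  return usageCount < FAMILIARITY_CUTOFF ? label : label.slice(0, 3);
}
```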
I did consider the obscuring problem and there are several ways to get around it, including giving the overlapping menus limited translucency, or simply erasing the previous menu once you move into a submenu (both discussed below).
Hey what do you know, this isn't new. Except for the central slot, and the decoupling of mouse gestures from menu actions, this is just OctoRadialMenus. -- RK
Well, radial menus can graphically represent a few simple options; back and forward for a slideshow is a good example. It shouldn't be the only mode of access, but for those rare occasions where icons are in fact more descriptive than text (rare, but it happens), it's a good fit.
As for the obscuring problem, neither of your solutions adequately solves it. In fact, reading them made me realize a critical flaw, which I'll get to in a moment. But first, the obscuration problem: when wheel menus nest, each submenu covers part of the menu that spawned it, so the deeper you go, the less you can see of where you came from.
Interestingly enough, flat box menus never have this problem. If you're going to make this concept a serious contender to replace context menus with wheel menus, you need to address this issue somehow. This does not mean the concept is bad. Indeed, I like the idea. It may just mean that the WheelMenu is only suitable for use when you have a very simple and limited set of options. If you've never got more than 9 options, it'll work well.
To go on a slight tangent, I can't help but think you're limiting your thinking to common hardware. You keep talking about the keyboard and mouse like they should be separate entities. Check out http://www.fingerworks.com for a way to combine the two in a meaningful and powerful fashion. I personally can attest to how awesome those utilities are.
-- DaveFayram
Just a personal preference, but I loathe touchpads.
You still have to worry about where you are on the stack, and this limited translucency makes it more difficult to read the current menu and it's still impossible to understand the previous one.
You then introduce extra state to the UI which will prove very confusing if you erase the previous menu, requiring context to understand where you are in a virtual space of commands.
I don't understand why you would need to require context to understand "where you are in a virtual space of commands". That sounds like a bad command set. OTOH, the limited transparency is aimed exactly (and only) at providing such context so I shouldn't slam it too hard.
Note that the transparency cuts out when you move out of the top-level menu and into the submenu.
Another option, if I just want to provide a feel for what kinds of commands a submenu provides, is to display the submenu under the top-level menu, obscured 40-50%. Once the user moves into the submenu, the top-level menu drops to 5% opacity, so the user can barely tell it exists.
If I bind the central slot to "up a menu", then I plan to show an outline of the higher menu beneath the current one (95% obscured) to cue the user that they're in a submenu. And quite possibly, the orientation of that outline would cue the user to which submenu they're in.
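A sketch of that stacking scheme, reusing the hypothetical WheelMenu type from the first sketch; the opacity figures follow the percentages given above:

```typescript
// Only the current menu and a faint cue for its parent are drawn.

interface MenuLayer { menu: WheelMenu; depth: number } // depth 0 = current menu

function opacityFor(depth: number): number {
  if (depth === 0) return 1.0;   // the menu the user is actually in
  if (depth === 1) return 0.05;  // its parent: a 95%-obscured outline as a cue
  return 0.0;                    // anything deeper isn't drawn at all
}

function layersToDraw(stack: WheelMenu[]): MenuLayer[] {
  // stack[stack.length - 1] is the menu currently in use
  return stack
    .map((menu, i) => ({ menu, depth: stack.length - 1 - i }))
    .filter(layer => opacityFor(layer.depth) > 0);
}
```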
I don't anticipate "where you are in the stack" to be a serious problem so long as I keep my command set short and sweet. I myself only came up with a few dozen graph-oriented actions, so I don't see why I can't fit them into two levels. I've seen plenty of bad command sets but I don't consider catering to incompetence to be a valid concern for an interaction designer. If programmers are forced by the UI to provide no more than 3 or 4 dozen commands for an object type, then that's a GoodThing.
(I suspect that FireFox's use of OctoRadialMenus failed simply because it is an application and it tried to fit a diarrhea of commands into an octoradial menu system. I don't plan to have applications; the commands for the node-aspect of an object will simply be inaccessible when you're looking at the content-aspect of the object. I talk about the separate aspects of objects in MenusAreEvil. They refer to different things and so shouldn't be available from the same place. I doubt that FireFox is anything but a traditional application in most respects.)
I don't plan to use icons, I plan to use text. If space considerations really matter then I'll just find some fancy way to make it work. But I will use text! -- RK
The Fingerworks keyboard is so much more than a touchpad, Richard. It's very easy to dismiss it, but it's actually a complete gesture system. It can do everything a keyboard does, everything a mouse does, and also has an incredibly sensitive gesture recognition system. It is not pressure sensitive, it's heat sensitive, which gets rid of a whole slew of issues with touchpads. It also understands that the heels and sides of your hand are often not touch surfaces (but gestures can activate them or receive them at choice). If I were building the UI of the future, I'd be looking at new input methods, not just new UI.
As for the rest, the biggest flaw with your text system is that it'll slow visual recognition. I know you think otherwise, but icons are not inherently evil. Bad icons are the problem. For, say, a forward and back button, an icon makes much more sense than "Forward" and "Back". Icons aren't suitable for everything, but dismissing them out of hand because people overuse them is a very bad idea.
-- DaveFayram
I know what Fingerworks does and how it does it; I still prefer buttons to gestures. Typing is inherently faster than handwriting or chording. And handwriting is a chording scheme when you think about it.
I think you've been missing the point about recognition. Visual recognition is simply not a factor. When you're in the learning phase, you need text, and speed is simply not an issue. When you're out of the learning phase, you don't have to recognize anything, because you can simply type ALT-D-W to perform some action.
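A sketch of that memorised-sequence path, again using the hypothetical WheelMenu/WheelSlot types; the modifier (ALT) handling is left out and the example keys are placeholders:

```typescript
// Walk a memorised key sequence down through the nested wheel menus without
// any visual recognition at all.

function runKeySequence(root: WheelMenu, keys: string[]): void {
  let menu: WheelMenu | undefined = root;
  for (const key of keys) {
    const slot = menu?.slots.find(s => s.key === key);
    if (!slot) return;        // key doesn't match any slot: abort the sequence
    if (slot.submenu) {
      menu = slot.submenu;    // descend one level without displaying anything
    } else {
      slot.action?.();        // leaf slot: fire the action and stop
      return;
    }
  }
}

// e.g. the modifier opens the top-level menu, then the remaining keys drill in blind:
// runKeySequence(someTopLevelMenu, ["6", "2"]);
```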
The other concern people brought up is space constraints. That too isn't a concern to me. There aren't going to be any space constraints for BlueAbyss since it's meant to be used with a wall of monitors > 1 meter wide horizontally and > 0.4 meter high vertically.
And even in the short term when I can't expect people to have a wall of monitors, or afford to dismiss the people who don't, I need only concern myself with my own applications. And I can make sure that my command sets aren't stupid.
BUT, I might build a 2-level stacking depth into wheel menus. -- RK
But the Fingerworks keyboard is a keyboard. I'm not sure where you get the notion of handwriting. As for the rest, I think that you're dismissing the flaws out of hand. It's hard to go one way or the other without seeing an implementation. Get coding and we'll see how it turns out. I look forward to that. -- DaveFayram
I tried using the RadialContext? WheelMenu extension for FireFox (http://www.radialthinking.de/radialcontext/) and found it abysmal, because all the menus were represented by dinky icons that look the same to me for some reason. IconsAreEvil?! -- EarleMartin
You may think that Fingerworks is a new input device, but it isn't. Not at all, not to an interaction designer.
What's a keyboard anyways? A keyboard is a one- or two-handed discrete multi-dimensional input device. And a mouse is a 2-, 2+1-, or 3-dimensional continuous input device; 2+1 is a wheelmouse. That's all they are: continuous vs. discrete. There are issues about how to combine them and how many dimensions should be in either, but these are all just details.
Fingerworks functions either in a continuous mode or in a discrete mode and it never blends modes into something new. It's not like you can move multiple fingers independently because that's outrageously difficult within the limitations of a plane. So it's either in a one handed continuous mode or a multi-finger discrete mode. Again, from an interaction designer's perspective, this is nothing new.
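That classification could be written down as data; the key and dimension counts below are illustrative guesses, not measurements of any particular device:

```typescript
// Every input device is some number of continuous dimensions plus some number
// of discrete keys, and Fingerworks is only ever one of these at a time.

interface InputDevice {
  name: string;
  continuousDimensions: number; // e.g. x/y of a mouse, plus the wheel
  discreteKeys: number;         // buttons or keys
}

const keyboard: InputDevice   = { name: "keyboard",   continuousDimensions: 0, discreteKeys: 104 };
const mouse: InputDevice      = { name: "mouse",      continuousDimensions: 2, discreteKeys: 2 };
const wheelMouse: InputDevice = { name: "wheelmouse", continuousDimensions: 3, discreteKeys: 2 }; // the 2+1 case

// On this view, Fingerworks is just one of these modes at any given moment:
const fingerworksPointing: InputDevice = { name: "fingerworks (continuous mode)", continuousDimensions: 2, discreteKeys: 0 };
const fingerworksTyping: InputDevice   = { name: "fingerworks (discrete mode)",   continuousDimensions: 0, discreteKeys: 104 };
```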
And even if it were possible to use multiple fingers in Fingerworks (e.g., drawing different gestures using ring_down+pinkie_down and ring_down+middle_down), it still wouldn't be different. It would just be sign language, and sign language is a form of speech. And that's what idiomatic mouse gestures are.
(Which actually makes me think that voice recognition would be a superior alternative to mouse gestures. They'd be way faster to input, infinitely easier to learn, and would even lower display use since the name of a command could be its trigger word.)
"cursor bold" would be two actions off of the text context menu.
The only differences would be in the acceptable maximum size of a vocabulary and the speed with which words can be spoken. To an interaction designer, these aren't very meaningful differences. To a usability engineer, they might be, but then I'm not a usability engineer now, am I? -- RK
The demo of the new Bard's Tale game uses something almost identical to what you describe for its in-game interaction, including a "hot key" that brings up the wheel menu. A little practice gets you very quick at making selections. It can get more than 2 layers deep (I think; I'll have to play it again), and it uses icons instead of text, though. -- ChrisMellon?
See also: PieMenus, OctoRadialMenus