Posts: 5
Joined: Tue Nov 18, 2014 11:35 pm
A different take for a curious mind?
The talent you show in your work (the art, music, writing, and software development) is humbling. Your positive attitude, which comes across in your posts here, on your blog, and in your videos, is inspiring. I've been web-stalking you a little bit recently - hope you don't mind!

I came upon Verve by way of some other forum, and after quickly running it and being overwhelmed by the many knobs, I left exploration for later. When I finally took some time to watch your YT videos and actually made an effort to discover the UI, I was amazed at the power Verve provides. The fluid simulation you achieve is unbelievable, and I can see that pretty much any kind of painting effect can be obtained. I'm hoping, though, that the UI can be made even more efficient and, at the same time, easier to pick up for new or less frequent users. Based on the updates you've been giving, it looks like you will be addressing UX among other things. WRT this, can you tell me whether, in your ideal UI, you see touch as very important, in the sense of being used simultaneously with a stylus?
In most art software that has incorporated touch, it's used for panning, zooming, and tool selection/configuration. It's for this last purpose that I think I've come up with a novel way to use touch. Seeing that you're the type of person who approaches UI/UX in a way that clearly shows creative thinking, maybe you could think about implementing what I will attempt to describe - at least prototype it and see whether it could be as useful as I imagine it to be. The fact that Verve calls for a decent GPU is a plus for what I'm about to suggest, since it would benefit from acceleration.
Let me preface the description by stating that I'm OCD about workflow efficiency - it's a driving factor for me in choosing apps that minimize movement and keep the focus on the primary task. As a software developer myself (though not in graphics for quite some time), I've thought about building a prototype myself - I'd probably use Unity3D - but the stage Verve is at and your own personality (which I infer) make it seem that you might be open to at least discussing new ways of interacting with software, if not to actually trying out some of the ideas.
So, assuming the stylus is held in one hand, how can we maximize what the other hand does with touch? And how, at the same time, can we keep the canvas unobstructed except when operating the UI? What you're doing in terms of showing a knob at the pen tip is OK to a degree, but two problems come to mind: reduced discoverability, and too much memorization of keyboard shortcuts to bring up the various controllers. Pie menus (aka marking menus or radial menus) offer another solution, but the problem is two-fold there as well. First, they're usually meant to be invoked with a mouse or stylus. This breaks the flow of what the hand controlling the cursor is (or should be) mostly doing in a sketching/painting application (which is what I'm targeting) - artistic, flowing strokes, not picking through a menu (often multi-level). Second, the breadth of features that need to be interacted with is so great that these relatively static menus/toolbars can't show more than a limited subset at any one time without overwhelming the user.
With the above issues in mind, would it not make sense to use the non-stylus hand to operate a fluid UI while the stylus hand simply picks and drags within that UI? But isn't this what a toolbar or radial menu already allows for, if operated by touch? Well, not really. Imagine, if you will, a blank canvas. A single touch is discarded whenever the pen is near the screen, to avoid false strokes. If the pen is away, a single touch can be used for panning or, possibly, some other operation assigned to a double-tap, tap-and-hold, etc. Still, the potential is limited. Two fingers are more useful when a pen is around, handling both panning and rotating the canvas, but again, the possibilities are limited without nesting.
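To make the dispatch concrete, here's a minimal C# sketch of the touch-count logic I have in mind (all names and types are mine, invented for illustration - I'm not assuming anything about Verve's internals, only that a pen proximity flag is available from the tablet driver):

[code]
using System;

enum TouchMode { Ignore, PanOrTap, PanRotate, BlobOverlay }

static class TouchDispatcher
{
    // penInRange: hover/proximity flag from the tablet driver (assumed available).
    public static TouchMode Classify(int touchCount, bool penInRange)
    {
        switch (touchCount)
        {
            case 1:  return penInRange ? TouchMode.Ignore    // reject false strokes
                                       : TouchMode.PanOrTap; // pan, double-tap, etc.
            case 2:  return TouchMode.PanRotate;             // canvas pan + rotate
            case 3:  return TouchMode.BlobOverlay;           // show the fluid UI blob
            default: return TouchMode.Ignore;
        }
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(TouchDispatcher.Classify(1, penInRange: true));  // Ignore
        Console.WriteLine(TouchDispatcher.Classify(3, penInRange: false)); // BlobOverlay
    }
}
[/code]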
OK, there's still nothing new here. How about three fingers (and maybe four, but for now let's talk about three)? What if, when you place three fingers (and only three) on the screen/tablet, a translucent (or not) overlay is shown? It would not be centered at the geometric center of the triangle formed by the fingers, obviously, but would rather be a puddle-like "blob" that extends away from the three fingers (I'm thinking thumb, index, and middle, or thumb, middle, and ring) and toward the hand holding the stylus. Hmm, still not very exciting. The magic would be in the implementation of this UI blob: its exact content and size would fluidly change depending on the relative positioning of the fingers. A particular three-finger press can be memorized fairly easily in terms of the positioning and spread of the fingers, but what happens if you put down your fingers and get something other than the tools you wanted?
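Roughly, the blob's placement could be derived from the three touch points like this (again, a hypothetical sketch - the "reach" distance and the use of the last pen position to pick a direction are my assumptions):

[code]
using System;
using System.Numerics;

static class BlobPlacement
{
    // Offsets the blob away from the three-finger centroid, toward where the
    // stylus was last seen, so it isn't hidden under the touch hand.
    // "reach" (in pixels) is a guess to be tuned by trial.
    public static Vector2 AnchorFor(Vector2 f1, Vector2 f2, Vector2 f3,
                                    Vector2 lastPenPos, float reach = 180f)
    {
        Vector2 centroid = (f1 + f2 + f3) / 3f;
        Vector2 towardPen = Vector2.Normalize(lastPenPos - centroid);
        return centroid + towardPen * reach;
    }
}

class Demo
{
    static void Main()
    {
        var anchor = BlobPlacement.AnchorFor(
            new Vector2(100, 500), new Vector2(160, 430), new Vector2(230, 470),
            lastPenPos: new Vector2(900, 450));
        Console.WriteLine(anchor); // lands between the fingers and the stylus hand
    }
}
[/code]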
Well, you don't lift them up and try again in a different geometric configuration, hoping to get it right. Instead, you rotate them or change the distance between the fingertips, and the blob overlay fluidly "scrubs" (or maybe morphs) between adjacent/related tools. At any time, part of the UI would be larger and easier to click on, while on the periphery you would see UI elements that can be brought into "focus" by continued movement or rotation of the fingers. Think of a fisheye or hyperbolic visualization where the outer boundary of the UI blob wouldn't necessarily be a regular shape, but rather puddle-like. I'm not quite sure what the ideal layout of the tools would be, but one possibility would arrange them radially, with sub-tool options further out from the three fingers. So again, the main idea is that simultaneously rotating the three fingers would scrub/switch the focused tool, while changing the fingers' positions relative to each other could allow some other "dimension" of the UI to be navigated, i.e. they could act as sliders.
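Here's a rough sketch of the scrubbing math as I picture it: track the orientation and spread of the finger triangle frame to frame, step the focused tool every few degrees of rotation, and let spread changes drive a secondary slider. The step size and slider scaling are pure guesses to be tuned by trial:

[code]
using System;
using System.Numerics;

class BlobScrubber
{
    const float DegreesPerStep = 15f;          // rotation needed per tool step
    float _lastAngle, _lastSpread, _accumulated;
    bool _first = true;
    public int FocusedTool { get; private set; }
    public float Slider { get; private set; }  // 0..1, driven by finger spread

    public void Update(Vector2 f1, Vector2 f2, Vector2 f3)
    {
        Vector2 c = (f1 + f2 + f3) / 3f;
        // Orientation of the triangle: angle of one finger around the centroid.
        float angle = MathF.Atan2(f1.Y - c.Y, f1.X - c.X) * 180f / MathF.PI;
        // Spread: mean distance of the fingers from the centroid.
        float spread = (Vector2.Distance(f1, c) + Vector2.Distance(f2, c)
                        + Vector2.Distance(f3, c)) / 3f;
        if (!_first)
        {
            float d = angle - _lastAngle;
            if (d > 180f) d -= 360f;           // unwrap across +/-180 degrees
            if (d < -180f) d += 360f;
            _accumulated += d;
            while (_accumulated >= DegreesPerStep)  { FocusedTool++; _accumulated -= DegreesPerStep; }
            while (_accumulated <= -DegreesPerStep) { FocusedTool--; _accumulated += DegreesPerStep; }
            // Spread change nudges the secondary "dimension" (a slider).
            Slider = Math.Clamp(Slider + (spread - _lastSpread) / 400f, 0f, 1f);
        }
        _lastAngle = angle;
        _lastSpread = spread;
        _first = false;
    }
}

class Demo
{
    static void Main()
    {
        var s = new BlobScrubber();
        s.Update(new Vector2(0, 0),  new Vector2(100, 0),   new Vector2(50, 80));
        s.Update(new Vector2(0, 30), new Vector2(100, -20), new Vector2(70, 85)); // rotated
        Console.WriteLine($"tool={s.FocusedTool} slider={s.Slider:F2}");
    }
}
[/code]

One detail I'd expect to matter in practice: the angle has to be unwrapped across the +/-180 degree boundary (as above), otherwise the focused tool jumps when the triangle's orientation crosses straight left.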
Selecting a particular control (button, slider, widget) would most likely be best done with the stylus, but for certain types of interactions it might be reasonable to allow complete operation with the touch hand. This might work by scrubbing a particular tool into focus using rotation and, as mentioned above, using a sliding motion of the fingers to reach sub-tools, but not to actually invoke them. Invocation could then be done by lifting one of the three fingers off the screen/tablet and, with the UI remaining as-is in terms of content and selection, tapping the lifted finger once.
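The invocation gesture boils down to a tiny state machine on the touch count - something like this (the arming window is an arbitrary guess):

[code]
using System;

class LiftTapInvoker
{
    const double ArmWindowSeconds = 0.6;
    int _lastCount = 0;
    double _armedAt = -1;

    // Returns true exactly when the focused control should fire.
    public bool Update(int touchCount, double now)
    {
        bool fire = false;
        if (_lastCount == 3 && touchCount == 2)
            _armedAt = now;                        // one finger lifted: armed
        else if (_lastCount == 2 && touchCount == 3 &&
                 _armedAt >= 0 && now - _armedAt <= ArmWindowSeconds)
            { fire = true; _armedAt = -1; }        // tap back down: invoke
        else if (touchCount < 2)
            _armedAt = -1;                         // gesture abandoned
        _lastCount = touchCount;
        return fire;
    }
}

class Demo
{
    static void Main()
    {
        var inv = new LiftTapInvoker();
        inv.Update(3, 0.00);                       // overlay up
        inv.Update(2, 0.10);                       // finger lifted
        Console.WriteLine(inv.Update(3, 0.30));    // tap back down -> True
    }
}
[/code]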
An implementation of the above would probably work best as a vector (rather than bitmap) overlay, so that it could be smoothly scaled as the fingers morph the UI and bring its various parts into focus. Of course, there are many details that would need to be worked out by actual trial: the ergonomic layout, sub-tool navigation, the kinds of movements that would minimize the risk of RSI, and so on.
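For the focus effect, even a simple distance-based falloff would do as a starting point. Here's a toy fisheye weight (a real layout would also reflow element positions, not just scale them; the radius and maximum scale are arbitrary):

[code]
using System;

static class Fisheye
{
    // distance: screen-space distance from the focus point;
    // radius: extent of the magnified region; maxScale: scale at the focus.
    public static float ScaleAt(float distance, float radius = 250f,
                                float maxScale = 2.5f)
    {
        float t = Math.Clamp(distance / radius, 0f, 1f);
        // Smoothstep falloff from maxScale at the focus to 1.0 at the rim.
        float s = 1f - t * t * (3f - 2f * t);
        return 1f + (maxScale - 1f) * s;
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(Fisheye.ScaleAt(0f));     // 2.5 at the focus
        Console.WriteLine(Fisheye.ScaleAt(125f));   // 1.75 halfway out
        Console.WriteLine(Fisheye.ScaleAt(250f));   // 1.0 at the rim
    }
}
[/code]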
I'm not sure I've given you a clear picture of what I'm imagining, but, lacking visuals, I've tried to paint one in words. It would be cool if you were to say that you've been thinking along pretty much the same lines. If not, well then, maybe you could still consider it for some time down the road. I'm curious to hear any feedback you might have, and happy to fill in any details I might've overlooked.
Cheers and best wishes,
Adrian