How 'Minority Report' Trapped Us in a World of Bad Interfaces (theawl.com)
138 points by LeratoAustini on Nov 17, 2016 | hide | past | favorite | 102 comments


The linked article by Bret Victor on the future of computer interfaces is worth reading: http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...

Of course, Pictures Under Glass interfaces have advantages too. Touching a screen has become a familiar way to interact with technology: a more tactile interface might be harder to learn. To make a screen-based interface, you only need to draw a 2D picture and respond to 2D touches, not design a 3D object which can be manipulated in space. You can buy touch panels off the shelf instead of having to manufacture expensive custom 3D parts.

But these are advantages of easiness and cheapness, not quality. If designers and engineers can put in the time to make a physical interface and users can put in the time to learn it, you can improve quite a bit on Pictures Under Glass. Think about how terrible it would be to replace a car's steering wheel with a touch screen, for example.

So that's really what the article is saying: we should think of touch screens as the cheap, easy option, not the exciting, futuristic one. Our futuristic starship control panel should be a custom-manufactured physical interface instead of a cheap glass panel.


Well, so the thing that I think people don't realize about gestural interfaces is that gestures are hard to do and tiring to do all day. They are also very low bandwidth. Typing is hard to beat in terms of bandwidth / strain. But there are things that gesturing is better at, and I like that that article focuses on that. I think the mistake is to try to get a catchall. That's why we have both a keyboard and a touchpad / mouse. Each does a different kind of task. Touchscreen typing, on the other hand, is a step back. Game console typing is like 20 steps back.


What are some good examples of gestures that work better than alternative input devices? It seems to me like the obvious choices of zooming, scrolling, and rotating things can all be done better with buttons, a mouse, a wheel, or a knob. I can press the back button on my mouse a lot faster than I can two-finger-swipe to go back a page.


> I can press the back button on my mouse a lot faster than I can two-finger-swipe to go back a page.

Big assumption that you're using a mouse (especially considering the ubiquity of laptops). Laptops provide a ready-made surface for fully portable use, and I've found that using a touchpad for professional and personal use[0] is more than sufficient, and that the availability of easy-to-use gestures is very nice.

If you haven't tried gestures on OS X with Apple's touch hardware, I'd highly recommend using them regularly. I got a fancy Razer Blade 2016 and its touchpad is significantly worse than an Apple laptop's, and OS X has much better native touch support (third-party software might make Windows better).

I'd say touch interfaces are great complements to keyboards for text editing workflows (more or less all I do professionally): you don't even need to move your hands off the keyboard to reach for a mouse (hooray thumbs!).

I would liken getting used to a laptop keyboard/touchpad to getting used to a minimalist *nix install. It's difficult at first, but once you do, you have a great skillset that lets you do anything you want on almost any computer. Not that you should never use nice high-level and user-friendly programs, but being able to drop down a level of abstraction and still get the job done is great for understanding how systems work.

[0]: most of my video games suck without a mouse, but that's the only wall my personal use runs into


I find zooming to be infinitely better done as a gesture - whether on screen or on a trackpad.


The problem is when are you zooming and when are you rotating? Google Maps on the phone is a great example. I find myself using "tap then flick up or down" or just the +- buttons much more useful because they don't require my second hand and won't accidentally rotate the map.


Call me unimaginative, but I think fingers are good at manipulation more so than hand signs. So I can imagine either stuff like 3d modelling tasks, or when you're interacting with a 3d world (think games). Smush-deforming play doh with your hand is so much quicker and more precise than trying to do the same with a mouse or shudder keyboard. Or imagine doing knitting.

I think hands are great when there is a massive multitude of possible next actions, like the knitting example, or artistic applications. Otherwise, if we can logically eliminate options of what the user could be doing (this object can only go in so many slots, etc), it's easy to just assist the user with that (think snap-to-grid) and then a precision input device is less necessary.


«I think people don't realize about gestural interfaces is that gestures are hard to do and tiring to do all day.»

To be critical, this is a biological feature much more than it is a bug. Our bodies don't have very good feedback mechanisms for the small cumulative damage from fine motor control, which leads to things like RSI and carpal tunnel syndrome with very few in-between indicators.

In a way, moving to a more gestural interface that feels physically tiring is a possible means to better overall ergonomics because people actually start to get real feedback (tiredness, soreness) telling them to take a break every now and then when they are physically straining themselves.


I'm not sure mouse and keyboard will be unseated from the top spot until we get to the point of straight mind-machine interfaces.


Speech to text has been making a strong showing recently. It's been creeping more and more in my life as a faster and easier alternative to keyboard input. It does take some practice, though, so I encourage people not to dismiss it based on their first impressions.


Again, same trap. Talking all day is NOT appealing, and doesn't work in a public setting. Works great for mobile phones precisely because often we have our phones out but can't type, like in a car or walking with my hands full.


The other issue with speech to text is discoverability both for commands and the object of the command. This presumably becomes less of an issue as natural language processing gets better. Voice recognition is pretty good with the right microphones today but using it is something like remembering the right wizard's incantation. Think a command line :-)

That's fine if it's a variation on some frequently used command but I find myself forgetting the triggers and skills I have set on Amazon or the names of playlists I might want it to play.


This is actually a great point - I think this is a one-way vs two-way communication issue. With a lot of our current input devices, tactile feedback helps with verifying that the individual "packets" of information are registering, but visual feedback is key for knowing that the high-level command worked / didn't work, and what other options are discoverable. And the fact that the visual feedback is extremely high bandwidth is key. Imagine being read everything on the screen. Come to think of it, screenreaders are hard.


If you're Eminem, maybe. I can type faster than I can talk.


Siri cuts me off mid sentence because I talk too slow.


Average conversational speed is 110 to 150 WPM. Maybe you type unusually fast? Professional typists might type at 80 WPM and it takes experienced stenographers to keep up with normal conversations.

Again, speaking for speech to text takes some practice, so it doesn't quite make sense to compare a lifetime of typing against your first shot at speech recognition.


Stenographers can type 300+ wpm on phonetic chording keyboards with appropriate predefined shortcuts.

People can also write shorthand with a pen at 300+ wpm.

A big part of the problem with regular typing is that our phonetic alphabet is not optimized for writing efficiency. Sometimes a single syllable takes 5+ discrete characters to type.

Standard QWERTY-style keyboards are also designed to use single-finger motions in serial fashion, and require various awkward combinations of finger motions and plenty of repositioning between keypresses; the layout is designed for learnability rather than efficiency, and wastes much of the possible bandwidth of the hands.

If someone with deep knowledge of human hand physiology / neuroanatomy were to design a keyboard optimized for efficiency (at the expense of requiring years to properly master), we could probably push peak performance to 400–500 wpm.

At some point thinking speed is the main bottleneck though. Regular QWERTY keyboards are fast enough to keep up well enough with most writing tasks.


If you're going to talk about stenographers (experts at typing fast, with specialized equipment and training) let's talk about auctioneers.

The world record for speaking is about 600 WPM. The world record for stenographers is about 360 WPM.

A typical speed for speech is 150 WPM, for someone who is practiced but by no means an expert. A typical (but somewhat fast) speed for typing is 80 WPM, for someone who is practiced but not an expert.

So if you're going to compare the fastest speeds or the typical speeds, speaking comes out ahead either way. It's almost as if our ancestors had evolutionary pressure to develop speech, but lacked the same kind of evolutionary pressure for typing.


Regular people (or computers) listening are going to have a heck of a time understanding world-record speaking speed https://www.youtube.com/watch?v=l-o9vTk8Poo

As for auctioneers, most of what they say is repetition and filler. The fast talking is a gimmicky technique aimed at improving sales, not a practical tool for imparting information at maximum possible speed.

As I said before, the keyboard is not remotely optimized for the capabilities of human hands, and English orthography is not remotely optimized for information density as an encoding the spoken language.

If someone speaking had to say a full syllable for every letter, speaking speed would be much slower.

> almost as if our ancestors had evolutionary pressure to develop speech, but lacked the same kind of evolutionary pressure for typing.

More precisely, speech was developed over millennia by millions of people mutating it slowly over time, whereas modern typing hardware was designed by a single person, based on the mechanical characteristics of a particular prototype typewriter in the late 19th century (before any concept of “touch typing” even existed), and typists were forced to adapt to this mostly fixed hardware design as best they could.


So, the fastest rapper seems to talk at 280 wpm and that's not filler.

http://fivethirtyeight.com/datalab/the-fastest-rapper-in-the...


150 wpm speaking is sort of irrelevant when half or more is verbal filler and rambling that doesn't provide any signal.


> If someone with deep knowledge of human hand physiology / neuroanatomy were to design a keyboard optimized for efficiency (at the expense of requiring years to properly master)

Per language.


Go read out a hello world in C++: pressing ; is much faster and more accurate for me than verbalizing "semicolon". What about variable names? You have to spell out the word each time to get camel case: u-s-e-r-capital-C-o-u-n-t, and that's if it works perfectly.

Python, you have to say return space space space space every damn line. Return space space space space space space space space while inside a function. It would nearly drive a man to switch to tabs.

Navigation, "go 2 characters to the right" isn't pleasant.

To top it all off, picture the fun you can have in vim using speech to text. Dear lord.

Now imagine an open office full of people all doing this at the same time. That, my friend, is hell.


What about a programming language optimised for this purpose?


This is what I wanted to say as well. To shift to verbal programming, we'll also have to change to a language designed with that in mind.

Interesting convergence would be VR mixed with speech-to-text processing. So, you use VR and presence/hand tracking to physically interact, and words to refine and/or program the visual blocks.


>Typing is hard to beat in terms of bandwidth / strain

We type only because we can't talk to computers (although it starts changing). Voice control would easily beat typing.


Phonetic chording keyboards are faster than typical speech, they just take a lot of training to learn. A regular keyboard with extensive custom shortcuts for common words is also potentially faster than speech, but not nearly as ergonomically friendly as a chording keyboard.

Keyboards are also much better than speech for making very precise inputs. You’ll be hard-pressed to invent a speech interface which outdoes vim (for instance) for efficiency at general text editing tasks, even if you have a general AI interpreting the speech commands. Even just for straight typing computer code, mathematical formulae, musical notes, poetry with precise constraints on punctuation & spelling & layout, or e.g. controlling a video game or CAD program, a keyboard is much more effective than speech. A keyboard input can also have real-time response to partial input, like autocomplete, etc.

Perhaps most importantly, typing on a keyboard doesn’t distract/leak the content of your typing to everyone in earshot.


With proper AI editing by voice will look different -- probably more like talking to a smart assistant rather than "cut here paste here". Keyboards and mouses will probably remain for some particular tasks (e.g. math formulas or CAD design) although it won't be the mainstream input method -- more like a helper device.


So long as there's not another human being within earshot of you which is... rare for computing contexts.


I don't see this as a problem. At some point voice recognition will be able to distinguish one individual from another by voice. Also with directed/implanted microphones you might be able to just whisper instead of talking.


Really? Go walk around Facebook's ridiculous one-room office building, experience the ridiculously distracting noise level, and then tell me you "don't see this as a problem" when it's exacerbated by voice controlled computers. That sounds awful.


Really? Voice input is faster for entering text. However, I can't imagine editing text by voice would ever be faster than using a keyboard.


I believe it's just a matter of proper voice interface and commands. Most probably it would require elements of AI for context awareness.


A voice interface interrupts my internal dialogue. I can't think and talk at the same time. So I would disagree, I prefer to let my hands do the talking.


> Our futuristic starship control panel should be a custom-manufactured physical interface instead of a cheap glass panel

I disagree. Modern aviation has been moving away from physical controls and towards generic screens. The last gen of fighter aircraft (The F/A-18, later block F-16s, the F-22) have a collection of generic screens with context sensitive buttons on the side for control.

The F-35 takes this a step further and has a single large screen (which is also apparently a touch screen).

In something as complex as an aircraft (or a starship), there are hundreds (or thousands) of different controls. Look at the cockpit of an older aircraft (e.g. a Boeing 747) as an example. Most of the controls in a cockpit are used in certain situations, you don't need to control the landing gear or the brakes when you're at 30,000 feet.

By using generic displays, the relevant controls for the current situation can be shown. Yes, it is more difficult to operate the controls by feel, but this is less of a worry in aviation, where you're usually several thousand feet from anything else, than it is in a car, for example.

So of course, it's all about the expected situation, you wouldn't want a little touch screen that you swipe for the safety on an assault rifle, for instance, because you need to be able to do it by feel in various situations. But a swipe unlock is perfectly acceptable for a phone.


Crazy that this guy criticizes the Minority Report interfaces and then makes the terrible claim that interfaces ought to make use of humans' ability to grip. Not all humans have strong grips, particularly the elderly. There's a reason newer buildings use lever handles over doorknobs.

My hope would be that the interfaces of the future would be more accessible to the disabled, not less.


It seems pretty obvious that accessibility is always going to require extra thought. Some people are paralyzed from the neck down, but we still have keyboards.

In any case, why do you think that using gripping as a UI action implies that you'd need a strong grip to use it?


> It seems pretty obvious that accessibility is always going to require extra thought

Actually, I think this extra thought could help everyone by treating disabilities as just special cases of the general idea that different people prefer different input/output devices and having apps (mostly) stop processing low-level input (keystrokes, pixel hit testing) and switch to handling more abstract "primitives".

The OS would have a set of more abstract primitives: enter text, zoom in, zoom out, rotate, 3d move, 3d rotate, select item, select 2d region, select 3d region, deselect, grab, cancel, undo, etc. You, the user, would need a set of I/O devices that provided one or more implementations of each primitive in the checklist.

One user would send an "undo" to an app with a click of the tongue, another with a hand gesture, another by pressing the ESC key, but the app just gets an "undo" event from the OS.

For 2d region select, for example, one would use a joystick, another uses voice, another uses eye movement. The app doesn't process pixel hit testing to do its own region select, it lets the OS tell it what region of the 2d surface was selected. The app makers don't make the app "accessible" themselves, they just make it respond to the checklist of higher-level primitives.

You could provide at least the core functionality of most apps by this means, after which an app could possibly add some lower-level processing in the form of "shortcuts".
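A minimal sketch of what that OS-level primitive layer might look like (all names here are hypothetical, just to illustrate the idea): device-specific drivers publish semantic events like "undo", apps subscribe to the primitive, and never learn which device produced it.

```python
# Sketch of the "abstract input primitives" idea: apps subscribe to
# semantic events; device drivers translate raw input into them.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Primitive(Enum):
    UNDO = auto()
    ZOOM_IN = auto()
    SELECT_REGION_2D = auto()

@dataclass
class Event:
    primitive: Primitive
    payload: object = None  # e.g. the selected 2D region

class InputBroker:
    """OS-level layer: drivers publish primitives, apps subscribe."""
    def __init__(self):
        self._handlers = {}

    def subscribe(self, primitive: Primitive, handler: Callable[[Event], None]):
        self._handlers.setdefault(primitive, []).append(handler)

    def publish(self, event: Event):
        for handler in self._handlers.get(event.primitive, []):
            handler(event)

broker = InputBroker()
log = []
broker.subscribe(Primitive.UNDO, lambda e: log.append("undo"))

# Three different devices, one semantic event -- the app can't tell them apart:
broker.publish(Event(Primitive.UNDO))  # from an ESC-key driver
broker.publish(Event(Primitive.UNDO))  # from a hand-gesture driver
broker.publish(Event(Primitive.UNDO))  # from a tongue-click driver
print(log)  # → ['undo', 'undo', 'undo']
```

The app only ever handles the "undo" event; accessibility becomes a matter of which drivers the user installs, not something each app implements.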

When I'm forced to use a watch interface away from my keyboard, I am in a sense "disabled". When driving in my car with eyes on the road, I am "disabled". I'm not a gamer, so relative to a gamer with mad joystick skillz, I am "disabled". Well, fine, just change the approach to software interaction to a more abstract type of interaction that treats us all as being variously abled.


Sure, it's always going to require extra thought, but why create problems when you don't need to? There's this entire train of reasoning that seems to be that a rich interface is one that places as many demands as possible on the human body. That thinking will inevitably lead to problems of this type. If grip isn't necessary, why make it necessary? I could have made a number of objections of this type to the article, but this seems the most obvious and understandable one.


> why create problems when you don't need to?

I don't think it's fair to make non-disabled individuals use a less productive interface, just as it's not fair to prevent disabled individuals from using the interface of their choice.


How is gripping more productive? I'm not disabled, but I much prefer gripless handles because they are easier/simpler to use.


I was speaking in more general terms, but I'll provide some examples.

I grip my mouse to use its side buttons to navigate forward/backward in a web browser. Human hands are generally good at holding things, and flat touchscreens/trackpads aren't very holdable. Power tools, steering wheels, gear shifts, volume knobs -- all examples of things that are gripped.


> There's this entire train of reasoning that seems to be that a rich interface is one that places as many demands as possible on the human body

I disagree completely. The line of reasoning is "what actions can we perform that have the highest bandwidth, and lowest cognitive load?" And various forms of manual manipulation, like gripping, are an excellent avenue of exploration when answering that question.


Hmmn, I'm betting that a user interface that requires 'gripping' might possibly not require someone with the grip strength of a gorilla. And your self-serving invocation of the disabled fails to notice that most gestural or touchscreen interfaces are atrocious for the elderly or disabled, and have a tendency to misconstrue 'tremor' as intentional movement.

Generally if you want to design for the disabled, you probably are designing for the disabled, rather than using them as fodder for an argument.


To be fair, waving your arms around, or hitting targets 10x smaller than the finger you're using to hit them is no more accessible than twisting a knob or moving a slider.

No single interface will ever suit all potential users, so any accessible design has to keep this in mind.


Well, not all humans have fingers either. There are a lot of vets and amputees with no arms or legs, so all user interfaces should be eye-tracking based.

Honestly, common sense says the guy is talking about interfaces for the general population, not the disabled. If most people can grip then make interfaces that grip. Don't force everyone to read braille just because a few people are blind.


> There's a reason newer buildings use lever handles over doorknobs

For the cats.


Yeah, as a teenager I had to disassemble my handle and reinstall it vertically in order to make the cat stop jumping on it in the middle of the night and waking me up in the process...

(I like cats. The problem then was that I had way too much stuff in various cabinets and the cat liked to make a grand tour through them, throwing a lot of stuff to the ground in the process...)


Wow, I guess the article really struck a nerve with you huh?

Taking advantage of major human ability != making it an exclusive requirement to use said interface.

Also, suggestion != claim.


You sound a lot more het up about this than the parent comment. The parent just suggested that this article is ignoring some things that should be considered if you want to talk about designing interfaces for real people. It's important to design things that don't fail when confronted with edge cases.


I don't think calling something crazy is merely suggesting something.

What edge cases? What design? What are you even talking about?

A person merely said "Hey guys, have you thought of this". GP got on their case as if they were against the disabled or something.


I don't know if Minority Report started the trend, but there is one thing that I have really grown tired of in movies with futuristic tech: transparent screens.

I realize that they let the director film the actor without something blocking the camera but they just seem like a lazy way to be "futuristic."


As a side note what on earth are these?

https://s-media-cache-ak0.pinimg.com/736x/bd/17/68/bd1768e3a...

And why are they in so many sci-fi graphical UIs?


More productive is to try to understand why Minority Report has successfully captured the imagination of so many, persisted over a decade, and spawned an entire genre of sci-fi interface tropes that we see in virtually every film having anything to do with the future, and beyond film, well into games and now commercial interface development.

The assertion that the gesture interface is some kind of virulent ear-worm dismisses what may be the most interesting thing: the fact that it has captured the imagination of so many. Why?


I don't think it's hard to imagine why the gestural interface survives in the movies. It's very visual. The main actor is moving and full of action and may even be literally jumping around. It's as cinematic as the Hollywood OS representation of hacking is.

Meanwhile, out here in the real world, I'm a real computer programmer and my real interface is even more boring looking than a standard Windows install. Nobody's going to be mining my UI for Hollywood ideas anytime soon, despite the fact that I, on occasion, actually hack.

It's the same reason cars explode when they go over cliffs, bullets spark when they hit things, and all those other tropes.


Humans are visual creatures, so if something looks good we want it, regardless of whether it is practical.


There is an article somewhere about how Spielberg commissioned consultants to imagine the near future. That's how you get fresh, imaginative movie imagery like 2001, Star Wars, and Minority Report. Then fifty copycat movies.


A few days ago I tried out an HTC Vive VR device with Google's Tilt Brush app. It's basically MS Paint in 3D. You hold both Vive controllers and have a brush in one hand and your palette in the other.

I was amazed how intuitive the controller UI was. Some of the controls are on the back of your hand, but it was incredible how natural it was to pick up and use comfortably. I've never been all that fond of the Minority Report floating screen UIs, but other 3D UIs can feel very natural in the right context.


> If Jeff Han had designed a keyless entry system for a car, it would’ve involved dragging a secret gesture on the car’s window instead of the car automatically unlocking when you open the handle if you have the key in your pocket.

So basically: "Good design is invisible"


Both examples are invisible. But one works with the flow of getting in your car and the other requires arcane gestures.


That’s not what “invisible” means in this context. A better name might be “unnoticed”. It’s an interface sufficiently natural that it doesn't add to the things you have to think about.


One of my favorite design mantras is "Design dissolving into behavior." - Naoto Fukasawa.


It was a good interface for predicting future crimes of bad interface design.


Now I want to be in position to order a GUI, preferably for some sort of notarization or document-drafting app.

I'll explain that I'm actually thinking of a movie for the UI, and I want it to be as exactly like that as possible; I'm sure they get this request really often, and it won't be anything new to them; and I'll explain that I even brought some scenes from the movie to show them exactly what I wanted. And then I'll be grudgingly persuaded to pop in the relevant DVD...

And the scenes will be from a pirate movie, with an old-timey European pirate/bandit/mercenary chief writing with a quill on parchment while wearing a coat with ruffles and lace.

Only quills will work as pointing devices. To save your work, you sprinkle some sand on the tablet and blow it off. The tablet comes with a breathalyzer attachment, which checks for enough rum in the air to prove that you're a legit pirate but not so much rum that you're signing contracts while drunk. Voice input must be in Ebonics, which is pretty much what the Cavaliers spoke in the English Civil War. Voice output is in a thick Devonshire burr (the classic pirate accent) that makes you think that the app's probably half drunk.

Now, if only someone's eccentric enough to do it...


Ha, if you're ever in the area, you seem like you'd appreciate 826 Valencia's pirate supply store in the Mission district of SF.

http://826valencia.org/store/


I'll keep it in mind; it looks well worth a visit!

I'm also toying with the idea of getting some reenactor-grade pirate clothing: long-wearing like nobody's business, and relatively easy to care for. I've even found places that sell those leather cavalry boots with turned-down tops! For an extravagant price, of course...


This is also what comes to mind when comparing the giant touchscreen in a Tesla Model S to the haptic buttons and knobs in most other premium brands. Especially when driving, touchscreens just seem an inferior way to interact with the systems.


Driving, yes absolutely, because you're constantly about 6 feet away from hitting something.

For flying, touch screens are a great idea, since you have a lot of controls, and attention to where exactly you're going is less critical, since you're usually >1000 ft from the nearest solid object. Also, aircraft usually have autopilot (and often a copilot).

I guess with Tesla's autopilot, it might make the touch screen a bit more usable? (at least while driving on a highway I guess).


Cost is another driver here; touchscreens are significantly cheaper than buttons, and can be used to do more things -- at the cost of being awful for typing on.


I want a 2-knob "mouse" like the one I had on my etch-a-sketch 30 years ago! One for X, one for Y...

[ 3 minutes of googling later... ]

Of course it's been done: https://mitxela.com/projects/etch_mouse


I'm curious; why does this appeal to you?


I haven't been a big fan of many of the UI changes in the past few years, but I realized the other day that these changes have actually resulted in _DEATHS_.

http://www.detroitnews.com/story/obituaries/2016/06/20/obit-...

Cause you know, having the shift lever position indicate what gear it's in just isn't "cool"...


What happened to "Oblong" technologies (the guys who actually did the Minority Report UI/UX and turned it into a conference room app/system)?


Oblong sells a conference room product and we also do custom work.

I work at Oblong and recently gave a short (unofficial) talk which covers some past and current activities.

https://creativemornings.com/talks/brandon-harvey/1

My favorite sentence from the piece is "there’s a huge gap between what looks good on film and what is natural to use". Yep!


They morphed their high-end concept stuff into a smart meeting room and team collaboration platform called Mezzanine (see http://www.oblong.com/), which is sold to large enterprise-class customers, e.g. IBM, SAP, Accenture, NASA, GE, etc.


I kept imagining the kind of BS the actors are saying over the fake graphics.


They recently received $65M in funding. You can supposedly buy that stuff.


If you thought the interface was bad, try programming with their software stack.


Interesting. Would you care to elaborate?


I worked there for 2 years. My comment was mostly jeering and being silly. It would be really bad form to speak ill of them in any substantive way. All I can say is that they punch above their weight.


Here's a nicely realistic breakdown of the physical affordances involved in useful interfaces in VR.

https://www.youtube.com/watch?v=id86HeV-Vb8&t=5m34s


Some people blame Xerox PARC for locking the world into the office metaphor. Some widely used icons, like floppy disks and phone handsets, represent obsolete objects.

The ancient Greeks used rooms of a house to organize information. Just think, we could have ended up with that instead.


Microsoft tried that in 1995. It was an interface called Microsoft Bob.

https://en.wikipedia.org/wiki/Microsoft_Bob


I wrote an iPhone app called "iLoci" [1] that was based on the Method of Loci [2], which had a gestural interface that let you construct and navigate your own networks of locations by dragging rooms around and "kissing" them together to connect and disconnect them. Another way to think of it is as a pie menu editor.

Later I elaborated on the idea in a Unity prototype called MediaGraph [3], which lets you arrange your music in an editable, gesture-navigable map.

[1] iPhone app iLoci by Don Hopkins @ Mobile Dev Camp: https://www.youtube.com/watch?v=03ddG3jWF98

A talk about iLoci, an iPhone app and server based on the Method of Loci for constructing a Memory Palace, by Don Hopkins, presented at Mobile Dev Camp in Amsterdam.

[2] Method of Loci: https://en.wikipedia.org/wiki/Method_of_loci

[3] MediaGraph Music Navigation with Pie Menus Prototype developed for Will Wright's Stupid Fun Club: https://www.youtube.com/watch?v=2KfeHNIXYUc

This is a demo of a user interface research prototype that I developed for Will Wright at the Stupid Fun Club. It includes pie menus, an editable map of music interconnected with roads, and cellular automata.

It uses one kind of nested hierarchical pie menu to build and edit another kind of geographic networked pie menu.


T9 Texting RIP.


I am getting rid of my smartphone after it breaks and downgrading to a flip phone.

I am actually looking forward to being able to type with one hand again.


With the thumb of the same hand that's holding the phone? Welcome to RSI hell!


Until we are able to dynamically reconfigure our devices to offer context-aware controls that we can grab, touchscreens seem to be a preferable alternative to buttons. You don't want your desk to become the Apollo CM dashboard.


>context aware controls

You mean key-bindings? A full keyboard already has ~100 keys.


A keyboard cannot morph into a joystick.


If you're interested in this kind of thing, you might like the book "Make It So" by Nathan Shedroff and Christopher Noessel, which discusses the interfaces used in movies.


Don't pretend you don't want a 3d printer with a Tony Stark design UI :)


And despite his acute taste for good interfaces, he offers no solution to the "trap".


It's perfectly legitimate to criticize something without offering an alternative solution.


Is it? What value does criticism bring if there's nothing actionable in the criticism?


To inspire someone to work on it?


It isn't legitimate to do so without at least offering some idea of how the object of criticism could be improved though. If you criticize something, that means you know it could be better in some way, and you should at least point out those ways.


It means you know that there are problems with the current way, and at a minimum you hope there's a better way to do it, but it by no means requires you to know what that better way is. You can be capable of recognizing the flaws in something without figuring out how to fix those flaws.


If you can recognize a flaw, you also know what could be improved since you're comparing the flawed attribute to some non-flawed or less-flawed ideal. You might not know how to get there, but you at least know where "there" is.


You can recognize it when you see it, but that doesn't mean you know what it is ahead of time. Determining that something is flawed does not in the slightest require some sort of "ideal" to compare it to. It just requires the ability to recognize that there are problems with the current state of the thing. To go back to the article's subject, we can look at the current state of touch interfaces and determine that they have problems. You're touching a textureless pane of glass, using gestures that aren't actually used when manipulating real-world objects, you can't see what's directly under your finger, etc. But if we knew what a better interface looked like, we'd be working on it already. There's certainly no known "ideal" interface to compare with.


The ideal is in the attribute, not the object it is attached to. With respect to interfaces, if the problem is that we use gestures that we don't in real life, then the solution is to implement ones that are closer to those in real life; if the problem is that we can't see under our fingers, then we should implement some way to display the blocked information. The critic doesn't have to know how to implement, just that it can/should be implemented.


You've now reduced your argument to the trivially inane. You were claiming that the critic needs to be able to propose a solution, but now you're just claiming the critic needs to say "given flaw X, the solution is to not have flaw X".



