Don't get me wrong. I love computers. I live with them, wear them, program them, read about them… but they OWN us. As an entrepreneur and computer scientist, I'm part of the problem. (Yes!) I create programs, apps, web pages, and physical devices that require individuals to figure out how to use them. We create user interface screens with buttons, icons, tree controls, etc., and hope that humans will figure out how to use them and what's possible. Unfortunately, most of the time humans don't understand how to use a program and have to read a manual or web page, or simply ask other people, to figure out what to do or how to do it. Anyone who uses a computer or device has experienced programs with varying degrees of "usability," good and bad. We have all also watched other people struggle with what seems like a simple task, e.g. changing the volume, brightness, or font, and fail to do it.
As a designer/developer, it is frustrating that often 80% to 90% of a program's functionality goes unused because the user cannot figure out how to use it, or whether it's even there in the first place. Developers and designers spend a lot of time trying to make programs simple to use, whether that means helping a user find a feature or informing a user that a feature exists. To solve this, developers and designers are limited to the arcane 2D interface methods (screens) of the past: menus, drop-downs, sliders, tree controls, and other buttons.
There have been great improvements in interface design, most notably the iPhone interface, especially when it first came out. The iPhone allowed humans to use their fingers directly; previously, interaction had been abstracted through a mouse, an indirect form of input. The iPhone also broke many tasks down into smaller units compared to the monolithic programs on Windows/Mac. Because the iPhone screen was smaller, only a few of the many possible features could be included, which made apps easier to use and their features easier to discover. It was easier for a human to figure out what to do. Humans could also physically "hold" the interface. This is subtle, but I feel important: it provides more of a sense of physical control, or directness, than a larger non-touchable monitor does. Even with the iPhone computer-overlord, however, humans still have to perform their animal tricks for their "treats," just not as many as Windows/Mac programs require.
Today the iPhone and its cousin devices (Androids, etc.) are becoming overly complicated for people. As time marches on, more functionality is added, and people are starting to see the same problems as on the Windows/Mac platforms. Whether it's a particular app and its added functionality or the sheer number of apps, the complexity has increased. Figuring out how to do something, even something simple, is difficult. Even more confounding for many people is discovering that something is possible at all. Given the technology available to us, it should all be possible with little effort by humans, across various programs and apps.
There is a solution already working its way into people's lives, devices, and computers.
Natural language processing (NLP), and specifically transcription from voice, will provide the future where humans can use their voice and their hands to turn the tables, becoming the human-overlord of the computer.
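To make the idea concrete, here is a toy sketch, my own illustration rather than any product's API, of what a voice-centric pipeline looks like: a transcribed utterance is matched against intents and mapped directly to the action the user wants, with no menus or tree controls to hunt through. The intent names and phrasings below are hypothetical.

```python
# Toy sketch of a voice-centric interface: transcript -> intent -> action.
# All intent names and phrasings are hypothetical examples, not a real API.

INTENTS = {
    "volume_up":     ["turn up the volume", "louder", "increase volume"],
    "volume_down":   ["turn down the volume", "quieter", "decrease volume"],
    "brightness_up": ["brighter", "increase brightness"],
    "font_bigger":   ["make the font bigger", "increase font size"],
}

def match_intent(transcript: str):
    """Map a voice transcript to an intent by simple phrase matching."""
    text = transcript.lower().strip()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None  # fall back to asking the user, not to a settings screen

# The user says what they want; the feature finds them.
print(match_intent("Please turn up the volume"))  # -> volume_up
```

A real system would replace the phrase table with a trained NLP model, but the shape is the same: the user states the goal, and the software, not the human, does the work of locating the feature.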
My mission is to create and contribute to the human-overlord cause so that we can benefit from ALL the functionality companies and developers provide without having to perform circus tricks. Voice-centric interfaces are a mind shift for developers and users alike. Developers and designers need to buy into the NLP religion to get this done. It is inevitable; we just need to accelerate it.
With NLP we will be free. YEAH!