John Mueller on April 25, 2013
Most IT projects have the goal of automation, but the term is so widely used that it’s easy to become confused as to what precisely anyone means. This article helps developers and managers understand the five types of application interface automation that determine whether users will actually use an application (with joy, even) or churlishly toss it aside.
An application is the invisible participant in a user's interaction with a device, providing access both to data and to other users. To ensure that an application performs as the user expects (and is therefore used, rather than forgotten somewhere), a developer must provide automation that makes it possible for the user to forget the application is even present. When a user can focus completely on the task at hand and not see the application at all, the developer has created something really useful.
Of course, the best application is the one that requires little or no user interaction. For example, consider a mobile banking app (a single-purpose micro-application). Instead of optimizing the forms into which the user types, the developers can automate the process: Allow a user to scan a check using her smartphone, and automatically deposit the money into a bank account without her entering any information. That may be the perfect application, since the user does nothing unless the default account won’t work for the deposit. However, most applications require some level of input and interaction; and that’s what this article is about.
A lot of texts repeat the concept of the invisible application. They imply that automation is the key to making truly useful applications. The only problem is that no one’s really explored what automation is or why it’s important. Some developers know they’re supposed to automate something, but are clueless as to what that might be. Business stakeholders, managers, users, designers, and developers need to work together to define the precise sorts of automation that an application must have to make it useful.
Here are the five levels of automation that a great application includes.
Many applications start in a default state that someone must configure before the user can use the application. Users find this practice annoying, to say the least, and IT departments are almost as grumpy about it (because inevitably the system is misconfigured, and then whom does the user call?). The application should remember the user's preferences and configure itself for that particular user as part of starting up.
A new design problem with applications today is their need to run on multiple devices and in multiple environments. A user wants to access the application from the desktop, tablet, and smartphone with equal ease. Storing the user preferences on individual systems doesn’t work; the user changes preferences as needed to meet new challenges. Some applications (Outlook.com, as an example) are starting to store user preferences on the server so that they’re available no matter where the user interacts with the application.
As part of the user preferences, the application must detect the device and configure the application on the fly. Settings that look good at the desktop may look only acceptable on a tablet, and may not work at all on a smartphone. In order to make the application disappear, the automation must automatically detect and configure itself to fit within the user’s environment.
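The on-the-fly configuration described above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the function names, breakpoints, and settings keys are all hypothetical. The idea is that per-device defaults come first, and the user's server-stored preferences override whatever is safe to override on that device.

```python
# Sketch of device-aware startup configuration (all names hypothetical).
# Server-stored user preferences are merged over device-derived defaults
# so the user never has to configure anything by hand.

def classify_device(viewport_width: int) -> str:
    """Rough device classification from viewport width in CSS pixels."""
    if viewport_width < 600:
        return "smartphone"
    if viewport_width < 1024:
        return "tablet"
    return "desktop"

# Per-device defaults: smaller screens get simpler, denser layouts.
DEVICE_DEFAULTS = {
    "smartphone": {"columns": 1, "font_size": 16, "show_sidebar": False},
    "tablet":     {"columns": 2, "font_size": 14, "show_sidebar": False},
    "desktop":    {"columns": 3, "font_size": 12, "show_sidebar": True},
}

def startup_settings(user_prefs: dict, viewport_width: int) -> dict:
    """Apply device defaults first, then the user's server-stored
    preferences on top, so both kinds of automation cooperate."""
    device = classify_device(viewport_width)
    settings = dict(DEVICE_DEFAULTS[device])
    settings.update(user_prefs)
    return settings

print(startup_settings({"font_size": 18}, 480))
# The user's preferred font size wins; the layout still fits a phone.
```

A real application would obviously detect the device from the runtime environment rather than taking a width as a parameter, but the merge order (device defaults, then user preferences) is the part that matters.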
The main goal of data input is to allow user interaction with new or updated data in a manner that prevents errors. Users don't always care about errors during the input process, but they are certain to care about them when they view the data or interact with it. Misspelling words or using the wrong capitalization may not seem like a significant problem until you try to get the data back out of the computer.
To handle data input gracefully, an application must employ several kinds of automation.
The best way to avoid errors is to limit user input so he can only pick from correct choices. Checkboxes, radio buttons, drop down list boxes, and the like have long been used to avoid errors at data-entry time. Developers today have a considerable range of controls available to ensure that input is correct without actually having the user touch a keyboard. For example, a slider can help the user choose from a range of values. Date and time pickers ensure that values appear in the correct format.
Sometimes a user must type values; this is where errors become a problem. Spelling and grammar checkers can help prevent problems, but many applications lack even this simple level of automation. Automatic correction is also useful; some word processors use this feature to good effect. For example, the user types email and the application automatically changes it to e-mail—the preferred presentation for your organization. The first form isn’t wrong, but to ensure that the data can be retrieved later with consistency, the application changes the form.
Automatic data parsing is also essential. A user inputs a telephone number as (555)555-5555, but your organization prefers 1-555-555-5555. The correct data is already present, so there's no need to ask the user to change it; your application should change the format automatically.
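Both kinds of silent correction described above, house-style substitution and format normalization, can be sketched briefly. The substitution table and the US-style phone format are illustrative assumptions; an organization would supply its own rules.

```python
import re

# Sketch of automatic input normalization. The rules here are examples;
# an organization's preferred forms would come from its own style table.
AUTOCORRECT = {"email": "e-mail"}  # hypothetical house-style substitution

def normalize_phone(raw: str) -> str:
    """Convert any recognizable 10-digit US number to 1-NNN-NNN-NNNN."""
    digits = re.sub(r"\D", "", raw)        # strip everything but digits
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                # drop an existing country code
    if len(digits) != 10:
        return raw                         # can't parse; leave as typed
    return f"1-{digits[:3]}-{digits[3:6]}-{digits[6:]}"

def autocorrect(word: str) -> str:
    """Replace a word with the organization's preferred form, if any."""
    return AUTOCORRECT.get(word.lower(), word)

print(normalize_phone("(555)555-5555"))    # 1-555-555-5555
print(autocorrect("email"))                # e-mail
```

Note that the normalizer gives up and returns the input unchanged when it can't recognize the number; silently mangling data the automation doesn't understand would be worse than leaving it alone.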
Even the best automation can’t keep up with current language usage, so allow some means for the user to signal that a seemingly incorrect word is correct. The application should conditionally accept the new word, an approval process should be in place to check it, and then the word should be added to the global dictionary so that every user has automated access to the word. This is one level of automation that, unfortunately, doesn’t appear in any application today, so users are constantly creating custom dictionaries full of words that may not be words at all, wreaking havoc on any attempt to retrieve data later.
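Since no shipping application implements this workflow (per the paragraph above), the following is purely a hypothetical design sketch of the conditional-acceptance idea: a user-flagged word is accepted provisionally so the user isn't nagged, queued for review, and only enters the shared dictionary once a reviewer approves it.

```python
# Hypothetical sketch of the conditional-acceptance workflow: flagged
# words are tolerated immediately but only become globally available
# after an approval step.

class SharedDictionary:
    def __init__(self, known_words):
        self.approved = set(known_words)   # words every user gets
        self.pending = set()               # awaiting reviewer approval

    def is_acceptable(self, word: str) -> bool:
        """Conditionally accept pending words so the user isn't nagged."""
        return word in self.approved or word in self.pending

    def flag_as_correct(self, word: str) -> None:
        """The user signals that a seemingly incorrect word is correct."""
        if word not in self.approved:
            self.pending.add(word)

    def review(self, word: str, approve: bool) -> None:
        """A reviewer approves or rejects the word for the global list."""
        self.pending.discard(word)
        if approve:
            self.approved.add(word)        # now available to every user

d = SharedDictionary(["deposit", "account"])
d.flag_as_correct("microservice")
print(d.is_acceptable("microservice"))     # conditionally accepted
d.review("microservice", approve=True)
print("microservice" in d.approved)        # promoted to the shared list
```

The key property is that rejected words drop back out, so one user's typo never pollutes everyone else's dictionary.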
Note: Even though user aids like wizards generally aren't considered part of automation, make an effort to include them in your application as part of the data input process. Provide the user with step-by-step instructions whenever possible to minimize the chance of incorrect input. In addition, use visual features such as progress bars to show how far the process has progressed. Giving the user a sense of accomplishment makes the input process a little less tedious and, possibly most important, keeps him from wrongly concluding that the software crashed when the process unavoidably takes a long time.
Users want applications that react to their needs immediately. Applications must make a distinction between client-side and server-side processing to ensure the user has a good experience. Many applications perform all processing at the client or all of it at the server, rather than distinguishing between the two environments and using the best location for each task.
User input should be validated at the client before being sent to the server. This reduces the number of network transmissions and shortens the time required for a user to discover an input error. The user expects to learn about an error before moving to the next field of a form, rather than having the application report back a batch of errors accompanied by nebulous error information.
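Per-field validation of this sort can be sketched simply. The field names, rules, and limits below are illustrative, not from any particular application; the point is that each rule runs as the user leaves the field, producing one specific message immediately instead of a vague batch after a server round trip.

```python
# Sketch of per-field client-side validation (field names and limits
# are illustrative). Each rule returns a specific message, or None if
# the value is acceptable, so errors surface one field at a time.

FIELD_RULES = {
    "amount": lambda v: "Amount must be a positive number"
              if not v.replace(".", "", 1).isdigit() or float(v) <= 0
              else None,
    "account": lambda v: "Account number must be 8 digits"
               if not (v.isdigit() and len(v) == 8)
               else None,
}

def validate_field(name: str, value: str):
    """Run the rule for one field; return an error message or None."""
    rule = FIELD_RULES.get(name)
    return rule(value) if rule else None

print(validate_field("amount", "-5"))         # caught before submission
print(validate_field("account", "12345678"))  # None: valid
```

Server-side validation still has to happen (the client can be bypassed), but running the same rules at the client is what makes errors feel immediate to the user.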
For example, that bank mobile app includes a nifty feature to scan a check for deposit. For security reasons, the maximum dollar amount that can be deposited with the app is $1,000. But instead of the app recognizing that the scanned check is over the limit and immediately alerting the user, it waits until the entire deposit is submitted before sending a "Sorry, you can't do that" message. Why make someone wait to learn about the error or limitation?
Performing calculations and other computation- or database-intensive tasks that have nothing to do with the user interface on the client system needlessly burdens the client, creates a throughput bottleneck, and causes the application to work more slowly than it should. Use server-side processing for tasks that would otherwise require the client to make a number of calls to the server anyway.
A simple way to look at the problem is to consider each network transmission and ask whether the trip is really necessary. In addition, consider whether a task is user-oriented (requiring client-side processing) or processing-oriented (requiring server-side processing).
Some applications leave the user wondering whether a task has been completed successfully. Some users shrug and go on to the next task, while others get upset because they don't know whether the task was successful.
The user shouldn’t have to think about this issue at all; an application should provide output at the completion of every task. The output need not be a process-interrupting dialog box, but there should be some output. For example, some applications use the status bar to display a non-interrupting message that some users read and others ignore. The point is that the information is available and that the application automatically tells the user about task status without making anyone guess. Especially guess wrong.
The output format is also important, and you need to make it flexible. Many applications rely on a one-size-fits-all approach for output presentation. The output may work well for a desktop application, but look terrible or be unreadable when displayed on a smartphone. An application must detect and format the output for the device that the user is utilizing for a given task. In many cases, making this work means storing predefined settings on the server to ensure the presentation is correct.
Users make errors of all sorts. Displaying a dialog box that announces an error in language the user doesn't understand is hardly helpful. Every user shows specific patterns in making errors during data input. Creating an application where most, if not all, errors are resolved automatically or with minimal user help is essential if you want to maintain a productive data flow.
Some word processors provide an example of modern applications that detect and store these patterns for use in correcting user input. When the user makes a mistake in typing, many word processors correct the error in the background; the user may not even notice. This is a kind of automation that your application should provide in order to resolve errors quickly. When a user has to think through the same error every time it occurs, it becomes annoying and the user begins wasting time in frustration.
Modern systems have enough processing power and storage capacity to track user errors over time. Storing common user errors, for specific users, along with the error resolution that applies to that user, can save a significant amount of time and ensure that many errors are resolved automatically for the user.
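One hypothetical shape for this per-user error tracking is a small store keyed on the user and the erroneous input: the first time an error occurs, the user resolves it by hand; the resolution is remembered and applied automatically from then on. The class and names below are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical sketch of per-user error tracking: remembered
# resolutions are replayed automatically on repeat occurrences.

class ErrorResolver:
    def __init__(self):
        self.resolutions = {}              # (user, mistake) -> correction
        self.counts = defaultdict(int)     # how often each mistake recurs

    def resolve(self, user: str, mistake: str):
        """Return a remembered correction, or None if the user
        must resolve this error manually (the first time)."""
        self.counts[(user, mistake)] += 1
        return self.resolutions.get((user, mistake))

    def remember(self, user: str, mistake: str, correction: str):
        """Store the resolution the user chose, for automatic reuse."""
        self.resolutions[(user, mistake)] = correction

r = ErrorResolver()
print(r.resolve("ann", "teh"))       # None: first time, Ann fixes it
r.remember("ann", "teh", "the")
print(r.resolve("ann", "teh"))       # the: resolved automatically now
```

Keeping the resolutions per user matters: two users can legitimately want different corrections for the same input, and the occurrence counts give the application a basis for deciding which corrections are safe to apply silently.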
Automation is an essential component in modern applications that means the difference between an application users adore and one they ignore. It may seem bothersome to keep such close watch over user activity, but this is precisely what an application needs to do in order to be successful. Users should be able to count on the automation to correctly diagnose situations and react on the user’s behalf—allowing the user to maintain focus on the task at hand and increasing overall user efficiency and satisfaction.