Such testing is paramount to the success of an end product: a fully functioning application that confuses its users will not last long.
The aim is to observe how people interact with the product under realistic conditions, so that developers can identify problem areas and fix them.
Techniques popularly used to gather data during a usability test include the think-aloud protocol, co-discovery learning, and eye tracking.
Synchronous usability testing methodologies involve video conferencing or employ remote application-sharing tools such as WebEx.[6] One of the newer methods developed for conducting a synchronous remote usability test is the use of virtual worlds.[6] Similar to an in-lab study, an asynchronous remote usability test is task-based, and the platform allows researchers to capture clicks and task times.
This style of user testing also provides an opportunity to segment feedback by demographic, attitudinal, and behavioral type.
This approach also provides a vehicle to easily solicit feedback from users in remote areas quickly and with lower organizational overheads.
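As a concrete sketch of the click-and-task-time capture described above, the snippet below models one participant's session on an asynchronous remote testing platform. All class and field names here are illustrative assumptions, not any real product's API.

```python
import time
from dataclasses import dataclass, field

# Hypothetical server-side model of one participant completing one task.
# The platform would record each click and the total task time.
@dataclass
class TaskSession:
    participant_id: str
    task_id: str
    clicks: list = field(default_factory=list)  # (timestamp, element) pairs
    started_at: float = 0.0
    finished_at: float = 0.0

    def start(self):
        self.started_at = time.monotonic()

    def record_click(self, element: str):
        self.clicks.append((time.monotonic(), element))

    def finish(self):
        self.finished_at = time.monotonic()

    @property
    def task_time(self) -> float:
        # Seconds elapsed between start() and finish().
        return self.finished_at - self.started_at

# Usage: one session per participant per task.
session = TaskSession("p-042", "checkout-task")
session.start()
session.record_click("#add-to-cart")
session.record_click("#checkout")
session.finish()
print(f"{len(session.clicks)} clicks in {session.task_time:.2f}s")
```

Aggregating such sessions is what makes it possible to segment feedback by participant attributes, as noted above.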
In recent years, conducting usability testing asynchronously has also become prevalent; it allows testers to provide feedback in their free time and from the comfort of their own home.[8] Nielsen's usability heuristics have continued to evolve in response to user research and new devices. Similar to expert reviews, automated expert reviews provide usability testing through the use of programs given rules for good design and heuristics.
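A toy illustration of such an automated expert review is a program that encodes heuristic rules and flags violations in a machine-readable description of a screen. The rule set and screen format below are invented for illustration; the rule names echo two of Nielsen's well-known heuristics.

```python
# Each rule pairs a heuristic's name with a check over a screen
# description (a plain dict here; a real tool would inspect the UI).
RULES = [
    ("visibility of system status",
     lambda screen: screen.get("has_progress_indicator", False)
     or not screen.get("long_running", False)),
    ("user control and freedom",
     lambda screen: "cancel" in screen.get("actions", [])),
]

def review(screen):
    # Return the names of all heuristics the screen violates.
    return [name for name, check in RULES if not check(screen)]

checkout = {"long_running": True, "actions": ["submit"]}
print(review(checkout))
# → ['visibility of system status', 'user control and freedom']
```

Real tools encode far richer rules, but the structure is the same: design knowledge expressed as machine-checkable predicates.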
The idea of creating surrogate users for usability testing is an ambitious direction for the artificial intelligence community.
For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales.
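Whether a funnel change actually improved the drop-off rate is typically judged statistically. A minimal sketch, using a standard two-proportion z-test with made-up session counts (not data from any real experiment):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: variant B converts 4.6% vs. A's 4.0%, 10,000 sessions each.
z, p = two_proportion_z(400, 10_000, 460, 10_000)
print(f"z ≈ {z:.2f}, p ≈ {p:.3f}")  # z ≈ 2.09, p ≈ 0.037
```

Even this 0.6-point lift clears the conventional p < 0.05 bar at this sample size, which is why funnel steps with heavy traffic are attractive A/B-testing candidates.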
In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems, popularized the concept of using numerous small usability tests, typically with only five participants each, at various stages of the development process.[14] In the early stage, when users are most likely to immediately encounter problems that stop them in their tracks, almost anyone of normal intelligence can serve as a test subject.
When the method is applied to a sufficient number of people over the course of a project, the objections raised above are addressed: the sample size ceases to be small, and usability problems that arise with only occasional users are found.
The value of the method lies in the fact that specific design problems, once encountered, are never seen again because they are immediately eliminated, while the parts that appear successful are tested over and over.
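The five-user figure is commonly justified with the problem-discovery model of Nielsen and Landauer: the share of usability problems found by n testers is 1 − (1 − L)^n, where L is the probability that a single tester encounters a given problem (about 0.31 in their data). A short sketch of that curve:

```python
def problems_found(n_testers, L=0.31):
    # Expected fraction of usability problems uncovered by n testers,
    # assuming each tester independently hits a given problem with
    # probability L (Nielsen & Landauer's estimate: L ≈ 0.31).
    return 1 - (1 - L) ** n_testers

for n in (1, 3, 5, 15):
    print(f"{n:2d} testers -> {problems_found(n):.0%} of problems")
```

With L ≈ 0.31, five testers already uncover roughly 84% of problems, which is why many small rounds beat one large one: fixes land between rounds, and later rounds test the revised design.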
You will often be surprised to learn what the user thought the program was doing at the time they got lost. Usability testing has been a formal subject of academic instruction in different disciplines.[18] Scholar Collin Bjork argues that usability testing is "necessary but insufficient for developing effective OWI, unless it is also coupled with the theories of digital rhetoric."[20] In translated survey products, usability testing has shown that "cultural fitness" must be considered at the sentence and word levels and in the designs for data entry and navigation,[21] and that presenting translation and visual cues of common functionalities (tabs, hyperlinks, drop-down menus, and URLs) helps to improve the user experience.