Usability of User Interfaces

Sandun Jayasekara

6 min read · Aug 7, 2021

Heuristic Evaluation

Heuristic evaluation is a thorough assessment of a product’s user interface, and its purpose is to detect usability issues that may occur when users interact with a product and identify ways to resolve them.

When we think about the design of a product, the first thought that comes to mind is how something looks:

  • Is it eye-catching?
  • Do the colors complement each other?
  • Does it have the aesthetic appeal that will lure consumers in?

While all of this holds for a good design, a great design needs to go the extra mile. How do you achieve this?

By making sure your product not only looks awesome but also provides a seamless user experience.

The 10 usability heuristics are:

  1. Visibility of System Status (users should know the system status at all times and get feedback on interactions with it);
  2. Match between system and the real world (the system should resemble the experiences that users already had);
  3. User control and freedom (users should be able to reverse their action if done by mistake);
  4. Consistency and standards (similar system elements should look and behave similarly);
  5. Error prevention (minimize the likelihood of making mistakes);
  6. Recognition rather than recall (minimize the user’s memory load; users should be able to interact with the system without recalling prior information or context);
  7. Flexibility and efficiency of use (both new and experienced users should be able to efficiently use the system);
  8. An aesthetic and minimalist design (declutter as much as possible, less is more);
  9. Help users recognize, diagnose, and recover from errors (make error messages understandable, and suggest ways to fix an error);
  10. Help and documentation (if a user has a hard time interacting with your app, make sure there’s help that’s easily accessible).
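In practice, the findings of a heuristic evaluation are often recorded as structured data: each issue is tagged with the heuristic it violates and given a severity rating so fixes can be prioritized. A minimal sketch in Python (the 0–4 severity scale is a common convention; the field names and example findings are illustrative):

```python
from dataclasses import dataclass

# A common severity convention: 0 = not a problem ... 4 = usability catastrophe
SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor", 3: "major", 4: "catastrophe"}

@dataclass
class Finding:
    heuristic: str   # which of the 10 heuristics is violated
    location: str    # where in the UI the issue occurs
    severity: int    # 0-4 severity rating

findings = [
    Finding("Visibility of system status", "no spinner during checkout", 3),
    Finding("Error prevention", "delete button has no confirmation", 4),
    Finding("Aesthetic and minimalist design", "cluttered settings page", 1),
]

# Prioritize fixes by severity, worst first
for f in sorted(findings, key=lambda f: -f.severity):
    print(f"[{SEVERITY[f.severity]}] {f.heuristic}: {f.location}")
```

Sorting by severity gives the team an immediate, defensible fix order: the confirmation-free delete button surfaces first, the cosmetic clutter last.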

There are several inspection methods for heuristic evaluation:

  • Heuristic analysis
  • Cognitive walkthrough
  • User testing

While the end goal is similar, the efficiency and validity of each are not.

Cognitive walkthrough

  • Who: New user.
  • What: Performs specific user tasks in line with user goals.
  • Why: To determine if the sequential processes to get from point A (user task) to point B (user goal) work in the correct order they were designed to.

User testing

  • Who: End-user.
  • What: Uses the digital product in realistic circumstances.
  • Why: To understand how representative users will complete typical tasks in real-life situations.

Heuristic analysis

  • Who: System expert.
  • What: Compares usability to predefined heuristics.
  • Why: To see if the digital product can be used in a way that is most compatible for users and aligns with recognized usability principles.

Of the three usability inspection methods, heuristic analysis is the most reliable, as its tests are more rigorous and systematic.

Walk-Throughs

Walk-throughs offer an alternative approach to heuristic evaluation for predicting users’ problems without doing user testing. As the name suggests, walk-throughs involve walking through a task with the product and noting problematic usability features. While most walk-through methods do not involve users, others, such as pluralistic walk-throughs, involve a team that may include users, as well as developers and usability specialists.

Some notable walkthrough examples from apps and websites:

  • Grammarly

Grammarly is a digital writing tool that checks your grammar. You’ve likely heard of it already because it has an impressive 7 million daily active users.

Upon installing the Chrome Extension, Grammarly asks new users to personalize their experience.

  • MyFitnessPal

Fitness applies to everybody. Every single person needs to stay fit and active in some way. This makes the fitness industry lucrative, but it also makes it very competitive. For a fitness app like MyFitnessPal, an amazing walkthrough is vital for encouraging new users to get started.

After asking a couple of necessary questions, the app creates a custom plan for new users designed to help them reach their fitness goals.

  • Honey

Honey is a coupon site that automatically applies online coupons for users when they are shopping on eCommerce websites.

Web Analytics

Web analytics is the process of analyzing the behavior of visitors to a website. This involves tracking, reviewing, and reporting data to measure web activity, including the use of a website and its components, such as webpages, images, and videos.

Data collected through web analytics may include traffic sources, referring sites, page views, paths taken, and conversion rates. The compiled data often forms part of customer relationship management analytics (CRM analytics) to facilitate and streamline better business decisions.

Why do we need web analytics?

  • Determine the likelihood that a given customer will repurchase a product after purchasing it in the past.
  • Personalize the site to customers who visit it repeatedly.
  • Monitor the amount of money individual customers or specific groups of customers spend.
  • Observe the geographic regions from which the most and the least customers visit the site and purchase specific products.
  • Predict which products customers are most and least likely to buy in the future.
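Most of these questions reduce to simple aggregations over raw event logs. A minimal sketch of computing page views, traffic sources, and a conversion rate (the log format, field names, and events here are assumptions for illustration):

```python
from collections import Counter

# Hypothetical page-view log: (visitor_id, page, referrer) tuples
events = [
    ("u1", "/home", "google"),
    ("u1", "/product", "google"),
    ("u1", "/checkout", "google"),
    ("u2", "/home", "twitter"),
    ("u2", "/product", "twitter"),
    ("u3", "/home", "google"),
]

# Page views: how many times each page was loaded
page_views = Counter(page for _, page, _ in events)

# Traffic sources: count each unique visitor once, by their first referrer
first_referrer = {}
for visitor, _, referrer in events:
    first_referrer.setdefault(visitor, referrer)
traffic_sources = Counter(first_referrer.values())

# Conversion rate: share of visitors who reached /checkout
visitors = {v for v, _, _ in events}
converted = {v for v, page, _ in events if page == "/checkout"}
conversion_rate = len(converted) / len(visitors)

print(page_views)
print(traffic_sources)
print(f"conversion rate: {conversion_rate:.0%}")
```

Real analytics platforms do the same kind of counting at scale, adding sessionization, bot filtering, and attribution rules on top.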

Examples of web analytics tools:

  • Google Analytics
  • Optimize
  • Kissmetrics
  • Crazy Egg

A/B Testing

An A/B test aims to compare the performance of two items or variations against one another. In product management, A/B tests are often used to identify the best-performing option. For example, two variations of a new user interface could be tested, and, in this case, the variation that receives the most user engagement would win the A/B test.

An A/B test is used to determine which version or variant of something will perform more effectively in the market. This strategy is commonly used by marketing and advertising professionals, who show multiple versions of an ad, marketing email, or web page to randomly selected users, and then analyze the results. Product managers can also use A/B testing to develop products that will resonate with users.

There are many benefits to using A/B tests, including:

  • Marketers (or product managers) can focus on very specific elements to test
  • The results are immediate and easy to analyze
  • Unlike surveys, where users’ answers are theoretical, A/B tests measure real engagement with the assets.

Examples of A/B testing:

  1. HubSpot’s Mobile Calls-to-Action
  2. Groove’s Landing Page Design
  3. HubSpot’s Site Search
  4. Csek Creative Homepage Design
  5. HubSpot’s Email vs. In-App Notification Center
  6. Humana’s Site Banners
  7. Unbounce’s Tweet vs. Email CTA

Predictive modeling

Predictive modeling, also called predictive analytics, is a mathematical process that seeks to predict future events or outcomes by analyzing patterns that are likely to forecast future results. The goal of predictive modeling is to answer this question: “Based on known past behavior, what is most likely to happen in the future?”

Once data has been collected, the analyst selects and trains statistical models using historical data. Although it may be tempting to think that big data makes predictive models more accurate, statistical theorems show that, after a certain point, feeding more data into a predictive analytics model does not improve accuracy. The old saying, “All models are wrong, but some are useful,” is often cited as a caution against relying solely on predictive models to determine future action.

Predictive modeling is often performed using curve and surface fitting, time series regression, or machine learning approaches. Regardless of the approach used, the process of creating a predictive model is the same across methods. The steps are:

  1. Clean the data by removing outliers and treating missing data
  2. Identify a parametric or nonparametric predictive modeling approach to use
  3. Preprocess the data into a form suitable for the chosen modeling algorithm
  4. Specify a subset of the data to be used for training the model
  5. Train, or estimate, model parameters from the training data set
  6. Conduct model performance or goodness-of-fit tests to check model adequacy
  7. Validate predictive modeling accuracy on data not used for calibrating the model
  8. Use the model for prediction if satisfied with its performance
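The eight steps above can be sketched end-to-end on a toy problem. A minimal example using ordinary least squares regression (the dataset is synthetic, generated from a known line plus noise, standing in for already-cleaned data from steps 1–3):

```python
import random

# Synthetic data: y ≈ 3x + 5 with Gaussian noise
random.seed(0)
data = [(x, 3 * x + 5 + random.gauss(0, 2)) for x in range(100)]

# Step 4: hold out a validation set that is not used for training
random.shuffle(data)
train, valid = data[:80], data[80:]

# Step 5: estimate parameters of y = a*x + b by ordinary least squares
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in train) / \
    sum((x - mean_x) ** 2 for x, _ in train)
b = mean_y - a * mean_x

# Steps 6-7: check goodness of fit on held-out data (RMSE here)
rmse = (sum((a * x + b - y) ** 2 for x, y in valid) / len(valid)) ** 0.5
print(f"fitted y = {a:.2f}x + {b:.2f}, validation RMSE = {rmse:.2f}")

# Step 8: if satisfied with performance, use the model for prediction
def predict(x):
    return a * x + b
```

Because the model is validated on data it never saw during training (step 7), the RMSE is an honest estimate of how it will perform on genuinely new inputs, which is the whole point of the held-out split.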

Thank you.


Sandun Jayasekara

Software Engineer || Undergraduate, University of Kelaniya, Sri Lanka