Category: Software Performance

Measuring Performance Where Users Feel It


Why Dashboards Don’t Reflect Real Work

Many organizations rely on their applications to get work done and serve their customers. Have you paid attention to how your application has been performing lately?

Can a user log in without issue?

Do navigation menus load quickly enough?

Are users experiencing frequent crashes?

Is downtime frequently a problem?

It can be difficult for businesses to gauge the performance of their systems if they haven’t defined what “good” performance means. It’s not enough that your system turns on and most work gets done, even if it’s a bit slow.

Metrics include:

  • Subjective satisfaction
  • Operation success rate
  • User error rate
  • Time spent on tasks
  • Ease of use
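
As a rough illustration, several of these metrics can be computed from even a simple task log. The log format and numbers below are hypothetical, purely to show the idea:

```python
# Hypothetical task log: (task name, seconds spent, completed successfully?)
task_log = [
    ("login", 4.2, True),
    ("login", 3.8, True),
    ("open_invoice", 12.5, False),
    ("open_invoice", 9.1, True),
]

# Operation success rate: how often users finish what they started.
success_rate = sum(ok for _, _, ok in task_log) / len(task_log)

# Time spent on tasks: how long work actually takes end to end.
avg_time = sum(t for _, t, _ in task_log) / len(task_log)

print(f"success rate: {success_rate:.0%}, avg time on task: {avg_time:.1f}s")
# → success rate: 75%, avg time on task: 7.4s
```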

However, what do you do when your metrics all look fine, but users are still left frustrated by poor performance?

In general, good performance is when your users don’t feel slowed down by the environment they’re trying to work in. When a user is experiencing a responsive interface, they don’t think about IT because they don’t need to. This is how we know we’ve succeeded. When a user is frustrated and feels like they cannot work because things are crashing or don’t load, this signifies a poorly engineered deployment.

Where Do Users Feel Pain?

Most organizations think they’re measuring performance, but they’re actually just measuring system health — not the workflows where users actually feel friction.

To an engineer, measuring performance may mean looking at disk response, throughput, network latency, bandwidth, CPU/memory, etc. But measuring performance also means paying attention to the specific, repeatable, and measurable tasks that impact users.

For example, what does performance look like when opening an application and clicking a button?

Does the application load within your designated metrics of “good enough” performance? Does it take too long? Does the page crash?

When a user clicks a button to complete a task, does that operation happen within milliseconds? Seconds? Minutes? Do they need to try a couple of times for it to work?

Users feel performance pain inside their daily workflows. Actions repeated day after day stand out to users, and they create friction when those actions can’t be completed without issue.
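
One lightweight way to capture that friction is to time the task itself, from the user’s action to the result, against a threshold agreed on in advance. A minimal sketch, where the 500 ms threshold and the `submit_form` stand-in are both assumptions:

```python
import time

SLOW_THRESHOLD_MS = 500  # assumption: what "fast enough" means for this task

def timed(operation):
    """Run a user-facing operation and measure how long the user waited."""
    start = time.perf_counter()
    result = operation()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

def submit_form():
    """Stand-in for a real click-to-done task (e.g., saving a record)."""
    time.sleep(0.05)  # simulate 50 ms of server-side work
    return "ok"

result, elapsed_ms = timed(submit_form)
status = "OK" if elapsed_ms <= SLOW_THRESHOLD_MS else "TOO SLOW"
print(f"submit_form: {elapsed_ms:.0f} ms ({status})")
```

Timing the whole task this way reports what the user actually waited, which is exactly what generic CPU or disk graphs miss.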

You may be wondering, what does it mean to measure performance where users feel it? This means ensuring the metrics you’re measuring are tailored to the specific outcomes that matter to your business.

If a dashboard is not customized to the unique workflow of an organization, then health ≠ performance ≠ experience.

Health: Whether the system is working on a basic level — is the system up and running?

Performance: Server vitals, disk performance, network performance — how are the individual pieces of the system operating? How are they working together?

Experience: What is the average user actually feeling?

At a glance, your metrics may seem fine, but if you’re not measuring specific workflows alongside health and performance, you’re not getting a clear picture of the user experience.

Consider a large-scale payroll processing company.

Let’s say all their clients process payroll concurrently and run into problems. Pages load slowly and frequently crash. Nothing takes minutes to load, but the issues are significant enough to slow down work and frustrate customers.

When the company starts to receive complaints, they take a look at their dashboard for signs of an issue:

The network connectivity looks fine.

Their software is up to date.

The hardware is operating appropriately.

The usual metrics look fine, so what is the issue?  

Problems with the application persist, so they decide to bring in a Managed Service Provider (MSP). The MSP evaluates the system and discovers that its architecture isn’t capable of handling such heavy traffic. During busy times, the application risks grinding to a halt, impacting every customer.

The infrastructure didn’t scale, and the architecture hadn’t been built for speed and growth, so performance degraded over time even though the metrics never flagged an issue. Meanwhile, inconsistent and degrading performance was damaging their reputation.

The MSP was able to improve the responsiveness and throughput of the architecture with no downtime for the company’s 800 customers. The MSP also instituted bespoke tools for accurate performance monitoring. Customers are now more satisfied with their experience with the organization’s application, bolstering its reputation and profits.

 

Why Does the User Experience Matter?

If work is happening a bit slowly but still getting done, you might not realize the impact of poor performance, especially if you don’t know how to measure it or your dashboard says everything is fine. You may not even notice there’s an issue until the problem becomes an expensive one. The key is knowing how to measure and monitor performance so you can catch and address issues before they start to cost you.

Measuring metrics of specific application or workflow performance is a common blind spot in performance monitoring. Any solution can look at CPU, memory, or disks, but it requires thought and consideration to build monitoring and define metrics around a customized deployment.

For example, a payroll processing client may measure:

  • Transaction latency during peak payroll windows
  • Concurrency limits when thousands of employees submit payroll at once
  • Queue depth during processing
  • Error rates under heavy loads

Their unique deployment needs monitoring built around real payroll workflows, not generic infrastructure health.
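
A sketch of what tracking two of those metrics might look like, using a nearest-rank 95th percentile and made-up numbers for one peak window (none of this reflects a real system):

```python
# Hypothetical sample: (latency in seconds, succeeded?) for payroll
# submissions recorded during one peak processing window.
transactions = [
    (0.8, True), (1.1, True), (0.9, True), (6.4, False),
    (1.3, True), (0.7, True), (5.9, False), (1.0, True),
]

latencies = sorted(t for t, _ in transactions)

# Nearest-rank 95th percentile: the latency all but the slowest 5% beat.
p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]

# Error rate under load: share of submissions that failed outright.
error_rate = sum(1 for _, ok in transactions if not ok) / len(transactions)

print(f"p95 latency: {p95:.1f}s, error rate: {error_rate:.0%}")
# → p95 latency: 6.4s, error rate: 25%
```

A percentile matters more than an average here: the two slow, failed submissions are invisible in a healthy-looking mean but are exactly what frustrated clients experience.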

Let’s get more specific.

How would we evaluate performance in the context of how long it takes to generate a report on PTO usage for an organization? This company would need a highly available database and web servers to accommodate large changes in request volume.

In this context, a unique metric they need to watch is how long these reports take to generate. This specific workflow wouldn’t appear on a typical dashboard because the load depends on which organization is generating the report, as well as how many users it has.

Instead, we would work with the client to do periodic testing. From the dashboard side, our engineers would look specifically at how responsive the web servers are to incoming requests so we can tell if they’re slowing down unexpectedly. We also monitor the websites users rely on to log in and generate reports so we know when those sites are behaving unexpectedly slowly.
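
That kind of periodic test can be as simple as a synthetic check that requests the report the way a user would and compares the wait against an agreed budget. A hedged sketch, where the URL, the 30-second budget, and the injectable `fetch` hook are all illustrative:

```python
import time
import urllib.request

REPORT_URL = "https://example.com/reports/pto"  # placeholder, not a real endpoint
BUDGET_SECONDS = 30  # assumption: report time agreed on with the client

def check_report(url=REPORT_URL, budget=BUDGET_SECONDS, fetch=None):
    """Generate the report the way a user would and judge the wait."""
    # `fetch` is injectable so the check can be exercised without a live site.
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u, timeout=120).read()
    start = time.perf_counter()
    try:
        fetch(url)
        ok = True
    except Exception:
        ok = False  # a failed request is worse than a slow one
    elapsed = time.perf_counter() - start
    return {"ok": ok, "seconds": elapsed, "within_budget": ok and elapsed <= budget}
```

Run on a schedule (cron or a monitoring agent), a check like this can raise an alert the moment `within_budget` turns false, before users start reporting the slowdown themselves.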

When users become impacted by poor performance, this can significantly hurt your organization in many ways.

Tools aren’t working the way they should → employees lose confidence and implement workarounds

Systems are lagging → work slows and productivity is limited

Work isn’t getting done on time → decisions are delayed

Staff get frustrated → morale decreases and staff quit

Poor user experience → you’re unable to sell your product to customers

Customers are left unsatisfied by their experience → your reputation and revenue take a hit

Performance and satisfaction are highly correlated — a poor user experience means dissatisfaction with your business.

Performance issues are also expensive in the literal sense.

Maybe your hardware is outdated and needs to be completely replaced with newer equipment capable of meeting the demands of your business.

Maybe your IT team decides to deploy multiple needless products in an attempt to address the symptoms of an issue without searching for a cause.

Maybe increased shadow IT puts your company at risk of a ransomware infection, lawsuits, privacy issues, and non-compliance.

Paying attention to the user experience tells you when things are not performing the way they should. It’s also important to appropriately monitor your system for issues so they can be addressed before users feel it.

What If My Metrics Look Fine?

This is the core of the issue. Performance monitoring tools are insufficient unless they’ve been customized to a business’s needs. Your dashboard must be tailored to the specific workflows of your organization. If it’s not, a green dashboard will tell you whether something is running, but it will miss workflow-specific delays.

Catching issues specific to your workflow is how you can reduce friction. Otherwise, if you’re not looking at the right metrics, you may not know there’s a problem until it’s too late. Letting performance issues go unaddressed frustrates employees and customers, hurts your reputation, and threatens the profitability and growth of your company. Reliable performance translates to trust in your organization to deliver on its promises.

You may not know that your environment isn’t performing the way it should if you don’t know what to look for. For example, an issue like high latency is usually caused by a combination of variables and system failures. Issues must be spotted early because users will typically tolerate some slowness. That slowness keeps getting worse, and by the time users are impacted enough to report it, it’s already too late for an easy solution.

It’s also important to remember that performance issues can seem minor now but become major disruptions as your company grows. Monitoring general metrics and having a system that supports “good” performance today is one thing, but it’s crucial to have an environment capable of scaling with your business, along with an efficient monitoring system. Otherwise, user pain only gets worse and growth is severely limited.

 

The Protected Harbor Difference

At Protected Harbor, when we come in, our job is to evaluate your current system, identify areas of improvement, and implement the recommended solutions.

We take the time to understand each client’s needs, workflows, and growth goals — and design a custom application built specifically for how your business operates.

Our engineers work hard to create bespoke tools that are designed to match you — not force your organization into a box of general performance metrics.

Dashboards that are specific to the needs of an organization generate metrics that accurately reflect where problems lie. Building an environment for scalability is also crucial for ensuring performance remains steady while your business grows. Our 24/7 in-house monitoring tells our team when an issue has been spotted, allowing us to act fast so users aren’t impacted. We prioritize a proactive response rather than reacting to issues after they’ve already disrupted your users and your organization as a whole.

 

Framework: Are You Measuring Performance Effectively?

Overall, it’s important to pay attention to the user experience because this is a key way to identify if there’s a problem in your deployment. Ideally, issues should be addressed before the user notices, which is why intentional monitoring is crucial.

A dashboard that isn’t customized to your organization will produce metrics that are too general and simply tell you if your system is on — not if the operations that matter most to you are working the way they should. You must pay attention to the specific metrics that are key for the success of your unique organization, and you need a dashboard that can reflect that specificity.

Consider:

  • What metrics does your organization use to measure performance? Do those metrics accurately reflect the user experience?
  • If your metrics look fine, what frustrations are users still experiencing?
  • How is an inadequate user experience costing you?
  • What does monitoring look like for your Managed Service Provider? Are issues identified and addressed promptly?