Quality, Post-Covid

Aug 5, 2024

Sam

Since the pandemic swept the globe a few years ago, one area that has yet to recover (and continues to decline) is the quality of shipped software.

The Problem

There has been a severe decline in the amount of testing performed per release as companies scramble to make up for the losses of the pandemic. Interestingly, though, many e-commerce companies did very well during this time: as people could not leave their homes, they took their impulse purchases online.

This led to a development boom. Many new features and changes to existing functionality started flowing through the pipes. But then something interesting happened: quality started to slip. I don’t mean the age-old joke that when deadlines tighten, QA gets squeezed into a comically small amount of time. I mean the quality of the quality (?!) took a massive hit.

With remote working becoming the norm, office conversations disappeared. Developers and testers no longer spoke to each other as much as they used to. Sharing context was no longer a whole-team activity but an “if you fancy it” option.

Join the standup, if you want.

This extended to meetings, where the team could usually air issues or concerns or celebrate new ideas. With the dissolution of these ceremonies, teams became segmented, isolated, and disconnected from each other.

I have seen teams skip refinement and planning meetings entirely; instead, every ticket is JFDI, or the more safe-for-work JDI (Just Do It). Developers pick up work with ropey-at-best requirements, don’t involve the test team, smash some keys to get the “feature” done, and then throw it over the fence.

This lack of requirements trickles down to the testing, where light-touch checks are performed, and then the code is “promoted” to the real testers, the users.

More time writing code, less time testing.

Now that the testers had more time, many turned to what I can only call anti-value work. That may seem harsh, but it’s true: instead of working out ways to improve quality, the extra time was consumed entirely by learning to code.

This became a self-fuelling spiral of “busy work”: time taken out of the working day on tasks that merely enable the original goal, while the actual value output and quality improvement were nowhere to be found.

We now have teams running hundreds of tests, mixing UI and API testing, and modifying the application under test to suit those tests, all while missing the critical journeys users actually take.

Struggling to attribute issues to functionality

When something goes wrong in production, fewer and fewer issues can be easily attributed to a particular feature, ticket, or release.

These items are now raised as Production Defects found by users or business stakeholders who have seen a report raising a red flag. 

No joke: I have seen e-commerce companies push payment-related code to production, resulting in failed orders and users being refunded twice. How, on this green earth, did an issue like this get to prod?

It's simple: the payment server was mocked, the integrated version was never tested due to “limitations” or time constraints, and the lower-environment test packs came back all green. This oversight led to a P0/P1 critical issue being found by both users and the business.
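As a minimal sketch of the kind of check that would have caught this, here is what a single integrated checkout test against the payment provider’s sandbox (rather than a mock) might look like. The tool choice (Playwright), URLs, selectors, and test card details are all illustrative assumptions, not taken from any real project.

```typescript
// checkout-payment.spec.ts
// Sketch of an integrated checkout smoke test run against the payment
// provider's sandbox instead of a mocked payment server. All URLs,
// selectors, and card details are illustrative placeholders.
import { test, expect } from '@playwright/test';

test('checkout completes a sandbox payment exactly once', async ({ page }) => {
  await page.goto('https://staging.example-shop.com/product/basic-tee');

  await page.getByRole('button', { name: 'Add to basket' }).click();
  await page.getByRole('link', { name: 'Checkout' }).click();

  // Use the provider's documented sandbox test card, not a mocked response.
  await page.getByLabel('Card number').fill('4242 4242 4242 4242');
  await page.getByLabel('Expiry date').fill('12/30');
  await page.getByLabel('Security code').fill('123');
  await page.getByRole('button', { name: 'Pay now' }).click();

  // The order must reach a paid state end-to-end...
  await expect(page.getByText('Order confirmed')).toBeVisible();

  // ...and the order history must show exactly one payment, no refunds.
  await page.goto('https://staging.example-shop.com/account/orders');
  await expect(page.getByTestId('order-status').first()).toHaveText('Paid');
});
```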

Once bitten, twice shy.

This problem has a catastrophic effect on the test team's goal, because confidence in the testing suffers. That leads to more “human interaction testing” (manual) time being allocated to checking new releases, which hurts the cycle time of features.

Teams then look to “fix” the testing while entirely missing the objective they originally set out to achieve. Instead, the sunk-cost fallacy kicks in, and you invest even more time and effort (money, in other words) in winning some level of confidence back.

While you will undoubtedly win some of that confidence back, at what cost? Measured against the original aim of being confident before going live, the detour can cost months or even years.

A Solution

There are many different approaches to resolving some of the issues I mentioned earlier. Let’s examine them one by one.

Hey, have you got a minute?

If I had a penny for every time I have sent or received this message, I’d be rich. Joking aside, talking through features with developers and product folk massively increases the chance of good-quality software coming out the other side.

Communication is vital and gets the brain juices flowing. Have you considered this? I wonder what will happen if you… All good bug hunts begin with a cheeky chat with the developer who may have introduced the bug.

Nope, this ticket needs to be refined more!

Don’t be scared to refuse to test something that is not refined, even if it has already been developed. The product and engineering teams have to collaborate to get the story to a point where you, as a quality lover, can deterministically and categorically approve or reject a feature as ready.

Certain things can assist, such as a good Definition of Ready and Definition of Done, but those principles only work if you hold people accountable for following them. Shout out if a ticket does not look ready. Bounce a ticket out of QA if you have not had a handover.

Soon enough, the team will start to realize that if you put sh*t in one end, you don’t get a polished product out the other end. Start refining and talking, and boom, good things happen!

Write some code, but first, write some tests.

Now, I get it: writing code can be fun, and it’s good personal development.

But you must first satisfy the business reason for your role’s existence: tests. Why not create a six-hour production test pack?

Pick tooling that gives you high-level confidence over the critical user journeys. The freedom this opens up allows you to focus on shifting these checks earlier in the development cycle, also known as shift-left testing.
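As a rough sketch of one such critical-journey check, assuming Playwright as the tool and with every URL and selector a placeholder:

```typescript
// search-and-basket.spec.ts
// Sketch of a critical user journey: search for a product and add it to
// the basket. Tool choice, URLs, and selectors are assumptions.
import { test, expect } from '@playwright/test';

test('shopper can find a product and add it to the basket @critical', async ({ page }) => {
  await page.goto('https://www.example-shop.com');

  // Search the way a real user would: through the UI, not a private API.
  await page.getByPlaceholder('Search').fill('running shoes');
  await page.keyboard.press('Enter');

  // Open the first result and add it to the basket.
  await page.getByRole('link', { name: /running shoe/i }).first().click();
  await page.getByRole('button', { name: 'Add to basket' }).click();

  // The basket badge should reflect the new item.
  await expect(page.getByTestId('basket-count')).toHaveText('1');
});
```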

Another vital part is testing the site the way a real user interacts with it. Filling in MFA codes? Reading emails? All of these steps should be covered, and covered on production first, then updated to work cross-environment later.
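For example, an MFA login test can pull the one-time code from a disposable test inbox instead of bypassing the check. This is a sketch under stated assumptions: the inbox endpoint and the fetchLatestOtp helper below are hypothetical stand-ins for whatever email-testing service you use.

```typescript
// mfa-login.spec.ts
// Sketch of logging in the way a user does, MFA included. The inbox API
// endpoint and fetchLatestOtp helper are hypothetical placeholders for a
// real email-testing service.
import { test, expect } from '@playwright/test';

// Hypothetical helper: polls a test inbox and extracts a 6-digit code.
async function fetchLatestOtp(inboxId: string): Promise<string> {
  for (let attempt = 0; attempt < 10; attempt++) {
    const res = await fetch(`https://inbox.example-tests.com/api/inboxes/${inboxId}/latest`);
    if (res.ok) {
      const body = (await res.json()) as { text?: string };
      const match = body.text?.match(/\b(\d{6})\b/);
      if (match) return match[1];
    }
    await new Promise((r) => setTimeout(r, 3000)); // wait and retry
  }
  throw new Error('No MFA email arrived in time');
}

test('user can log in with email MFA', async ({ page }) => {
  await page.goto('https://www.example-shop.com/login');
  await page.getByLabel('Email').fill('qa+mfa@example-shop.com');
  await page.getByLabel('Password').fill(process.env.TEST_USER_PASSWORD ?? '');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Read the code from the test inbox, exactly as a user reads their email.
  const code = await fetchLatestOtp('qa-mfa-inbox');
  await page.getByLabel('Verification code').fill(code);
  await page.getByRole('button', { name: 'Verify' }).click();

  await expect(page.getByRole('heading', { name: 'My account' })).toBeVisible();
});
```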

Continuously Improve, all the time.

Now that you have your “golden threads” covered, you need to ensure they are kept up to date and that any new priorities are added quickly.

We all make mistakes; we let things slide. If the business deems something critical, add it to the pack. This way, you can deliver confidence that what has been repaired will not regress again.
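One lightweight way to do this, again a sketch assuming a Playwright-style pack and the @critical tag convention used above, is to add a tagged regression test for each fixed production defect. The ticket reference, endpoints, and payloads are illustrative only.

```typescript
// regressions/double-refund.spec.ts
// Sketch of a regression test added after a production defect is fixed.
// Ticket reference, endpoints, and payload shape are illustrative only.
import { test, expect } from '@playwright/test';

test('refunding an order twice is rejected (regression for PROD-1234) @critical', async ({ request }) => {
  // Create and refund an order via the (assumed) staging API.
  const order = await request.post('https://staging.example-shop.com/api/test-orders', {
    data: { sku: 'basic-tee', qty: 1 },
  });
  expect(order.ok()).toBeTruthy();
  const { orderId } = await order.json();

  const firstRefund = await request.post(`https://staging.example-shop.com/api/orders/${orderId}/refund`);
  expect(firstRefund.ok()).toBeTruthy();

  // A second refund for the same order must be refused, not processed again.
  const secondRefund = await request.post(`https://staging.example-shop.com/api/orders/${orderId}/refund`);
  expect(secondRefund.status()).toBe(409);
});
```

The critical pack then stays small enough to run on every release, for example with npx playwright test --grep "@critical".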

In Conclusion, make some changes.

The market is heading for disaster; confidence is at an all-time low, and companies burned by wasted money decide the risk is easier to swallow than another blown budget.

Use tooling and frameworks that give you baseline confidence, then build on it. Nobody will shout at you for finding defects earlier in the process, but they will if you spend all your time coding while critical defects fly out the door.

If you enjoyed this article, take a look at some of our other posts. We don’t wrap the situation in cotton wool; we help expose issues so you can understand and resolve them.

The first step to fixing anything is to identify that it is broken. 

Catch you on the flip side!
