Transcript

Hi, I’m James Wilson, director of customer success at Diffblue. I’m going to give you a webinar today on why everyone loves automated testing. This is a great question and we’re going to have a look at what automated testing does, what problems it solves and what problems it doesn’t solve. 

So, let’s have a look at what we are going to cover today. We’re going to start off by talking a little bit about quality assurance (why do we have it, and what’s the relationship between quality assurance and testing?) and look at some different bug types and how automated testing can help. I’m going to start off by setting the scene, then talk about automated testing, and finally finish off with some of your questions.

So, let’s step back away from testing and automation to start with and think about the goal of quality assurance, or QA. Fundamentally, QA is a cost center to a business, so there’s got to be some return at the end of it. The return is typically around reducing support costs, maintaining the company’s reputation or making sure good quality products get into the field. One of the important things to remember about quality assurance is that it’s relevant throughout the software development lifecycle, all the way through from inception to delivery.

What does testing have to do with it? Testing is really part of quality assurance; it’s not all of quality assurance. Typically, testing will occur after implementation, so it’s all about validating: does the solution that’s being provided meet the requirements? Really, it’s trying to find bugs by looking at changes in behavior. So we can look at regression bugs, where a pre-existing behavior has changed. We can look at security bugs, where someone is able to access information or impact a system they shouldn’t be able to. Or new-feature bugs, where we’ve introduced a problem in something that didn’t exist before: a brand-new feature and a brand-new bug.

Now, on these slides I’ve listed out regression, security and new-feature bugs in that order because I see them as more important the higher up the list you go. We all know that security bugs are interesting, they’re sexy, they’re newsworthy; everyone loves a good story about a company that’s lost your credit card details. But in terms of reputational risk as a company, in my opinion regression bugs are even worse. They’re even worse because they impact your customers’ day-to-day lives.

You have just released a new version of your product; you either deployed it or someone’s upgraded their on-premises environment. Suddenly, what they did before has stopped working. You’ve now impacted their business. They are going to be on the phone to your support line: “I can’t do anything, my world has stopped working.” Whereas with a bug in a new feature, they upgrade, try out the new feature, decide it didn’t work very well and carry on with their day-to-day lives. So, one of the areas where testing can help by looking at the change in the behavior of the product is really in regression bugs, because that’s the area where you’re going to impact your customers’ ability to do what they want to do.

So, I’ve talked a little bit about why I think regression bugs are much worse than other types of bugs, and here’s a news article from back in 2011. This is one I’m particularly excited about because I was actually late to work on the 2nd of January 2011 because my alarm didn’t go off. I woke up in a real panic, and here we are now in 2019 and I have not used a recurring alarm on an iPhone for eight years, because I don’t trust their ability to keep that feature working. So, talking about erosion of trust and user dissatisfaction: I have not used a feature on an iPhone for longer than I did use it, because they eroded that trust. (Full disclosure: I am a bit of an Apple fan, so I’m not bashing them in any way. I still have an iPhone, I still think it’s a great product, but there’s one feature that they’ve actually broken two or three times which I have no trust in any more.)

What can we do about this? I’ve painted a very bleak picture, and the first thing to think about here is how we can do testing, and specifically how we can do testing for regression bugs. This is where automation comes in. You don’t want to spend your expensive, high-quality human effort on testing whether your feature still does what it did yesterday. You want to ask a computer to do that; you want to have an automated test.

Not only do you save your human resource for something that’s more appropriate for a human, like exploratory testing, you are also able to move the testing earlier in the cycle. If the developer knows they’ve broken a feature before they even merge the code in, it’s impacting fewer people; it’s only impacting that developer. They then go away and fix the bug and then introduce their change to the codebase. By doing regression testing in an automated way using CI tools, you can catch bugs immediately and give that instant feedback to the developer, as the small sketch below illustrates.
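(To make that concrete, here is a minimal sketch of the kind of automated regression test a CI system could run on every commit. It’s a JUnit example with hypothetical names: the DiscountCalculator class and its applyDiscount method aren’t from any product discussed here, they’re just stand-ins to show how a test pins down existing behavior.)

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical example: DiscountCalculator and applyDiscount are
// illustrative names, not part of any product discussed in the webinar.
class DiscountCalculatorTest {

    // Pins down today's behaviour: if a future change alters the result,
    // this test fails in CI and the developer gets feedback before merging.
    @Test
    void tenPercentDiscountOnAHundredPoundOrderGivesNinety() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(90.0, calculator.applyDiscount(100.0, 0.10), 0.001);
    }
}
```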

The other thing I have mentioned on this slide is predictable test coverage. One thing humans are great at is not being predictable. If you are testing a feature for the 100th time in two years because you do the same set of manual regression tests every release, you are going to slightly change the way you do things. You’re going to get bored, switch off from what’s happening, and get to a point where you are less likely to find the bug than an automated test is. There’s a quote here that I really like: ‘if you don’t like unit testing your product, most likely your customers won’t like testing it either.’ To the developers out there: if you’re not prepared to test your change, QA aren’t going to like you. You’re going to be handing over poor-quality code, and they’re going to be testing it feeling like they are wading through treacle, trying to dance around bugs to actually get an answer about what the feature is like.

If you’re a company and you don’t like testing, your customers really aren’t going to like you, because the first person executing your code and the first person really trying out your product is your customer. Sure, there are plenty of people out there willing to take on beta testing, willing to take early products to market and really be at the cutting edge, but the vast majority of people really just want their software to work. I think this quote really sums things up.

Let’s move on to Q&A. We’ve already got a few questions coming in. So let’s start off with: Who can benefit the most from using automated testing? I want to say everyone. I think if we look through the lifecycle of development and think about developers writing code… I know it’s not starting at the beginning, but let’s start here. Developers writing code know if they have broken something because the CI system tells them, so they can address problems earlier. This means they don’t have to shift context; they are not having to stop work on something to move on to fixing a bug; they’re in the zone.

Testers or quality assurance people who are looking at end-to-end testing or user acceptance testing can really benefit from automated testing because they know there’s a level of quality in the product before they start working. They can be confident that what they are doing is exploring new territory and really look at the things that are most interesting and most useful to have a human look at. Checking whether, if you put a bad username and password into a login screen, it still produces an error… that’s not very exciting for someone who is doing testing. But it’s something a computer can do easily, as the small sketch after this paragraph shows. For the business, the commercial aspect: if you shift the finding and fixing of bugs left, you can deliver software with more confidence, more quickly, to your customers. That can only be a good thing from a commercial revenue perspective.
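(As a rough illustration of that login check, here is what such an automated test might look like. Again, this is only a sketch with hypothetical names: LoginService and LoginResult aren’t a real library, just stand-ins for whatever your product actually uses.)

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical LoginService and LoginResult types, used purely for illustration.
class LoginServiceTest {

    // The repetitive check a human tester would find dull, but a computer
    // can happily repeat on every single build.
    @Test
    void badCredentialsProduceAnError() {
        LoginService login = new LoginService();
        LoginResult result = login.attempt("alice", "wrong-password");
        assertFalse(result.isSuccessful());
        assertTrue(result.hasErrorMessage());
    }
}
```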

Does automated testing make manual testing obsolete? Absolutely not, is the simple answer. There are certain types of testing that humans are much better at than computers. I mentioned exploratory testing earlier; one thing a computer is very bad at is spotting where something slightly weird has happened. When you are doing exploratory testing and working through a product, you kind of get this hunch of ‘what if’, and computers don’t have that. So, you can get some interesting bugs from having expert testers, and that’s without mentioning specific testing skills like penetration testing.

There’s always going to be a need for people to do that manual testing, and one thing automated tests are very poor at is testing something the first time around. The cost of running an automated test once is a lot higher than running a manual test, so you only get that return on investment once your automation has run many times. In my mind, manual testing should be right at the bleeding edge of your product, and your automated testing should be checking that everything that has worked in the past continues to work.

The question of how to get started with automated testing is actually a nice introduction to the next webinar in the series, which asks: why are we so bad at it? I think one of the first steps is to set some time aside and produce an automated test, just one. Just building one simple automated test is going to check that you have the infrastructure in place, that you’ve got the right tools in the CI system, and that you’re using the right testing technology for your product and what you are trying to achieve. That’s going to take more than enough time for that first test case; there’s a tiny sketch of one below. Gradually set some time aside, maybe a Friday afternoon; one company I saw had all the developers work on tests on Wednesday afternoons. Or find time in sprints by setting aside some story points for getting your infrastructure up and running. Really, the hardest thing is getting started.
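(If it helps to picture how small that first test can be, here is a sketch. The assertion is deliberately trivial; its only job is to prove that the test framework, the build tool and the CI pipeline are all wired up and actually run it.)

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// A deliberately trivial first test: the point is not the assertion, but
// confirming that the testing framework, build tool and CI pipeline all run it.
class FirstAutomatedTest {

    @Test
    void theTestInfrastructureRuns() {
        assertEquals(2, 1 + 1);
    }
}
```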

So I think that is all we have time for today. If you have any questions or would like to comment on our webinar, then you can reach out to us through our website, my Twitter account or Diffblue’s Twitter account. As I mentioned earlier, we’re going to talk about why we are so bad at automated testing in our next webinar, so keep an eye out on Diffblue’s Twitter or our website to find out when it will be.