Peter Yabsley:               Okay. Hello everybody. And thank you for joining us today. Welcome to this Diffblue webinar on automating for great developer experience and why that’s really coming to the fore in so many companies today. I’m Peter Yabsley, I’m the head of marketing here at Diffblue, and I’m going to be your host for today’s session. This will be the start of a regular series of Diffblue webinars, so we certainly hope that you’ll be able to join us for more of these in the future as well.

So let’s get going with today’s session. I’m hosting today, but two much more important people are here with me to share their insights and their experience on the topic of developer experience.

First of all, I’m delighted to introduce Rachel Stephens. Rachel’s a senior analyst with RedMonk, a developer-focused industry analyst firm. Rachel focuses on helping clients understand and contextualize technology adoption trends, particularly from the view of the practitioner. Her research covers a range of developer and infrastructure products with a particular focus on emerging growth technologies and markets. And basically that work across the industry means that Rachel is going to be able to give us a broad view on how different companies are thinking about this topic of developer experience and what they’re doing about it. So, Rachel, thanks so much for joining us here today, delighted to have you.

Rachel Stephens:          Oh, thank you. I’m excited to be here.

Peter Yabsley:               And we’re also joined by Diffblue CEO, Mathew Lodge. Mathew’s got over 25 years of experience in the software industry at different companies like Anaconda and VMware, but I think importantly for today’s discussion as well, he started out as an engineer. So he definitely understands this question of developer experience and the feeling that engineers have when they’re thinking about their daily life. So Mathew, great to have you here. Thanks for joining us.

Mathew Lodge:             Great. Thank you.

Peter Yabsley:               So just before we dive in, let’s just take a very quick high level look at what we’re going to cover. In the main part of the session, we’re going to spend some time just talking about this topic of developer experience and getting the benefit of the insights and experience that Rachel and Mathew can share there as well. We’ll get into why that matters, what companies are doing about it. And in particular, how automation has an important role to play in this topic. Then we’ll take a quick look at how Diffblue’s automation technology in particular is helping Java teams to improve developer experience. And we’ll finish up with a very quick demo of our product, Diffblue Cover as well, followed by some Q&A.

Let’s get into this topic of developer experience. Rachel, maybe you can start us off and set the scene for us to make sure that everybody on the call has the same understanding. What do we mean when we talk about developer experience? What do RedMonk mean when they talk about this topic, which I know you’ve been doing some work on over the past year or two?

Rachel Stephens:          Yeah. So I think a good frame for developer experience is thinking about that user experience framework of all of the different touch points and interaction points for someone using a product. But in this case, the users specifically are people who are creating software. So software developers - and it’s caring about how these software developers have to do their work.

So we have to think about it in a variety of contexts. It’s thinking about the functionality of the tool overall, but different facets of that could be things like the ease of use: how effective and efficient is it to use all of the various components of the tool? How discoverable and intuitive is it? Is there enough context for the developer to understand what’s happening without it being a total black box, while still walking that line between being contextual and being verbose - that sorting-through-a-bazillion-logs kind of experience? Is the design thoughtful, and have you considered those possible points of friction that a developer might encounter and tried to smooth out those rough edges for them? So it’s thinking about all of those different ways that a developer needs to do their job, what they’re trying to accomplish, and then how your tool fits into that. That’s what we mean by developer experience.

Peter Yabsley:               And am I right in saying, certainly from RedMonk’s perspective, that generally the idea here really is to position developers to do their best work? That’s really what this concept of developer experience is all about: how can we help developers to do their very best work in the day? And then the rest of the discussion around how we enable that sort of flows from there. Is that a fair way of looking at it?

Rachel Stephens:          Absolutely.

Peter Yabsley:               Great. Thanks. So I think hopefully everybody on the call will be familiar with what we’re talking about here. But Mathew, I hope you don’t mind me pointing out again, you’ve been in the technology business for a while - we’re talking about this topic of developer experience, it’s coming up more and more these days, but it’s not really a new concept, right? It’s been around for a long time.

Mathew Lodge:             Yes. Yeah. I’ve been around for so long that when I was a developer, the majority of folks programming were writing in C: C++ was a new thing! Java had been out for, like, I don’t know, nine months - and developer experience was very important even then. When I was a C developer, just trying to solve things like memory leaks in C was a big issue. I was working on some legacy codebases, and Reed Hastings, before he did Netflix, did this company called Pure Software, and they had this tool called Purify that would help C programmers find memory leaks in their software. And it was like magic for finding memory leaks.

So developer experience, and tooling associated with it, is not new in that sense. What is different today is that software is just much more important. It’s in everything. It is how organizations run their businesses, and it’s really crucial. When I was writing using Purify, that was 20 years before Marc Andreessen came out with this, “software is eating the world,” and that’s the difference today.

Peter Yabsley:               And do you think, from the point of view of the label, if you like, the topic itself - has the meaning evolved over time? Or is it kind of the same today as it always was, in terms of just trying to put developers in a better position to do better work that they’re happy with?

Mathew Lodge:             Well, I think that’s definitely true. I think what’s different is that there is vastly more software to be written than there are people to do it. And that makes developer experience much more important to organizations, it’s much higher up their priority list. And for our customers today, that is one of the key things they are concerned about: how do they recruit and retain development teams, make them as productive as they can be, and make sure they enjoy the experience?

Rachel Stephens:          Could I interject there, too? So I love this concept of more software being written. And I also think the flip side of that is that there are also more software tools being used to write that software, so there are more pieces that need to be integrated together. So when I talked before about developer experience being the way that the tool works, that’s only a component of it. And one of the things that we talk about at RedMonk is actually the developer experience gap, where each individual tool or open source project may have thought about its specific part of what it does in the tool chain. But thinking about how all of the tools fit together and integrate is also a really important part of developer experience, because we have to be able to not only use an individual tool in isolation, but we have to be able to have all of this working together in concert.

And so the thing that has changed over the years is that we’ve built software in a much more composable way, where we have things that are coming together from different sources, both internal and external, and we have more modular software, which is great, but it also means that we really have to think about those integration points and the potential frictions of those integration points in order to have successful developer experience.

Peter Yabsley:               Fantastic. Thank you, Rachel. And let’s just dig into that in a second. I’m really interested in that point and I know you’ve got some kind of examples of this in the real world, but just to come back on this idea particularly of developer availability and too much software and not enough people. I assume that chimes with what you are hearing at RedMonk, but is there anything else that’s driving this idea of developer experience in the companies you talk to and work with?

Rachel Stephens:          I think Mathew really hit the nail on the head in terms of, people are just trying to figure out ways to help their people be productive. And we have skills gaps for a lot of people, especially as technologies are starting to evolve where people are trying to help their teams transition. If you think about the Kubernetes world and things like that, there are not enough people in the world who know how to do all of that for all the places who are thinking that they need to be doing that. That’s a whole other conversation.

But there are skill gaps, there’s just upskilling that’s happening in general. And then there’s also just more software being written, more people who are viewing this as a value-added part of their business. And so all of this together means that we’re really asking our teams to do more with either less or with the same amount of resources that they had before. And so a big part of why we’re trying to help these teams automate is to help people be effective, be their most effective selves, and augment their abilities to do their jobs.

Peter Yabsley:               Yeah. Thanks. I think, to use the same phrase, you’ve hit the nail on the head there as well, in terms of the reason why we’re talking about this. This idea of teams being asked to do more with less is something I’ve heard for certainly the last few years working in the software world. Enabling developers to do that is, I think, what this topic is all about. So you mentioned some key thoughts around integration and the connection of tooling being very important. Are there any specific examples or specific companies you have in mind that you can use to illustrate the kind of actions that organizations are taking to address that kind of question?

Rachel Stephens:          Yeah, so I think there are kind of two broad approaches that we have seen in the industry. When Mathew talked about magic, I think Heroku is one of the magical things that people mention when they first talk about, “Oh, look how easy it is to deploy my application!” That’s more than 10 years old at this point, but I’d say that we at RedMonk have probably heard more about Heroku - and this is not so much Heroku the company, but Heroku the dream, the dream of this easy deployment - we’ve heard more about that in probably the last two years than we have in the decade leading up to now, because that whole concept is really having this broad appeal for people. People want this easy way to get their software out the door and they want to have fewer things that they’re trying to string together.

So we have one class of people who are approaching the world through, what is a platform that might work for me? And that could be a more limited platform like Netlify or Dark or Render, which tackle a limited use case - it’s not going to be something that works for every class of application, but it can make it easier to integrate all of these pieces into the platform. So that’s one approach we see people taking.

Another approach we see is kind of that Spotify-esque approach, where you have a golden path that’s paved to production. So we have this opinionated scaffolding that we’ve assembled internally. And if you use these internal tools, it’s supported through a team, like a platform team, something like that. And it will help you get your application into production in our systems. And if you want to use different tools, that’s great, but it’s something that you have to support yourself. So we kind of see these two different worlds: either we want a platform or a path that can do something for us, or we want to assemble our own tools internally in a way that we can support. But in both cases, the commonality is that we’re trying to reduce the friction for people to get their applications into production.

Peter Yabsley:               Thanks, Rachel. Obviously we’ve raised the point a couple of times already of how central automation and tooling is to this whole idea. Mathew, can you maybe expand a little bit on what you were talking about earlier - why is this so central to organizations improving developer experience, when it comes to not just the tooling, but the automation as well?

Mathew Lodge:             Yeah. Yeah. I mean, part of what you’re trying to do is help individuals be more successful and improve their productivity. And we all will do the thing that is easiest to do. And so there’s the role of tooling - a lot of times people talk about culture and how important that is. And it’s absolutely the case that culture is important, if you define culture as how work gets done. But you can shape culture using tooling, you can smooth the path, right? So a lot of what folks like Spotify are trying to do is have that golden path be the easiest thing for you to do as a developer. And if they make it very easy, you’ll use it because it’s the easiest thing for you to do. And that’s part of how you reduce the number of moving parts, reduce the amount of friction, simplify things.

And so the role of the software and the automation that is used as part of developer experience is very important if it can help you smooth the path. If it gets in the way, if it doesn’t help your people get done what they need to get done, then it’s not improving your developer productivity.

Rachel Stephens:          I love that comment. I think one of the things that happens a lot in the industry, especially when you talk about DevOps and things like that, is that we say, “there’s no silver bullet, you can’t buy DevOps,” which in a sense is true: you can’t just use procurement to completely change your software practices. That’s not going to work, that is a path to sadness. But if you also try to do “culture” - “DevOps is culture,” or “software delivery is culture” - that’s also true, but if you try to do that without the tools to support it, you’re also going to be on a path to sadness, because those things have to work together to bolster each other. So tools and culture are kind of two sides of the same problem; we have to have both of them working in conjunction.

Peter Yabsley:               Yeah. Thank you both. And I think you make a really interesting point there as well about the way that technology can open minds and maybe make a bit of a shift in mindset because sometimes it can demonstrate possibilities that just weren’t there before, in terms of different ways of working and how advantageous that can be. So yeah, interesting stuff.

Mathew, we are a company that focuses on the area of testing and how automation can help there. And certainly we’ll talk a little bit about how our product can help in that specific area, but why is testing such a good example of this whole topic around developer experience and the pain and automation, how that fits together?

Mathew Lodge:             Yeah. Lots of the ideas in sort of modern software automation are borrowed from manufacturing. If you think about modern software pipelines and continuous integration, continuous delivery, a lot of that is analogous to the production line. And testing is a really important part of that. Think about traditional QA as testing that happens at the end of the production line: when the product is “done”, it gets tested by QA, and they try to verify that it does what it’s supposed to do.

And the problem with that is of course, that you’re finding everything at the last minute and as all good lean production people know, the sooner you find a problem on the production line, the cheaper and the faster it is to fix it. And so unit testing is essentially that analogy. It’s like, how do you find problems at the time the code is being written? And that’s why unit testing is so important.

Peter Yabsley:               But I guess also from the point of view of the developer, it’s not really the kind of thing they want to be doing. So it’s really important for the process, but maybe for the developer, that’s something you’d like to solve for.

Mathew Lodge:             Yeah. And that’s part of the… Software development is kind of insane in that it’s pretty much 100% manual. And you think about any other industry where it’s like, “We’re just going to do everything manually.” You wouldn’t start a new retail operation by trying to do everything manually. In most industries, you just couldn’t - that’s a complete nonstarter - but that’s exactly where we are with software. And the challenge is that a lot of software coding is tedious and error-prone. And that is really where the opportunity exists to improve the developer experience: by thinking about what those things are. Because they’re tedious, they become error-prone. If things are tedious, we make more mistakes because it’s just mind-numbing. And so there’s a great opportunity there.

Peter Yabsley:               Thanks Mathew. And just before we round off the discussion in the next few minutes, one thing I wanted to ask you, Rachel, was just about that idea of characteristics of the tooling that can support good developer experience. You mentioned integration is a key thing, but are there some real particular areas that teams, when they’re looking at how to improve this area might be looking for to focus on, or as characteristics that would be beneficial?

Rachel Stephens:          Yeah. So I loved Mathew’s comment just about the value of reducing tedium. So those kinds of easy on-ramps are a good one - ways you can get people up and running quickly. Sometimes that’s through scaffolding and frameworks, sometimes that’s through templates. Sometimes that’s just through general automation things. Having context so that people can understand what is happening in their system without having to switch screens, get out of their flow state, and then work out what’s happening in their thing.

Short feedback loops are another one that Mathew kind of alluded to there. So it’s something where a developer can get feedback on: what is happening? Is this working? Is this something where I need to make changes while I’m already in that state, rather than having to come back and try to get back into that state of mind later? That’s another great one. Documentation is one where I feel like it doesn’t get a lot of love a lot of the time. And we are kind of talking about user experience, but your documentation really is like your marketing material for your developer. So is it something where people can understand how the product works? Can they self-serve in a way where they can troubleshoot their own problems? All of those things are really great ways to think about how you can impact your developer experience.

Peter Yabsley:               So we’ve got a relatively short slot here today and I’m sure there’s more we could dive into in a lot of these topics around the technology in particular and what developers are looking for. But there is one kind of big thing we haven’t touched on yet, which is the outcome. This is a nice concept and we want developers to be happy and productive. But Rachel, I believe you’ve looked into this a bit in some of the companies you work with, how do businesses see the real benefit of developer experience? Is it quite tangible?

Rachel Stephens:          Yeah, so I think one of the things about developer experience and automation in particular is that this is something people up and down the org chart can appreciate, because there’s a value prop for everybody. The developers, as we talked about, don’t like the tedium; they want to be able to be more effective at their jobs. But we’re in this world now where software has shifted for a lot of companies from being a cost center to something that is driving top-line revenue, value creation, this new and unique way that the company is doing business. And so people at the top also want their developers to be able to build effective and great software more quickly. Velocity is a big driver for a lot of these companies.

And there was this line from Jeff Lawson’s book, Ask Your Developer, that I really liked, about moving from build versus buy to build versus die. A lot of these companies really are thinking about building software as something that is crucial to their ability to do effective work as a company overall.

Peter Yabsley:               Great. Thanks. And Mathew we’re working particularly with Java teams at Diffblue, but we’re talking to these similar kind of organizations, I think that Rachel might be referring to. Is that the kind of thing that you hear as well when you talk to our clients in terms of how developer experience is a positive thing?

Mathew Lodge:             Yeah, it’s very clear to senior executives in these organizations, because software is so central to how they operate - Rachel’s point there. And so they can directly connect developer happiness, and the churn on their teams in particular, to outcomes. Low-churn teams where folks stick around and enjoy what they do deliver good software, and they can deliver it quickly. The velocity point that Rachel made is very easy to translate into business success, and the effect on competitiveness in particular. And so we see that a lot inside these organizations, and anything that can help those teams be more productive essentially lets them focus more effort on the things that really deliver value for the business.

One area where it gets more challenging for management to see the benefit is, “Okay, we understand you have to do a lot of other stuff besides implementing this new thing that we need in order to beat this competitor.” But they don’t really understand what that is, and so there’s always a question of, “Why are you spending so much time on that?” So being able to reduce the amount of effort that doesn’t go into making the software more effective and more competitive is something where they can understand the value, even if they don’t understand exactly what’s involved.

Peter Yabsley:               Great. Thanks. Well, thank you both. We’ve spent about 25 minutes on this and I think a really good kind of overview of what’s happening in the industry and why this is a topic that’s coming up so much. Mathew, what you described there is very much part of what we offer here at Diffblue with our technology and in that specific area of unit testing that you mentioned. So why don’t we click into that a little bit deeper as an example of how automation can help to improve developer experience, and lets them focus on those more value add things that you’ve talked about. So I’ll just hand over to you for a few minutes to explain to our audience how we do that with the teams we work with.

Mathew Lodge:             Great. Thank you. Right. So I’ve got a couple of slides here that I’ll take you through. So what do we do at Diffblue? We write software that writes software. We did an editorial for The Wall Street Journal about this, basically on the 10-year anniversary of Marc Andreessen’s “software is eating the world”: software has eaten the world, and it will soon write itself. And so we’re looking to automate tedious and error-prone manual coding. And we started with unit testing. Unit testing is important, as I mentioned before - just to recap - because it’s the fastest way to find and fix errors at the time the code is being written, as Rachel said, when you’re in the flow as a software developer. And so it’s a lot faster to fix things there.

And so the way Diffblue does that is with artificial intelligence - it’s machine learning, and I’ll get into what that is in a second. But essentially our product writes tests for the entire application, so you can create a baseline of tests. The unit tests we write reflect the current behavior of the application. And that means you can run those against proposed changes - the delta. When you make a change to the application, you can run the Diffblue tests to find regressions.

And so when you find those regressions with the Diffblue tests, the developer can decide what to do, how to fix those things. Eventually, they get to the point where they’re happy with their code, and then we automatically update that test baseline to match the new code. So what we do is we write the tests that are impacted by the change - we incrementally update that test baseline. So think about a typical software change process, a Git-style process where you build a thing called a pull request with all your changes in there. Typically an engineer puts the pull request together, they create a branch, they get ready to have that reviewed. And typically in their CI system, a bunch of things happen automatically at this point, one of which is running the tests. So they run the current tests against their change. They may have updated the tests in some way. And based on the output of that, the tests are one of the things that tell them: is my change correct or not?

With Diffblue Cover, you’ve got a full set of unit tests, a suite of unit tests that catch regressions. So you can run those in addition to any tests that the developer has written and they help you find more regressions. So those tests get run in exactly the same time as anything that’s been written by human, by the developer themselves, and they help you catch those regressions. And then after the PR has been approved and your branch has been merged into the main line, then the Diffblue tests get updated.
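The baseline idea Mathew describes can be illustrated with a small behavior-pinning check. This is a hand-written sketch, not actual Diffblue Cover output; the PriceCalculator class and its discount cap are invented for the example. In practice Cover emits JUnit tests, but plain assertions show the principle: the test asserts what the code does today, so any change in behavior fails the suite and surfaces as a regression while the pull request is still under review.

```java
// Hypothetical code under test: PriceCalculator and its 50% discount cap
// are invented for this sketch; they are not PetClinic code or Cover output.
public class PriceCalculator {

    // Current behavior: discounts are capped at 50%.
    public static double discountedPrice(double price, double discount) {
        double capped = Math.min(discount, 0.5);
        return price * (1.0 - capped);
    }

    public static void main(String[] args) {
        // A baseline test pins today's behavior. If a later change alters
        // the cap, this check fails in CI and flags the regression before
        // the branch is merged.
        if (discountedPrice(100.0, 0.8) != 50.0) {
            throw new AssertionError("regression: discount cap behavior changed");
        }
        System.out.println("baseline behavior unchanged");
    }
}
```

Whether the "current behavior" being pinned is actually the desired behavior is the developer’s call; the baseline’s job is only to make any change in behavior visible.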

So the question is, well, how do we do that? Sounds like magic. So what’s this AI? Lots of companies talk about AI, and it’s gotten to the point where it’s become a bit of a buzzword. So the artificial intelligence that we use is a form of machine learning. And it enables us to search for the best unit tests for Java code. And it’s a similar technique - it’s called reinforcement learning - to what’s been used in a lot of game playing. So it’s the main technique that’s used, for example, in Google’s AlphaGo to search for the best Go moves. And we search for the best unit tests for Java code.

So in the case of AlphaGo, why do we need to do this? Well, because the search space is so big, you can’t look at every single possible move, right? So the number of potential moves in the game of Go is larger than the number of atoms in the universe, that’s how big it is. So it’s a very, very difficult problem. And that’s why it took such a long time before machines were able to solve this. Chess has a much smaller search space, and so early efforts like IBM’s Deep Blue basically just exhaustively searched that space, but did it in parallel and very quickly. With chess, you can do that, you can search the entire space. You can’t do that with Go. And there are lots of practical difficulties: it’s very difficult to determine how good a move is at the time the move is made, and you need to have a prediction about what that means for the long-term success of the game.

And so the approach that the Google team took was to conduct a probabilistic search of the Go space. Probabilistic in the sense that you don’t try to search everywhere, because you can’t - there’s not enough time - but what you can do is search in areas where solutions are likely to be found, and you spend more time searching there than in other places. So you’re not guaranteed to find the best solution, but this is much more efficient, and it’s good enough that it becomes very effective. In the case of Go, the machine can beat Go masters. And in the case of unit testing, we can write tests that are just as good as a human can write.

Test writing has the same issue. The total number of possible test programs that we could write is exponential. There are lots of practical difficulties associated with writing those tests, particularly around things like I/O and, in the case of Java, inversion of control, which is very popular, for example. So it’s not obvious, if you look at the code, exactly what is happening and what the flow of control is through that code. And so we take the same approach: we conduct a probabilistic search of the test program space. What that means is, we write a test, we run it against the code that we’re trying to test, and we see how well it does.

So we look at what kind of coverage it gets. We look at what kind of results are generated. And then we predict what a better test would look like. And we try that and we run that against the code, and then we see how that performs, and then we predict what a better test would look like. So that’s how we conduct the search: by modifying the test program and iterating through that until we find the best set of tests to reach the coverage. It’s very similar to AlphaGo, which tries lots of different moves, predicts what a better one is, tries it and plays it out, sees whether that’s a good move or not, predicts a better one and iterates through moves. So exactly the same kind of approach.
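That write-run-score-predict loop can be caricatured in a few lines of plain Java. This is a toy sketch, not how Cover’s reinforcement learning works internally: here a candidate "test" is reduced to a single integer input, coverage is just which branch was taken, and "predicting a better test" is a simple heuristic that halves the input while the rare small-match branches are uncovered and explores at random otherwise.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Toy sketch of search-based test generation: run a candidate input,
// score the coverage it achieves, keep inputs that improve coverage,
// and predict a better candidate to try next.
public class TestSearchSketch {

    // Code under test: three branches, like the find-owner logic.
    static int branchTaken(int matches) {
        if (matches == 0) return 0;   // nothing found
        if (matches == 1) return 1;   // exactly one result
        return 2;                     // many results
    }

    // The search loop. Returns the inputs that were kept because
    // each one covered a branch no earlier input had reached.
    static List<Integer> search() {
        Random rng = new Random(42);          // used only for exploration
        Set<Integer> covered = new HashSet<>();
        List<Integer> kept = new ArrayList<>();
        int candidate = 97;                   // arbitrary starting input

        for (int step = 0; step < 1000 && covered.size() < 3; step++) {
            if (covered.add(branchTaken(candidate))) {
                kept.add(candidate);          // improved coverage: keep it
            }
            // "Predict" the next candidate: halve toward the rare
            // small-match cases while they're uncovered, else explore.
            if (!covered.contains(0) || !covered.contains(1)) {
                candidate = candidate / 2;
            } else {
                candidate = rng.nextInt(100);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // Halving walks 97 -> 48 -> 24 -> 12 -> 6 -> 3 -> 1 -> 0,
        // covering all three branches.
        System.out.println("kept test inputs: " + search()); // [97, 1, 0]
    }
}
```

The real system searches over whole test programs (inputs, mocks, assertions) with a learned predictor rather than a hand-written heuristic, but the shape of the loop - run, score, predict, iterate - is the same.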

Our product is available both as a command-line tool and as an IntelliJ plugin. What I’m going to show you in the demonstration is the command line, so you can see exactly how you would write that test baseline for an entire program. So I’m going to stop with slides at this point. And what I’m going to do now is share my screen. I’m going to show you a thing called pet clinic. So pet clinic is an example Java application for the Spring Java framework. And it’s a demonstration application, so here it is. It’s a fictional vet’s office automation system, so it’s a classic database-driven application. So if I want to find pet owners in pet clinic, then I go to this screen here. If I don’t enter anything, then I just get all of the owners. So I can see Peter McTavish has a pet called George, we don’t really know what George is yet. So if I type in, I don’t know, brown or bron - what do we do now? Because I can’t even type - then brown is not found, we just go back to this thing here. But if I type in McTavish and I click find owner, there he is. And so if I click on find owner there, I get straight into Peter McTavish’s records; only one record matches, and so I go straight there.

This is classic logic you’d find in an example application, and lots of real applications look the same way. So in find owner, we have three paths, right? So when you’re writing tests, there are three different cases we need to write tests for. We need to write a test for the case where nothing matches. We need to write a test for the case where we don’t enter anything and we get all the owners. And we need to write a test for the situation where there’s exactly one owner matching, as we saw in Peter McTavish’s case.

Let’s take a look at the code for that. What I’m going to do here is switch over into IntelliJ, and what you can see here is the Java code for PetClinic. I’m going to highlight this entire section here; that’s the logic we just saw in action. This first bit basically says: if I don’t enter anything, then treat it as the empty string, so we need to figure out how to cover that. Here’s the first branch, which says: if we didn’t find any owners, go back to where we were. The second branch says: if we find exactly one owner, redirect to the page for that owner. And the third case is where we got multiple owners, and we just show a page with all of those owners in it. So those are our three cases. What I’m going to do now is switch over into my terminal window.
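The three branches Mathew highlights can be sketched in plain Java, stripped of the Spring annotations. The real PetClinic handler is a Spring MVC controller action; the repository interface and view-name strings below are simplified stand-ins:

```java
import java.util.List;

// Framework-free sketch of the three-way branching in PetClinic's
// find-owner handler. Names are illustrative stand-ins.
public class FindOwnerLogic {

    // Minimal stand-in for the Spring Data repository the controller uses.
    public interface OwnerRepository {
        List<Integer> findOwnerIdsByLastName(String lastName);
    }

    public static String processFindForm(String lastName, OwnerRepository repo) {
        if (lastName == null) {
            lastName = ""; // empty search: broadest match, return all owners
        }
        List<Integer> ids = repo.findOwnerIdsByLastName(lastName);
        if (ids.isEmpty()) {
            return "owners/findOwners";              // branch 1: nothing found, back to the form
        } else if (ids.size() == 1) {
            return "redirect:/owners/" + ids.get(0); // branch 2: exactly one, go straight there
        }
        return "owners/ownersList";                  // branch 3: several matches, show them all
    }

    public static void main(String[] args) {
        OwnerRepository none = name -> List.of();
        OwnerRepository one = name -> List.of(1);
        System.out.println(processFindForm("Brown", none));   // nothing matches
        System.out.println(processFindForm("McTavish", one)); // a single match
    }
}
```

Each return value corresponds to one of the three test cases that need to be written.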

And I’m going to run Cover, which is the Diffblue product. I’m going to run dcover create and give it a class; I just want to write tests for a single class in the interest of time. So what I’m doing here is telling Cover to go away and write tests for the entire class. You can see that Cover starts by figuring out the current Java environment and what build system is in use, so it configures itself based on the project. It detects the version of JUnit (it writes all its tests using JUnit) and which version of Mockito is installed, all of those things, because it’s going to write a full set of tests in JUnit 5, and it’s going to use Mockito to create mocks to isolate this code. And it’s getting started here: there are 10 callable methods in the class, and it’s going to go ahead and write tests for all of those. This is slightly slower than it would be normally because I’m running Zoom, but we should be done here pretty quickly.

Peter Yabsley:               Just while that’s running, I wanted to mention why we feel this is such a good example for this topic of developer experience: we’ve got research that says 25% of Java developer time is spent writing unit tests, and in reality I think that’s very conservative. We speak to organizations who say it’s more like 40% or 50% of developer time. And the same research found that around 40% to 50% of developers would prefer to never write a unit test again.

Mathew Lodge:             That’s right.

Peter Yabsley:               So when we talk about this idea of automated unit test writing in connection with this topic of developer experience, that’s why it’s such a good example of why this kind of thing matters to developers. But I can see the tests are generated. So I’ll hand back over to you.

Mathew Lodge:             Yeah. Yes. All right. So let’s see what we did here. We spent a little bit of time getting ready to write the tests, and that’s quite a lot of overhead if you’re only writing 14 tests; but if you’re writing 2,000, 5,000, 10,000 tests, that overhead basically comes down to zero. You can see from the timings that we started creating tests at 35:03 and finished here at 35:54, so we were done basically a few seconds after that, with our 14 tests written. Once we get going, we write a test roughly every two and a half seconds. So we have 14 tests for 10 methods; we didn’t write a test for a trivial constructor, because it’s already covered by the other tests we write.

And then there’s another thing that’s not unit-testable, which is a Spring configuration. So that’s great. Let’s switch back into IntelliJ and see what we have. The tests show up in this part of the project, so I can just double-click and open up the code that has been written by Diffblue Cover. It’s written a test for every single method in there. I’m going to scroll down to find the tests for processing the find form - and there they are. Here’s the first test written for that process find form, and you can see the tests are organized into the structure of arrange, act, and assert. The arrange section gets everything ready to go for that particular test, and then the act and assert is where we run the method under test and check the results.

And so you can see in this arrange section that we’re setting up a mock. This is a good example of how the tool works: it knows that it has to mock the database. In a Spring application, if you want to run a unit test, you don’t want to have to run a database or a web server or any of those things, because you want the test to run quickly. So what we do here is take advantage of the mocking framework that’s built into Spring, and essentially we’re going to return an empty list of owners - there’s going to be nothing in there. That hits the first case, where we don’t find anything, and so we expect to go back to the find owners page. So we run this here, we mock a call into this controller, and we get redirected back to the find owners page.

So the second test that’s been written here is for the case where we have exactly one owner. We create an object for an owner - we’re going to return Jane Doe, who obviously lives at 42 Main Street in Oxford - and we create a list with exactly one owner in it. We return that to the function under test, and again we run the mock code here to mock the web request. And here we’re checking that we get redirected to the page for owner number one, for Jane Doe. So that’s the case where we get exactly one result. And then here’s the test for the third case, where we get more than one result: we create two Jane Does, we return that list of two owners, and then we check that we have the right number of items and that we’re sent back to the owners list to do the search again.

So you can see this is the kind of code you would expect. It’s designed to be idiomatic for Spring; this looks like a Spring test. The important point is that if you’re a developer and one of these tests fails, you want to be able to understand very quickly why it failed and what happened. And that’s exactly what we’re doing here with Diffblue Cover.
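The three tests Mathew walks through all follow the arrange-act-assert shape. A self-contained sketch of that shape is below; Cover’s real output uses JUnit 5 and Mockito, so to keep this runnable on its own, a hand-rolled stub stands in for the Mockito mock and a plain check helper stands in for JUnit assertions. All names are illustrative.

```java
import java.util.List;

// Arrange-act-assert sketch of the three tests discussed above.
// Stubs and check() replace Mockito and JUnit; names are illustrative.
public class FindFormTests {

    interface OwnerRepository {
        List<Integer> findOwnerIdsByLastName(String lastName);
    }

    // Method under test: the same three branches as the demo controller.
    static String processFindForm(String lastName, OwnerRepository repo) {
        List<Integer> ids = repo.findOwnerIdsByLastName(lastName == null ? "" : lastName);
        if (ids.isEmpty()) return "owners/findOwners";
        if (ids.size() == 1) return "redirect:/owners/" + ids.get(0);
        return "owners/ownersList";
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    static void testNoMatch() {
        OwnerRepository repo = name -> List.of();     // arrange: stub returns no owners
        String view = processFindForm("Brown", repo); // act
        check(view.equals("owners/findOwners"), "should return to the form");
    }

    static void testOneMatch() {
        OwnerRepository repo = name -> List.of(1);    // arrange: exactly one owner, id 1
        String view = processFindForm("Doe", repo);   // act
        check(view.equals("redirect:/owners/1"), "should redirect to the owner");
    }

    static void testManyMatches() {
        OwnerRepository repo = name -> List.of(1, 2); // arrange: two matching owners
        String view = processFindForm("Doe", repo);   // act
        check(view.equals("owners/ownersList"), "should show the owners list");
    }

    public static void main(String[] args) {
        testNoMatch();
        testOneMatch();
        testManyMatches();
        System.out.println("all three branch tests pass");
    }
}
```

One test per branch is exactly the structure Cover generates, just expressed here without the framework dependencies.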

The other part of Diffblue Cover that I’m going to show very quickly: I’m going to switch back to my browser and go to Diffblue Cover Reports. The example here is not for PetClinic; it’s for a different application, Mojang/brigadier. Those of you who are familiar with Minecraft may have heard of brigadier: it’s part of the infrastructure for Minecraft, for dispatching commands, and it’s an open-source project. The reason I picked it is that it doesn’t have very many unit tests. What you can see is that the overall coverage of the unit testing for Mojang/brigadier is the sum of these two figures, 22.2% plus 3.1%, so just over 25%. And when we run Diffblue Cover against this, the coverage goes up to 71%. And you can see the breakdown here in the coverage between the manual tests and ours.

Diffblue and humans covered the same 22%, so that’s effort that could have been saved with Diffblue Cover, and Diffblue covered another 50% on top of what was already there. Then the manual tests reached some cases that Diffblue Cover did not. So it’s a good example of how a combination of manual tests plus what Diffblue Cover does can get you the best result. Overall you can see some really great coverage statistics here, and for managers of projects this is pretty interesting; they can see what the breakdown looks like. There’s a whole other section where you can drill into this, and crucially, you can also see on the right-hand side that some of this code is not testable. That’s very common; we see it a lot.

And so now our product can also refactor your code to make it more testable. In a lot of cases we know, from our analysis, exactly what to do to make your code unit-testable, and if you would like, we will just fix it for you. So we’ll give you a refactoring; you recompile, rerun Diffblue Cover, and you’ll get more tests. So that’s it for the demonstration.

Peter Yabsley:               Thanks very much, Mathew. So just to summarize what we’ve talked about today. We’ve talked about how developer experience has mattered for a long time, but particularly matters today in businesses for many different reasons, not least the amount of work to be done. And companies are really responding to this in terms of how they change their priorities, the makeup of their teams, the tools that they choose, because they see the benefits being real. They see developer experience as one of the things that can help them to be a more effective and competitive business, or a more effective and competitive IT organization.

And I think it became clear in that original discussion at the start of the session that automation technology has a really key role to play in this. It’s not the be-all and end-all, but it can certainly be a critical enabler, and it can open up new opportunities for different ways of working and better developer experience that might not have been there before. And Mathew’s just taken us through a great example: unit testing is a really clear case of this. It can be a huge amount of work, it’s not necessarily easy, and it’s not something that anybody really wants to do, but it’s a really important task. Being able to automate it, make it more standard and repeatable, and take it away from developers is going to help their day-to-day experience as they write software.

So that’s the end of the main part of the session, but I do just want to look at a couple of the questions that have come in, because there are some quite good ones. Rachel, I’d like to start at the end, actually, with this one - maybe a bit backwards, but we finished the first conversation talking about outcomes, and one of our audience members asked whether you can really measure the impact of good developer experience, whether that’s something we can really know.

Rachel Stephens:          Yes and no. So I think there is a degree of stoichiometry involved sometimes, this ratio of this, and then you can kind of worm your way there. It’s not quite as clear sometimes for those top line metrics as something like the DORA metrics that have… Okay, we can kind of think about really solid metrics that we can measure from an engineering perspective, and we can get a sense of how our teams are performing. Sometimes getting to those causal relationships between developer experience and revenue or something like that, that can be a little bit less clear. But I think one of the things that we have definitely seen is that there are strong correlations between good engineering teams and those business outcomes.

Peter Yabsley:               Thank you. Great. We’ve got another one. I think you mentioned a couple of examples earlier, but one of our audience members is asking whether you have examples of companies who do this really well - I don’t know if Spotify is one, because of their structure, or just an example of the tooling. Is there anyone else that springs to mind?

Rachel Stephens:          Oh, so the companies that do developer experience well in particular, or is it…

Peter Yabsley:               Yeah.

Rachel Stephens:          Okay. Yeah. I mean, I feel like it’s cliche - you always have to answer Stripe and Twilio when that comes up, the canonical big developer experience companies. Heroku is another one. One that has caught our eye recently at RedMonk is Tailscale; they’re trying to make the networking side of things easier for developers, so that’s definitely fun to watch. I also talked about docs, and Stripe and Twilio are the canonical docs companies, but some companies whose docs I really love and have been pointing people to are Feast and Iterative AI. So if you’re somebody who goes and compares docs for fun like me, those are some fun ones to check out.

Mathew Lodge:             I’ve got one: if you love docs, HashiCorp. HashiCorp is really awesome for software developers, and they do a terrific job at developer experience.

Rachel Stephens:          Yes.

Peter Yabsley:               Fantastic. Thank you. Well, actually, we’ve got a couple of specific things about Diffblue in particular, Mathew. So maybe I can just throw these at you before I go back to Rachel with one-

Rachel Stephens:          Could I ask one that came up during the demo earlier? Is all of that running locally when you’re generating those tests, or is it running remotely?

Mathew Lodge:             Yeah, that’s all running locally on my laptop. I’m running on a Mac, and the product runs on Linux, Windows or Mac.

Rachel Stephens:          Okay. Got you. Thank you.

Peter Yabsley:               Thank you, Rachel, that actually connects nicely to this one I was going to put to Mathew, because it’s one that’s coming up more and more frequently. I think you covered it in your presentation, but just to be clear, the question was asking: does this work the same way as products like GitHub Copilot? Maybe you can give a little bit of color on that, Mathew, because obviously there’s some difference there.

Mathew Lodge:             The short answer is that it doesn’t work the same way as GitHub Copilot. Copilot is an auto-completion tool - really, Tabnine and a couple of other companies pioneered auto-completion tools. What they try to do is predict the right completion: you start typing code, or you start typing a comment, and they try to predict what the completion of that is. That’s how Copilot works. It’s a very different approach; it doesn’t use reinforcement learning. It’s using a large-scale language model, trained on lots of examples of programs, and it synthesizes what it thinks the right completion is based on the input you give it. So the more input you give it, the better chance it has. But Copilot is a productivity tool.

So it’s similar in that sense - it improves developer productivity - but it is not trying to write a full set of complete code. That’s the other big difference. It’s guessing what your completion is, and then it relies on the developer to look at the completion and decide whether it’s correct. The accuracy varies depending on the program, but as the name suggests, it’s designed to work with a developer, with human review. The big difference with Diffblue Cover is that it’s 100% autonomous: the developer does not need to do anything, and the tests are guaranteed to be correct because we run all the tests to verify them at the end. If you were eagle-eyed during that demonstration, you might have seen that happen right at the end.

Peter Yabsley:               Great.

Rachel Stephens:          Can I ask one follow-up there?

Peter Yabsley:               Sure.

Rachel Stephens:          So there’s no training at all of the ML model for people who are using this?

Mathew Lodge:             No, that’s right. The Copilot model is basically a transformer, a traditional sort of predictive model where it’s trying to guess what it is that you want. In our case, we make a prediction about each test, but we try lots and lots of different tests. So we make lots of predictions in a short space of time, try them all, and pick the best one.

Peter Yabsley:               Fantastic. Thank you. We’ve got one that’s so easy even I can answer it: one of the questions asks whether Diffblue only works with Java, and the answer is yes, at the moment - Cover is our first product. But as Mathew said at the beginning, we have our eyes on the wider world, so hopefully in future we’ll be able to expand that out as well. Okay, we’ve just got a minute or two left. Rachel, we did have a question asking how we could get in touch with you and with RedMonk after the webinar.

Rachel Stephens:          Well that’s exciting.

Peter Yabsley:               Would your website I guess be the best place for them to do that?

Rachel Stephens:          Yeah. If you want to see more of our research, look up RedMonk: R-E-D-M-O-N-K. We have one of our key pieces of research coming out soon, where we look at how languages are used in open datasets - GitHub and Stack Overflow. So if you want to see more about Java and all of the other top-tier languages, that’ll be coming out at the end of the month, which is an exciting one. And you can always find me on Twitter: rstephensme, R-S-T-E-P-H-E-N-S-M-E.

Peter Yabsley:               Great. Thank you. There’s just one final one I wanted to cover - I’ll paraphrase, but basically the question is asking: is it just a matter of adopting some new tooling if a team wants to focus on developer experience? Is that the typical starting point, or are there other steps you see teams taking if they want to look at this area more closely?

Rachel Stephens:          Was that to me or to Mathew?

Peter Yabsley:               Sorry, to you. Yeah.

Rachel Stephens:          Okay. A starting point for new tooling. New tools can definitely be part of it. I think this goes back to the conversation that Mathew and I had earlier around culture and tools needing to go together. Tools are important, but tools are not a silver bullet, so make sure you have that behavior change going hand in hand. But when we talk about starting points for teams, a lot of the ones we tend to recommend, if you’re on that digital transformation journey, center on your CI system. If you’re trying to get to a place where you can have a large impact on the entire way you’re delivering software, then generally speaking we recommend CI/CD as the source of most good things in software delivery.

Peter Yabsley:               And Mathew, maybe we can round off with the obvious question about CI: Cover fits into all kinds of CI pipelines, and this is something we see with our clients in how they use it, correct?

Mathew Lodge:             Yeah, that’s right. The command-line tool is designed to be integrated into the CI/CD system, so there’s a whole bunch of options in there to make that integration easier and faster. I just showed the basic mode - just write me all the tests - but there’s also a mode for incremental test writing, where we give it the diff. We can figure out which tests need to be rewritten based on the changes that have been made, because we know which tests reach that code, and so we can make incremental changes. The tool is designed to work with whatever CI system you have.

Peter Yabsley:               Cool. Great. Well, look, I think it’s been a fantastic discussion. I’d like to thank you both for your contributions, and thank you to all of the attendees who sent in questions and joined us on today’s webinar. If anybody would like to learn more about Diffblue, you can do that on our website, where you’ll also find a variety of ways to contact us, and you can try Diffblue Cover for yourself as a free trial if you’d like to see what it can do. As I said at the start, this is a recorded session; we’ll be sending a link to everyone who signed up, so look out for that if you’d like to watch it again. We hope to see you at future Diffblue webinars. It just remains for me to say thank you to Mathew and Rachel for joining us, and to everybody who’s on the webinar here today. Thanks very much.

Mathew Lodge:             Thanks very much.

Rachel Stephens:          Thank you. This was great.