206: TDD in Context

Brian:

Welcome to Test and Code. I wanna start a series of episodes exploring test-driven development. I think things are different now. Yes, that's obvious and will always be true, but I think things are different enough now from when TDD and the other agile-related lightweight processes were developed.

Brian:

And maybe we need to reexamine everything: the baseline assumptions, the deliverables we have to deliver now, the processes we use, the tools we have. Maybe we don't, but I think it's worth trying. In any case, this is the start of a series where I'm gonna explore the concepts and practices of TDD, lean, pragmatic programming, agile, and even a bit of waterfall. But I gotta start somewhere, so let's start with where I started.

Brian:

I first heard of test-driven development under its predecessor name, test-first programming. That was when I was reading about extreme programming. Extreme programming itself I'm actually not gonna cover too much, because as a whole it seems wacky to me. But the test-first thing made sense. So here's the idea — well, actually, here's where I was at the time.

Brian:

I was working on a team, and this was probably around 2000. We were starting to do some automated testing, but just playing around with it a little bit. We had this big system I was working on, some legacy code, and I was working on a little piece in the middle. This was an embedded system — test equipment. Anyway, there was a QA team, a different team.

Brian:

They worked nearby, in sort of the same building, and they were doing automated testing using Python, which was cool. We were able to run some of their tests, but we didn't have access to their entire system test rack. We did have access to some equipment, though. Anyway, we would do the whole thing of designing what features we needed, working on them, trying to get them done, running some manual testing, trying to figure things out. And the QA team, at the same time, was developing tests around the features we were gonna deliver.

Brian:

This was all great, but then we'd get to the part where we thought we were kinda done, and QA was still finding stuff. So we were definitely in that situation you've heard about — we supposedly weren't doing waterfall, but it sure seemed like it, because we had this cycle of at least several weeks of trying to fix bugs we didn't know were there, because the tests weren't there to begin with. So when I came across test-first programming, I thought, this is kind of a neat idea. Wouldn't it be easier if we had the tests all done ahead of time? That was my first idea.

Brian:

That's not really what test-first programming is. But my idea was: if all these QA tests had been done at the beginning, then we could just develop until they all pass, and we'd be done. I don't know if it'd be faster, but at least there wouldn't be surprises. So I wanna read you a little bit from extremeprogramming.org talking about test-first, which is where I came across test-first programming.

Brian:

It says here: when you create your tests first, before the code, you will find it much easier and faster to create your code. Sounds awesome. The combined time it takes to create tests and create some code to make the tests pass is about the same as coding it up straight away. But if you already have unit tests, you don't need to create them after the code, saving you some time now and a lot later. So their theory is it's gonna take you about the same time to develop the code with tests.

Brian:

And at the end, you'll have both code and tests. But they're unit tests. Creating a unit test helps a developer really consider what needs to be done. Now I wanna talk about what a unit test really is in the extreme programming sense. In extreme programming, they had two kinds of tests.

Brian:

They had unit tests and acceptance tests. An acceptance test was a test that reflected the user stories — a user story around a new behavior, say. It could be an automated test, or it could be a manual procedure to check something out. Sometimes these were like BDD-style sets of tests, and sometimes they were like the workflow tests that we often hate now.

Brian:

But they were there — acceptance tests — and they haven't entirely gone away. The big difference between those and unit tests is that acceptance tests were often written by QA people, not the developers, and they're often these workflow types. They're almost always from the outside — they're supposed to exercise the system from the outside, either through an API or, more likely, through the user interface, a GUI or something. And they're black box.

Brian:

They don't know what the internal state of the system is, that sort of thing. That's the idea. A unit test wasn't really a unit like we talk about them now, these fine-grained things. They were just developer tests, and they were still trying to be around the behaviors. Here's the idea.

Brian:

This is still talking about test-first programming from XP. There's a rhythm to developing software test-first: you create one test to define some small aspect of the problem at hand, then you create the simplest code that will make that test pass. Then you create a second test, then a third, and so forth. You continue until there's nothing left to test.

Brian:

That doesn't mention refactoring right there, but refactoring is part of XP also. Anyway, that does sound kinda like test-driven development. So let's take a look at test-driven development. I'm reading this straight from the Wikipedia article. The test-driven development cycle — I'm just gonna read the titles — is: first, add a test. Then run all the tests; the new test should fail for expected reasons.

Brian:

Then write the simplest code that passes the new test; all the tests should pass now. Then refactor as needed, using tests after each refactor to ensure the functionality is preserved. And then repeat. So this all sounds great, but there are some weird quirks in the details that are kind of hangovers from when this was developed.

Brian:

The book came out — I don't know exactly when. Extreme programming was late nineties, and test-first programming kind of morphed into test-driven development right around 2000-ish, I think. So what's the problem? This all sounds great, but there's some legacy stuff sitting in here. So let's go back to "add a test."

Brian:

Why does it say add a test — one test? Why one? The thing is, there were these unit test frameworks people were using, and one of the practices was to do one test at a time, because it's nice to have everything green, everything passing. If you add one failing test, there's just the one failure, and you're trying to get it to turn green.

Brian:

Awesome. But why just one? It's because of this: you don't wanna check in a set of tests with failures in them, because the next person coming along wants to start with all green tests. This is completely reasonable-ish. However, we have more tools now.

Brian:

Like, within Python and pytest, you could mark a test as skipped because the feature is not done. So you could check in tests for not-yet-built behavior with a skip, as long as the rest of the tests were fine. You could also develop multiple tests, because of workflow things.
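In pytest, that looks roughly like this — just a minimal sketch, where new_feature is a placeholder name, not anything from a real project:

    import pytest

    @pytest.mark.skip(reason="feature not implemented yet")
    def test_new_feature():
        # checked in ahead of the implementation; the suite stays green
        assert new_feature() == "expected result"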

Brian:

I'm the kind of person that gets into a testing state of mind, and sometimes I just want to stay there and code up a handful of tests. I don't want to go too far, but if there's a bunch of things that are kind of similar, why not write them up? That's my philosophy, and that's kind of how I work. But still, don't go too far, because you might go off into the weeds. Anyway, we'd also do xfails.

Brian:

So there might be a failing test, and you might put an xfail marker on it. That kinda has to have an agreement with the rest of the team.
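Again as a rough sketch, with a made-up broken_behavior function:

    import pytest

    @pytest.mark.xfail(reason="known failure, agreed on with the team")
    def test_broken_behavior():
        # this still runs, but its failure is reported as "expected to fail"
        # instead of turning the suite red
        assert broken_behavior() == "expected result"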

Brian:

Anyway, that's where "add a test" — one and only one test — came from: trying to make the test suite always stay green. So now, "run all the tests." This is kind of where we get to some people wanting really fast test suites, because in this cycle you're running all the tests a lot — all the time during the refactor stage — and we really don't want developers sitting around waiting for tests to finish. Things have changed a little bit, though. We now have CI systems, and we can have longer-running tests and more complete tests — even whole-system tests — running there. So there's a change we could make to this: we should at least run the tests for the subsystem we're working on, but we don't have to run all of the system tests all the time. We can zero in, narrow in, and then have a second or third stage of testing where we run more stuff.
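One way to stage that with pytest — the marker name and paths here are just assumptions, a sketch of the idea rather than a prescription:

    # pytest.ini -- a hypothetical staged setup
    [pytest]
    markers =
        slow: longer-running system tests

    # in the TDD loop, run just the fast tests for your subsystem:
    #   pytest tests/my_subsystem/ -m "not slow"
    # and let CI run the whole suite, slow tests included:
    #   pytest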

Brian:

So that has changed; we'll revisit it later. So: add a test, run all the tests, and the new test should fail for expected reasons. If you haven't implemented anything, or you know you've just stuck in a "return 5" and the answer is wrong, you should see that expected failure — not some unrelated exception or assert. Although maybe you do expect the assert, or expect the exception.
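A sketch of what failing for the expected reason looks like — add here is just a stand-in for whatever you're building:

    import pytest

    def add(a, b):
        return 5  # stubbed on purpose; obviously wrong

    def test_add():
        # fails with an assertion error -- the expected reason --
        # not an ImportError or a crash from somewhere unrelated
        assert add(2, 2) == 4

    def test_add_rejects_strings():
        # and sometimes the exception is the expectation; with the stub
        # above this fails with DID NOT RAISE, also a failure you can predict
        with pytest.raises(TypeError):
            add("2", 2)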

Brian:

Now: write the simplest code that passes the new test. This is the weird part, I think — the one I'll definitely spend a lot of time talking about — because there have been arguments and discussions ever since this came up about what "the simplest code to pass the test" means. There are people who teach that you can have hard-coded answers come back just to match the test. I think that's goofy. I usually think of this as: write the most obvious thing you can think of to implement this — the brute-force method, the answer that's obvious to you, even though it's not elegant.

Brian:

Yes, not elegant is fine. But I don't think hard-coded is acceptable. I don't think that's what the intent of this should be. That seems weird to me.

Brian:

But, anyway, that's it: write the simplest code to pass the test. It isn't necessarily the most elegant or the smallest. It's, I think, just the quickest thing to get from point A to point B. First get it done, get it right — then you can clean it up.
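To make that concrete, here's a sketch of the difference — total is an invented example, nobody's canonical exercise:

    # the new test
    def test_total():
        assert total([1, 2, 3]) == 6

    # the hard-coded answer some people teach (it passes, but it's goofy):
    # def total(items):
    #     return 6

    # the obvious brute-force version -- not elegant, but honest:
    def total(items):
        result = 0
        for item in items:
            result += item
        return result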

Brian:

I think this is part of the "first get it done, first get it right" idea. "All the tests should pass now." Yeah — except, do we really care if all the tests passed?

Brian:

It depends on your CI system and how you're doing things. If I'm working on a tiny subsystem, I'm probably not gonna run all the system tests at this point. And then: refactor as needed, using tests after each refactor to ensure that functionality is preserved. Yes. Again, though, I hope my architecture is such that if I'm mucking with some subsystem, I'm not gonna break the unit tests of a different subsystem.

Brian:

And then we also need to talk about what the size is. Note that there's nothing in here so far about mocking, or how small your unit is, or anything like that. That all comes later — and some of it, I think, is stuff a whole bunch of other people added on. But in this simple sense, I still think this is a cool idea, as long as we modify it for modern realities: that we have CI systems, and that we have subsystem isolation in tests.

Brian:

We can do things smarter. We don't have to be so rigorous as "I'm gonna test the entire system every time I change one line of code." That seems ridiculous to me. Anyway — refactor as needed.

Brian:

And then repeat, of course. Now, the way this is often taught, you do tiny little steps — just a few little changes to your code — then you hop over and make another test, and you go back and forth like that. The problem is that even though Kent Beck, who first talked about test-driven development, often talked about separating behavior and implementation, it isn't really encoded in here, and it really should be. Behavior and APIs are where the testing should be, and I believe that.

Brian:

I don't believe in testing at the level of "I wrote a function, so I'm gonna write a test around it." You don't necessarily have to do a lot of that — but you can. For instance, I often do it when I've got some little algorithmic piece that I'm scratching my head over, like, I don't know if this is right. For a little algorithmic thing like that, there's no state in the system that I care about. Well, I'll give you an example.

Brian:

I just recently had to write a function in Python — I'm pretty sure there's an elegant way to do this, but I don't know one right now. I have a list of elements, a sequence of elements, and I wanna make sure there aren't any repeats: equality checks between neighboring elements, and if there are any repeats, remove them. Duplicates elsewhere in the list are fine — just not repeated adjacent elements. Like, if I have 1, 2, 2, 3, I want to take out the second 2 and just have 1, 2, 3. But 1, 2, 3, 2, 3 is fine; those are duplicates that aren't next to each other.

Brian:

So I wanted some code to take out those repeats. I could describe it really easily, and I had a hacky way to code it, which I'll probably go back and clean up. So I did write tests around that.
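A rough sketch of that kind of function and its tests — this isn't the actual code, and collapse_repeats is a made-up name; itertools.groupby might even be the elegant way I was looking for:

    from itertools import groupby

    def collapse_repeats(items):
        # keep one element from each run of equal adjacent elements
        return [key for key, _group in groupby(items)]

    def test_adjacent_repeats_removed():
        assert collapse_repeats([1, 2, 2, 3]) == [1, 2, 3]

    def test_nonadjacent_duplicates_kept():
        assert collapse_repeats([1, 2, 3, 2, 3]) == [1, 2, 3, 2, 3]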

Brian:

It's a little tiny test around a little piece of the system, but I don't often do that. I try to do most of my testing at an API level. Anyway, let's move on. In the same Wikipedia article — I'll link to it — there's a bunch of stuff, some of it good, some of it bad in my opinion, but there are some benefits listed and some limitations. Now, the limitations are kinda interesting.

Brian:

There are already limitations listed there. It says: test-driven development does not perform sufficient testing in situations where full functional tests are required to determine success or failure, due to extensive use of unit tests. See, that's kind of an issue for me, because I think of the test suite as including what we would traditionally call acceptance tests. Actually, let's go back to where I am now. Back when I discovered all this stuff, I was on a team where we had a QA team.

Brian:

That's unusual now — at least it's been unusual in my career for the last ten years. I normally work on a team where, even if there's a large system split up with different development teams building different parts, the development team as a whole is still responsible for the automated tests around it. We don't have a separate QA team.

Brian:

We do sometimes have some embedded test engineers to help out with the automated test work, but the developers are still responsible for at least defining the test cases and possibly implementing a lot of them. So it makes the most sense to me that you can't ignore the functional testing of the system — what XP called acceptance tests. We need to include that as part of these tests. Now, there are extensions — we'll possibly talk about this in future episodes — like acceptance test driven development, which tacks this acceptance test thing on.

Brian:

XP sort of talks about this too: take the acceptance tests, automate those, and every time an acceptance test fails, start the TDD cycle — so you have acceptance tests and unit tests at different levels of the system. And yes, that's doable, I think, but it seems like there's possibly some duplication of tests. Anyway, I kinda got into some topics I was gonna talk about later. But this is a sort of semi-brief rundown of what test-driven development is, kind of, and how it fit into my reality as a software engineer.

Brian:

I still think test-driven development is a cool idea, but I really wanna change how we think about it — at least for teams that don't have a separate QA team, that have complex systems, that have legacy code they've gotta maintain. I think that's a lot of people, and at least it's me. I'd like to change how we think about test-driven development and shift some of these mindsets to make it more usable and less problematic. I was also gonna cover an article called something like "why some developers are not using test driven development" — maybe I'll cover that in the next episode. Like I said, this is gonna be a series of episodes exploring test-driven development, how it fits in with modern tools, and how things have changed a little bit.

Brian:

Leading up to, and including, a discussion around how to take out some of the waste of test-driven development — waste in the sense of lean methodologies — thinking about all the deliverables we need for a modern software project, how to produce them efficiently, and how to make testing and software development fun. I hope I can get there. So thank you for listening, and we'll keep going next episode.

Creators and Guests

Brian Okken
Host
Software Engineer, also on Python Bytes and Python People podcasts