210: TDD - Refactor while green

Brian:

Test-driven development. Red, green, refactor. Do we have to do the refactor part? Does the refactor at the end include tests? Or can I refactor the tests at any time?

Brian:

And why is the refactor at the end? Well, this episode is to talk about this, with a little story. There's a debugging test failures chapter in The Complete pytest Course. That's chapter 13, and I just recorded and released it recently. When I was preparing to record it, I decided to add a TDD example.

Brian:

In the past, when teaching debugging with pytest, I'd show an example of a new feature with code changes and new tests already in place. But this time, I made the changes while using a TDD workflow and incorporated it into the course chapter, and it was a lot of fun. What's the new feature? Well, I added a new feature to cards. Cards is a small command line task tracking application.

Brian:

And I use it more for a teaching aid than a utility, but I do use it myself sometimes. The cards application already has a list command to list the current tasks, and it has options for filtering those tasks built into the list command, including listing completed tasks. I use this a lot, and it's a bit verbose to type out. So I would like to be able to just pass in a done command. Just say cards done and have the same effect.
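
Roughly what that looks like at the command line. The exact flag the list command uses for filtering finished cards is an assumption here, not quoted from the episode:

```
# today: filter the list command to show finished cards (flag name assumed)
cards list --state done

# desired: a dedicated command with the same effect
cards done
```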

Brian:

So this done is a new command, and it needs to have the filter and all the code for it built right in. It's not a complicated feature, so this is a good one for a demonstration. So what does it look like to add this functionality in a TDD workflow? Well, first, the test. Right?

Brian:

Test first. I knew I wanted the feature to be available both through the command line interface and through the API. It easily could have been implemented in the CLI only, but I decided to implement done in the API with a new method. And where it's going to be implemented matters before testing, because I wanted to know at what level to write the tests: whether I'd need tests just for the CLI or just for the API. Well, probably not just the API, because there are going to be changes in both the API and the CLI.

Brian:

I've also got the API and command line interface tests separated into their own directories, so it's easy to keep those two apart. So then I move on to adding those two tests. I already had a test for list with a filter in both places, so it was a fairly easy exercise at first to just copy, paste, and modify the existing tests. In this case, that seemed the most appropriate and expedient way to get the tests written quickly.

Brian:

But as with a lot of copy-paste-modify exercises, I ended up with code duplication. Actually, the command line interface test wasn't that bad, and I couldn't really figure out how to avoid duplication there, so it stayed mostly a copy-paste-modify, and mostly it was right. But the API tests were a different story. The API tests had already been split into different test files for each command, and when I set up a new file for the done command, I grabbed a couple of fixtures from the list test file.

Brian:

And I knew I probably didn't want that duplication later. Actually, it takes mental effort for me to not clean things up as I go. There are code smells and there are code smells, and this stunk: two identical fixtures in two different test files. Terrible.

Brian:

Why not just move those to a conftest.py file now and be done with it? Well, because I don't want to touch any file I don't have to at this stage. And for files I need to modify, I want it to just be adding functionality, not moving, changing, or removing existing functionality. Also, I don't want existing tests to fail because I did something dumb like deleted an import that I shouldn't have or something like that. The only failures at this point I'm okay with are in the new tests.
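
For a picture of that smell, here's a minimal sketch of what two identical fixtures in two test files can look like. The fixture name, its body, and the import path are made up for illustration and aren't from the actual cards test suite:

```python
# tests/api/test_list.py (existing file) -- illustrative only
import pytest
from cards import Card  # assumed import path

@pytest.fixture()
def cards_db_with_done_card(cards_db):
    # hypothetical setup: one card added and marked done
    index = cards_db.add_card(Card(summary="already finished"))
    cards_db.finish(index)
    return cards_db


# tests/api/test_done.py (new file) -- same fixture copied verbatim for now;
# the plan is to move it into tests/api/conftest.py during the refactor step
@pytest.fixture()
def cards_db_with_done_card(cards_db):
    index = cards_db.add_card(Card(summary="already finished"))
    cards_db.finish(index)
    return cards_db
```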

Brian:

And they'd better fail, because I haven't implemented anything yet. But I don't want to forget those obvious refactorings before I move on. So how do I keep track of them? I'd like to say that I have a consistent way of keeping track, like adding a TODO comment or writing it on a notepad or a whiteboard. But, honestly, I'm all over the map with this.

Brian:

I do all three. One of the cool parts of test-first is that while writing tests, you have to decide what the behavior is going to look like and what the interface is going to look like. I'd already decided that done was my new command for the command line interface. But when I was writing the API tests, in the middle of that made-up scenario, it's a decent place and time to decide what the method name is, what parameters it needs, and what the return value is. The downside of this, of course, is that text editors can't code complete methods that don't exist yet.
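
A minimal sketch of the kind of API test that forces those decisions. The method name list_done_cards, its lack of parameters, and the Card fields used below are assumptions for illustration, not necessarily what the course uses:

```python
# tests/api/test_done.py -- written before the API method exists
from cards import Card  # assumed import path

def test_list_done_cards(cards_db):
    # setup: two cards, one of them finished
    cards_db.add_card(Card(summary="still in progress"))
    done_index = cards_db.add_card(Card(summary="all wrapped up"))
    cards_db.finish(done_index)

    # writing this call is where the name, the parameters,
    # and the return value of the new method get decided
    done_cards = cards_db.list_done_cards()

    assert len(done_cards) == 1
    assert done_cards[0].summary == "all wrapped up"
```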

Brian:

But that's a small downside. So I implemented the new tests, and we have a couple of new failing tests, right? They should fail, and all the old tests pass. Obvious. Right?

Brian:

Well, not really obvious. If I had succumbed to the temptation of moving fixtures around or changing existing tests to minimize duplication, there's very much a chance that the new tests would fail and some existing tests would fail too. And I don't want that. So I resisted the temptation, and we're all good. Old tests pass, new tests fail.

Brian:

Now on to implementing the code changes. Again, first draft first. I'm going to try to mostly add functionality and leave any modifications, deletions, code movement, or any other refactoring until the refactoring stage later. I know I'm going to refactor, so I don't need to change it now. The API modifications were pretty easy.

Brian:

There was already a list cards method with filters, and I just added a list done cards method that used the old method and filled in the filter. Easy peasy. No duplication. Awesome. The command line interface modifications were nastier.
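
The shape of that change, as a standalone sketch rather than the actual cards source: the new method just delegates to the existing filtered list.

```python
# Standalone illustration of the pattern (not the real cards implementation)
class CardsDB:
    def __init__(self):
        self._cards = []  # (summary, state) pairs, just for this sketch

    def add_card(self, summary, state="todo"):
        self._cards.append((summary, state))

    def list_cards(self, state=None):
        # existing method: list everything, or filter by state
        if state is None:
            return list(self._cards)
        return [card for card in self._cards if card[1] == state]

    def list_done_cards(self):
        # new method: reuse the existing filter instead of duplicating it
        return self.list_cards(state="done")
```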

Brian:

The simplest thing to do, the most obvious, was to copy-paste-modify the list command function into a done command function and change the API call to the new API method. This done command wasn't a huge function, 12 lines of code, but most of it was just setting up the table to get printed, and that was duplicated with the list command. There was literally just one line different between these two functions: the call to the API. That's gross. But I did not want to pull that duplication out into a new function right away.
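
In sketch form, with plain functions standing in for the real CLI commands and a made-up db object with id, state, and summary fields on each card, the first-draft duplication looks roughly like this; only the API call differs:

```python
# Illustrative only -- the real cards CLI builds a nicer table
def list_command(db):
    rows = ["ID  state  summary"]
    for card in db.list_cards():              # existing API call
        rows.append(f"{card.id}  {card.state}  {card.summary}")
    print("\n".join(rows))

def done_command(db):
    rows = ["ID  state  summary"]
    for card in db.list_done_cards():         # the one line that changed
        rows.append(f"{card.id}  {card.state}  {card.summary}")
    print("\n".join(rows))
```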

Brian:

Why not? Why didn't I want to remove the duplication? Because I didn't want to touch the existing functionality if I didn't have to. One way to leave the existing functionality as is but still keep the new functionality clean would be to create a helper function for the duplicated code and have only the new command call the helper. For me, the problem with doing that now is that I might forget to go back and use the helper function in the other places where it could be used.

Brian:

So for right now, leaving the duplication in place is easier, because I will make sure I don't merge this as is. I will remember to refactor. Okay. So I've got this implemented, I run the tests, and it all works, right? Well, no.

Brian:

The tests fail. Both of them. That's kind of the point of this course section, though: to try out debugging with pytest and pdb. pdb is the command line Python debugger, and it's pretty nice, actually. In this section we also talk about using pdb with pytest, and using pdb with pytest through tox.

Brian:

It's actually kind of fun, and it makes me feel more like a traditional hacker instead of always leaning on IDEs. Also, if you have a test failing in just one tox environment, it's good to know how to debug that. It's not that hard. Anyway, one of the bugs was on purpose. I intentionally mucked up the API implementation so that we could play with the debugging tools.
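
For reference, the usual ways to get from pytest into pdb, plus passing flags through to a single tox environment. The tox pass-through assumes the project's test command includes {posargs}, and the environment name is just an example:

```
# drop into pdb automatically when a test fails
pytest --pdb

# stop at the first failure and debug just that one
pytest -x --pdb

# or put breakpoint() in the test or source code and run pytest normally

# run one tox environment and pass pytest flags through to it
# (assumes the tox test command is something like "pytest {posargs}")
tox -e py311 -- -x --pdb
```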

Brian:

The other bug was an honest mistake I made in the command line interface test while recording the chapter. It was a helpful demonstration of how to debug, so I left it in. And it's actually a different bug than the one I talk about in the book, because while writing the book, I made a completely different honest mistake. So far, I've just added code. I've added new tests.

Brian:

I've added new implementation code. I haven't changed any existing code functions or any existing test functions, but the new tests fail. So we have to debug. The good thing is the bugs are probably in the newly added code or tests. Since I've held off on refactoring, I've limited the scope of where the bugs probably are to just that new stuff.

Brian:

Of course, it's possible that because the new code is calling existing code, it may be calling it in a way that hasn't been tested yet, and it might make the old code blow up. But I always look at the new code first, because that's usually where things are going wrong. So I fixed the API; then the CLI test still fails, so I debug and fix that. And remember, the tests are new too, so the bug might be there.

Brian:

Once the tests are green, then what? We refactor now. Right? Yeah. Kinda.

Brian:

But before we refactor, remember to commit. Before refactoring, commit your code. You already have it working, and it's a good place to have a save point. The worst case is that you live with a subpar implementation if you can't get the refactoring right. So commit your code before refactoring.
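
In git terms, a minimal sketch of that save point and the escape hatch it buys you:

```
# save point: new feature working, all tests green
git add -A
git commit -m "add done command; tests green, refactor still pending"

# ...refactor...

# if the refactoring goes into the weeds before it's committed,
# throw the changes away and you're back to green
git restore .

# or, if a committed refactor turns out to be a bad idea, undo it
git revert HEAD
```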

Brian:

Now for the refactoring. I had two things I wanted to do, right? I wanted to move some duplicated fixtures into a shared conftest.py file. That's in the tests.

Brian:

And I also wanted to make a helper function in the command line interface code for the table layout and use it for both the list and done commands. But I want to do those two refactorings separately. I want to run the test suite and make sure it passes before I start refactoring, refactor one bit, make sure the tests are green, and then move on to the next part. Commit when green, refactor when green, and keep refactoring chunks separate. Tests and code.
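
Sketches of the two refactorings, done one at a time with a green test run in between; file names and the helper name are illustrative, and the command functions continue the earlier plain-function sketch:

```python
# Refactor 1 (tests): the duplicated fixture moves to tests/api/conftest.py,
# and the copies in test_list.py and test_done.py are deleted.

# Refactor 2 (code): the shared table-building moves into one helper,
# and both commands call it.
def print_card_table(cards):
    rows = ["ID  state  summary"]
    for card in cards:
        rows.append(f"{card.id}  {card.state}  {card.summary}")
    print("\n".join(rows))

def list_command(db):
    print_card_table(db.list_cards())

def done_command(db):
    print_card_table(db.list_done_cards())
```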

Brian:

Which do I do first? Do I refactor the tests or the code first? It doesn't really matter; we could do them in either order. So I'm refactoring at the end.

Brian:

But I'm not just refactoring at the end because test driven development says red, green, refactor and I'm just following along. I'm doing this because it's less work. Debugging a smaller set of changes is easier. Refactoring when the tests are green is easier. Also, refactoring after the working code is committed allows me to revert if I need to: if my refactor goes into the weeds, or the tests don't pass and I can't figure out why, or the code is just way less readable than before the refactor.

Brian:

I can just revert. Also, fixing code with a revert is sweet. Having all your tests mucked up, reverting, and now your tests are green: that feels good. It may feel a little bad that the refactoring change I thought was a good idea turned out not to be, but I feel really good when that revert to the last commit goes smoothly and the tests are all good. Does what I just did follow a normal TDD flow?

Brian:

Well, I don't know if there are normal TDD flows, but it does stray from some other TDD variants. Let's go through some of the deviations. First, I wrote two tests first, not one. Why? Well, I usually write the tests I know I need all at once.

Brian:

In this example, it was two. Sometimes it's one. Sometimes many more. I like to focus and be in the flow, and writing a handful of tests at a time often just seems like the best use of my time and attention.

Brian:

There's really no downside. I don't go too far into speculation. I do honor YAGNI, of course. YAGNI is "you ain't gonna need it." But sometimes it just feels more productive to write a batch of tests first before switching to implementation.

Brian:

And that workflow works for me. Second, I did the implementation first draft of both the CLI and the API all at once, without running tests or debugging in the middle. Again, it's a flow thing. I just felt like doing both at the same time. But normally, I would probably do those separately.

Brian:

Third, I know I'm usually going to do a first draft and then a refactor for both tests and source code, and I try to fix one test failure at a time. So far, same as most versions of TDD, right? Actually, I think this is how tons of people practice TDD anyway. However, some teachings of TDD show a whole bunch of micro changes to tests and code, like a test without a function, a test with an empty function, a test with a function that returns a constant, stuff like that.

Brian:

I don't have the patience for that. My first draft usually isn't pretty, but I do actually try to do the whole thing I'm trying to do in that first draft. If it's a large feature and it makes sense to tackle a subset of test cases at a time, fine. Or maybe I don't feel good about the design yet. Sure.

Brian:

I might break it up and use a few TDD cycles to implement it instead of just one. But often, it's just a first draft and a refactor, and I'm good. Fourthly, is fourthly a word? Fourthly, I usually only test at the API and UI level. I do sometimes write unit tests at a function level for gnarly functions.

Brian:

But I try to stick to the API as much as I can, because anything deeper is a test of implementation. Do you remember what the point was? Well, this whole story was really to make a few points. One: write your first draft of new tests and code in a way that leaves the rest of the tests and code intact as much as possible. That way, the bugs are probably in a smaller search space.

Brian:

Two: refactoring touches existing code, so make sure your tests are green before you refactor and still green after. Three: commit often when tests are green. And four: reverting a refactor isn't a failure. It's a victory that you remembered to commit at the right time. That's all for now.

Brian:

Thank you for listening.

Creators and Guests

Brian Okken
Host. Software Engineer, also on Python Bytes and Python People podcasts.