214: Python Testing in VS Code

Brian:

And today, we're gonna talk about VS Code and pytest, and also unittest, of course. But all testing, basically. The VS Code interface for testing has changed recently, and that's kinda what we wanna talk about. Today, we've got Courtney Webster and Eleanor Boyd. Let's start with Courtney. Courtney, can you introduce yourself?

Courtney:

Yeah. Hi, everyone. Brian, thanks for having us. I'm really excited to be here and share all the work that we've been doing on the Python extension. I'm one of the product managers on the Python extension in VS Code.

Courtney:

I joined the team a little over a year ago, maybe a year and a half now, straight out of college, and have been working on the getting started experience in the extension as it pertains to the different features. And so I worked with Eleanor to get the rewrite rolled out to all of our users. And day to day, I work on things like customer development, research, data analysis, helping with controlled rollouts like this one, writing docs and other publications, and then working with the engineering team to do our planning and our roadmaps for the extension overall.

Brian:

Awesome. And, Eleanor, can you introduce yourself?

Eleanor:

Yeah. Definitely. I'm a software engineer on the Python for VS Code team. When I joined about a year and a half ago, I took over testing. So all things testing is my wheelhouse now.

Eleanor:

So I began with this testing rewrite, which I took on from some other amazing developers, who we'll mention when we talk about this journey. But, yeah, if you've ever had a testing question, you might have seen me on GitHub. I do all things testing for Python in VS Code.

Brian:

All things testing. That would have been a good name for this podcast, but, you know

Courtney:

Next time, you'll have us

Eleanor:

on again, and then, you know, we can do it all over again.

Brian:

Well, I already changed the name three times, and I think I hit my limit. So yeah. Okay. So VS Code had testing in it, or at least the Python extension did. When I started using the Python extension for VS Code, it had some testing in it, but the testing that's there now is completely different.

Brian:

I don't know how you wanna tackle this discussion, but, basically, how do we get from there to here?

Eleanor:

Definitely. I think we wanted to start first with some background on what our testing support is like in VS Code. So Courtney was gonna talk a little bit about what features we have pre and post rewrite, and how those are different.

Brian:

Okay.

Courtney:

Yeah. So when you think about the testing support, some background is that VS Code, the Python extension, and then the testing frameworks all have distinct roles in the story of testing that we're telling. The extension has always had pytest and unittest support to provide testing to our users. And the extension really acts as the middle layer between VS Code and the frameworks. So it does the translating, the in-between part.

Courtney:

And VS Code handles all of the UI, and then the frameworks are doing the heavy lifting, generating the data. The Python extension is just presenting the data in a way that's easy to read for the users. And so within the extension, you've always been able to run test discovery, and run and debug your tests. You can view output in the testing panel. There's a testing output channel that you can digest your tests with, and things like that.

Courtney:

And so with the rewrite, a lot of it was more like back end changes. We didn't wanna change too much of what the user experiences in terms of the UI. Most of the rewrite was in the back end, about how it's being done, how the data is being parsed, and things like that, which Eleanor can provide some more color to.

Eleanor:

Yeah, definitely. So when we talk about the user experience, for those of you listening that are users of the testing, you might not have noticed a lot of changes, because our first goal was to get post rewrite to feature parity with before the rewrite. Because, again, like Courtney said, it was all back end. So you're probably thinking, why did we make these changes if it's gonna look exactly the same to the user? And it's all about how well it works for users and also the possibilities moving forward for changes and updates.

Eleanor:

So previously, on testing, we like to say it was very brittle. How testing worked before the rewrite was we parsed all the output from pytest, from unittest, and that is how we then generated what you see in VS Code. So for example, you see that little green check mark next to a test when it passes in VS Code; we were generating that by reading the output right from standard out and standard error, and then finding that test and returning that it had passed. That's how you got that UI element. And parsing output can be very challenging, can be super finicky, and was not the best as we looked to extend to a lot of other things and increase functionality.

Eleanor:

So that was not what we wanted to do. So the team got together. I'll talk more specifically about pytest because that's obviously more of a focus on this podcast, and for you, Brian. But we wanted to make something that was gonna be better. So before I came, my wonderful team, including my manager, Brett Cannon, who you know, Brian, and who connected us.

Eleanor:

He reached out to the pytest team about the best way to create this support and integration, and figured out that a plugin was gonna be the best way. So pytest has an amazing plugin system that allows you to use pytest hooks to connect into the entire run flow, from, you know, call to finish of execution, and this was gonna be the best solution for our rewrite. So first we started on the TypeScript side, working on things that needed to happen in the extension to make this possible. Before me, it was Kim-Adeline Miguel who was working on it, and she did the start. And then there's our wonderful coworker, Anthony Kim, from when he was at his internship (he's now returned full time, which is great).

Eleanor:

He worked on unittest support. And then I came in and did pytest and wrapped it up. So it's definitely been a team effort, including a lot of people along the way. But, diving into a little bit of the technical details, because I'm sure you wanna know what's different, how we did the rewrite, and what this plugin is.

Brian:

From my perspective, it seems completely different. So,

Eleanor:

Oh, good.

Brian:

And basically, my workflow often is: create a virtual environment before I even open VS Code. I'm creating a virtual environment, installing everything I need, and then just hitting, like, code dot, and it opens everything up. That's my workflow. And then I go to the little icon for it, which looks like a test tube or a beaker or something like that.

Eleanor:

Yeah.

Brian:

And I open that, but then usually I have to make sure that I set the test runner to pytest. Normally it just asks you. It says you need to configure this. You hit the button, and it says, like, do you wanna use unittest or pytest? It's like pytest, of course.

Brian:

And then it asks you, like, from what directory. And that's really it. That's the obvious configuration you have to do. Later on, where I have to add command line flags, that's another something to learn. But, right.
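For reference, the choices Brian describes end up as Python extension settings in the workspace's settings.json. A minimal sketch of what that configuration typically looks like; the "tests" directory is just an assumed project layout:

```json
{
    // Enable pytest and disable unittest for this workspace
    "python.testing.pytestEnabled": true,
    "python.testing.unittestEnabled": false,
    // The directory (plus any other command line flags) passed to pytest
    "python.testing.pytestArgs": ["tests"]
}
```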

Brian:

Okay. So now I'm looking at my code, and if I just hit, like, run all the tests or anything, it just works now. And it pops up the results right away. One of the things that I noticed, that I love, that I'm so excited about with this, is that I can have broken code that's not being run by the tests, like other code.

Brian:

It used to be that, like, the tests wouldn't even load if there was any broken code in my directory. Now my broken stuff that's in my, you know, sandbox directory or something like that doesn't break the rest of my tests, which is awesome. So thank you.

Brian:

Okay.

Eleanor:

Yeah.

Brian:

And the other thing was the test results. Like, I got the green check mark, but I'm like, well, where's the output? I wanna see the output. And now it automatically pops up. So I like the experience now, and even if it really is the same as before, I don't see it as the same. It feels brand new to me.

Brian:

So I like it.

Eleanor:

Yeah. And I think the rewrite was getting it to the same, and now you're talking about those feature adds that we were able to do. Implementing and moving to a plugin allowed us to have error tolerant discovery, because now we're not waiting for the whole thing to finish and then parsing output that has errors in it.

Eleanor:

Instead, we're sending back payloads from pytest that are saying, okay, here are the tests that we were able to discover. And pytest is error tolerant as well; it is able to keep going if it finds, you know, a file that has an incorrect import in it. So we're just harnessing that functionality.

Brian:

That's the point. So does VS Code use the pytest test discovery then? And

Eleanor:

Yeah.

Brian:

Use the output for it? Okay.

Eleanor:

Yeah. So what happens is, you talked a little bit about those testing args. Users can put in their testing args, and we'll just take that. We actually just spin up a subprocess and run pytest. So we'll just run pytest with all the args that you submitted.

Eleanor:

And then we also just add in our plugin. What our plugin does is connect in via the hooks that we discussed, pytest hooks. It then creates these payloads and uses sockets, soon to be named pipes (we're making a change), to communicate the data back to the Python extension, which we can then display. So instead of waiting for the entire test run to finish, like, you know, 1,000 tests, instead of waiting for all of those to finish, we can send back payloads whenever we want. And this allows a lot more functionality, and allows for that error handling in a much more useful way, because we're creating our own communication instead of relying on standard out as our only means of transmitting information.
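To make the flow Eleanor describes concrete, here is a minimal sketch of the idea, not the actual vscode-python plugin: a host process launches pytest in a subprocess with the user's own args plus an extra plugin, and the plugin pushes JSON payloads back over a socket instead of the host scraping stdout. The module name, the TEST_PORT variable, and the payload shape are all made up for illustration; the hook itself (`pytest_collection_finish`) is a real pytest hook.

```python
# hypothetical_reporter.py -- an illustrative sketch only, NOT the real plugin.
# A host (editor, CI wrapper, ...) could launch pytest in a subprocess like:
#
#   TEST_PORT=45045 python -m pytest tests/ -p hypothetical_reporter
#
# and listen on localhost:45045 for JSON payloads instead of parsing stdout.
import json
import os
import socket


def _send(payload: dict) -> None:
    """Best-effort: push one JSON payload to the host over a TCP socket."""
    port = os.environ.get("TEST_PORT")  # TEST_PORT is a made-up variable name
    if not port:
        return  # no host listening; plain `pytest` runs keep working
    try:
        with socket.create_connection(("localhost", int(port)), timeout=1) as sock:
            sock.sendall((json.dumps(payload) + "\n").encode("utf-8"))
    except OSError:
        pass  # reporting must never break the test run


def pytest_collection_finish(session):
    """Real pytest hook: runs once collection ends, even if some files errored."""
    _send({
        "type": "discovery",
        "tests": [item.nodeid for item in session.items],
    })
```

The key design point from the conversation is that the host owns a dedicated channel (socket or named pipe), so partial results and collection errors can be reported as structured data rather than inferred from text output.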

Brian:

Yeah. Cool. Super fast and zippy.

Eleanor:

Yes. Definitely. And that was one of those feature adds. In addition to the error tolerant discovery, you were kinda talking about the speed there, but you've probably also noticed dynamic run, which is now possible with the new rewrite.

Brian:

Tell me, what is that?

Eleanor:

Yeah. Definitely. So if you have a ton of tests and you click run, the results come in as each one finishes; it will display the results to you instead of you having to wait. You know, you could watch in the output, pytest puts little green dots when your tests are going well. You'll notice that in your sidebar, the green checks will show up as you go.

Eleanor:

So if you have 1,000 tests, you'll start seeing the green checks come up as they go instead of waiting till the very end and then having all the UI update at once.

Brian:

Yeah. Get, like, 57 little dopamine hits.

Courtney:

Yeah. Exactly. It's better that way if

Eleanor:

you notice something's not working.

Brian:

So does it, like, work with... can I run the tests in parallel?

Eleanor:

If you can run parallel tests and still have the output? Yeah. You can run parallel tests.

Brian:

Okay.

Eleanor:

And you can attach the debugger, and all of those things. That is actually something I've been working on recently as we switch over to named pipes, making sure that all works. And that was one of the challenges that we really saw: once you start sending back info after each test finishes, now it's harder to know when all of the tests finish.

Eleanor:

So for a lot of that dynamic run infrastructure, we had to change our design on how we knew when the test run finishes, how we correlated output, how we made sure we got all the output. That's where kind of the big challenges came in in this rewrite: how we then handle, you know, your entire run lifespan.

Brian:

Alright. Okay. I didn't wanna derail it. I've got a ton of questions, but I don't wanna derail it too much. No.

Brian:

What do you wanna talk about next with this?

Eleanor:

Yeah. Courtney, do you wanna talk a little bit about, you know, we made these changes, what the rollout looked like, and how that whole process went for us?

Courtney:

Yeah. I think that's a good next step. And I'll also say, to summarize and put a bow on all the updates that we just talked about, one of the big focuses was redefining those roles that I talked about at the beginning. So, like, allowing the frameworks to do what they do best, so that the extension can be a little bit more performant and do all the other things that we're enabling with the rewrite.

Courtney:

So allowing the frameworks to do what they do best and give us the data and parse the data for us, so that we can enable a bunch of these new features and add-ons for our users. A lot of the focus of the rewrite was redefining those roles. But then we're thinking: we made all of these changes, now we need to give that to our users. This was a complete overhaul of our testing code and testing architecture.

Courtney:

And so it was a huge, substantial back end change. To do that, it was like, we know that we are going to release these changes, but we don't want to give it out 100% at once, because we don't want to miss something in the code that's going to break testing for all of our users. So we do what we call a controlled rollout to derisk the changes. A controlled rollout, in our terms, is where we're able to slowly push changes out to a portion of our user base.

Courtney:

If you use the Python extension, you can either use the pre-release version or the release version. If you're opting to use the pre-release version, you're gonna be a quick adopter: you're willing to adopt bug fixes and changes as soon as possible. And we really rely on these users to give us that quick, raw feedback, and they do, which we love, in our issues. In the controlled rollout, we're able to assign basically a percentage of our users to receive these changes. In this scenario, it's the testing rewrite changes.

Courtney:

And as they adopt the changes, they're able to file bugs. So they're like, this used to work for me, and now it's not, or whatever their scenario might be. And we're able to triage those issues and talk about, okay, is this working how we wanted it to? And evaluate: is this a scenario that we considered in the rewrite that should be working? And is this user on the rewrite?

Courtney:

If they are and it's not working, okay, we'll dig into it a little bit further. But if they're not on the rewrite, let's get them on the rewrite and have them rerun their scenario. And if it works, awesome. Like, that's validation that this is doing what it's supposed to be doing. And so it kind of just went through, I don't want to say trial and error, but it's kind of like that feedback loop there.

Courtney:

So going back and forth between our users that are on and off the rewrite and seeing what cases are being addressed and what cases aren't. We did a lot of digging that way, and we were able to find some significant bugs that we wouldn't have found otherwise, working with our users in this way. And as we go through the issues and feel comfortable, we'll up that percentage of the rollout and slowly get up to stable. As we up the percentage of users that are receiving the rewrite, we're acknowledging that it's reaching more edge cases in terms of scenarios, setups, and configurations that we're able to test on the rewrite. So relying on those issue reports is really helpful to know, one, if it's working the way we want it to, and two, if there are any holes in our testing, because everything that goes into the extension undergoes extensive testing internally as well before it goes out.

Courtney:

And so it helps us match our expectations there as well. That was a really helpful process for us to go through as a team: going through the issues that we were receiving and marking which cases needed a little bit more digging into and which cases were meeting our expectations.

Brian:

Yeah. Any surprises with the rollout that you didn't, like

Eleanor:

I

Brian:

Things that broke, or anything like that?

Eleanor:

Oh, yeah. There were definitely surprises. At one point, we were ready, we went up to 50% of all users, and then there were so many bugs. So we took it back down, and it turned out to be an issue with size, for people that had really big repos, since we're communicating over sockets. All of the data wasn't getting sent in one socket communication.

Eleanor:

And so we had to completely rethink how we were doing that communication piece and figure out how to handle really big repo sizes and get multiple streams of data coming in. So, yeah, a lot of people have different setups. And, you know, I created a ton of tests.

Eleanor:

As you know, you can create so many tests. But there are a lot of use cases that I didn't expect people to be using. And so it's so useful to get users to give me their exact repo and then see from there. So that was definitely a big one, handling size.

Brian:

Sorry. I guess there's VS Code, there's the Python plugin for VS Code, and then is the testing bit like a separate piece that handles other languages also, other than Python? Or...

Eleanor:

Yeah. So what happens is we have VS Code core, so that's like VS Code itself, which has an API for testing. That allows you to do all those things like open the testing panel, use the run button, all of those things. And that's managed by my wonderful coworker, Connor Peet. And then I take his API and I implement it for Python.

Eleanor:

So this just allows me... you know, I do everything that takes the output from pytest to everything you see on the screen. And he just makes it so all of those different screen designs, all those different places, are enabled.

Brian:

Well, were there improvements in the core testing part also that were needed?

Eleanor:

Yeah. I think there were kinda two pieces. When we did the rewrite, we knew what we had to do, and it was a lot on our end. But now, as we look towards the future, there are some things that we want to enable that do require some changes in core. One small example I can think of: as we switched to using the test result panel, where you can click on each test and kinda see what the failure was, before, you couldn't search in it.

Eleanor:

And so that was a nice feature ask that we could bring to Connor: make that searchable, so you could better navigate it, because we knew users really wanted to do that. And going forward, test coverage is actually coming, and that has been enabled by all this work Connor's doing. He's made test coverage possible in VS Code, and so now I'm gonna look to implement it for Python.

Brian:

Okay. Cool. Neat. So many exciting things.

Courtney:

Yes.

Brian:

So there was a flag. You're saying you're rolling it out to a percentage, but there was also this flag that you could turn on. Did that, like, bypass the percentage thing then?

Courtney:

Yes. Yeah. So basically, we have an internal tool that helps us with doing these rollouts. It'll essentially bin our users and then randomize who's receiving whatever rollout we're trying to give. And then there's also the experimental flag.

Courtney:

And basically, what our tool does is turn on that flag for those users that are in whatever, say, 25% we're turning it on for. But a user can also opt in manually. They can go into their settings.json file and opt into the experiment manually, as well as opt out. So we give the users the choice of, like, do you want to experience this now? Do you not want to experience this now?
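For anyone who wants to do this by hand, the Python extension exposes experiment opt-in and opt-out settings in settings.json. A hedged example: the setting names are the extension's standard experiment settings, but the specific experiment name shown ("pythonTestAdapter") is our recollection of the rewrite's flag and may have changed or been retired since the rollout completed.

```json
{
    // Opt into a specific Python extension experiment by name (name is an assumption)
    "python.experiments.optInto": ["pythonTestAdapter"],
    // or, while the rewrite was still optional, opt out instead:
    // "python.experiments.optOut": ["pythonTestAdapter"]
}
```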

Courtney:

But for this specific change, it was like, you can only opt out for so long because this is gonna be adopted fully. And so

Brian:

Okay. Cool. So if somebody complained, you could just say, hey, just turn this off. Yeah.

Courtney:

And it

Brian:

fixes them in seconds.

Courtney:

Exactly. Yeah. And we did have users that were experiencing that. And so we were like, you can turn it off for now, but just letting you know, you're not going to be able to turn it off forever.

Courtney:

So let's work together to figure out a way that you can still be successful.

Eleanor:

And this, you know, made it easier for bug fixes too. Because we'd find something that wouldn't work with the new testing rewrite, and we would tell the users that were experiencing this bug: okay, go back to the previous version.

Eleanor:

We'll get it fixed, and we'll let you know when you can turn the rewrite back on. So this made it a lot easier, hopefully easier for our users to deal with the transition as well.

Brian:

That's interesting. Is there a system in place to, like, take a particular bug and say, hey, I've got this set of users that had this issue, and once you have it fixed, notify them to say, hey, we think this was fixed. Could you try it again and let us know if it's still broken?

Brian:

Did you do that? Or

Eleanor:

That would be great if we could do it for all users. I mean, we are always on our GitHub, so users report, and then we keep it going. So

Brian:

More of a manual thing then?

Courtney:

Yeah. Yeah.

Brian:

Okay. I'm like, that'd be cool.

Courtney:

I know. I wish.

Brian:

I don't know how to do that either. Yeah. Anyway, I know that a bunch of the VS Code stuff, or at least the Python stuff, is open source.

Brian:

Is this plugin that you're talking about part of the open source package also?

Eleanor:

Yeah. Our entire Python extension is open source. So everything we've been discussing is open source, which is amazing, and we take contributions. So if anyone wants to see a change in how testing works in VS Code, you can come on over to our GitHub repo.

Brian:

Because I'm really curious about how you're doing this, like, reporting back after each test run.

Eleanor:

Yeah. Definitely. Using one of the pytest hooks that fires after each run completes. Yeah.

Eleanor:

Then we just created our own payload design that sends data back, and got some socket communication happening to keep that going.
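As a rough illustration of that per-test reporting (hedged; the real plugin's hook choice and payload shape may differ), pytest's `pytest_runtest_logreport` hook fires for each test phase as it finishes, so a result can be pushed the moment a test completes. This sketch reuses the hypothetical `_send` helper from the earlier discovery sketch.

```python
# Continuation of the earlier illustrative sketch (hypothetical, not the real
# vscode-python plugin); assumes the _send() helper defined there is importable.

def pytest_runtest_logreport(report):
    """Real pytest hook: called with a report for each test's setup/call/teardown."""
    if report.when != "call":
        return  # only forward the result of the test body itself
    _send({
        "type": "result",
        "id": report.nodeid,
        "outcome": report.outcome,    # "passed", "failed", or "skipped"
        "duration": report.duration,  # seconds spent in the call phase
    })
```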

Brian:

Okay. So one of the questions I had was whether there's future work planned. You've already mentioned coverage being added. Are there other future things that we can, like, impact?

Eleanor:

Definitely. Yeah. For sure. So coverage is one.

Eleanor:

We're super excited about that. Again, we're really lucky to be working with such a great open source community. Ned Batchelder, who is in charge of coverage.py, has been a great resource for us as well. So we really appreciate the open source community's help.

Brian:

Plus, he's a super nice guy. I don't know if you've

Eleanor:

Yeah.

Brian:

been interacting with him now.

Eleanor:

Boston too.

Brian:

Okay.

Eleanor:

Fellow Bostonians. So that has been awesome. And then, we're doing Django testing.

Eleanor:

This is actually currently our oldest requested issue on our repo, and it has over 200 upvotes: compatibility with Django tests. The rewrite has now made this possible. I haven't started yet; we have to do a few other things first, which I'll talk about in one sec, to be able to get Django testing working. But I've been working with the Django open source community, and we have, you know, a little demo of exactly how it's gonna work.

Eleanor:

Once we get it implemented, you'll be able to run Django tests, which include, you know, setting up databases. There are a lot of other steps that go into Django tests. And all of those will be compatible with the UI, the run button, all that jazz.

Brian:

Well, so the built-in Django testing is a unittest back end.

Eleanor:

Right.

Brian:

But a lot of people use pytest-django. Are those both considerations that you're looking at? Or... okay.

Eleanor:

Yeah. We've started with the unittest-backed one.

Brian:

I

Eleanor:

think we haven't gotten as many requests for the pytest-backed one. But, again, this is just based on our repo. So, you know, we would love to hear thoughts from anyone. I don't know how common it is in everyone's scenarios. So

Brian:

Okay.

Eleanor:

We'll definitely consider it if it's common.

Brian:

Well, maybe it's already working if the pytest one is working, because instead of saying, like, Django test or... I don't know how you launch it,

Eleanor:

but,

Brian:

it's through pytest. Pytest has got a plugin that does, like you probably already know this, most of the functionality that the built-in Django testing does, in the plugin.

Eleanor:

Yeah. I think plugin support for pytest is really solid right now, and we're trying to get it to a great place where all of your awesome plugins work. But the issue with the unittest side was, normally people run it with Django, or they have to provide additional arguments so that you can find these extra Django configuration files.

Eleanor:

So just allowing those to be possible and configurable, because that's another thing from before our testing rewrite. I kinda mentioned it a little bit at the start. When you're configuring your tests, you get a few questions, and then you can always go back and edit those command line arguments in your settings. But that string array is pretty much all you get in terms of configuration when you're looking at tests and how you want them to run in VS Code. And so one of our other big upcoming things is more customization in terms of how you configure your tests.

Eleanor:

So, yeah, this is gonna be really fun. What's gonna happen is you'll be able to have multiple test configurations. So if you want to run one set of tests versus another and you want different args for each of them, you'll be able to do so, and there'll be a little dropdown next to your run arrow where you can select which configuration you want to run. This is an awesome feature that was already enabled by core, but we just weren't using it yet. So you'll be able to select, you know, coverage, run, and have a different one for debug than what you want when you just normally run tests.

Eleanor:

And then, additionally, you'll be able to provide a lot more specifications. We're adding a spot where you can provide environment variables and an env file. So we're looking at solving that issue where you wanna provide some environment variables that your tests then look at to reference some data. Those will now be possible with these additions, and, you know, a unique run configuration for whatever environment variables you wanna set.

Brian:

And when you add the run configuration stuff, will that make it a little faster to add arguments to my test run?

Eleanor:

Yeah. Definitely. There's a great UI where you'll just kinda click a down arrow on the run button, and then you can click, like, configuration and it will pop up. So hopefully it will be easier; that flow will be much nicer. And then, again, as we continue to use all these features that core has enabled, and as we get more requests from people that want something even easier, those can then be conversations about what testing looks like moving forward.

Eleanor:

So we're excited to see, after we do these changes, where the community wants us to go.

Brian:

Okay. So right now, just to remind people, to add arguments I think you have to go into settings and search. Maybe there's a faster way. I'm doing: go to settings, search for pytest, and then add args. Right.

Brian:

Okay.

Eleanor:

Right. Yeah. I think that's the flow. Not as fun.

Eleanor:

It'd be much easier if you could click this, like, little gear icon and it pops you right there. That should be what it looks like once we get the configuration.

Brian:

There's, like, a shortcut for getting to settings, though. I always forget it. Isn't there? Like, meta comma or something.

Eleanor:

Yes. I have a Mac, so it's Command. Yeah, Command comma. Or you can use the command palette.

Eleanor:

I'm a command palette person, so then I'll just type settings.

Courtney:

Me too. All the shortcuts get lost in my brain.

Brian:

What's the command palette?

Eleanor:

That one's Command Shift P. Yeah. Right.

Courtney:

Oh, yeah. So it's the little drop down menu. It has, like, a greater than symbol, and then you type all the commands. That's where all the commands in VS Code, and for all of the extensions, live as well.

Courtney:

So if you type, like, Python, for example, it'll list out all of the different Python extension commands, as well as those from every other extension that you have installed in your VS Code instance.

Brian:

Okay. But I can't use that to get to the arguments though, can I?

Eleanor:

Then you would just have to go to settings. Yeah, settings.json or the settings UI from there.

Brian:

That's, I guess, my bad for using, like, so many editors. I sometimes forget how to get to the settings.

Eleanor:

Oh, no. I think you're on the right one with the command palette and then the greater than sign to get you there faster. So it sounds like you're a power user for sure.

Brian:

I just don't like to use my mouse. But

Courtney:

We get that a lot. We hear that from so many users.

Brian:

Really? Yeah. Because also the first thing I install, even before Python, is the Vim extension.

Eleanor:

Mhmm. Yep. Yep. We get that. We get that a lot too.

Eleanor:

And it's great that there are extensions, because I'm a mouse person, probably maybe a newer generation of coder, or just a different type of coder. So I'm a mouse person, and I can use my mouse. But then you can have extensions, and people can create new extensions that allow them to configure the editor how they want it to look.

Brian:

Yeah. I think of myself as a vi user, but really, I only use vi when I'm, like, SSHing into some whatever. So it's always, like, either VS Code or PyCharm now, usually. But, anyway. Super cool.

Brian:

What haven't we covered that you wanna talk about?

Courtney:

Anything?

Eleanor:

Good question. I feel like we've done a great job hitting all the major points, Courtney.

Courtney:

Yeah. I was gonna say, I think we've hit most of them. Did you have any other questions that were

Brian:

Okay. I'm just excited that there's more stuff coming. I'm excited to see what's going on next. How do people find out? They just, hopefully, listen to Python Bytes and we'll let them know there, or Python Test.

Brian:

Yeah.

Courtney:

Yeah. We also post all of our iteration plans as well as roadmaps publicly. So people can check out our iteration plans in the public VS Code repo. We have a testing section, so for testing specific updates, you can track all of our progress there.

Courtney:

We link all of the issues as well as just overall extension updates for those that are interested in more than just testing. And then issues in our repo are our number one form of communication with our users, and we love interacting with our users through our issues. So file issues, engage in discussions, things like that to stay up to date.

Brian:

Okay. I'd like... Courtney, sorry. Go

Eleanor:

ahead. Courtney writes a release blog too, which

Courtney:

she uses

Brian:

all the time.

Eleanor:

So you should definitely check it out; the blog is great. And if I'm not mistaken, you can subscribe to get an email. Right, Courtney?

Courtney:

Yeah. You can subscribe to get the release blog, in your email, which

Eleanor:

is great. Email. Yeah.

Courtney:

A shameless plug. Follow along.

Brian:

And hopefully there's an RSS feed somewhere too. Yeah. The other thing I just wanted to wrap up by saying: anybody listening, if you tried VS Code before and tried to test with it and were frustrated, try it again, because it's a lot less painful an experience. I mean, that's probably a bad way to put it, but, you know, for some people that might be

Eleanor:

It's better.

Brian:

Appropriate. Yeah. It's better.

Eleanor:

No, for sure. We really appreciate people coming back and trying it and seeing, you know, if this work we put in is making a difference for people. We've closed, I think, over 150 issues related to things that weren't working for people since I started the rewrite. So we're looking at a lot of, you know, all those nitpicky things.

Eleanor:

Those little things that you think shouldn't bother you as much as they do, that just, like, ruin your workflow. Hopefully, we got to all of those.

Brian:

Awesome.

Courtney:

Yeah. Cool. It's really great to hear that you tried it again and thought it was such a different experience, in a good way too. So we love hearing that feedback and how delighted and surprised you were as a user trying it out again.

Brian:

Well, to be honest, I've tried it on my, like, public, open source projects. I haven't tried it on some of the big gnarly projects that I have. So that'll be next. If it breaks, I'll let you know.

Courtney:

Please do. Yeah. Let us

Brian:

know. Yeah.

Eleanor:

Definitely. Brett's test for me was the packaging library from Python. He was like, get it running on there. That's pretty good. So definitely let us know if you find a repo that it's not working on.

Brian:

Okay. Cool. I will. Thanks a lot.

Eleanor:

Definitely. Yeah. Thank you for having us.

Creators and Guests

Brian Okken
Host
Software Engineer, also on the Python Bytes and Python People podcasts

Courtney Webster
Guest
Courtney Webster is a Product Manager on the Python extension in VS Code team at Microsoft. She joined the team a year and a half ago after graduating from the University of Texas at Austin with her Master’s in Information Technology and Management. Her primary focus since joining the team has been the getting started experience as it pertains to the different feature areas.

Eleanor Boyd
Guest
Eleanor Boyd is a software engineer on the Python for VS Code team at Microsoft. She has been on the team for a year and a half, joining in August 2022 following her graduation from Georgia Tech with a bachelor’s in computer science. Prior to graduation, Eleanor spent a summer interning with the team investigating Pyodide applications in the VS Code ecosystem. Upon returning to the Python for VS Code team full-time, Eleanor’s main focus has been a redesign of the Python testing architecture.