Effective PyCharm Transcripts
Chapter: Unit testing
Lecture: Measuring test quality with code coverage
0:01
So we have some tests for our little application here, and granted, it's really small, so that's pretty easy to do. Do we have enough tests?
0:09
Are we testing the important parts? Are we ignoring the unimportant parts? That's also important for your sanity and for actually getting stuff done.
0:18
But are we covering what we need to cover? Well, if somebody came to me and said, I have this project and it's really amazing
0:26
because it has unit tests, it has 1,000 unit tests, it's really great how much testing we're doing, well, that's a pretty vacuous statement.
0:32
It doesn't really tell me a whole lot about what you're doing. Yes, you have a lot of tests, and that's probably good, but they may be poorly factored or whatever,
0:39
and how does it actually address covering the important parts? Maybe they spent a ton of time mocking out logging
0:45
and testing that logging works. Who cares? That's kind of useless. You should be focusing on the core thing that your application does.
0:52
In this case, we should be focusing on booking tables and all the validation and data access around that to make sure that's super solid,
0:59
so that when we do get to our program and consume this core library, it's ready to roll. The way you answer that question is with code coverage,
1:08
and the idea is: we run our tests, the system watches which other code is executed
1:14
while our tests are running, and then it gives us some kind of report. Even better than that, watch this: we can go over here and run this test,
1:22
but instead of just running it like this, or with debugging, we can run it with coverage. We try to run it, and it says no, no, not so much:
1:32
you need to install coverage.py or enable the bundled coverage.py; I am going to install this. So it's installing that into our virtual environment,
1:44
and in just a moment we should be ready to roll. Click the button again, and the tests run just like before;
1:51
however, it takes a little bit longer, as there is a report here. So come in here, project and unit testing, and now we have some interesting stuff:
2:04
notice we're not covering, not interacting with, program.py at all from our tests, and that's probably okay;
2:12
testing everything is kind of not meaningful. But core, this is what's important, this is the thing under test, the subject of our tests,
2:19
and we can export this stuff and so on. Notice over here on the left how awesome this is.
2:24
Granted, this stuff isn't run because I jammed all these projects together, but in a real project, these numbers would tell you something meaningful
2:30
about how your tests address covering your project. That is super, super cool, so this is great, and this next part, I think, is even better.
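Under the hood, coverage tooling works roughly the way this lecture describes: while the tests run, a trace hook records which lines execute, and anything with code that was never hit shows up as missed. Here is a minimal sketch using only Python's standard library; `book_table` and all the names in it are hypothetical stand-ins for the course's booking logic, not its actual code:

```python
import dis
import sys

def book_table(table_id, bookings):
    # toy stand-in for the core booking logic (hypothetical)
    if table_id not in bookings:
        return "missing"
    if bookings[table_id]:
        return "already booked"
    bookings[table_id] = True
    return "booked"

executed = set()

def tracer(frame, event, arg):
    # record the line number of every line executed inside book_table
    if event == "line" and frame.f_code is book_table.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
book_table(1, {1: False})  # exercise only the happy path
sys.settrace(None)

code = book_table.__code__
# every line that holds executable code, skipping the def line itself
all_lines = {ln for _, ln in dis.findlinestarts(code)
             if ln is not None and ln > code.co_firstlineno}
missed = sorted(all_lines - executed)
pct = round(100 * (len(all_lines) - len(missed)) / len(all_lines))
print(f"{pct}% covered, missed lines: {missed}")
```

Because only the happy path ran, the two error-handling `return` lines come back as missed, which is exactly the kind of gap the red markers in the editor point at. Real tools like coverage.py use a much faster C tracer and handle branches, but the idea is the same.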
2:40
Now, the final bit of goodness is, I think, probably the best. Check this out: if I go over here, it might be hard to see, but notice the colors.
2:49
We have blank on the blank lines, we have green, and we have red. We click this: this line was hit. We click this, and I have red: this line was not hit,
3:02
right, so this line here was not run. You can actually click here and navigate between the various places and see what was hit and what was not,
3:13
but the important part is that I can look at this code file in the editor and, just like GitHub, it tells me these are the areas
3:19
that you still need to work on testing. We are testing book a table, and we're testing this stuff; that's all good,
3:27
and we're even testing these if statements, but we're not testing the failure conditions:
3:32
in this case, we're not passing in a bad or missing table id, and in this case, we're not passing in an already-booked one.
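The two failure-path tests about to be re-enabled can be sketched like this; `book_table`, `TableBookedError`, and the `bookings` dict are assumed names for illustration, not the course's actual API:

```python
class TableBookedError(Exception):
    """Raised when the requested table is already booked (hypothetical)."""

def book_table(table_id, bookings):
    # hypothetical stand-in for the core booking logic
    if table_id not in bookings:
        raise ValueError(f"no such table: {table_id}")
    if bookings[table_id]:
        raise TableBookedError(f"table {table_id} is already booked")
    bookings[table_id] = True

def test_book_missing_table():
    # covers the bad/missing table id branch
    try:
        book_table(99, {1: False})
    except ValueError:
        return
    raise AssertionError("expected ValueError for a missing table id")

def test_book_booked_table():
    # covers the already-booked branch
    try:
        book_table(2, {2: True})
    except TableBookedError:
        return
    raise AssertionError("expected TableBookedError for a booked table")

test_book_missing_table()
test_book_booked_table()
```

Each test drives exactly one error branch, which is what turns the corresponding red line in the coverage view green.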
3:39
Let's go uncomment one of our tests here, the non-existing one, and rerun it again with coverage. The report is being generated, and down here,
3:52
if you look, this is now 96% instead of 93%. That's an improvement, and if we look, now this line down here, right there, line 40, has turned green
4:03
because we now tested that error condition. Actually, sorry, no, line 34: we tested that error condition. Still, we haven't tried this booked one,
4:12
so we'll go and uncomment this. This is the test for the failure case of trying to book an already-booked table,
4:19
which is exactly what we're doing on lines 36 and 37. So we'll run it again with coverage and wait.
4:27
Now how are we doing? We're up to 100% coverage on our core. That's pretty awesome: all green, everything is good.
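If you run coverage.py outside PyCharm, the same focus-on-the-core idea can be expressed in its config file. A sketch of a `.coveragerc`, assuming the package is named `core` as in the course project:

```ini
[run]
# measure only the core package; program.py and other edges stay out of the report
source = core

[report]
# list the line numbers that were never hit, like the editor's red markers
show_missing = True
```

With this in place, `coverage run -m pytest` followed by `coverage report` gives the same per-file percentages and missed-line lists the IDE shows.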
4:37
Now, I don't want to leave you with the sense that I think you should have 100% coverage on everything; there is definitely a point of diminishing returns.
4:44
The Pareto principle, that 80/20 rule: there is some core essence of your app, and that should really be tested, while you are kind of wasting your time
4:51
on a lot of the supporting bits around the edges. But it's really nice that we are testing the core functionality of our app,
4:59
and PyCharm makes it super easy to discover coverage gaps and work on getting that done.