10:28

0:12
Welcome to the course: Getting Started with pytest.
Pytest is Python's most powerful testing framework, and I'm really excited that you've taken the time to look at pytest and chosen this course.
0:18
First off, what is pytest?
pytest is a software testing framework, which means it has a command-line tool that finds, runs, and reports on automated tests for you.
It's extensible through plugins and hook functions.
It has a lot of built-in functionality, and it helps keep testing fun.
1:10
Let's take a look at why you might want to choose pytest as your testing framework.
First off, pytest is very easy to start using, and when you need extra power, it's there.
Pytest can produce very readable tests, and readable tests are very important.
Pytest uses Python's built-in assert statement, so you don't need to learn a whole bunch of assert helper methods just to use your testing framework.
Pytest has fixtures, parametrization, and plugins built in, and we'll talk about all of these.
You can use it to run unittest tests.
So if you already have a project that uses unittest, you can use pytest to run those tests and gradually switch over to pytest.
Pytest can be used to test Python packages, and that's partly what we'll show in this course.
You can also use it to test Python applications.
Actually, the project we'll be testing is both an application and a Python package. Really, any application that can be controlled via Python can be tested with pytest, and that's pretty much everything, from web apps to hardware to applications written in other programming languages.
0:58
Let's take a look at a simple example of a test and compare pytest versus unittest.
In pytest, you really just have a test function whose name starts with test_, and you use normal asserts.
In unittest, you need to derive from unittest.TestCase.
That means every test has to be a method in a test class.
And you can't really use a bare assert; you can, but it doesn't give you enough information, so there are helper methods like assertEqual.
With pytest, you can use classes if you want, you just don't have to, and if you do use test classes, you don't derive from anything.
You can just have a plain class, and you can still use the normal assert.
As for those assert helper methods, there's a whole bunch of them in unittest; I'm only listing a few.
In pytest, as I've shown on the left, it's just assert.
So any boolean expression can be used in an assert statement.
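Here's a rough sketch of that difference (the example names here are mine, not from the slide):

```python
# pytest: a plain function whose name starts with test_, using a normal assert
def test_sum():
    assert sum([1, 2, 3]) == 6


# unittest: a method on a TestCase subclass, using an assert helper method
import unittest

class TestSum(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6)
```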
4:30
Let's talk about everything we're gonna cover in the course.
First off we're gonna talk about test functions.
They're simple but powerful, and we need to know some of the features of pytest that help make test functions so powerful. Then we'll talk about test fixtures, parametrization, markers, plugins, and configuration.
But really, what are these things?
Let's jump in quickly to cover what they are before we go too much further.
Test functions, like the one shown here, are simple functions that start with test_ and use an assert.
We'll talk about test discovery: how does pytest find these things?
We've said that pytest uses the built-in Python assert.
We'll look at pytest's enhanced tracebacks.
Normally you fail a test with a failing assert, or pass a test with a passing assert statement.
But we can also use pytest.fail and other exceptions; we'll take a quick look at those. And sometimes the code under test is supposed to raise an exception in certain cases, and we can test for that too.
We'll also briefly look at test classes, because there are sometimes good reasons to use them.
After that, we'll jump into fixtures. A fixture is kind of like setup and teardown: unittest has setUp and tearDown methods, and you can use those with pytest too, but normally we use fixtures. In this case we're showing a db fixture for a database. It has some setup, then the yield sends the database connection to the test, and the rest of the fixture just hangs out until the test is done, when the teardown happens.
So we'll talk about setup and teardown and returning data.
You can also use multiple fixtures per test, and fixtures can even use other fixtures.
Pytest comes with some built-in fixtures that are pretty powerful, and we'll take a look at some of those.
You can also scope fixtures. Scoping means a fixture can run its setup and teardown phases around each test, or once per module, or once per session, with a whole bunch of other options.
You can also share fixtures between tests and between test files, by putting the fixtures into a conftest.py file, so we'll play with that.
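As a quick preview, here's a minimal sketch of the kind of yield-style fixture being described; the database class here is a stand-in I made up so the example runs on its own, and we'll build real fixtures in chapter three.

```python
import pytest

class FakeDB:
    """Stand-in resource so this sketch is self-contained."""
    def count(self):
        return 0
    def close(self):
        pass

@pytest.fixture()
def db():
    database = FakeDB()   # setup
    yield database        # the test runs while the fixture waits here
    database.close()      # teardown runs after the test finishes

def test_count(db):
    assert db.count() == 0
```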
Parametrization is super powerful. It's a way to write one test that covers many test cases. In this example, a single test_finish function runs the same check for a card starting in the todo, in progress, or done state; it's taking the place of three different tests, all in one.
It's pretty cool.
In this case we're parametrizing tests, and we'll cover that, but you can also parametrize fixtures; it's a slightly different syntax, and of course we'll cover that too. Using multiple parameters per row, or stacking parametrizations so that you get a test matrix, makes it even more powerful.
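For a preview of the syntax, here's a small self-contained parametrized test along those lines (the test body is a made-up stand-in, not the course's actual test_finish):

```python
import pytest

# One test function, three test cases: pytest runs it once per start_state.
@pytest.mark.parametrize("start_state", ["todo", "in prog", "done"])
def test_finish(start_state):
    card = {"summary": "do something", "state": start_state}
    card["state"] = "done"           # the "finish" action, faked for this sketch
    assert card["state"] == "done"
```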
Markers are pretty powerful.
So we'll cover those. In this example we're using the skip marker, which tells pytest not to run a test.
There are good reasons to use that, and skipif especially is very powerful.
There's also xfail, which we'll cover, and you can use custom markers, which help with selecting subsets of tests.
You can also combine them and use markers plus fixtures, and we'll definitely cover that, because it's super powerful.
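Here are quick sketches of those markers; skip, skipif, and xfail are built in, while "smoke" below is a hypothetical custom marker you would register in your config:

```python
import sys
import pytest

@pytest.mark.skip(reason="not implemented yet")
def test_not_ready():
    ...

@pytest.mark.skipif(sys.version_info < (3, 10), reason="needs Python 3.10+")
def test_needs_newer_python():
    ...

@pytest.mark.xfail(reason="known bug, fix pending")
def test_known_bug():
    assert (1, 2) == (1, 3)

@pytest.mark.smoke   # custom marker; select a subset with: pytest -m smoke
def test_quick_check():
    assert True
```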
Next is plugins. Plugins are a way to extend pytest's functionality, and there are some amazing third-party plugins. We'll cover pytest-xdist, which allows you to run tests in parallel to speed up testing.
And we'll also use pytest-cov to collect test coverage.
These are just two, though; there are hundreds of plugins available, and you can even write your own.
We'll also talk about configuration, because the configuration options are one of the neat things about pytest. You should have a config file, and we'll discuss why. The config file can be a pytest.ini or a pyproject.toml, and we're going to use pytest.ini in this course. We'll also look at some cool command-line flags.
One of the config options I'm showing is addopts, which is a way to add flags that run every time, and there are good reasons to use some of them all the time. There are other settings too, like testpaths and markers, shown here.
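For illustration, a small pytest.ini along those lines might look like this (the values here are just an example, not the course's actual settings):

```ini
[pytest]
addopts = -ra --strict-markers
testpaths = tests
markers =
    smoke: a quick subset of tests
```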
1:27
Talking about pytest would be a little dry if we didn't have an application to test, so I've developed Cards. Cards is a small command-line interface application.
The premise is team task tracking, but that's really just the example; the real intent of Cards is to be a learning tool. I'm showing here exactly what it does.
It's a command-line tool. For instance, if you've got nothing in there yet, you can say cards add, give it a summary of "do something", and assign an owner of brian. Then I've added something else, and cards without any extra commands just lists what cards you have.
Here I've got the list of cards; there are two. It shows the ID, what state each card is in, the owner if there is one, and the summary, and that's about it.
The reason I chose cards is because it's fairly representative of many applications that you might be testing in the future.
So Cards is layered: there's a user interface, in this case a command-line interface, and there's an API layer, which is really all the logic, the guts of the application.
That's not always true; sometimes an API sits on top of other application logic.
And then there's a database layer to save all the data to, in this case a file.
It's simple but not too simple.
We'll test it both as a library, through the API, which is what most of the course is about, and we'll also take a brief look at testing it as a system through the CLI.
0:49
What are your prerequisites?
What do you need to know before you start this course? I expect you to know a little bit of Python.
The basics are enough, though; we don't use very much.
You'll also need Python 3.7 or newer.
That's just what the Cards application and current pytest require.
There are older pytest versions that support 3.6 and earlier,
but modern pytest is 3.7 and up.
I'm using 3.10 in this course because I like using Python 3.10 right now. You'll also need some sort of terminal.
This could be bash or Command Prompt or PowerShell or whatever terminal you like to use.
Even if you don't know how to use it, a tiny tutorial should be enough, because really I'm just using cd,
cd as in change directory, to get into the directory where our test code is, and running pytest or cards commands.
0:24
So who am I?
I am Brian Okken.
I'll be your instructor for this course.
I'm a lead software engineer.
I also host a couple of podcasts.
I host Python Bytes with Michael Kennedy, and Test & Code.
I'm also an author.
I wrote Python Testing with pytest, and I wrote it twice.
The second edition is currently out.
Actually, this course has a lot to do with that second edition.
0:40
It's hard to say whether the Python Testing with pytest book is a companion to this course or this course is a companion to the book, but they make good companions to each other.
The book covers all the topics we're going to cover in this course, but in more detail. It also covers testing strategy, GitHub Actions, using tox, advanced parametrization, building your own plugins, mocking, and more.
The reason those topics aren't in this course is that I wanted this to be a quick getting-started course, while still showing you enough of the power of pytest.
Now, let's jump in and get started learning pytest.
44:01

1:26
Welcome to Chapter two.
We're gonna talk about test functions in this chapter.
We'll start by downloading and installing the course materials and pytest.
The course materials are in a GitHub repository, which we'll clone, and we'll create a virtual environment in which to do the rest of the work.
We'll install Cards locally;
the Cards source is part of the repository we'll clone. And we'll install pytest from PyPI.
Next we'll go through a demo.
We'll make sure both Cards and pytest are working properly,
play with Cards a little bit so you understand the application we'll be testing, and write a couple of tests, both passing and failing,
mostly to see how pytest works.
Then we'll jump into learning more about pytest.
We'll learn about test discovery, which is how pytest finds our test code.
We'll look at how we determine pass or fail:
within a test we use assert, and we can also use pytest.fail.
Really, any exception can cause a test to fail.
Pytest puts in place enhanced tracebacks
to make it a little easier for us to determine what's wrong.
We'll look at test structure,
which is how we organize test code to make it easier to maintain and easier to write.
And we'll take a look at test classes, which are one way to help group our test functions.
3:44
The materials for this course are on GitHub, under talkpython, under getting-started-with-pytest-course.
It's a lot of dashes, but it reads really well.
Anyway, this is the link to go to to grab the materials. I'm already here, of course. From the Code button, you can download the zip and unzip it if you want; I'm going to go ahead and use git and clone it instead.
I don't have to select the whole URL by hand; I can just click the copy button so everything goes into my paste buffer, and now I'll go over to my command line.
I've got a terminal open in my home directory,
and I have a projects directory that I like to use
for projects I'm working on.
I'm going to put the course in here, so cd into projects and do a git clone of the URL I copied earlier. If I just hit enter right now, it's going to create a directory named after the repo, getting-started-with-pytest-course.
I don't really want that.
It's a bit long, so I'm going to shorten it by giving the clone a shorter target directory name, and that clones the repo into that directory instead. It copies everything down; I go into the course directory and take a look: I've got a license, a README, the cards project, and a bunch of chapter directories. Awesome.
Now we can set up our virtual environment and get everything installed that we need for the course.
Let's start with updating pip, and we'll install pytest first.
pytest just comes from PyPI, so we don't have to do anything special.
Now let's install the cards package, and it is local to this directory.
It's right there,
the cards project directory, and we tell pip that it's local by including a leading dot-slash in the path.
I'll double-check all the versions to make sure they're what I expected.
And I double-checked that if I use these commands directly, I get the ones installed in the virtual environment in my local directory.
Now we're ready to start writing some tests.
7:20
Now that we have our environment set up, I'm in my project directory, the pytest course directory.
I have a virtual environment activated, and it's the same one that I already have pytest installed in.
Now let's write some tests.
I'm going to open up a code editor.
I'm using VS Code, but you can really use anything. Let's pull down the terminal so that we can see both the terminal and the editor.
We're going to put this in the chapter two directory, and we want a new file whose name starts with test_ so that pytest can find it easily; the rest of the name doesn't matter, so something like test_something.py works. Now I'm going to write a test function.
Let's start with a test function that I just want to pass.
Its name doesn't matter much either, except that it also needs to start with test_; both the file and the function should start with that.
We can really do anything we want within this function, but I'm just going to define a tuple (1, 2, 3) and assert that it's equal to the expected value of (1, 2, 3). That should pass.
So let's run it. I'm in the project directory,
but not in chapter two yet.
So we'll go into chapter two and go ahead and run pytest.
Cool.
We ran pytest within the chapter two directory,
and it found our test file.
It listed some information about our environment.
It showed the root directory for the testing, and it collected one item.
The one item is the test that it found.
If there were two tests, which we'll write shortly, it would say two items.
It listed the test file with a dot; the green dot means the test passed.
Then down here it says one passed, and the time it took.
So this is pretty neat already,
and it gets even neater.
If this isn't enough information for you, and you're wondering exactly what it found,
we can give it a -v flag and it'll tell us specifically which test function ran, with two colons after the path.
Right now that's just a file name, but if we were somewhere else and there were a subdirectory, it would be the full path, then two colons, then the test name. This is called the test node ID.
It shows up in the pytest documentation.
I probably won't refer to it too much,
but that's the node name, or node ID.
Let's make this a little more exciting by adding a failing test.
To make it fail, we'll just make the tuples not the same:
(1, 2, 3) is not equal to (3, 2, 1).
So this should fail.
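Here's roughly what the test file looks like at this point (a sketch; the exact names in the video may differ slightly):

```python
# test_something.py (sketch)

def test_passing():
    assert (1, 2, 3) == (1, 2, 3)


def test_failing():
    assert (1, 2, 3) == (3, 2, 1)
```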
If I run pytest again, I get more output. I get the same test session start information, and this time it collected two items, these two tests; the dot is the passing one and the red F is the failing one.
Below that we see the list of failures.
In this case it's one test, and it shows the test name.
It points us to exactly where the assert is and where the problem is: the failure says assert (1, 2, 3) == (3, 2, 1), which is not right, and that the item at index 0 is different.
It noticed that the first items differ, but we can see the others differ too,
and it says use -v to get the full diff, so let's do that.
One of the things we'll notice:
with -v we also get the more verbose test listing, we see that one test passed and one failed, and then we get a full diff with little carets pointing at exactly where the differences are, which is nice.
So that's pretty cool.
Pytest also gives us a short summary, plus one failed, one passed, and the time it took.
So that's nice.
It gives us as much information as it can come up with.
That was a case where items are in the wrong places.
What about something else? Let's do another test that compares (1, 2, 3) with a tuple that has an extra element, (1, 2, 3, 4). Those aren't equal either, but how is pytest going to show us exactly how they're different?
Let's run that again with -v.
We see the full diff, and it runs all the tests;
it doesn't stop at the first failure.
It shows the full list, the same failure as before, and then this new failure, and it has a different, creative way to describe it.
The right side contains one more item, which is 4.
So that's handy.
Anyway, pytest will try to give us as much information as it can about why the failure happened, which is really nice. And we don't have to use just equals; we can use not-equals, less-than, or really any expression that can be converted to a boolean, although these algebraic-looking comparisons are the most common kind of assert in Python.
This whole section here is called the traceback,
and it gives us a traceback for both failures.
We actually have quite a bit more to learn about pytest, but for the rest of the course, instead of comparing tuples like (1, 2, 3), we're going to use the Cards application for testing.
So let's take a look at the Cards application next.
And if you had any difficulty getting these two or three tests to run,
this would be a good time to figure out why, because hopefully this is pretty easy stuff.
If you're having trouble, you might not be in the right directory,
or you might not be in the right virtual environment.
Also make sure pytest is installed and that it's installed in your virtual environment.
That's pretty much it, other than the Python version,
so check the version.
I'm using 3.10.3, and pytest supports 3.7 and above,
so hopefully you're using 3.7 or above.
3:43
Let's take a look at the Cards application. Again, I'm in my virtual environment, so we should still have cards installed; let's see what it does.
Right now, if we run cards by itself, it lists an empty table, not much of anything, but it does show the ID, State, Owner, and Summary columns.
Cards is kind of a to-do application, so we can add things: I can say cards add with a summary of "do something" and give it an owner of brian, and then add "do something else".
Then if I list the cards, just running cards by itself again, it shows all the to-do items: I've got two items in there,
they're both in the todo state, the owner is set on one of them, and the summaries are "do something" and "do something else".
Cards also has help, which lists the available commands: we can add a card, look at the configuration, count how many cards there are, delete a card, finish a card, list, start, update, and show the version. Cards has a database, and if we run the config command we can see where it is; it looks like it's stored in the user's home directory, under a cards_db directory.
I only had an owner on one of the cards, but you can update a card: I can do cards update, give it an ID, and update the owner.
Let's do Okken. Now we have owners on everything, but that's not quite what I want, I mean it is my name, but let's make it Brian, so they're both Brian. Neat.
Let's start one, that is, change its state: if I start card one, it shows card one in progress, so that's cool.
Let's finish that one and start the second one, and now I have one finished item and one in progress. Since the finished one is done, I don't need it anymore, so I can delete it, and now I just have one.
So this is a pretty simple to-do application for keeping track of what people are working on, but it's the application we're going to test. It's got a front end, a command-line interface; it's got some logic; it has a database; and it also has an API. Most of our testing is going to be through the API, but we'll start with its data structure in the next video.
3:13
Now we want to start writing a test for the Cards project.
The Cards project source code is in the course directory:
within the pytest course directory there's a cards project directory, and we can see it in the editor as well.
Inside there, there's a source directory, a tests directory, a license, a makefile, and so on. Within the source directory there's the cards package, and the CLI module is the command-line interface.
That's what runs if we type cards: we get the interface, or the help, or whatever.
That's the command-line interface; the API is below that.
It's a three-layer structure, and the CLI talks to the API.
So in the CLI we'd expect an import of the API somewhere;
what we actually see is import cards.
It's just importing the package directly.
That works because the package's __init__.py imports everything from the API module, so the CLI imports cards, which indirectly imports the API.
The API in turn imports the database module.
So we've got the CLI on top calling the API, which calls the database. Now let's take a look at a data class within the API.
This data structure is a class called Card, and it holds a summary, an owner, a state, and an ID.
It represents each individual item:
the ID, state, owner, and summary.
There's some fanciness going on here if you're not familiar with data classes, and that's part of what we're about to test.
We're going to write some knowledge-building tests to make sure we understand how this data structure works.
The data structure is used to pass information between the CLI and the API,
and back,
so it's a fairly important data structure within our application. The fields are string, string, string, and int, and the defaults, shown on the right, are None, None, "todo", and None.
The field() call is part of the dataclasses interface; it sets the default to None for the ID, which is an int,
and sets compare to False.
That's so that when we compare two cards that have equal state, owner, and summary, they show up as equal even if they have different IDs.
Then I added a couple of helper methods.
These are used within the source code to switch between dictionaries and Card objects.
They aren't strictly necessary;
you can use asdict() or ** unpacking, but for me from_dict and to_dict are a little clearer, so I've added those. Now I'm going to write a couple of tests to make sure I understand how this data class works.
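Reconstructed from that description, the data class looks roughly like this (a sketch; check the actual API module in the cards project for the real thing):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Card:
    summary: str = None
    owner: str = None
    state: str = "todo"
    id: int = field(default=None, compare=False)  # compare=False: IDs don't affect equality

    @classmethod
    def from_dict(cls, d):
        return Card(**d)

    def to_dict(self):
        return asdict(self)
```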
8:50
We'll go ahead and close up the source code. Within chapter two we've got the test_something file we've already played with; I'm going to add another file and call it test_card. Just like the command-line interface was doing, I'm going to say from cards import, and then whatever I need; right now we're just going to test the Card data structure, so that's all I need to import.
One of the cool things about data classes is that we can access the fields with dot syntax.
So let's write a test for that functionality, and I'm just going to go ahead and create a card.
You do that by calling Card and filling out the information: a summary of "something", an owner of "brian",
"todo" for the state, and we'll go ahead and put an ID
in there too.
Now what's this going to look like?
When we access the card, we can use c.id, c.owner, c.state, or c.summary.
That's cool.
So if I look at c.summary, it should be equal to "something", because that's what I just set; let's stick that in an assert. And while we're at it, let's test everything to make sure it's all accessible: the summary is "something", the owner is "brian", the state is "todo", and the id
should be 123.
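Here's a sketch of that field-access test (assuming the Card class shown earlier; the values are the ones mentioned in the video):

```python
from cards import Card

def test_field_access():
    c = Card("something", owner="brian", state="todo", id=123)
    assert c.summary == "something"
    assert c.owner == "brian"
    assert c.state == "todo"
    assert c.id == 123
```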
Cool. Is that it? Can we run it?
Let's go ahead and try.
Where are we at?
We need to get into the chapter two directory, where we've got our test_card file, and run pytest -v on it.
Sweet.
It worked; that was easy.
Let's try another one.
Let's test the defaults: what if I don't fill anything in?
If I do c = Card() with nothing in it,
what am I going to get?
I'm just going to copy and paste to make this quick.
What should the summary be?
It should be None, and the owner is None, and the state is "todo"; I think I remember the default being "todo", so let's look at the API again.
Yes, the default for state is "todo", and the id, the integer, should also default to None.
So "todo" for the state,
and None for the id.
Oh, and I'm already seeing a problem: I just copied and pasted, but we really shouldn't compare to None with ==; when comparing to None it's best to use is None.
We do want == for the string comparisons and is None for the None checks. Awesome, let's try that.
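The defaults test ends up looking roughly like this (a sketch):

```python
from cards import Card

def test_defaults():
    c = Card()
    assert c.summary is None
    assert c.owner is None
    assert c.state == "todo"
    assert c.id is None
```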
Sweet.
That worked too. Now, what do these cards look like?
What if I want to print them? Let's go ahead and add print(c),
oops, forgot the parentheses,
and try that.
Nothing; with verbose it doesn't show anything, without verbose it still doesn't show anything.
That's a bummer.
What's happening is that there's no failure going on.
If there's any output within a test, pytest captures it and only shows it to you if there's a failure.
So let's make a failure happen.
Yeah.
Ha, there it is.
Is our printout there? It collected two items, and there's a "Captured stdout call" section that captured the output.
So that's cool.
But I don't want to add a failure just so I can see the output.
So how do we see it otherwise?
There's a flag for that; if we run it like this, there are no failures.
If we use --capture=no, which is kind of a lot of typing, we can see the output.
Let's add verbose too,
so things separate a little bit.
Now it's printing what the card looks like in here,
for each one of those, although it's on the same line as the test name.
That's a little annoying.
Let's add another print just to create a new line.
Okay, so for field access, this is what a card looks like when we print it, and there's the one with the default state. That's neat. --capture=no is a lot of typing,
so there's also -s,
which does the same thing.
That's cool.
We don't need these prints after we've figured things out, though.
What I really wanted to show you here is just how easy it is to learn about a data structure with tests.
We've got these little runnable things, and as we build them up, I'm running everything right now.
But instead of running all of them, you can easily run just one.
We can add ::test_defaults after the file name and run only the one you're working on.
This is neat. Now, we could write a whole bunch of tests, and I went ahead and pasted a bunch in so that we could talk about them.
We can test for equality, the bit of the API behavior that says two cards with the same summary, owner, and state should compare as equal.
So we've got a test for equality.
And we said it shouldn't matter if the IDs are different,
so we test for that too.
Then we can test for inequality:
if things really are different, not-equal should hold.
For from_dict,
here's the example of starting from a dictionary: we've got a card with "something", "brian", "todo", and 123, and a dictionary that looks the same, and we can create a card from the dictionary and make sure the two show up as equal.
And then to_dict:
if you create a dictionary from an existing card, we expect it to look like this, and we assert those are equal.
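Here are sketches of those pasted-in tests (the field values are illustrative; the course's actual file may differ slightly):

```python
from cards import Card

def test_equality():
    c1 = Card("something", "brian", "todo", 123)
    c2 = Card("something", "brian", "todo", 123)
    assert c1 == c2

def test_equality_with_diff_ids():
    c1 = Card("something", "brian", "todo", 123)
    c2 = Card("something", "brian", "todo", 4567)
    assert c1 == c2

def test_inequality():
    c1 = Card("something", "brian", "todo", 123)
    c2 = Card("completely different", "okken", "done", 123)
    assert c1 != c2

def test_from_dict():
    c1 = Card("something", "brian", "todo", 123)
    c2_dict = {"summary": "something", "owner": "brian", "state": "todo", "id": 123}
    c2 = Card.from_dict(c2_dict)
    assert c1 == c2

def test_to_dict():
    c1 = Card("something", "brian", "todo", 123)
    c2 = c1.to_dict()
    c2_expected = {"summary": "something", "owner": "brian", "state": "todo", "id": 123}
    assert c2 == c2_expected
```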
Let's go ahead and run that and see if it works:
pytest -v.
Oh,
it's picking up our other file, test_something, as well.
So let's point pytest at just the card tests.
Run test_card: sweet, everything passed.
And it was really easy.
These are just tiny snippets to test our understanding.
I really like this use of pytest, and of unit testing in general, just to understand how things work.
3:27
All of our tests are passing, which is really cool.
But I want to show you some of the differences between how pytest handles asserts and how plain Python handles them.
So, Python versus pytest with asserts.
It's kind of magic stuff that happens behind the scenes; well, it's not magic, it's engineering, but it's really great for us.
So let's take a look at it.
We'll take one of these equality-type tests and adapt it for our own purposes.
I'll grab one, create a new file, test_card_fail, and import Card; I'm not going to keep all of it.
We just need a couple of things to be different, so we'll use "brian" and "okken", which are different, and "something" and maybe "foo", which are also different.
So the assertion for equality should fail.
Run test_card_fail. Yeah, there we go.
There's quite a bit going on here, and it's actually awesome.
We get an AssertionError,
and it already says a lot: it tells us it's omitting one identical item, but that there are differing attributes, summary and owner.
And for the summary it tells us that "something" is not equal to "foo"; definitely not.
But it also says use -vv.
So let's try that for more information.
Very cool.
It now tells us that the state is the same.
The state matches, so there's one matching attribute and two differing attributes. That's pretty neat.
That's definitely coming from pytest, and for the differing attributes it tells us what's different.
So for the summary, "something" and "foo" are not equal, and for the owner, "brian" and "okken" are not equal.
That's really handy.
What's the difference?
Why?
How do I know that's coming from pytest and not Python?
Well, you can see exactly what Python gives you out of the box if we just run the file directly. Let's add an if __name__ == "__main__" block that calls the test; it won't run when pytest imports the file, only when we run it directly with python. If we run it from python, the test still fails.
We get that AssertionError,
but it doesn't tell us much at all, so I definitely like pytest's output better.
Thanks, pytest.
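Here's a sketch of that file (the values are the ones mentioned in the video; details may vary):

```python
# test_card_fail.py (sketch)
from cards import Card

def test_equality_fail():
    c1 = Card("something", "brian", "todo", 123)
    c2 = Card("foo", "okken", "todo", 123)
    assert c1 == c2   # fails; pytest's rewritten assert shows which attributes differ

if __name__ == "__main__":
    # "python test_card_fail.py" uses the plain assert with no rewriting,
    # so all you get is a bare AssertionError.
    test_equality_fail()
```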
2:29
Assertions aren't the only way we can cause a test to fail.
We can also use really any exception.
And there's a special way within pytest to do it, called pytest.fail().
So let's try that.
Let's make a new file, test_card_alt_fail, and I'm going to copy this bit over.
Instead of an assertion, let's say: if c1 is not equal to c2, call pytest.fail("they are not equal"). Since that's coming from pytest, we have to import pytest, and now let's try running it.
We still get one test failed, so that worked, but it just doesn't give us very much information, so I'd definitely recommend using asserts when you can; still, it's nice to have this around as another way.
We can also use any exception, so let's do another test
that just raises an Exception, and that second test also fails, with just a raw exception. So really, any exception works, but that's weird; if I saw this in a test, I wouldn't understand what's going on.
However, pytest.fail makes a little more sense.
Anyway, it's personal preference, but I prefer assert first, then pytest.fail, and then some other exception. It is good to know, though, that if any of the code you're testing raises an exception you didn't expect, that will also fail the test.
So that's good.
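Sketches of those two alternate ways to fail (illustrative):

```python
import pytest
from cards import Card

def test_with_fail():
    c1 = Card("something", "brian", "todo", 123)
    c2 = Card("foo", "okken", "todo", 123)
    if c1 != c2:
        pytest.fail("they are not equal")       # fails the test with a message

def test_with_raise():
    c1 = Card("something", "brian", "todo", 123)
    c2 = Card("foo", "okken", "todo", 123)
    if c1 != c2:
        raise Exception("they are not equal")   # any exception also fails the test
```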
3:38
I want to take a break and talk about test structure for a minute.
All the tests we've written so far fall into a similar structure, with the asserts at the bottom.
And that's not an accident.
Let's grab one of these, test_to_dict, and talk about it specifically.
We'll stick it over here so that we can run it if we want. So here's our test.
The structure comes in several forms, and it's been talked about by Kent Beck, Dan North, and many others,
as either Arrange-Act-Assert or Given-When-Then.
Let's talk about Arrange-Act-Assert first.
At the top of the test we arrange whatever we need to arrange, then we do an action, and then we assert some things.
What does that look like in our test?
In our test we've got c1,
where we create a card.
That's just getting ready to do an action,
so that would be the arrange.
The action we're testing is to_dict.
We want to see what to_dict does, so calling to_dict is the act.
And the assertion is the bottom bit, where I'd say building the expected value is part of the assert step too.
So we've got the assert there, and a lot of people really resonate with the Arrange-Act-Assert idea.
If Arrange-Act-Assert works for you as a way to keep these in a good order,
that's great.
The main thing we want to do is keep the arranging at the top,
with no asserts.
The act should be obvious, hopefully from the test name as well.
And then the assertions go at the bottom.
What we don't want to do is interleave these.
We don't want multiple actions interleaved with asserts,
because those tests are difficult to read and maintain.
What resonates more with me is Given-When-Then, so let's take a look at what that would be like.
Given a card with known values, when we call to_dict on the card, then we get the expected dictionary.
The reason I really like the Given-When-Then version,
which is the same as Arrange-Act-Assert,
is that I like thinking about a given state.
Given a certain state, when I do some action, then some expected outcome happens.
I like this because I can then come up with different given states.
For a particular action, what are all the different states I need to act on?
And for a given action, what expected outcomes could I get from those different states?
That sort of thinking helps me.
So I usually think in Given-When-Then rather than Arrange-Act-Assert, but really, either one; if it works for you, then awesome.
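Here's test_to_dict again with that structure called out in comments (the comments are mine; the body follows the earlier sketch):

```python
from cards import Card

def test_to_dict():
    # GIVEN a card with known contents            (Arrange)
    c1 = Card("something", "brian", "todo", 123)

    # WHEN we call to_dict() on the card          (Act)
    c2 = c1.to_dict()

    # THEN the result matches the expected dict   (Assert)
    c2_expected = {"summary": "something", "owner": "brian", "state": "todo", "id": 123}
    assert c2 == c2_expected
```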
2:40
I'd like to show you how to group tests together with classes.
I don't often do it, but a lot of other test frameworks use classes, you can use classes with pytest as well, and some people really like them, mostly for grouping. So I want to show you how that's done.
Within our test_card file we have a handful of different equality tests.
Here are three equality tests.
As an example for classes,
let's grab these test functions and put them in a class.
For a class we'd write something like class TestEquality, and then we can just put our functions there.
But they need to be methods, so we'll indent them, for one.
They also all have to take self as the first parameter,
so we'll stick a self everywhere.
And I think that's it.
Let's see if we can run it.
Our tests run; let's do verbose, and we can see that the test node ID is a little different.
The test node name now has the class name in front of it, and we can take that and run it individually.
If we say pytest and paste that whole node ID down, we can run one individual test; let's make sure we do that with -v, yep, it ran that one individual test.
So this is just sort of a grouping thing.
We can have other tests in the file too,
say a def test_foo outside the class.
One reason you might want to do this is to be able to run just the class.
If we run the entire file, we get everything including test_foo, but we can also say we want to run just TestEquality, that one class full of tests.
So that's handy.
When we get into fixtures, we can use fixtures at the class level, too.
And that's another reason to use classes.
But I just wanted to show you that,
yes, this is a way you can group tests with classes pretty easily.
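A sketch of those equality tests grouped in a class (same bodies as before, now as methods with self):

```python
from cards import Card

class TestEquality:
    def test_equality(self):
        c1 = Card("something", "brian", "todo", 123)
        c2 = Card("something", "brian", "todo", 123)
        assert c1 == c2

    def test_equality_with_diff_ids(self):
        c1 = Card("something", "brian", "todo", 123)
        c2 = Card("something", "brian", "todo", 4567)
        assert c1 == c2

    def test_inequality(self):
        c1 = Card("something", "brian", "todo", 123)
        c2 = Card("completely different", "okken", "done", 123)
        assert c1 != c2
```

Running just this group is then a matter of giving pytest the class's node ID, something like pytest test_card.py::TestEquality.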
3:31
The last thing I want to show you in this chapter is different ways to run a subset of tests.
With test classes we grouped some tests so we could run just those, but we've actually written quite a few tests by now.
So let's see how many we've got.
If I go to my chapter two directory, there's a handful of test files, and if I run them all, I know there are going to be some failures and I don't really want to see the tracebacks.
So there's a flag we'll use now, --tb=no, to turn off tracebacks, and we still get the summaries, which is nice.
Really I just wanted to see how many failures we've got: 18 test functions or methods were collected, 13 passed and 5 failed, and I'm pretty sure we expected all of those failures.
There are five failures, and I'm pretty sure we named all of them with "fail" in the name.
Yeah, all of these failing ones have it.
So if we want to run all the tests with "fail" in the name, we can use -k, for keyword, and say we want all of the tests that have fail in the name, and it will run just those.
They all fail here.
We can also add -v, of course, to have them listed out below.
So that's cool.
We can use keywords to grab the tests we want, and we can also directly run test node IDs; we can copy a whole node ID and run that. Or let's do the opposite of fail.
Let's run all of the tests that are not the failing ones.
The keyword expression can include logic, with and, or, and not, so I'll use "not fail" to run all the non-failing tests.
That's cool.
So we can use keywords with logic in there.
We can run a test class, we can run a single test method within a class, or we can run just one test function by giving its node ID.
The keyword option is pretty neat, though.
What if we wanted all the equality tests that are not failing?
We could use the expression "equality and not fail".
That's neat.
There are a whole bunch of options with this keyword flag, and the combination of keywords and specifying node IDs directly is really pretty powerful for selecting the subset of tests you want to run. It isn't a big deal for the tests we have now, because if we just ran everything, it really doesn't take that long.
We've got 0.09 seconds.
But you might have a lot of tests, or tests that take a while, and it's sometimes handy to zoom in and run just a handful.
52:29

1:21
Welcome to Chapter Three, all about pytest test fixtures. Test fixtures are one of the amazing features of pytest, and they can be really helpful to your test strategy; fixtures are incredibly powerful and important.
Yes, they can be difficult to get at first, and it's hard to think about them until we start looking at some code, so we're going to go through several examples just to let you see fixtures a few times.
It'll help with the learning process.
Fixtures are helpful for setup and teardown: they're a way to take the setup and teardown code of a test and pull it out of the test into a fixture function. Fixture functions can also be used for test data, because they have the ability to pass information to the test.
We can use multiple fixtures per test, and you can combine them in some cool ways.
There are built-in fixtures that come with pytest that are really powerful, and we'll look at a couple of those. There are also several scopes; scope isn't going to make much sense until we start seeing some examples. Fixtures can also be shared between test files by using a conftest.py file, and we'll use an example to show that in this chapter.
2:31
As the first example of fixtures, I'm going to take this simple test: I've got test_cube, which starts with a number, 42.
That's my given stage.
I assign 42 to a number, then I cube that number with the ** operator, and I expect it to be the same as multiplying the number by itself three times.
So I'm asserting: given the number 42,
when I cube it, the output is the same as multiplying it by itself three times.
This is not that complicated, but let's go ahead and run it just to make sure it works.
Yes, that passes; this is not a surprise to us.
What happens if we do something else?
Let's compare against multiplying it four times instead.
Yes,
just to make sure we understand, that fails, and that's a pretty big number.
Okay, put that back to normal.
Clear the screen now.
How can fixtures help here?
Well, the getting-ready part, the 42, is right in the test; we could push it out of the test. This is a simple example, but I want to show you the structure of how we use fixtures.
We have to import pytest first, and then we use a decorator called @pytest.fixture() and just decorate a function. It can really be any function; it does some work and has a return value.
If you don't return anything, of course, Python returns None.
But here we want to return the value, because it's going to be used by the test, and we just put the name of the fixture as a parameter to the test. When the test runs, pytest is going to see the num parameter and say, hey, I need a fixture called that.
So it's going to find this num fixture,
run it, take the return value, and fill it in for the parameter.
That seems kind of cool, and it is cool.
So let's run it just to make sure it works:
pytest test_cube.py, no, the fixture version of the file.
Yeah, that works too.
In this case it doesn't make a lot of sense to pull the number out; I'm just showing the structure of how to use a fixture.
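Here's a sketch of the two versions side by side (names follow the walkthrough; details may differ from the course files):

```python
import pytest

# Version 1: everything inline in the test
def test_cube():
    num = 42                           # given
    result = num ** 3                  # when: cube it
    assert result == num * num * num   # then

# Version 2: the "given" pulled out into a fixture
@pytest.fixture()
def num():
    return 42

def test_cube_fix(num):
    result = num ** 3
    assert result == num * num * num
```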
6:26
In that last example we didn't have much going on; in the num fixture we just returned 42. But you could imagine some real work happening there, and in the next example we'll take a look at pushing setup and teardown into the fixture.
So let's say we've got this test_db file where I'm importing a DB class from a small database module. In the test, I have to connect to the database, which I do by constructing the DB class and saving the object; then I do some action on the database and get a result back from that.
Again I'm checking to make sure the result is 42, and then I'm doing the teardown, closing the database here,
because I have to close the database after I'm done with it.
This is a common scenario: you grab a resource, use it, and close it.
Now, this is problematic as we have it right now.
Hopefully you can see the problem: when everything passes, everything is fine, but if the assert fails, if the action returns something other than 42, the close is never going to be hit.
So we can't really put the assert there, before the teardown; we'd want to move the assert down to the bottom. Setup and teardown for the database are inside the test, and that's a little problematic. Just as an example, let's go and run it in chapter three.
pytest test_db still works,
and it works because I've got a very simple database object.
For this test case I just have this class, and it doesn't really do much other than return 42, but it has an __init__ that connects to the database and a close that disconnects.
It's not really doing anything except printing.
But let's take a look just to make sure it's doing that within the test.
Again, we can use -s to turn off output capturing so that we can see those printouts.
And sure enough: I connect, I do the action, and I disconnect.
That's exactly what we want to have happen.
We were talking about the problem of the assert having to come before the teardown. To be fair, Python without pytest has ways to deal with that, namely context managers.
So let's look at the same example using contextlib. contextlib has a handy closing() context manager, and we use it with a with block wrapping the DB() call. It returns the db object, and at the end of the with block it calls close for us.
I wanted to include this because context managers are pretty cool and you can use them in testing too, and closing() is neat.
As long as the object has a close method, it will work; let's run that.
And sure enough, yep, we connect, we do some action, and the disconnect gets called when the context manager exits, so that's pretty neat.
We can use a fixture instead, though.
The normal flow was: set up, call an action, close. If we use a pytest fixture, we can do the same thing.
Like our initial num fixture,
this db fixture creates the DB object, so that's the setup, and the close after the yield is the teardown.
In our previous example with the num fixture, we just returned 42, but in this case we want to do work after handing the value over. So instead of return we use yield: that returns the value and then hangs out, and the db object is still available within the fixture function to do the teardown afterwards. We pass the db object to our test, and now we can do some action on it and not worry about cleanup; the fixture is doing the setup and teardown of our resource for us.
Just to make sure this still works, we'll run test_db_fix. Awesome.
It's still connecting, doing the action, and tearing down, because we're yielding that db object to the test.
Now, one of the cool things about fixtures is that you can still use context managers inside them.
So let's combine the two concepts.
In this example we're combining the two techniques: we've got contextlib's closing() and a little db fixture that's using the context manager.
It says: with closing(DB()) as db, yield db. So we're not directly calling close; we're letting the context manager call close for us, and we can just yield from within the fixture.
The test is the same, and the teardown happens as part of exiting the context. Let's make sure that works.
Yeah, we're connecting to the database, doing some action, and disconnecting; all of it's working great.
So on the fixture side, just to show these again: set up a resource, yield an object, close the resource afterwards,
and the teardown happens after each test that uses the fixture is done.
And if we want to yield from within a context manager, we can; the context just sticks around until the fixture is ready to run the teardown.
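Here are sketches of the three variants just described; the DB class below is a stand-in so the example is self-contained (the course uses its own small database module):

```python
from contextlib import closing
import pytest

class DB:
    """Stand-in resource with connect/action/close behavior."""
    def __init__(self):
        print("connect")
    def some_action(self):
        return 42
    def close(self):
        print("disconnect")

# 1. Setup and teardown inline in the test: the assert has to wait until after close().
def test_db_inline():
    db = DB()
    result = db.some_action()
    db.close()
    assert result == 42

# 2. A yield fixture: setup before the yield, teardown after it.
@pytest.fixture()
def db():
    database = DB()      # setup
    yield database       # hand the resource to the test
    database.close()     # teardown

def test_db_fix(db):
    assert db.some_action() == 42

# 3. A fixture that lets a context manager do the closing.
@pytest.fixture()
def db_closing():
    with closing(DB()) as database:
        yield database

def test_db_closing(db_closing):
    assert db_closing.some_action() == 42
```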
1:30
Setup and teardown are so ingrained in, and so crucial to, how fixtures work
that pytest has a really cool tracing feature for them.
When you're working with fixtures, especially multiple fixtures, you can lose track of what's happening when. In our example here I've used print statements, but usually you're not going to have print statements around.
So let's take a look at this.
The last example we did was the context manager fixture; if we didn't have all the printouts, we'd just see the test result.
But pytest lets you pass --setup-show, which is really cool for tracing what's going on within your fixtures.
It runs the tests, but it also tells you exactly the order things happen in.
It shows that it's setting up db, and that's a fixture name, so it's the db fixture. Then it runs the test, listing the fixtures used, because sometimes there's more than one, and then it shows the teardown. The little F here refers to scope, and we'll talk about scope later.
I haven't specified a scope here.
If I did, it would go inside the fixture decorator, but the default is function scope, and that means that if I had multiple tests, the setup and teardown would run before and after each test.
6:15
Now I'd like to take the concept of fixtures and apply it to the Cards application. I need to pick a feature to test, and the feature we're going to test is count. It's a fairly simple concept: if I run cards with nothing in it, there are no cards listed, but I can run cards count and get zero.
Awesome.
Now if I add something, cards add something, cards lists it and the count is one.
That's easy enough.
Let's see if we can turn this into an automated test. In chapter three I've moved the previous learning items into a learning-fixtures subdirectory, and we're going to use a test-cards subdirectory, still under chapter three.
Let's get started with this.
I know I'm going to have to do import cards, so let's start with that.
I want my first test to be testing for empty, and how should it start?
I want to say: given an empty database, when count is called, then it returns zero.
That should be easy enough.
I need a database, which will be our setup, then I call count, and then I assert that it's zero.
Let's get started with the database; we need to set that up.
We create it with cards.CardsDB().
CardsDB takes a pathlib path object,
so we're going to have to create one of those.
So I'll go ahead and make a db_path
for the database, which means pulling in pathlib and building a pathlib.Path. And now I need a directory.
To get a directory, I'm going to use something Python has called tempfile: from tempfile we can import the TemporaryDirectory class, and it works great as a context manager.
So we'll use it as a context manager, with TemporaryDirectory() as db_dir, and shift the rest of the code over under it.
Now I'm creating a temporary directory, creating a path from it, and passing it to cards.CardsDB().
Since I'm creating the database in a brand-new temporary directory, it should definitely be empty.
So I think we're good there.
Then we just call count, count = db.count(),
and we should assert that the count equals zero.
I'm in the test-cards
subdirectory of chapter three.
Great: run pytest on this initial count test file.
Yeah, wow, that actually works.
That's pretty cool.
So this is the setup,
this is our given; given this setup, when count is called, count returns zero.
Oh, but I forgot to tear down the database.
So we have to clean up; let's call that the teardown.
The teardown is db.close(),
and it's the same issue as with the assert before:
we want the db.close() to happen before we assert, so let's put it there.
So we've got our setup, and our teardown kind of in the middle.
The real part of this code, the part I want to be obvious in this test, is really just the count call and the assert; the rest of it is extra stuff that's sort of distracting.
It's necessary boilerplate for the test.
But I don't really like it, and I'll give you an example of why.
I want to test, say, one item, a non-empty database.
The empty test read: given an empty database, when count is called, then count returns zero.
But now I want the count to come back as something other than zero, so the database can't stay empty.
Given a database with some items, I need to put some items in there.
So db.add_card(),
and we know what cards are: cards.Card() with a summary of "something". Let's add a couple of cards.
So now we've got two items.
We started out with an empty directory, an empty database,
and we added a couple of cards, so the count should end up being 2. A NameError; what did I do wrong?
Oh, I forgot to import Card properly.
Fixed, and now we've got two tests passing.
Oh wait, this test was supposed to be about one item.
So I really want one item; let me change it and make sure that still works.
Okay, with one item we get a count of one.
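Roughly where the two tests stand at this point, before introducing a fixture (a sketch; names follow the walkthrough and may differ from the course files):

```python
import pathlib
from tempfile import TemporaryDirectory
import cards

def test_empty():
    with TemporaryDirectory() as db_dir:
        db_path = pathlib.Path(db_dir)
        db = cards.CardsDB(db_path)

        count = db.count()
        db.close()

        assert count == 0

def test_one_item():
    with TemporaryDirectory() as db_dir:
        db_path = pathlib.Path(db_dir)
        db = cards.CardsDB(db_path)

        db.add_card(cards.Card("something"))
        count = db.count()
        db.close()

        assert count == 1
```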
There's still a lot of redundant stuff between these tests.
So let's try to fix that by putting it into a fixture.
4:05
Alright,
ready for fixtures.
I'm going to grab all of this and put it in a new file.
We'll call it test_count, because this will be our real test.
Now I want to pull the setup up into a fixture.
We need to import pytest so we can use @pytest.fixture().
And let's name the fixture: we could call it db, like the variable in the code,
but db is kind of general,
so let's call it cards_db. Then we move all of this into it:
the with TemporaryDirectory(), the db_dir,
the db_path. We don't need the count.
So this is the setup, this is the teardown, and in the middle we yield the database.
That's what goes to the test, and I can put a little space in there.
So this works.
The cards_db function
is a fixture: when a test uses it, pytest runs the setup, then runs the test, giving it the database, and then closes it afterwards.
This is exactly what we want to have happen.
Let's see if we can use it in our empty test.
If we use cards_db as the test parameter, then "given an empty database" is taken care of; it's already empty when count is called.
Now I can move this code over, since we no longer need the with block, and when count is called,
we don't need the teardown,
count returns zero.
Well, that's a lot shorter.
It's so simple that I don't
really need these comments.
Sweet.
That's a little tiny test.
Now, one item.
Does the fixture make this one better too?
Use cards_db, and for "given one item":
the database starts out empty,
so we still need to add a card.
Oh, we changed the name:
I was using db before, and we changed the name to cards_db,
so we'd better change that.
I can move these lines over:
cards_db
.add_card(), so given a database with one item,
it starts out empty, I add one item, and then call count.
And we don't need to close it, because that's being done for us by the fixture; count returns one.
This really focuses our tests on just what we're actually testing.
This is pretty cool.
So let's give this a try.
We don't want to run the initial test count file, just the new one.
Hmm, something's wrong; I'm still using db somewhere, right there.
Fixed, and now it passes. Sweet.
Let's take a look at what's going on
with the setup-show flag.
So pytest test_count
with --setup-show, and we can see that the cards_db setup is getting called.
Then pytest calls test_empty, then the teardown, then the setup for the second test, and its teardown.
Because the setup and teardown happen around each test, and each setup uses a new temporary directory,
the database is going to be empty before each test.
So this is awesome.
This is exactly what we want.
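Putting it all together, test_count.py ends up looking roughly like this (a sketch; the course file may differ in detail):

```python
import pathlib
from tempfile import TemporaryDirectory
import pytest
import cards

@pytest.fixture()
def cards_db():
    with TemporaryDirectory() as db_dir:
        db_path = pathlib.Path(db_dir)
        db = cards.CardsDB(db_path)
        yield db          # setup above, teardown below
        db.close()

def test_empty(cards_db):
    assert cards_db.count() == 0

def test_one_item(cards_db):
    cards_db.add_card(cards.Card("something"))
    assert cards_db.count() == 1
```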
5:05
With our fixture in place, let's take a look at the setup and teardown that's happening.
For our little cards application, the setup is creating a new temporary directory and initializing the cards database.
This database is pretty lightweight.
So this probably isn't going to take a lot of time.
But there are other types of databases where this initialization could take quite a long time. Right now we're running just two tests, but eventually I can see us running a lot of tests, and we're setting up and tearing down the entire database for every one of them, which might not be necessary.
So this is where we can use fixture scope.
So I'm going to grab all of this code and stick it in a new file for module scope.
It's the same code,
so let's just make sure: run pytest on the new file.
Cool.
Still works.
Obviously we haven't changed anything yet, but I'm going to make one change.
We're using cards_db here and here,
and this is just giving us an empty database.
Right, coming in:
an empty database, checking that the count is zero; and here we're taking an empty database and adding one item so that we have one item. Awesome.
But why not just use the same database?
Why do we have to tear it down between tests?
So let's change the scope.
Within this fixture decorator we can specify a scope, and here I'd like to specify module; we just give it the name. By default a fixture has function scope, which means, as we've seen, it runs around each test function.
If we do module, it should run just once per module.
And this whole test file is a module.
The other scopes available are class,
which runs it once for each test class; package, once for each directory; and session,
which runs it once for the entire test run, even with multiple directories.
But right now we'll use module and that's it.
That's just the one change.
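As a sketch, the only difference is the scope argument on the decorator; the fixture body stays exactly the same.

    # scope options: "function" (the default), "class", "module", "package", "session"
    @pytest.fixture(scope="module")
    def cards_db():
        ...  # same setup / yield / teardown as before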
Let's see what that does.
So making sure that both tests still pass.
They do, but let's run with --setup-show and see what happens.
Now we've got a little M next to the fixture name: it runs cards_db once, then runs the two tests, and then runs the teardown at the end. That's pretty awesome.
It's doing the setup and teardown just once for the entire module.
So again, if this setup takes a long time, we're saving that time when we're running a whole bunch of tests.
Now, we have a subtle bug here, and it's a bug related to test ordering; with module scope fixtures, or higher level fixtures,
sometimes this can happen.
In our module scope file,
the database is just getting created and torn down once for all the tests. It happens that we're testing empty first and then one item, but the database sticks around between tests, so I want to demonstrate why this is a problem.
So I'll grab all of this code and stick it in a fail file.
The only change I'm going to make is to take test_empty and put it after test_one_item.
Pytest runs tests in the order they appear in the file by default;
there are ways to change that, but here it will run test_one_item first.
That one item is going to stay in the database, and then when we check for an empty database, it's not going to be empty.
Let's see if that fails as I expect it to.
So, the mod scope fail file.
Yes, as expected: the first test passed and the second test failed because the count was one, so we've got some state leaking between tests. Now, some people might say that's why you should always use only function scope, but I think there's a way around it, so we're going to look at a couple of fixes.
|
|
show
|
1:16 |
So with the mod scope fail file,
the problem was that the database was not empty between the two test calls.
Just to verify why this test fails: the count equals...
yeah, the count is one in the second test, test_empty, because we added an item in the first test and haven't cleared it out.
Let's go ahead and copy everything in this file and make a fix for it.
The easy fix: since we're assuming the cards database is empty but it isn't, before we do anything within each test we can call cards_db.delete_all(), and we'll do that in both tests.
Given an empty database,
delete all, when count is called,
then count returns zero. We don't strictly need the delete in the empty test, but let's put it in there anyway.
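The quick fix looks roughly like this; delete_all is the cards API call used in the course, and the rest is a sketch.

    def test_empty(cards_db):
        cards_db.delete_all()   # not strictly needed here, but keeps the tests symmetric
        assert cards_db.count() == 0

    def test_one_item(cards_db):
        cards_db.delete_all()   # clear out whatever an earlier test left behind
        cards_db.add_card(cards.Card("something"))
        assert cards_db.count() == 1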
So let's see if that fixes it.
So that fixes it.
We haven't really been "given an empty database" if we have to delete everything inside the test, so let's come up with another fix for this.
|
|
show
|
4:58 |
To fix this, we did a delete_all within the test, but I'd like to do it at the fixture level. The fixture is being used for both tests, though, so let's add another fixture into the mix to solve this problem.
I'm going to grab all of the code we had and put it into a multi-level file, because that's what we're going to be doing: multi-level fixtures. And this sounds complicated, right?
We just learned fixtures; what are we doing going multi-level already?
But bear with me, this is going to be pretty good.
We just copied and pasted, so it should still work; and it does, everything's passing.
But I'd like to make sure it's empty at the fixture level, and I don't want to change this back to function scope, because I want the connect to run just once and the close to run only once at the end.
Could I just put the delete_all in here?
If I just add that line to make sure it's empty,
it doesn't really do anything.
It doesn't help at all, because it's already an empty directory so it's going to be empty, and this only happens once per module, not per function.
I want this delete_all to happen once per function, and for that to happen, let's put another fixture in.
So I'll grab that fixture definition.
Well, these can't both have the same name.
I could name this one empty_cards_db,
and then have it call cards_db.delete_all().
So this would work.
What's happening here is that empty_cards_db
depends on cards_db,
and then it deletes everything.
Oh, I have to return cards_db. But then all the tests would have to change to use the empty one,
and I just want it to always be empty when a test gets it.
I don't want to change the name, because I use cards_db throughout
and maybe I've got dozens of tests by now; I don't want to do a search and replace.
It's not painful, but I don't want to do it.
So let's change the name here.
I'll change the initial cards_db to cards_db_module,
and we have to change it here and here.
Then we rename the new one back to cards_db.
So the tests are all depending on this one now.
I just changed the fixture underneath them; cards_db depends on cards_db_module, and hopefully everything just works smoothly.
And since I don't really need any teardown at this level,
I'm just doing the delete in the setup,
so I can just do a return. This will still work; the teardown for the parent module-level fixture will still run at the right time.
I hope. We'll see.
Is this enough?
We've got cards_db depending on cards_db_module for the setup and teardown, and the rest of the tests should just run.
Let's cross our fingers and hit go and see what happens.
Sweet.
Okay.
Anticlimactic, though.
I'm really excited that worked, actually, but let's watch it in action; we can use --setup-show to see what happened.
So yes, the cards_db_module fixture was run and then the cards_db fixture.
Oh it's still module scope.
So what's going on?
How did this pass?
Oh, I'm still deleting all within the tests. If I take those delete_all calls out, it should fail... yep, it fails, because I copied and pasted the module scope here.
We want the cards_db fixture that does the delete_all to be at function scope, so that before each test function the delete_all runs to make sure the database is empty.
Now I think we're good.
Yeah.
Sweet.
So now we've got the module-level fixture running at the beginning and the cleanup at the end, and between tests we're just running cards_db. It still says teardown; we don't have any teardown, but if we did, it would run there.
Awesome.
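Putting it together, the two fixtures end up something like this sketch: the module-scoped one does the expensive connect and close once, and the function-scoped one empties the database before every test.

    @pytest.fixture(scope="module")
    def cards_db_module():
        # expensive setup and teardown: once per module
        with TemporaryDirectory() as db_dir:
            db = cards.CardsDB(Path(db_dir))
            yield db
            db.close()

    @pytest.fixture()
    def cards_db(cards_db_module):
        # cheap per-test reset: runs before every test function
        cards_db_module.delete_all()
        return cards_db_module   # no teardown needed here, so a plain return is fine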
|
|
show
|
4:58 |
Okay, we have this pretty cool setup, we have a couple of fixtures that work together.
We have a function level one that makes sure that the database is empty.
We have a module scope one that connects to and closes the database. And then we have some tests, and I'd like to be able to write a whole bunch of tests in different test files, but not have to repeat these fixtures in each test module.
So I'm gonna next talk about sharing fixtures with a conftest file.
So I've got a different directory set up.
So this is all in the cards test directory, but I've got a different directory called fixture_sharing that I'd like to show you, and I'm just going to keep this open so we can copy some code out of it.
Within conftest we're going to put the fixtures; we'll grab the fixtures and put them in conftest.py. That name is a pytest convention: it has to be conftest.py, it can't be anything else.
This is what pytest looks for, and you can have one of these per directory if you want. You can share fixtures within this directory; you can't use them outside of it, but they can be used here and in subdirectories too.
If we had subdirectories under fixture_sharing we could use the fixtures there too, but we don't, it's just here. So that's the fixture part. Then we'll put one of the tests in a test file; copy that over. Do I need to import anything? cards_db is just going to come from conftest, so I don't need to import that.
But right here, cards.Card, I'll have to import cards for that Card object.
The rest should be okay, and the test_empty one goes in a second file, test_zero.
And here I'm just using cards_db,
so I don't need to import anything, which is kind of amazing. And that's it.
Right now, let's take a look at what this test output looks like.
We're going to go into fixture_sharing, and we should be able to just run everything here.
If we do pytest -v, we get test_one_item passed
and test_empty passed.
So yeah, this works; how is it working?
Let's take a look.
We don't need that multi-level file anymore; this conftest file is what's sharing the fixtures.
That's what conftest.py can be used for: sharing fixtures.
We have test_zero, which is just the empty test,
and test_one, and we could add more.
Let's watch it with --setup-show.
Okay.
Bummer.
We have something going on that I don't really want: I wanted the database connection fixture to run once per session, and I think we know the answer.
It's module scope here, so it's going to run once for test_zero and once for test_one; what we want is session scope. And I probably should have named this fixture something different, because it has module in the name everywhere, but it's only used here.
It's not used directly in any of the tests.
We could just replace module with session in the name; we should have chosen a better name anyway.
Okay, so change the name to cards_db_session.
Now, the tests still just refer to cards_db.
We haven't changed anything with the tests.
What happens now?
Do we have to update the tests?
We shouldn't have to. Oh, here we go.
So we've got cards_db_session running at the beginning and at the end, and then our function-scoped cards_db running around each test. Awesome.
So now we have these fixtures within the conftest file, and I can write as many tests as I want and group them together,
just using these fixtures within this directory.
This is a great way to share fixtures around the test directory.
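So the layout is roughly this; a sketch of the conftest.py plus one of the test files, with the fixture bodies as before and the cards API assumed from the course.

    # fixture_sharing/conftest.py -- fixtures defined here are visible to every
    # test file in this directory (and below) without any import
    from pathlib import Path
    from tempfile import TemporaryDirectory
    import pytest
    import cards

    @pytest.fixture(scope="session")
    def cards_db_session():
        """CardsDB object connected to a temporary database."""
        with TemporaryDirectory() as db_dir:
            db = cards.CardsDB(Path(db_dir))
            yield db
            db.close()

    @pytest.fixture()
    def cards_db(cards_db_session):
        """CardsDB object that is empty."""
        cards_db_session.delete_all()
        return cards_db_session

    # fixture_sharing/test_one.py -- only cards needs importing, not the fixtures
    import cards

    def test_one_item(cards_db):
        cards_db.add_card(cards.Card("something"))
        assert cards_db.count() == 1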
|
|
show
|
3:08 |
We haven't talked about built-in fixtures yet, so that's what we're going to talk about now, and it's going to come up as we try to get rid of this temporary directory code here.
It works, but there's a cleaner way. I also want to address one topic before we move on: I've got these different test files, and one of the issues with having fixtures in a conftest file is that there can be one conftest per directory.
So in theory, in my project, in my pytest course directory, in chapter three, in fixture_sharing, I could have a conftest file here, I could have one at the chapter three level too, and I could have one up at the pytest course level; conceptually, any of those places could be where my fixture lives, and you can share them like that.
So how do you tell where they are?
If I'm just looking at the test file, I see this cards_db being used and I need to know where it's defined.
Well I can use pytest to show me where it is.
I'm in chapter three right now,
so I'll go into the fixture_sharing directory, run pytest, and verify it works.
$ pytest --fixtures is a really cool way to find out what fixtures are available for the directory I'm in, and I can give it a specific test file if I want, because there might be fixtures in that test file too.
So let's do that with the test_one file.
What are the fixtures available?
There are actually a bunch; these are some of the built-ins we're going to talk about, but at the end I've got the fixtures from conftest, and it tells me exactly where they are: cards_db on line 17 of the conftest file, and cards_db_session.
So that tells me exactly where these are.
Unfortunately, I didn't add a docstring, so let's fix that before we move on.
It's nice to have a docstring there, and I'll show you why.
So cards_db_session, what is that doing?
We'll say that it returns a connection to the cards database; and well, so does the other one, but that one is special: it returns a connection to an empty cards database. So they're slightly different.
Maybe you can come up with better wording, but this works, and we can see how the docstring helps us with the --fixtures call.
Now that docstring shows up right there in the --fixtures output.
So that's pretty cool.
|
|
show
|
4:43 |
But let's take a look at the built ins here.
So we've got some things that look like they might be kind of cool.
The ones I want to look at right now are the tmp_path ones. Down here at the bottom we've got tmp_path_factory and tmp_path; remember, the fixture we're replacing is a session scope fixture.
There are two related fixtures that pytest gives us: tmp_path, which is a way to have a temporary directory unique to each test function invocation.
That's pretty cool.
But we're setting up the temporary directory in a session scope fixture.
So what we need is tmp_path_factory, which returns a factory instance for the test session.
That could actually be a better docstring,
but I'll show you how to use it.
We've got all our code within fixture_sharing.
I want to leave this here so that you can play with it as is, but I'm going to copy everything to a built-in fixtures directory.
So from where we were, copy everything *.py into the new directory.
Cool.
Now let's close these off; the new directory has the same code we had before, even including the docstrings, because we just copied it.
So that's cool, I'm gonna go over and make sure that it still works.
Of course.
Good to start out working.
Let's start replacing this.
So we don't need this anymore.
So this TemporaryDirectory; what was that fixture again? tmp_path_factory, that's what we're going to use.
Instead of this temporary directory, the fixture itself is going to use the tmp_path_factory fixture.
The way this works is you create directories with it.
So to create a temporary directory,
I'm going to say db_path = tmp_path_factory.mktemp(), and you have to give it a base name.
I'll just give it "cards_db".
It really doesn't matter, because the whole thing's created in a temporary directory and we get a subdirectory within it. So that's my db_path, at the session level.
And that's already a pathlib Path object,
so I don't need this setup, and I don't need the context manager anymore, because tmp_path_factory already handles that since it's a pytest fixture.
Leave this setup comment there, the rest is coming over, and now I don't need the pathlib import anymore,
and I don't need tempfile anymore.
Now I'm just using the built-in tmp_path_factory to create a temporary directory.
I make the temporary directory here, we pass it to CardsDB, and everything else just works as normal.
Then I close the database, and pytest, as part of tmp_path_factory, will clean up the directory when it cleans up.
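The rewritten session fixture ends up roughly like this; mktemp is the real tmp_path_factory method, and the rest is a sketch around the cards API.

    @pytest.fixture(scope="session")
    def cards_db_session(tmp_path_factory):
        """CardsDB object connected to a temporary database."""
        db_path = tmp_path_factory.mktemp("cards_db")  # a pathlib.Path, cleaned up by pytest
        db = cards.CardsDB(db_path)
        yield db
        db.close()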
So we're in the built in fixtures directory where I just changed this.
So let's make sure everything works.
Now let's look at the session again with pytest --setup-show; the tmp_path_factory gets called, then the session fixture.
I also want to point out that every level shows you which fixtures are used.
The session fixture uses the factory, cards_db uses cards_db_session, and then the test sort of inherits all of them.
This is pretty great.
Really, what I wanted to show you here is that there's a whole bunch of cool built-ins within pytest, and it's useful to check them out.
You can check them out with --fixtures; you can see all the different ones available.
But I definitely recommend looking at the pytest website as well.
So go and check them out.
|
|
show
|
4:34 |
The last thing I want to talk about with fixtures before we move on is to talk about multiple fixtures.
I know that within the built-in fixture section and others we transitively used a lot of fixtures, because the test uses cards_db, cards_db uses cards_db_session, which uses tmp_path_factory; so indirectly we used a whole bunch of them.
But in each individual place we only used one fixture at a time, and that's just because that worked for our needs.
I just want to demonstrate that you can use multiple fixtures.
This is going to be a pretty simple example: import pytest, which we need so that we can declare a fixture.
Just a plain fixture, spelled right, and it can return anything.
So a foo fixture can return whatever; let's just return some string, capitalized slightly differently.
Then a bar fixture, and let's go ahead and do a third,
baz, to complete the set. Okay, with these three fixtures here, let's write a test, test_multiple.
That's good enough; the test takes foo, bar, and baz, so all three of these fixtures are going to be used, and we can just print them, just to demonstrate that we can use multiple fixtures.
Let's run that; we're in the right directory. pytest of course won't print anything because nothing is failing, but if we add -s it will show the output, and all three of them got printed. That's not surprising, but it's good to know that you can use multiple fixtures at a time.
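The little playground file looks something like this; the names and return values are just made up for the demo.

    import pytest

    @pytest.fixture()
    def foo():
        return "Foo"

    @pytest.fixture()
    def bar():
        return "Bar"

    @pytest.fixture()
    def baz():
        return "Baz"

    def test_multiple(foo, bar, baz):
        # a test can request as many fixtures as it likes
        print(foo, bar, baz)

Run it with pytest -s to see the print output.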
You can also do this at any level, so fixtures can depend on each other.
Tests can use multiple fixtures and fixtures can use multiple fixtures, but one thing you cannot do is ignore scope. So if I do scope equals module on foo and scope equals session on bar, this is going to be backwards.
If bar uses foo, that's not allowed; and baz uses bar,
or let's say baz uses foo and bar, so what's wrong here?
baz is function scope by default, so it can use anything above function scope;
it can use session or module fixtures. But bar is session scope, and there's nothing above session, so it cannot use a module-scoped fixture. This should blow up, and it does; luckily it tells us exactly what happened:
you tried to access the module scoped fixture foo with a session scoped request object.
In any case,
it's a scoping problem.
If we swap these it'll work fine, or I can do module here too.
That should work; yep, function to module works too,
and session would work.
Yeah.
And what about the order?
Does the order make a difference when we're using all three here? Let's show foo, bar, baz and then baz, bar, foo.
They're all the same, so let's just leave it like that; it works, and you can play with it.
You can play with these to see how it behaves.
I do encourage you to write simple tests like this if you forget how some pytest functionality works, or really any library you're working with.
Just make little play test functions and fixtures to experiment with, and keep that separate from your real test code.
So this is completely separate from the cards application.
I'm just exploring how this functionality works.
So that's a great, great thing to do.
I do it all the time.
|
|
show
|
1:39 |
Let's review everything we were hoping to cover in this chapter to make sure we got through it.
So, fixtures for setup and teardown.
Yes, we did that setting up a database with cards_db. Fixtures for data: actually cards_db returns data too,
but we also showed with that last example of foo, bar, and baz how you can just return data from a fixture. Using multiple fixtures:
yeah, we just covered that. Built-in fixtures: we quickly looked at the list, and we covered tmp_path_factory.
That's a mouthful, but please check those out; with new releases of pytest they sometimes add more, so it's always worth checking what built-ins are there. We looked at scope: function, module, and session scope, and hopefully that's sort of understandable now. Then sharing fixtures between test files with conftest.py: definitely covered that. And we also covered some new flags that I want to highlight so you don't forget: -s to turn off output capture.
You can also use --capture=no to do the same thing.
We used --setup-show to trace fixture execution, very handy for understanding what's going on, and then --fixtures to show what fixtures are available and where they're defined; so if you forgot where a fixture is, you can use --fixtures to figure that out. Next chapter,
we're going to take a look at parametrization, which is another really cool feature of pytest, and I know you're going to love it.
|
|
|
42:03 |
|
show
|
0:27 |
Welcome to Chapter four, Parametrization.
Parametrization is about turning one test function into many test cases, to test more thoroughly with less work.
Parametrized testing refers to adding parameters to our test functions and passing in multiple sets of arguments to create new test cases.
In this chapter, we're going to learn how to parametrize test functions, parametrize fixtures, and use multiple parameters.
|
|
show
|
1:41 |
Let's take a look at one of the cards features called finish.
So let's say I've got a cards database already set up.
I've got three items and they're in three different states: write a second edition, which is done; record pytest course, which is in progress; and release the course, which is still to do.
These are the three states that are possible for a card item within the cards application: to do, in progress, and done.
And there is a feature called finish which changes the state.
Let's just see what that help looks like.
The help gives us a definition: finish will set the card state to done.
Well, let's try this for each of these items.
Let's say we finish record pytest course, the second one, and list again; now that one's done.
Okay, so it works from in progress.
Can we finish one right from to do, going straight from to do to done?
We finished the third one.
It looks like they're all done now.
What happens if we finish something that's already done? So we call finish on card one, which is already done.
But what happens?
Well it doesn't seem to mind that and it just leaves it in a done state.
Let's convert those experiments to tests.
|
|
show
|
8:09 |
In the Chapter four directory,
we've got a test finish file where we're going to start testing this finish operation.
I've got a test started. The states we need to test: we want to run finish from either an in progress card, a done card, or a todo card, and make sure that the end result is done.
So I've got test_finish_from_in_progress; that's the most natural one.
I've got something that already started and I want to finish it.
The cards_db fixture is the same one we ended up with in the last chapter. I've got it in a conftest file, the same as we had before: there's a session scope fixture that connects to the database, and then a function scope cards_db that does delete_all and returns the database to the test.
We're just reusing that for this chapter. So let's start with: what do we have to do?
We want to start with a card.
So c = Card("something"); it doesn't really matter what the summary is.
The initial state of the card we want is in progress. We've got our card that's in progress; now we have to get it into the database.
So I'm going to do a cards_db.add_card; let's look at that again. The pop-up says it adds a card and returns the id of the card.
That's great, because I need that id
to get the card back later.
So go ahead and add the card, but we need to grab that index.
What are we going to do with the index?
We're going to use that index to finish it.
cards_db.finish takes a card id,
which is our index i.
So at that point I've got a card, I've added it to the database, and I've told it to finish that card starting from in progress; now I need to get it back out to see what state it's in.
I'm going to do a cards_db.get_card.
What did it tell me to do again? get_card:
I just give it the index, and that's going to return a card.
We can save it; let's just save it.
So card = cards_db.get_card(i).
Okay.
And now with that card I have to check the state.
So maybe final_state = card.state, that should do it, and then assert final_state == "done".
There's our little test.
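The first test ends up roughly like this; Card, add_card, finish, and get_card are the cards API as used in the course, so treat the exact spellings as assumptions.

    from cards import Card

    def test_finish_from_in_prog(cards_db):
        i = cards_db.add_card(Card("something", state="in prog"))
        cards_db.finish(i)
        card = cards_db.get_card(i)
        final_state = card.state
        assert final_state == "done"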
We're in the Chapter four directory, so run pytest on the test finish file.
Cool, that worked. Just to make sure, let's change the expected state to in progress
and watch it fail, so we know we're really testing something. Occasionally
I like to do that, watch something fail, just to make sure; and yes, it really is done instead of in progress.
Okay so that workflow seemed to work.
I'm looking at this card variable, and I know this might be a premature optimization, but I'm only using it in these two places.
So I'm going to tighten this up a little bit and just say the final state is cards_db.get_card(i).state,
get_card of i,
and then take the state of that.
So that's what we want now.
The final state is also only being used in one place.
I mean, we could go crazy with this.
Since we're only using the index here, we could put the add_card call inline instead of the i.
And for that matter, we're only using the c
in this one place.
So we could collapse that too; let's just try it and see what it looks like.
I am looking for readability, though, so we want to make sure we stay readable. If I grab the value of i we could put it in here, and then the card, we could do that too; go ahead and do that.
However, I really like my tests to tell a story, and this doesn't really tell a story to me.
This is a bit confusing.
So I personally don't like that version; but what do I like about this one?
Here's the story.
I've got a card, kind of a given.
Given a card, I add it to the database,
I finish the card, and then I get the final state.
Yes, I'm getting the card in order to get the final state, but really I'm just trying to find the final state, so this collapsing is okay with me. The others seem to break apart the story in a way that I can't read.
So.
All right, I've got one test here for finish from in progress. With the in progress one done, we need to do todo and done also. That should be straightforward: test_finish_from_todo, our initial state is todo, and the rest should be the same.
We just want to make sure that we can get from todo to done.
And we also want to test it from done.
It's already done.
I want to be able to see if I can finish it from done.
That should be fine.
Actually, you'll notice that this "something" is here because Card requires a summary.
But do I need it?
Should I assert to make sure that this "something" stays the same?
Does that add any value?
So let's try it: grab a final summary,
final_summary, and assert on it.
Is that adding any value, though?
Let's run it to make sure things are still working. Oh, final_summary?
Oh, that's a typo; assert that the variable final_summary is "something". Okay, so that passes; everything passes so far.
But I've only got two test items.
Oh, I forgot to change the name.
test_finish_from_done.
We should have three tests; okay, now we've got three tests: finish from in progress, finish from todo, and finish from done.
But this extra assert: we know the summary is not going to change. I mean, we could test for that, but that's not really what we're looking at.
It confuses the story, so I'm not going to do that; we're focusing on checking that the state changes.
If we wanted to make sure that the summary doesn't change during this action,
I think that would be a different test.
So.
All right, so I've got three tests and I can run them, but there's a lot of redundancy here, so let's clean it up a bit.
Really, these are all the same;
all of it's the same except for the initial state. That's where parametrization comes in really handy, but we're going to look at one other approach before we get there.
|
|
show
|
3:04 |
One of the ways we can deal with these three tests being mostly the same except for a different start state is we can just combine them into one test.
So let's just go ahead and grab one of these tests and combine them into one test.
So we just have test_finish; we're going to deal with all the states, and then for c
in a list of different cards.
Instead of one card here, we'll have three.
Three is fine: something, something, something; we don't really care about the summary, but we need todo, in prog, and done, right, the three states.
And since we're listing them here, we don't need the single card up here.
We can grab that, delete it, and move things around.
So for each card c
in the list, we'll add the card, finish it, get the final state, and assert that it's done. That works.
We have all the summaries the same, but since we're doing three cards we could change them anyway, just for fun:
1, 2, 3.
Great.
Let's see if that works.
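Combined into one test, it looks roughly like this sketch:

    def test_finish(cards_db):
        for c in [
            Card("one", state="todo"),
            Card("two", state="in prog"),
            Card("three", state="done"),
        ]:
            i = cards_db.add_card(c)
            cards_db.finish(i)
            assert cards_db.get_card(i).state == "done"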
Run the combined finish test file.
Well, it passed, but that doesn't tell us much; we just end up with one "passed".
Yay. I have actually seen this, and I've been guilty of it myself a few times: testing more than one thing.
We're really testing three different test cases within one test.
What's wrong with that?
I mean, we're testing everything we want to, right?
Well, one issue is that we have no real record of what we're testing; I like the test node names to indicate what we're testing, and there are really three test cases here but only one shows up. That's one of the problems.
The other problem is that if one of these fails, say the todo one or the in progress one, we won't run the rest of them.
If the todo case fails right away, say we force a failure, this fails the whole test, but it won't hit all of the cases.
It stops the first time it hits a failure.
How do I know it stopped at the first one and not the last one?
Well, I know how Python works, but also I can add -l, which is short for --showlocals, and it shows me c, which is the card, with the state todo and the summary one.
So we know it's our first test case, and it didn't test any of the others. Parametrization is just as easy as this but even better; we'll get to that next.
|
|
show
|
5:52 |
Now, finally, let's take a look at function parametrization.
We're going to take this one test; first let's fix it so that it actually passes if we run it again,
asserting against done.
Then we'll take this test and use function parametrization instead.
The combined version wasn't too hard, it just doesn't give us the detail and information we want.
We're still going to have just one test function, but we're going to parametrize it the right way.
First off we need to import pytest, and then we decorate the test with pytest.mark.parametrize. We have to give it the names of the things we're parametrizing; in this case both the summary and the state are different, so we'll pass in both, summary and state, meaning the starting summary and the starting state.
We could spell that out more, or just leave it like this; I think this is fine.
Then for each test case we have a list of value sets.
So we've got the list, and the values will be
1, 2, and 3
paired with todo, in prog, and done.
That's what we want.
So delete the old list; that's really what we want, and we can put all of this in the decorator.
So that's not bad.
That's parametrize with summary and state,
1, 2, 3 and todo, in prog, done.
Each time through the list we're going to pass in a pair to the test, so I need to give the test function the parameter names, summary and state.
And then we will go back to assigning just one card.
So c = Card(summary, state).
Oh, we need state=state.
We don't need to specify the keyword on the first one.
We can if we want: summary=summary.
This may be confusing if you haven't seen it before: the left side is the parameter name and the right side is the value.
So let's actually put start in the names so we're not confusing ourselves:
start_summary and start_state.
And now the values we pass into Card are start_summary and state=start_state.
Now we can go back to the rest of the test; here we go.
The indenting looks weird.
Well, so now, is this it?
All right.
The parametrize list and the parameters to test_finish: start_summary...
oh, this is wrong.
It should be start_summary and start_state in both places;
they get matched up here. Actually, let's print them just to make sure, because I don't know if I trust it; we'll print an f-string with start_summary and start_state.
Run pytest.
Let's just make sure this works.
Oh yeah, it works; I was a little worried there for a second.
If we run with verbose, the test cases will show up.
This is neat actually.
Compare with the last one, when we just did the combined test;
test_finish_combined,
that's the one,
that's right.
It just showed one test, and now we're seeing the different test cases, shown with brackets after the test name. That's what parametrization looks like: 1-todo, 2-in prog, 3-done.
Yeah.
Let's walk through it again from the top, just to make sure we all understand what's going on.
Each time through, 1 gets assigned to start_summary and todo gets assigned to start_state, and the test runs; then it does the same thing with the second set, and then with the third set.
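As a sketch, the parametrized version looks something like this:

    import pytest
    from cards import Card

    @pytest.mark.parametrize(
        "start_summary, start_state",
        [
            ("1", "todo"),
            ("2", "in prog"),
            ("3", "done"),
        ],
    )
    def test_finish(cards_db, start_summary, start_state):
        i = cards_db.add_card(Card(start_summary, state=start_state))
        cards_db.finish(i)
        assert cards_db.get_card(i).state == "done"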
Let's go ahead and print these out just to make sure it's clear.
Run the parametrized file again.
There we go.
There's a printout for each one of these;
it's got the test node name, and then it prints start_summary=1, start_state=todo, and so on: 1, 2, 3 with todo, in prog, done.
Pretty easy,
don't you think?
Let's split the screen so we can compare those.
It's not too bad.
Oh, I can see one thing right off the bat: we're doing more work than we need to.
We're parametrizing two things,
but we really only need one parameter.
I'll show you that next, but I think this is pretty good;
the benefits far outweigh the drawbacks.
We don't need this print anymore.
We're pretty happy with that.
So let's take a look at how we can simplify this with just one parameter.
|
|
show
|
3:08 |
I went ahead and parametrized with two parameters,
start_summary and start_state,
just to show you how to do two parameters. The list needs to be basically a list or tuple of lists or tuples; the outside could be a list of lists or a tuple of tuples, it doesn't matter, but I usually do a list with tuples inside, just to keep the brackets clear in my own mind.
Otherwise, if we did a tuple of tuples, we'd have parens, parens, parens at the end; it doesn't really matter.
But this is my preference.
Let's take this and simplify it a bit, because we don't really need the start summary; as we've said before,
the summary isn't really part of this test.
We're really just testing that the state changes.
So instead of passing in the start summary, we can just use "something" again.
That's pretty easy; we can take that out, and then we're down to one parameter, so we don't need start_summary.
It's just start_state that we're parametrizing, and now we don't need a list of tuples.
We just need a list of values, so we can do todo, in prog, done; that all fits on one line.
I'm good with that and that's it.
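Down to one parameter, the sketch becomes:

    @pytest.mark.parametrize("start_state", ["todo", "in prog", "done"])
    def test_finish(cards_db, start_state):
        i = cards_db.add_card(Card("something", state=start_state))
        cards_db.finish(i)
        assert cards_db.get_card(i).state == "done"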
That's really simple. Now let's compare it with our original; it's simpler than that.
And compare it with the combined one we were using: here we've got the one-param version and the combined version, and this does pretty much the same thing.
I think it's just as readable or better; this part of it is the same. Now let's try to run it.
So pytest on the one-param file: three test cases.
The other thing I like about this is that I could parametrize a whole bunch of stuff, but in the output it's really focusing on the thing we care about that's different.
What we really care about is that the initial start state is different in each of these test cases,
and that's what it highlights.
Remember in the combined one, if one case failed, it didn't run the rest of them.
Let's try that here.
In the parametrized version we don't even need the traceback, because it shows us that they all failed;
each of the test cases runs even if one of them fails.
So that's really cool.
We don't need them to fail though, so let's put that back.
We're good.
That is function parametrization, and I use it all the time, because it's really powerful for getting a whole bunch of work done really fast.
Oh, I want to show one more thing before we move on.
|
|
show
|
3:30 |
What if we really did care about the summary for some reason? So I'm going back on this.
I want to show you stacked parameters, and this is fun. I've got start_state parametrized; what if I want to parametrize the summary too?
But I don't want to do it like one summary for todo, one for in prog, and one for done.
I want them all combined combinatorially, so let's stack these up. Instead of putting start_summary in the same decorator, I'm going to do it as a stacked decorator.
Let's do start_summary with, not todo, in prog, done, but something like one and two; I'm intentionally using a different count, two values that go into start_summary and three that go into start_state.
What happens then, smarty pants? We'll use start_summary in place of the hard-coded summary.
We don't need this for testing our cards application, but I just want to see what happens.
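Stacked, the decorators look roughly like this; pytest runs the cross product, so two summaries times three states gives six test cases.

    @pytest.mark.parametrize("start_summary", ["one", "two"])
    @pytest.mark.parametrize("start_state", ["todo", "in prog", "done"])
    def test_finish(cards_db, start_summary, start_state):
        i = cards_db.add_card(Card(start_summary, state=start_state))
        cards_db.finish(i)
        assert cards_db.get_card(i).state == "done"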
pytest -v on the stacked file.
Look at that; how many is it doing?
1, 2, 3, 4, 5, 6.
It's doing six of these, because for each start state it's doing a start summary of one and a start summary of two.
So todo has one and two, in prog has one and two, and done has one and two.
And you'll notice the order: the iteration order goes from the bottom decorator to the top, and functionally it doesn't matter which order these are in.
If we swap them around and put start_state on top and start_summary below, we get the other order.
So we had the start state and then the summary, like that.
Now if we swap it around, it's the summary and then the start state.
If I'm going to stack these up, just to keep them straight in my head,
I like the order that shows up in the test node name
to be the same order as the parameters to the test function.
Even though it doesn't really matter to pytest, it matters to me when I'm looking things up.
start_summary is first in the parameter list,
so I like to have it right there on top of the test function.
And then as the parameters go to the right, I stack the decorators to match.
So we could even have another one if we wanted to.
Doesn't even matter if we use it.
If we just add foo with 1, 3, 4; sure, why not?
It doesn't matter.
But we do need to list it as a parameter.
That's a whole bunch of test cases.
Actually, isn't that funny?
I'm not even using foo anywhere in here.
So if you're in one of those jobs where you get kudos just for adding test cases,
this is a really easy way to multiply your test cases without doing a whole lot of work. But we don't need that here.
We'll leave the stacked one just for fun for people to play with.
Now I think we're ready to move on to fixture parametrization.
|
|
show
|
4:07 |
Alright.
So this is function parametrization again, the version with one parameter, start_state, and we're going to take it as a starting point to show you fixture parametrization.
With function parametrization we're parametrizing
the test function; with fixture parametrization,
we're parametrizing a fixture.
So we need a fixture.
We're going to have the fixture be the start state, so we'll move start_state out of the decorator and into a fixture.
To make a fixture, of course, we use @pytest.fixture.
There are arguments you can give to pytest.fixture; there's actually a whole bunch of them.
We're not going to cover all of them in this course, but one of them is params, and with params we just pass in the parameters we want as a list.
That's it.
That's how you parametrize a fixture.
We'll make a fixture called start_state, but to use this we need to be able to grab the value out, and to do that
we'll use a built-in fixture
that pytest has called request; in the pytest book
I talk about this a bit more. For now we'll just trust that we can get the parameter out, and we do that by returning request.param.
It's important for fixture parametrization to actually return the value, at least when the test needs it.
I guess we wouldn't have to if we weren't using the value of start_state within the test,
but we are using it within the test, so we need to return it. And now we don't need the parametrize decorator, so that's it.
We take the decorator off; start_state stays in the test signature because it's now the fixture, and before the test this fixture runs, the value gets returned to the test, and we can use it within the test.
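The fixture-parametrized version is roughly this sketch:

    @pytest.fixture(params=["todo", "in prog", "done"])
    def start_state(request):
        # request is a built-in fixture; request.param holds the current value
        return request.param

    def test_finish(cards_db, start_state):
        i = cards_db.add_card(Card("something", state=start_state))
        cards_db.finish(i)
        assert cards_db.get_card(i).state == "done"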
That's it.
Let's go ahead and run that with pytest -v.
There we have it.
It's just like we had before.
Actually, we could even run the other one as a comparison, the function one-param version.
It looks kind of the same, right?
It just has the test node names; let's pull both up so we can look at them, and they're very similar.
It's just a different file.
So fixture and function parametrization are kind of indistinguishable from the test node name standpoint. But what's different in action? When this test runs, the fixture gets run first.
Instead of just talking through it,
let's use --setup-show to see what it's doing; wow, it's doing a lot.
Let's look at that again: a whole bunch, tmp_path_factory,
all right,
and then cards_db_session, and then cards_db,
and then start_state, and then the test. Then we unwind part of it, back to cards_db;
we don't tear down the session and the factory until the very end.
Cool.
That's nice.
It's a lot of information there.
I kind of forgot about all that.
One of the things I wanted to show is the setup and teardown of start_state.
We're not really doing anything here, but if we had some work to do, we could do setup work here.
And if we had teardown, it's not going to run with a return, but we can switch to a yield and then it will run.
That's how you would do setup and teardown within fixture parametrization.
But for right now, we will just return the value.
|
|
show
|
1:52 |
Now, what if we wanted to stack them like we did in function parametrization, to do a cross product kind of thing?
Say I wanted two summaries: two summaries with todo, two summaries with in prog, and two summaries with done.
How would I do that?
It's similar.
Let's grab these again; we'll just add another fixture.
This one's going to be start_summary, and it can be really anything; we did one and two before. That's all we need: start_summary returns request.param, the rest is identical, and now we add the start_summary fixture name to the test and use it as the value.
Now run the multiple-fixture file.
We should have six, two times three; sweet, six items passed.
And the order we're seeing here is one-todo, then two-todo, then in prog and done.
The order follows how the fixtures are listed here.
What happens if we swap them?
We can do it in any order, really.
Now it's state and then summary.
But since I'm calling the test with summary and then state here, I think that's confusing.
So I think it's best to keep them in the same order that we use them in the code.
|
|
show
|
3:58 |
One thing to note about parameterization is that you can get a whole bunch of tests really fast.
So far in chapter four we've got all these files; how many tests is that?
If we just run pytest on everything, it's 31 tests that we've generated so far.
That's quite a few.
Let's take a look at them.
That's a lot.
So we can get a lot of tests really fast.
These are all sort of testing the same thing, so that's not really going to happen in practice, hopefully,
but you will have different parameters for different tests,
so it isn't unheard of to end up with a whole bunch of tests because of parametrization.
If we want to run just a subset, I want to show you another way to select a subset of tests, called keywords.
Before we do that, let's talk about the other ways we can run subsets.
Even with parametrized tests, we can of course run a single test file; let's do the stacked one.
That runs all of the tests within just that file;
oops, not -F, -v.
Running that file runs all of those.
What if I want to run just one function?
I can still pick a test name even though it's parametrized;
I can paste that one function name and it still runs all the parametrizations of the function.
Now, I can also run just one test case, say the todo one or the in prog one;
let's grab the todo one and copy it.
And what happens if I just paste that?
It blows up, because there are no matches found.
Why?
Why is it doing that?
It's because the brackets are getting mucked up by the shell.
So with parametrized test ids, it's important to always use quotes.
And now pytest will be able to find it. In Unix-type environments
you can use either double quotes or single quotes; if you're on Windows, just use double quotes.
But let's go back to our full list.
There's another way we can subset these, and it's through keywords.
For instance, I can select all the ones that involve todo.
If I want all the tests that involve todo, I can just say -k for keyword and give it todo, and it'll pick all the ones that have todo in them.
Then how about not todo? If I'm doing complex expressions I need to put them in quotes; I can say "not todo" and it runs all the other ones. What if I combine it with something, maybe "not todo and three"? That picked two of them.
I didn't know how many there would be. How about not todo combined with one of the test names, the fixture object one? What's going on here is similar to what we looked at before, with not, and, or, and parentheses.
We can do parentheses too.
So let's say "not (todo or done)"; is there anything that matches?
Okay.
Oops, I have too many quotes; it should be not (todo or done) and the fixture object test name.
Okay.
That narrowed it down.
That's a weird way to pick just that one.
But as you can see, you can do kind of complex things with keywords, and it's fun.
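For reference, the selection commands we just used look roughly like this; the file and test names are placeholders for whatever yours are called, and the quotes keep the shell from eating the brackets and spaces.

    $ pytest -v "test_func_param.py::test_finish[todo]"
    $ pytest -v -k todo
    $ pytest -v -k "not todo"
    $ pytest -v -k "not (todo or done)"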
|
|
show
|
5:21 |
In the multiple fixture example,
we had two different fixtures that were both parametrized,
and we ended up with six test cases.
But what if we didn't want that? What if we wanted to use fixture parametrization but really did want the summary and state paired up?
In that case we can use one fixture; we'll have one fixture and call it state_summary, because starting_state_starting_summary would be kind of long.
So we'll just have state_summary. For the params, each param is going to be a tuple: one with todo, two with in prog, three with done. I could unpack them within the fixture to do work if I needed to, but here we're just going to return it.
So this fixture is going to return a tuple of values, and then the test, if it wants to use them separately, needs to unpack them.
This line is just unpacking the tuple into a start state and a start summary, and the rest carries on just as we did before.
Now we know that we can put really anything within params; it doesn't have to be just strings or integers or small individual values.
It can be things like tuples.
It can also be objects.
So let's take this one step further with a fixture object version.
It's really the same test,
but instead of a tuple, we're going to put Card objects in the params,
and I can just call the fixture start_card.
So now it iterates through here with card one, card two, and card three, and I'll just return the card.
start_card gets returned to the test, and now I don't need to create a card within the test because it's already created;
I can just pass it on to add_card.
Now let's verify that both of these work, the one-fixture tuple version and the object version; sweet,
both of them work.
And if we add verbose, hmm, we have a little bit of a problem.
The problem is that the output isn't very interesting:
start_card0, start_card1, start_card2,
and the same with state_summary,
0, 1, 2.
This does distinguish them, but it's not that informative.
That's what pytest does with any object or higher-level thing: if it's not obvious how to print it, it just numbers them 0, 1, 2.
So what do we do?
There's an ids argument
that we can use.
It's an extra feature of fixtures, and there are a lot of ways to use it;
I cover a whole bunch of this in the pytest book.
But the easiest thing,
if you're comfortable with lambdas, is to throw a lambda in there, so ids equals
a lambda expression; it's going to get one item passed to it for each parameter.
It's a function that takes one value and needs to return a string.
So lambda x,
and then return x.summary. If we do that, it fixes this one: the ids are 1, 2, 3.
So that's ids
with a lambda. Let's show what ids looks like in the tuple fixture case; we'll use a different function there.
You can do lambdas,
but if you're not comfortable with lambdas you can just define your own function.
I usually name it id_func; it's just a habit of mine.
And let's give it a parameter x,
which is a terrible variable name, but, you know, habits. It's going to get passed one of these tuples.
If it returns the first element, that should be good; and if I want both of them together, I can do that too.
So let's return
an f-string of x[0]-x[1], and then, same as before, pass it into the fixture with ids=id_func.
Now they're readable.
I did the dashes in the one case and just picked the 1, 2, 3 in the other because they're different.
They're different, but that's not very meaningful.
Right?
It's the wrong thing to highlight.
We don't really care about that, do we?
We could change the ids
to pick the state instead.
Now it's todo, in prog, done.
That's a little bit more interesting.
So that's how you deal with passing objects, tuples, or multiple parameters to one fixture for parametrization.
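Sketched out, the two ids approaches look something like this; the tuple order and the Card fields are assumptions based on how we've been using them.

    # object params: a lambda builds readable test ids from each Card
    @pytest.fixture(
        params=[
            Card("1", state="todo"),
            Card("2", state="in prog"),
            Card("3", state="done"),
        ],
        ids=lambda c: c.summary,
    )
    def start_card(request):
        return request.param

    # tuple params: a named id function works just as well
    def id_func(x):
        return f"{x[0]}-{x[1]}"

    @pytest.fixture(
        params=[("1", "todo"), ("2", "in prog"), ("3", "done")],
        ids=id_func,
    )
    def state_summary(request):
        return request.param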
|
|
show
|
0:54 |
Okay.
Let's review what we've covered in this chapter.
In this chapter we covered parametrization, but we covered quite a few things.
We've got parametrizing test functions;
remember, we used @pytest.mark.parametrize, gave it a parameter name and then a list of values, and then passed that parameter to the test function.
That's really all there is to it.
We also used multiple parameters, naming things like thing one and thing two and passing in a value set for each test case.
We parametrized fixtures by passing in params and grabbing the values out with request.param.
We can pass multiple values into one fixture and pull them out with tuple unpacking. On the command line,
we used quotes to run individual parametrized tests, and we also used -k along with and, or, not, and parentheses to select tests by keyword.
|
|
|
23:35 |
|
show
|
1:03 |
Welcome to Chapter five, markers. Markers come in a couple of different flavors.
We have custom markers.
These can be used like tags or labels for tests.
They can be used to help run a subset of tests, or to avoid running a subset. Then there are built-in markers; built-in markers are things like pytest.mark.parametrize, which we've seen, and they can change the behavior of tests.
There are a bunch of built-in markers, but the most commonly used are parametrize, skip, skipif, and xfail.
Custom markers are used just like the built-in markers, but you make up the name; in this case, mark.smoke.
They need to be declared in a pytest.ini file or some other config file, and they can be used on multiple tests. Then you can use the -m flag to run only the tests you've marked, or you can combine expressions like "not smoke" or "smoke and not something else".
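As a sketch, a custom marker looks something like this; the smoke name and the test are just examples.

    # pytest.ini (or another config file) declares the marker:
    #
    #   [pytest]
    #   markers =
    #       smoke: small subset of quick sanity tests

    import pytest

    @pytest.mark.smoke
    def test_version():
        ...

    # run only the marked tests:     pytest -m smoke
    # run everything except them:    pytest -m "not smoke"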
|
|
show
|
4:02 |
We'll go through examples of all the markers that I use frequently: xfail, skip, skipif, and custom markers. Let's start with a simple test of sorting. For the Card object we're using within the cards application, it seems like it would make sense for the user to be able to sort the cards if they wanted to.
What I mean by that is, if I have a list of cards here, just filling in the summaries with Z's and A's, and I call sorted on it, I should get back a sorted list. I put the one starting with Z first and the one starting with A after, so we should end up with A
and then Z.
That seems to make sense.
I don't know if this works yet but let's give it a shot.
We're in chapter five, and we run pytest on the sort test: the less-than operator is not supported between instances of Card and Card.
Okay, so that's a bummer.
Set that aside for now.
What do we wanna do?
We could just say that this feature isn't supported in this version yet; maybe in the next version.
So maybe let's skip this test but I think I want to zoom in.
So it's said that the less than isn't supported.
So let's zoom in and write a test that just tests less than: give it a couple of cards, c1 and c2, with summaries like "a task" and "b task".
Now I should be able to compare them.
C1 should be less than C2.
I should get the same error as before; we'll do a short traceback this time.
Yeah, we get the same error.
So I could zoom in and focus on both of these, and fixing less than would make the sort work.
But for now it doesn't.
So for now we can compare cards with equality, but we cannot compare them with less than. Just for sanity's sake, let's make sure that equality works.
One test passed, and it's that last one, just for reference: equality works, but the others don't.
So let's skip these for now.
So for now until we get a chance to implement this feature.
I'm gonna just skip them.
We skip them by adding pytest.mark.skip, except we have to import pytest first.
Oops, let's do this in a different file so that we don't mess up the original.
So copy this.
Let's do it within our skip file.
Okay, I'll clean this up later, or now.
So that's what we started with, and our skip file is here; we've already got it started.
We want to skip these two for now; let's just see if that works, skipping in the different file.
We did skip; it shows skipped with an unconditional skip.
I don't really like to leave it at that, so let's put a reason on these.
I always like to give a reason for skips, and my reason is sorting, or sort not supported.
Maybe less than not supported.
Then we can do the same thing for the other test there.
Now, if we run this, we get the reasons: less than not supported, and sort not supported — sort doesn't work because of less than.
Okay, I'll get the wording right eventually.
Great.
Now we have skip figured out.
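Roughly, the skipped tests end up looking like this (the Card constructor arguments are an assumption):

    import pytest
    from cards import Card  # assumption: the Card dataclass from the cards project

    @pytest.mark.skip(reason="Card doesn't support < comparison yet")
    def test_less_than():
        c1 = Card("a task")
        c2 = Card("b task")
        assert c1 < c2

    @pytest.mark.skip(reason="Card sorting not supported yet")
    def test_sorting():
        cards = [Card("z task"), Card("a task")]
        assert sorted(cards)[0].summary == "a task"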
Next we look at skip if.
|
|
show
|
3:11 |
Now let's say I've decided that this sorting and the less than: I'm not going to support it now, but I'm going to support it in version two.
So right now the cards version is 1.0.1.
I think the next major version will support this sorting; I don't know, I'll come up with a reason why later.
Let's go ahead and copy this into a test_skipif file, but first I want to show you what we're going to use to compare the version.
See the version string is just a string.
It's like 1.0.1.
Let's take a look at it.
It's in the init and it's just this string.
So I want to compare the version: if the major version is less than two, then I want to skip these tests.
That's the comparison I want to do.
To get at that, I'm going to use something from packaging.
So it's the parse function from packaging.version.
And packaging is a third-party package you have to install; you're going to have to do pip install packaging to use this.
But I've already got it installed.
I want to show it because it's really pretty cool and I've used it in a couple of projects.
There's probably a bunch of stuff in there, but what I use it for is to pull out the major, minor, and micro versions so that I can see those.
I have this test put together with a print so that we can see it, and I'm just asserting that if I put it all back together — major, minor, micro with dots — it will equal the version that we had before.
So let's go ahead and run that just to see how packaging's version parse works: pytest -s test_skipif.py.
Major is one, minor is zero, micro is one, and I'm pretty sure that test passed.
Yes, it passed.
Let's just leave that in place while we're playing with the rest. Now that I have the ability to parse that out, I can give this a condition.
So instead of skip I'll say skipif, and the first thing that skipif wants is a condition.
I'm going to give it this parse call: on the version object that comes back, I'm just going to look at the major version, and if that's less than two, then skip, with the reason "sort not supported on version 1.x".
Then let's go ahead and just copy that for the other one; it's going to be "less than not supported".
So now we have skipifs instead of the skips we had before; let's run it again.
And "less than not supported on version 1.x" — wonderful. That's how skipif works.
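Here's a rough sketch of the skipif version (assuming cards.__version__ is a plain version string like "1.0.1"; the test bodies are placeholders):

    import pytest
    from packaging.version import parse   # third party: pip install packaging

    import cards  # assumption: exposes __version__ as a string

    @pytest.mark.skipif(
        parse(cards.__version__).major < 2,
        reason="Card < comparison not supported on 1.x",
    )
    def test_less_than():
        ...

    @pytest.mark.skipif(
        parse(cards.__version__).major < 2,
        reason="Card sorting not supported on 1.x",
    )
    def test_sorting():
        ...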
|
|
show
|
3:32 |
xfail works a lot like skipif. If, instead of skipif, I just say xfail, you can give it a condition and the same reason, "less than not supported by 1.x".
Then instead of a skip, we're going to get an xfail if this fails. The difference is that this actually runs the test instead of skipping it, so let's try this.
The only thing we did was change skipif to xfail.
So pytest -v test_xfail.py and yeah, that's it, we got an XFAIL here.
It's kind of neat to see what happens if this passes, so I'm going to show you what this looks like if it passes instead of fails. Let's just leave this test in place and do an xpass demo.
The reason is "xpass demo", and xfail doesn't require a condition.
You can use a condition, but it's not required.
So this is an xpass demo.
And what happens?
Oh, we wanted this one to actually pass, so instead of the less-than comparison we can just do equality and actually have them equal.
That should pass; let's try running xfail now.
So we have XFAIL and XPASS, and they don't show up as passes or fails; they show up as xfails or xpasses.
That's a different sort of thing.
When we don't do verbose, instead of dots we get a lowercase x for the xfail and a capital X for the xpass.
Now, we can also say: I'm not just saying this might fail, I'm certain it's going to fail. That's strict.
If the test does fail, it's still just a normal xfail. But if it passes when you said you were certain it would fail — let's watch — it shows up as a failure, and that's kind of what we want.
So strict turns xpasses into fails; that's really what strict is on your xfail marker.
So that's xfail in all its flavors: you've got XPASS and XFAIL.
This was an xpass demo, but it shows how to use xfail without a condition; you can also use xfail with a condition.
Skip has two forms, skip and skipif.
With xfail the condition is just built in — you can give it one or not.
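A minimal sketch of xfail with and without strict (the test bodies are placeholders, not the course's exact tests):

    import pytest

    # runs the test; a failure is reported as XFAIL, a pass as XPASS
    @pytest.mark.xfail(reason="< comparison not supported yet")
    def test_less_than():
        assert 1 < 2   # this passes, so it shows up as XPASS

    # strict=True: an unexpected pass (XPASS) is reported as a real failure
    @pytest.mark.xfail(reason="known bug", strict=True)
    def test_known_bug():
        assert False   # fails as expected, so it's a normal XFAIL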
|
|
show
|
8:08 |
Now I'd like to show you how to do custom markers.
A custom marker is like a tag or label that we apply to our tests, just so that we can reference a set of tests by themselves.
So let's take this example.
This example I've got in chapter five is test_custom, and we're also going to use the pytest.ini file.
There's nothing in it yet.
test_custom is from the test_card file from Chapter two.
It's a bunch of the original tests we were just playing with.
I'd like to take a look at it, and there are a few of these that are similar.
We've got testing field access, test defaults, and then we have equality, equality with different ids, and then inequality.
If I want to run just these tests, I can do something like pytest.mark.
When we were doing skip, it would just be skip here.
But I actually want to run these.
So I'm just going to mark this with equality and then use the same mark on these three different tests.
That's the part that you do to the test to add a marker to your own tests.
But there's something that will happen if we run this just as it is.
Let's make sure we're in the correct directory.
And we are.
So we'll run pytest test_custom.py and — oh, pytest is not defined.
That's the first error; we have to import pytest.
But then we get these warnings showing up for all three of the tests we marked.
It's saying unknown pytest.mark.equality.
Is this a typo? And it gives us a web page to go to if we want to register custom marks, and I'll just show you how to do that right now.
To register a custom mark, we go to our pytest.ini file. This is your settings file; we'll cover that in the next chapter.
Actually, a couple of chapters after this.
If you don't have one already, it's usually at the top of your project.
The pytest.ini file just looks like this.
It says [pytest], and then we have settings related to pytest. We're just going to add a markers setting; I'll give it multiple lines because I'm going to have a couple of these. Equality, and then we give it a description of what this marker is for.
So: these are tests for equality and inequality.
That's all we have to do to register it.
Now these warnings go away.
What did we do?
Nothing so far; we just added markers to these tests and then registered the marker — you could do it in the other order, of course — and now they have marks on them.
If we don't use the mark, it doesn't do anything.
But if we use it with -m and pass in the custom mark equality, now it just runs those three tests.
And we can actually combine this with keywords, so we could say -m equality and then -k to only pick the ones with "diff" in the name.
Let's add another marker just to show you how to combine them.
So I'll add another one, say foo: some foo tests.
Now within our test file we've got equality, and let's add foo to this one.
So inequality gets foo, and let's also do the last two.
You can have marks on multiple tests, and a single test can have multiple marks.
So now we've got foo on a few tests and equality on a few tests.
And so what does that do?
We can combine these: we can say "equality and foo".
So the ones that are both of those.
So that's just one.
Equality or foo is either of them.
And then we can combine them in all sorts of ways, like "equality and not foo", which selects the tests that are marked with equality but not marked with foo.
We can also do the reverse, of course: "not equality and foo".
Custom marks are handy, and they let you select which tests you want to run or which you don't.
Of course, hopefully if you've done it right, they're declared within your pytest.ini file, but you can also list them with a flag.
You can say pytest --markers and it'll show you all of the built-in markers we can use.
It also shows our custom ones at the top of the list, and it even shows our descriptions.
So that's nice.
The last thing around markers that I want to show you is that you can do strict.
What happens if we misspell something, like if we call this foe instead of foo?
We know we'll get that warning — is this a typo? — but we can make it more than a warning.
We can make it stronger than a warning: if we pass in --strict-markers, then instead of a warning we get an error.
I personally would rather have it be an error than a warning.
So I like to have strict markers. But I always forget to pass it in, so in order to always have it passed in, we can add another setting to our pytest.ini file, addopts — I usually put addopts at the top, I'm not sure why — and just add --strict-markers there, and then we don't have to pass anything in. Sweet. But I'll go ahead and fix that misspelling, because I don't want that error in the code.
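Putting the pieces together, the setup looks roughly like this (the marker names are the ones from the demo; details are approximate):

    # test_custom.py
    import pytest

    @pytest.mark.equality
    @pytest.mark.foo
    def test_inequality():
        ...

    # pytest.ini
    #   [pytest]
    #   addopts = --strict-markers
    #   markers =
    #       equality: tests for equality and inequality
    #       foo: some foo tests

    # selecting on the command line:
    #   pytest -m equality
    #   pytest -m "equality and not foo"
    #   pytest --markers      # lists custom and built-in markers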
|
|
show
|
2:49 |
One more thing I want to talk about before we leave our discussion of markers is a flag called -r.
It's just an extra flag that we can use to help see what's going on.
In chapter five we have a handful of test files already: we've got our custom one, the skip examples, skipif, xfail, and our sort tests.
Let's just run everything.
But I'm going to turn off tracebacks with --tb=no, because I don't need to see the failure details.
What we get in the output here is green dots for all the passing tests.
We get s's for skips, F's for fails, and then we've got the xfail and xpass here; at the bottom we've got the failures, but our reasons for the skips and the xfails are gone.
Let's clear that. Instead of just --tb=no, we can also turn on -ra.
This is my favorite variant of -r.
And it just gives us more information at the end.
I'll just run it and show you what's going on.
So with '-ra' added, we already had the failures listed, but we also now get the reason for the xpass.
That's the xpass demo, the reason for the xfail, and the reasons for the skips.
I like to see all of this in my output even if I'm turning off trace backs.
So I often run with '-ra'.
Now, the a in '-ra' means all except passing.
If we did a capital A, that's everything including passing, and that's kind of annoying.
The default is '-rfE', so failures and errors.
That looks like what we had before.
So you can also be more fine-grained if you want.
If you want to just see the skips, you can say '-rs' and see only the skip reasons, or the xfails: it's a lowercase 'x' for xfail and a capital 'X' for XPASS.
But you know, why do the fine-grained stuff?
I just remember '-ra' and then I get it all.
Now, since we've already mucked with our pytest.ini file, let's take a look at that and just stick '-ra' in there.
Here in our pytest.ini file we can add '-ra' to addopts and save that; that should be fine.
So if we take the flag off the command line and just do --tb=no, we still get these reasons.
Yeah.
So what did we do?
We took the '-ra' flag and stuck it in our addopts along with --strict-markers.
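For reference, a sketch of the reporting flags and the ini setting (values are the ones from this demo):

    # command line:
    #   pytest --tb=no -ra     # reasons for everything except passes
    #   pytest -rA             # everything, including passes
    #   pytest -rfE            # the default: failures and errors
    #   pytest -rs             # just skip reasons
    #
    # or bake it into pytest.ini so it's always on:
    #   [pytest]
    #   addopts = -ra --strict-markers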
|
|
show
|
0:50 |
In Chapter five, we covered a lot about markers.
We looked at the built-in markers: skip, skipif, xfail, and xfail with a condition.
And we used custom markers. We also learned some new command-line flags: '-m' selects markers to run and can be combined with and, or, not, and parentheses to fine-tune it.
'--markers' lists all available markers, and '--strict-markers' turns undeclared-marker warnings into errors.
And there's also '-ra', which reports reasons for non-passing test results.
We also learned how to declare markers within pytest.ini, and we used addopts to include command-line flags that we always want to use when running tests.
|
|
|
11:38 |
|
show
|
1:12 |
Welcome to Chapter six, plugins. pytest plugins are awesome.
There are tons of plugins available, and if you need to extend pytest, there's a decent chance there's already a plugin doing what you need — but you have to find them. You can find them at pytest.org; there's actually a plugin list right there in the pytest docs, and we'll show that in the course. At pypi.org there's a trick for finding plugins: use the Framework :: Pytest classifier.
You can also just search for pytest; that usually does pretty well.
And there's also a group on GitHub, pytest-dev. That is the group that supports pytest, but it also supports a whole bunch of pytest plugins.
So it's good to look at that.
We'll take a look at a few plugin examples, we'll look at pytest repeat.
That's used to run tests multiple times.
We'll also look at pytest-xdist, which is used to run tests in parallel — and it does so much more than that.
There's also pytest-randomly and that's used to run tests in a random order.
And again, it also randomizes seed values for other tools and does so much more.
|
|
show
|
2:29 |
There are a lot of places where you can go find pytest plugins.
One place is directly at pytest.org and if you go just to the home page there is a reference guide called pytest plugin list.
The plugin list contains over 1000 plugins, and it pulls them directly from PyPI.
So you can take a look through here.
You can also search, of course, because it's a big page.
We can also go to PyPI; we can search for pytest stuff here too.
But it's a big list here.
It's anything that even references pytest, so it's a lot.
A good place to start: well-supported pytest plugins usually select the Framework :: Pytest classifier if they're publishing it well.
And this limits it a little bit.
We can also look for things like Django, for example; that gives a list of about 50 projects for Django with the pytest framework classifier. For instance, here's pytest-django-ordering, a pytest plugin for preserving the order in which Django runs tests.
This is kind of neat. I also like that there's an order-by, so you can order by relevance or date last updated — not sure what trending means, but whatever. We can also go to the GitHub pytest-dev group; this is the group that supports pytest itself, but it also supports quite a few pytest plugins.
Well, it doesn't really support them directly, but a plugin author can request to be included in here, and this allows for more help — it doesn't guarantee help, but it allows for it.
So this is pretty cool.
The three that I want to demonstrate in this course are pytest-repeat, which allows us to repeat a single test or multiple tests a specific number of times.
It adds an extra flag that lets us run something multiple times, which is pretty handy. pytest-randomly allows us to randomize the order of our test runs to make sure they're not order dependent.
It also seeds random tools such as Faker and NumPy. Then there's pytest-xdist.
I use it to run tests in parallel, but you can also distribute across different test nodes, which is pretty neat, and it adds the -n flag.
You can give it auto for it to just select how many processes to run on, which is nice.
So we'll try all of those.
|
|
show
|
2:17 |
Let's say I've got a test that's a little slow.
So this test is actually just slow because I'm sleeping inside of it.
Not a good thing to do within a test; try not to have sleeps.
However it's useful for demonstrating a couple of things that we're going to look at.
Now let's also pretend it's got a bug in it — it's failing once in a while, but not all the time.
And I'd like to run it like 10 times or so to see if I can reproduce the problem.
I could maybe parametrize it: I could do pytest.mark.parametrize and give it maybe x.
And maybe range(10).
But then I have to include x, which I'm not really using.
And I have to import pytest. This will run it 10 times, I think.
Let's try it: pytest test_slow.py — 1, 2, 3 ... sure, it ran 10 times in about 2.5 seconds.
Now, the problem with this is that I'm modifying the test — that's not terrible, but it is what it is.
And also we're adding a parameter we don't really need.
So instead of doing that, let's take that out and use the repeat plugin: pip install pytest-repeat.
Now we can run the same pytest test_slow.py; we'll do -v.
So we can see it in action, and we'll give it --count=10, and now it runs in about the same time as before, about 2.5 seconds.
And it gives us this nice little 1-of-10, 2-of-10, 3-of-10 ... 10-of-10.
Really cool.
2.5 seconds to run 10 tests.
But I can repeat it.
So now I don't have to modify the test.
If I say, oh well I'm not finding the problem with 10.
Let's go to 20 or 100.
I'm not going to make you wait for this, but let's do something shorter.
So like five now.
I can just do it five times.
Sweet.
I like that.
It's nice and clean.
And if you just do --count=1, it realizes it doesn't need to repeat; it just pretends you didn't pass it, and there you have it.
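A rough sketch of both approaches (the sleep and the counts are just the demo values):

    import time
    import pytest

    # parametrize workaround: repeats the test 10 times,
    # but adds a parameter x that the test never uses
    @pytest.mark.parametrize("x", range(10))
    def test_slow(x):
        time.sleep(0.25)

    # with the plugin, the test stays untouched:
    #   pip install pytest-repeat
    #   pytest -v test_slow.py --count=10
    #   pytest -v test_slow.py --count=5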
|
|
show
|
1:27 |
So maybe we want to run this like 10 times, and our tolerance is okay for a quarter-second test.
But what if it's really a lot slower?
So let's make it more annoying and just have it be an entire second and instead of 10, let's do six times.
So if we do that six times, it's about ... 2, 3, 4, 5, 6 ... a little over six seconds.
To speed this up, let's run these in parallel by using the pytest-xdist plugin.
So pip install pytest-xdist, and now we run the same thing, but we'll give it -n auto and see how fast we can do it.
It was a little over six seconds before, and now it ran in about two seconds.
A little over two seconds.
And if it was six seconds for six tests, with two processes we would expect it to be about three seconds, right?
And it is — a little over three seconds.
So there's a little overhead with xdist.
So make sure it's actually useful for you before you go ahead and adopt it, but running things in parallel can be very handy.
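The commands from this demo, roughly (install once, then pick a worker count):

    # pip install pytest-xdist
    #
    # pytest test_slow.py -n auto   # one worker per CPU core
    # pytest test_slow.py -n 2      # exactly two workers; ~6s of tests -> ~3s plus a little overhead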
|
|
show
|
3:01 |
The last plugin I want to show you is pytest-randomly.
I'm going to demonstrate pytest-randomly using just four tests that run in order: 1, 2, 3, 4.
If we run it — we're in the right directory — pytest -v test_order.py, and we get them in order, 1, 2, 3, 4.
There's not much going on in these, and running them in a different order wouldn't make much difference because they're not doing anything.
But what if we have an order dependency that we're not aware of?
Here's an obvious order dependency.
We've got test_one that assigns the global x to one, then test_still_one makes sure it's still one, test_three changes it to three, and test_still_three makes sure it's still three.
Now this is intentionally obviously order dependent and it passes like this.
But if we run them in a different order, or maybe in parallel, there might be problems.
To test for order dependencies, one of the plugins I like best is pytest-randomly.
So let's pip install pytest-randomly.
Now let's run our tests again.
I'm not going to do it in parallel.
Just run it once, and now they're in a different order.
One, still three, still one, three — a different order.
What happens if we run it again?
Let's turn off tracebacks with --tb=no. So it's reordering them each time, which is kind of cool.
We see the order dependency causes these tests to fail.
Here it's clear that this is going to be a problem within this test, but a lot of times it's not so obvious.
It's not going to be state sitting in the test file; it's going to be the state of the system that changes with a different order.
So I really love pytest-randomly for running your tests in random order, just to make sure things are okay.
One thing I haven't noted yet, which is cool, is that pytest lists the plugins you have installed in the header.
And pytest-randomly uses a random seed and shows it there.
It picks a different seed each time. If this is the order that causes the problem, you can copy that seed option, paste it onto the command line, and then we'll still see the same order.
It doesn't randomize again; it uses the same seed.
So if you're using randomly and you get a failure in a certain order and you really want to run that order again, use the randomly seed; it's very handy.
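A sketch of the order-dependent example and the seed trick (the seed number is made up):

    # test_order.py -- intentionally order dependent
    x = 1

    def test_one():
        global x
        x = 1

    def test_still_one():
        assert x == 1     # breaks if test_three ran first

    def test_three():
        global x
        x = 3

    def test_still_three():
        assert x == 3     # breaks if test_one ran in between

    # pip install pytest-randomly
    # pytest -v test_order.py                          # shuffled each run
    # pytest -v test_order.py --randomly-seed=123456   # replay a specific order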
|
|
show
|
1:12 |
To review, we covered pytest plugins.
Plugins can be found at pytest.org, where there is a plugin list you can look plugins up in, and on pypi.org.
One of the tricks we used there was the classifier Framework :: Pytest.
Although not all plugins use this, so be careful of that.
We also found a bunch at github.com/pytest-dev.
Now, the plugins that we showed in the course were pytest-repeat, used to run tests multiple times; pytest-xdist, to run tests in parallel; and pytest-randomly, to randomize test order.
To take you further, if you would like to build your own plugin, pytest is massively extensible.
There are a lot of plugins, and if you have a specific need there may already be a plugin to fill it, but you can also build your own relatively easily.
There's an entire chapter within the Python Testing with pytest, Second Edition book that walks you through building and testing a plugin.
Specifically, it walks through the pytest-skip-slow plugin, which we briefly mentioned but didn't show an example of.
|
|
|
23:00 |
|
show
|
2:48 |
Welcome to chapter seven configuration files.
We're going to talk about configuration files in this chapter.
Actually, we're going to talk about the files you find in test suites that are not test files.
This of course includes the pytest.ini file, which is the primary configuration file for pytest, but you can also use tox.ini, pyproject.toml, or setup.cfg. These are alternate config files, and you can store your pytest config in them.
So if you already have one of these files in your project, you can use that to store your pytest configuration.
Even if you're using one of these, you can also add a pytest.ini; if pytest finds a pytest.ini at the same level, it will use that instead of one of the others.
There's also conftest.py; we've used conftest to store fixtures.
It can also store hook functions.
We haven't talked about those in this course, but that's one of its other uses. And there's __init__.py.
This file is used to avoid test name collisions, but there's some confusion online over its use with pytest.
I want to cover really the only reason you need to use it. pytest.ini is the main configuration file that pytest looks for; you can use other configuration files, but in the cards project, for example, pytest.ini is used, and I just like to keep it separate.
Well, like I said, we'll show different ways; you can put it in other configuration files if you want.
There should only be one, and it should be at the top of your project. There should be at most one conftest.py per directory, and it doesn't make sense to put this one above the tests, at the project level.
It makes sense to put it in the tests directory. What it does is store your fixtures and hook functions, and it applies to anything below it.
So in this example we've got three: a tests-level conftest — fixtures there apply to the whole test suite, so you can use those fixtures within the API or the CLI tests — and then conftest.py files that are only in the API or only in the CLI directory.
You can't use the API-only fixtures in the CLI, and vice versa.
That's why there's the option for multiple of them: you want to share fixtures between tests, but you might not want to share them with all your tests.
Another file we often see in test suites is an __init__.py file.
You can, and I recommend, having one within each test subdirectory — you don't need one at the top-level tests directory, just the individual subdirectories — and this is so that you can have test file names be identical in different subdirectories. We'll show examples of why that makes sense, as in the layout below.
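As an illustration, a project layout along the lines described (file names are approximate):

    cards_proj/
        pytest.ini            # one config file, at the top of the project
        tests/
            conftest.py       # fixtures shared by everything below
            api/
                __init__.py
                test_add.py
            cli/
                __init__.py
                conftest.py   # fixtures only the CLI tests can see
                test_add.py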
|
|
show
|
5:14 |
Let's take a look at the cards project itself that we had in our pytest course directory.
So I'm in the pytest course directory, there's a lot of stuff here, but let's take a look at the top.
Actually, let's take a look at it in VS Code.
We've got the cards project, and there's a tox.ini file, a README, a pytest.ini, and a pyproject.toml.
I did mention that these can be used: tox.ini and pyproject.toml are two of the files that can be used as alternates, but in this case we're just using tox as a test environment runner.
We're not putting pytest config in there.
We didn't really cover tox within the course, but the project has it here, and the book covers it if you'd like to learn more.
The pytest.ini file has the pytest config. We'll take a look at the pyproject.toml; it doesn't have pytest settings.
It's flit-related and project-related information, but the pytest stuff isn't in there.
So the pytest config right now is stored in the pytest.ini within the tests directory. We've got a conftest.
Those are top-level fixtures used by the entire test suite. Then within the cli and api directories there are __init__ files; there's nothing in them.
And there's a conftest at the cli level with fixtures that are just for CLI testing.
In the api directory there is no conftest; I didn't feel the need for another level.
Actually, the API-level fixtures are at the top level because they're used by both the CLI and the API tests.
That's the cards project.
Feel free to play with that.
Where are we at again? In the pytest course directory.
I'm going to go into the cards project, clear the screen, and the tests directory is there, so I should be able to run pytest.
Oh, I've got errors.
What are those errors? Fixture 'faker' not found.
Oh yeah — the full test suite for this project also uses the faker package.
So if we install faker, then our tests should run.
Yes.
You can take a look at these; we didn't talk about the CLI tests within the course, but you might find them interesting.
Those are the cards project tests. Here's what I wanted to show you.
If we run the tests here and look at the header at the top, there's interesting information: it tells us the faker plugin is there, but it also says that the rootdir is the cards project directory.
And the config file is pytest.ini and testpaths is tests; we'll talk about testpaths in a minute.
Now, what if I go into one of the directories?
If I cd into tests/api and run the tests from here, I just run these tests, but it still tells me that the rootdir is at the same level and the config file is pytest.ini.
What pytest does is walk up from where I am: it looks in this directory and asks, is there a config file here?
If there's not, it keeps going up a directory until it finds one. In each directory, it first looks for a pytest.ini.
If it can't find that, it looks for one of the other config files that has a pytest section in it.
Where it finds a pytest config, that's your root directory.
So what does that mean?
If I run the tests from here, it still finds the configuration and the fixtures from the top level.
That's one of the reasons I recommend always using a config file like pytest.ini: if pytest didn't find one at this level, it would keep looking — at the project level, then the okken level, then the Users level.
And if I had a pytest.ini file in one of those directories, I'm sure it would not be correct for this project.
So I always stick a pytest.ini file around, or another configuration file that has a pytest section.
Even if it's empty, just to make sure it's there.
So we're way down in the api directory.
If we were to create an empty pytest.ini here and run, it's not going to work, because it can't find the fixtures.
Let's just look at the header.
It'll say this is the rootdir and the config file is pytest.ini, but it didn't find all the fixtures it needed, so it couldn't run the tests. We'll remove that pytest.ini.
Now let's take a look at the settings I think are really common, the settings I usually put into config files.
|
|
show
|
6:29 |
In the chapter seven directory we've got a settings folder.
I'm thinking of this settings directory as a little project, so I've set it up with a pytest.ini file, a tests directory, and a src directory to put some source in.
something.py has a simple function in it, and test_something.py calls the function and makes sure it returns the correct number — oh, I shouldn't have "expected" repeated there.
It should assert that the something function's result equals expected.
So that should be right.
Those are just a simple test and a simple project.
But let's look at this ini file.
This is a project that isn't packaged yet.
It doesn't have any installer in it so far.
So I haven't made it into a pip-installable package.
It's just some Python code and some tests to go with it.
Still, these are some of the common settings I often use in my projects. In the pytest.ini file, it starts off with this [pytest] bracket thing, and that's just .ini syntax to say: hey, this section applies to pytest.
The first one I often add is addopts, which is a way to set up options — command-line flags that you always want to use.
We've talked about '-ra', which means I'd like extra reporting on everything except passing tests.
Things like skips and xfails.
I'd like to know at the end why those were skipped or xfailed.
--strict-markers is where we say: if you find a marker that's not declared in the ini file, please give me an error instead of a warning.
--strict-config is similar; let me just demonstrate the importance of it.
Right now, if I were to misspell one of these settings —
say pythonpaths instead of pythonpath, for instance, which is easy to do because testpaths is plural and you can have more than one path here —
that's an easy mistake to make, and I want to make sure it shows up as an error and isn't just silently ignored.
So --strict-config says: make sure everything in your config is valid.
Those are the flags. testpaths is another one I often put in place.
What testpaths does is say: hey pytest, if you get run at the top-level directory — like this directory — look for tests within the tests directory.
So let me clear this and check 'pwd'.
I'm still in that top-level directory, chapter seven settings.
Okay, so if I'm in this directory and say pytest by itself, it looks in the tests directory; that's what testpaths says.
If I give it a directory and say pytest tests, it looks in the directory I gave it.
Of course.
That's because I told it to look in tests.
But if I don't tell it anything, it looks through all the subdirectories.
So it'll look for tests within the src directory also.
And I'd just like to save a little time and say: hey, you don't have to look in the src directory.
There aren't any tests there.
So I'm just telling it to look in the tests directory.
And pythonpath: it seems kind of similar, right?
It looks similar — it's saying that the source code is in the src directory — but it's not really the same thing.
What we have is, in the tests directory, test_something.py says "import something".
Now, if I don't have this setting here —
let me comment it out —
what happens is, if I go into the tests directory and try to run pytest on test_something, I get a module-not-found error.
There's no module named something.
That's because Python has a variable called the Python path, and it's where Python looks for things to import.
That's often things you have installed, or the current directory.
If I had that other file here —
say I copied something.py into this directory —
and then ran the tests, it would find it.
But that's not where it is.
I wanted it in a different directory on purpose.
I wanted to keep my tests and source separate.
What the pythonpath setting lets you do is add an extra project directory, or two or three.
You can have multiple ones here and list them.
That way Python and pytest can find them.
pytest just adds these to the Python path.
So now, when I don't have that extra copy of something.py, I can still run it just fine, because that path was added.
That's what that's for.
We don't need that with the cards project, because the cards project is installed; it can be imported as an installed module and doesn't need to be found anywhere.
The last one is markers; we've talked about that, and I've listed a few I often put in place.
I often put a smoke marker in a project.
I'll pick out a handful of tests — or many tests — that are quick tests of the major parts, so I can run them to quickly validate that the system is probably okay.
It isn't a complete test, more of a quick test, so "quick" would work as a name too.
bugs: I actually don't use this much, but it's an interesting idea — tests that reproduce issues.
You could have a marker for that if you want to run all of those to make sure all the previously fixed bugs are still fixed.
And slow: this one I do use once in a while.
It's a marker for some tests so that you can use '-m "not slow"' to avoid running them all the time.
addopts, testpaths, pythonpath, and markers: these are things I often use in my ini file. Even if I'm not using any markers, I will almost always use this addopts setting with --strict-markers, --strict-config, and '-ra'; those are settings I use all the time.
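Pulled together, the pytest.ini for this little project looks roughly like this (pythonpath needs pytest 7 or newer; the marker names are just the examples above):

    [pytest]
    addopts =
        -ra
        --strict-markers
        --strict-config
    testpaths = tests
    pythonpath = src
    markers =
        smoke: quick checks of the major pieces
        bugs: tests that reproduce reported issues
        slow: slow tests, deselect with -m "not slow"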
|
|
show
|
1:39 |
Let's take a look at what these settings would look like in alternate forms.
I've got a directory set up, chapter seven alt, and it just has a tox.ini, a setup.cfg, and a pyproject.toml.
So say I'm already using pyproject.toml.
Maybe I'm using black, or maybe flit, or something else that already uses pyproject.toml, and I just want to put my pytest settings there.
Let's just load all these up so we can have them here, and I'll stick this in a side-by-side split — right, that's what I wanted.
The pytest.ini content is similar in pyproject.toml, but in TOML I have to specify a different top-level section.
I need to say [tool.pytest.ini_options], and then addopts is listed the same, but I've got to put the values in strings, and lists of things need to be actual lists of strings.
It's TOML syntax, which is different from ini syntax, but that's what it looks like. Then setup.cfg.
I'll pull this one over as well.
setup.cfg looks very similar.
The difference really is that I need to say [tool:pytest] up at the top.
And tox.ini — tox.ini is also an ini file, so it's going to look identical.
You just include a [pytest] section alongside the tox sections, and you'll undoubtedly have a tox section, or else why would you have a tox.ini file?
But otherwise you can leave it as is.
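Side by side, the same settings in the alternate files look roughly like this:

    # pyproject.toml
    [tool.pytest.ini_options]
    addopts = "-ra --strict-markers --strict-config"
    testpaths = ["tests"]

    # setup.cfg
    [tool:pytest]
    addopts = -ra --strict-markers --strict-config
    testpaths = tests

    # tox.ini
    [pytest]
    addopts = -ra --strict-markers --strict-config
    testpaths = tests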
|
|
show
|
2:05 |
So if these are the common settings that I recommend that implies that there are more settings than this and there are.
So I want to show you where to find information on the other settings that you could possibly put in your ini file.
One of the great places to look is just within pytest --help.
pytest --help lists a whole bunch of stuff.
Near the bottom we've got environment variables, and a reminder about the --markers and --fixtures flags right above that.
These are all the ini settings.
The ini options are listed near the bottom, and it even says there: ini options in the first pytest.ini, tox.ini, or setup.cfg file found.
So there's the markers setting, of course, and there's our testpaths; we've seen those.
There's a whole bunch of these things, and they're described briefly here.
But I want to show you another place to find them as well.
If we go to pytest.org, we'll see the home page, and along the left side there's a contents section. If you go to reference guides and then API reference, there's a lot of great stuff here.
We've got functions like param and xfail.
We've got the built-in marks — we talked about skip and skipif and xfail — and of course fixtures.
We've used request, and there are a few others in here, plus hooks, which we didn't cover — sorry for getting off track — but near the bottom is what I'm after.
There's a lot of great stuff here, but at the bottom there are configuration options.
There are also command-line flags, so you can take a look at those.
The configuration options are the options you can add to your ini file.
And with addopts, you can pull the command-line flags into your config as well.
These are bigger descriptions than the little one-line thing you get with --help. Let's take a look at the flags.
This is just sort of the same as the help, but here are the flags as well.
I really like the configuration options page.
It's really helpful.
|
|
show
|
3:52 |
I want to talk about the __init__.py files, and there is some confusion about these — specifically the reason they're used within test directories.
Within source directories there's a different story.
There, they're used for packaging.
Python thinks of a directory of stuff as a package — not a pip-installable package, just a package — if it has an __init__.py in it.
If that doesn't make sense to you, don't worry about it.
That's not the topic.
What we're talking about is the test directory.
The reason __init__.py is around there is sort of similar: it says, hey Python, import this kind of like a package.
There doesn't need to be anything in the file.
The reason I'm doing this right now is because I've got a test_add.py in both places.
And let's say I had a test_add test in both of them — I don't right now, but I could.
This is still our cards project: the same file name in both places, and potentially the same test name in both places. Since I'm trying different things out in both places, there's no reason there shouldn't be — like test_count, there's one here.
Right now test_count isn't a duplicate name, but there could be one, you know what I mean?
I've actually set up a situation within chapter seven where we do have duplicates.
There's a dup subdirectory, and I've got two directories under that.
One is called test_no_init and one is test_with_init.
These two directories are the same except for the presence of an __init__.py.
I've also got a pytest.ini file that's empty.
Just like I said before, it's just to tell pytest that this is the top; don't go looking anywhere else.
Then we've got these test_add files, and they are duplicate tests.
So the api and the cli directories have the same test.
Obviously, in a real project they would be different tests, just with the same file name and test names.
Why is this important?
It's important because we want to have this freedom.
Yeah.
Let's just play with the difference between these. I'm in chapter seven, dup.
I have these two directories.
I could start it from the top, actually.
But if I run pytest on the no-init directory, what I get is an error.
It says import file mismatch.
Imported module test_add has this __file__ attribute, which is not the same as the test file we want to collect — and it shows the two directories — then says remove __pycache__ files or use a unique basename for your test file modules.
So, a unique basename.
Well, I find this message confusing, actually, but it's not bad.
At least I know that something's wrong.
And at least it lists the two directories.
So how do we fix this?
It's not obvious from the error message.
How to fix it, that is.
The fix is with __init__.py.
So, the __init__.
If we do the same thing with the with-init directory, it's fine.
What's the difference in the first one, with no init?
I can still test the subdirectories individually.
So I can test the api, and I can test the cli; I just can't do it together.
And that's frustrating.
Anyway, that's why I recommend just sticking an __init__.py within your test subdirectories, so that you don't run into that confusing message.
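The chapter seven dup setup looks approximately like this:

    dup/
        pytest.ini              # empty; just marks the rootdir
        test_no_init/
            api/
                test_add.py     # same file name in both subdirectories ...
            cli/
                test_add.py     # ... so running them together errors: "import file mismatch"
        test_with_init/
            api/
                __init__.py     # empty files, but they give each test_add.py
                test_add.py     #   a unique package path, so both can be collected
            cli/
                __init__.py
                test_add.py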
|
|
show
|
0:53 |
As a review of the configuration files and other non-test files within a project: we've got pytest.ini, which is the primary configuration file, and the alternates tox.ini, pyproject.toml, and setup.cfg.
There's conftest.py, which is used for fixtures and hook functions, and __init__.py,
which is used to avoid test name collisions.
You can, and should, have one pytest.ini or some other kind of config at the top level.
conftest.py: you can have a maximum of one per test directory or test subdirectory, and one __init__.py per subdirectory is recommended.
To get more information on the settings you can put in your ini file, you can use pytest --help, or go to pytest.org and look up reference guides, then API reference, then configuration options.
|
|
|
2:29 |
|
show
|
2:29 |
Welcome to Chapter eight.
The wrap up.
Congratulations.
You did it.
You made it to the end of the course. Well, we covered pytest: test functions and fixtures and parametrization and markers and plugins and configuration, and so much more.
We also covered tracing test flow with setup show we used -m and -k to select subsets of tests.
We grouped tests with classes and we also learned how to structure test functions using given when then.
And we also used tests to learn about a data structure.
We used tests to learn about the dataclass called Card.
We also talked about keeping tests readable so that they tell a story. Now you're ready: you can go out and test your own code, hopefully with a team, but you're never really alone.
So let's talk about some resources.
If you get stuck out there and need some help, of course there's pytest.org if it's pytest-specific; especially check out the contact channels page there.
It lists a whole bunch of places, including Stack Overflow and instructions on how to tag your Stack Overflow question. There are a lot of great resources, but there's also a Slack channel associated with the podcasts I work on and the book, at pythontest.com/slack.
And this is specifically around testing.
There are a lot of people in there, and they answer questions about how to test stuff with pytest and different applications.
And of course there's the book python testing with pytest, second edition.
Excellent Resource.
What do you need to know about it?
It covers all the stuff we talked about, but in more detail, and there's more testing strategy there.
We also talk about tox and GitHub Actions, as well as advanced parametrization techniques and building plugins.
We used plugins within the course, but we didn't build one.
In the book, we build one.
We also talk about mocking and how to avoid mocking, and so much more. Now, to keep in touch with me:
you can follow me on Twitter @brianokken.
I'm also on a couple of podcasts, Python Bytes and Test & Code.
And I blog at pythontest.com; there are contact forms on those sites to get in touch with me as well.
Thank you so much for taking this course and I hope that you can use it to save time at work and have fun while testing.
|