Hi, and welcome to the Modern Python Projects course.
I'm excited to have you here, because I have a lot of cool things that I want to show you.
But first of all, what's the deal with this course?
Is this yet another Python course?
No.
This course is for people who already know how to write Python code but want to learn some useful tools and good practices to use when they write it.
Because despite knowing how to code, there is a gap between knowing Python and knowing how to write a good Python project.
Maybe you're confused about those virtual environments that others keep using, or whether you should use Poetry, Pipenv, or something else.
What's the best framework for writing tests, how to document your code, and things like that.
My goal in this course is to bridge this gap for you.
So, I want to explain how to do various things as you build your Python project, no matter if it's your 1st or 101st: how to structure it, how to install dependencies, how to generate documentation.
And I want to show you some tools that will make your life much, much easier. Mature tools that many other Python programmers are using.
So not something that I found yesterday on Hacker News, but something that I know is proven and will work for you for years.
We have good tools.
Writing tests can actually be fun, and writing documentation can be easy.
And hopefully, at the end of this course, you will have a solid development environment configured, and you will know how to build any Python project that you want.
What exactly are we going to talk about in this course?
I will start by showing you how to set up Visual Studio Code and how to use it when working with Python.
I know that a lot of Python programmers are using PyCharm, but a lot of people also use VSCode.
So, if you're one of them, I hope you will enjoy this chapter.
Then we'll talk about installing different Python versions and Python packages on your computer.
By default, you can't have two different versions of the same package installed at the same time because of the way pip works.
So, I will explain what virtual environments are and how to use them when you work on different Python projects, and I will show you three tools: pyenv, venv, and pipx.
When you learn how to use them, you will be able to easily install new Python versions on your computer, instantly switch between them, and even install packages globally without messing up their dependencies.
Then we'll talk about how to start a Python project.
Starting a new project can be hard.
You stare at the blank folder, wondering what should be the first file that you write.
Okay, maybe it's not that difficult when you only have one Python file.
But as your project grows, there will be more files.
So, we have to figure out a good project structure to avoid problems in the future.
I will show you how to use a tool called Cookiecutter to generate the initial structure of your project.
Cookiecutter is great because a bunch of smart people created templates for some typical projects.
For example, there is a template for a PyPI package or for a Django website, and you can take those templates and use them to start your Python project, which is often much easier than writing everything from scratch.
Next, we'll talk about how to manage your project as it grows.
Where to put your Python files, and where to define some typical tasks, like running tests or building the PyPI package.
We'll use a tool called pip-tools to pin the versions of our dependencies, and then I will show you how to use Poetry when building a Python project.
After that, we will talk about writing Python code.
I won't tell you how to write good Python code, but I will show you some tools that will complain when you write bad code. I will explain what PEP 8 is and how we can use a tool called Black to automatically format our code according to the rules from PEP 8.
And we'll also talk about other static code analyzers,
so tools that can monitor our code and pick up some errors.
We will use Pylint and Flake8 on some ugly Python code to see what errors they can find.
I will show you some popular Flake8 plugins that you can install to make it work even better.
But Flake8 and Pylint are not the only great linters out there.
So, I will also quickly show you tools like Bandit, Prospector, and Sourcery and explain what the differences between them are.
Once you are done with writing code, you probably need to add some tests and documentation, so I will show you how to use pytest and Sphinx to do that.
Both of them are very easy to start with, but they offer a lot of amazing features, so, I will show you some cool tricks, like testing the code examples in your documentation or automatically extracting the documentation from the Docstrings in the source code.
We will take an existing test written in unittest and convert it to pytest to see how much easier it gets when you use pytest. We'll use some fixtures, parametrize some tests, and add some marks, so you will not only learn the basics of pytest but also some more advanced features.
And since running your tests or building your documentation manually each time you change something in your code is boring,
we'll also talk about some ways to automate this.
I will show you how to use tox, which is the best friend of every Python developer who builds packages, because with a simple configuration file, you can run tests under different Python versions.
Then we'll take a look at Git hooks and use the pre-commit tool to add some automatic pre-commit checks.
That way, you can quickly check that your code is correctly formatted and doesn't have any easy-to-spot errors each time you create a new Git commit.
But configuring tox or pre commit on everyone's computer can be error prone when you work with different people.
Someone might use the old pre commit configuration.
Another person might forget to use tox and push a failing test to the Git repository. So, to solve this problem, we will talk about continuous integration services like GitHub Actions or GitLab CI.
They can automatically run a set of checks each time someone creates a new pull request or sends a new commit.
They are a great way to check everyone's code without making every person on your team set something up or run those checks manually on their computer.
And that should cover everything that you need to know to build a great Python project. But this course wouldn't be complete without a bit of practice, so we'll have three more chapters where we will build something.
In the first one, we will build a command line application, and we'll use Poetry to manage this project.
Then we will build a Python package, but this time we will start with a Cookiecutter template, and we won't use Poetry, so you'll have a comparison of how it is to work with and without Poetry.
We'll also publish this package on PyPI. And finally, we will build a simple GUI application with a window where we can put some text and a button that you can click.
I will show you how you can package it as an executable Python application.
That way, you will be able to send it to someone on a Windows or Mac computer, and they will be able to run it even if they don't have Python installed.
And in the final chapter, for those of you who are thinking about building a website, I will show you how you can deploy that website.
We'll compare using a virtual private server, a platform as a service like Heroku and a containerized solution like Docker.
We will look at pros and cons of each of them.
And then we'll deploy a very simple application to Heroku and then to Digital Ocean as a Docker image.
Yeah, that was a lot of things, I know, but that's basically everything you need to know to build a great Python project from scratch. And I hope that when you finish this course, you will have a great development environment set up.
You will have your code editor configured, and you will at least know where to start,
no matter what kind of Python project you want to build.
To follow this course, you need to know the basics of Python.
I'm not going to explain the Python code that we will be using, but I will also try not to use very complicated code.
We will be installing packages.
So, you also need to know how to use pip. And if you know what Django or Flask is, that's great, because I will be using those two web frameworks in some examples.
But if you're not a Web developer, then don't worry.
You should be able to follow everything with no problem.
In one of the chapters, I have a very, very simple Flask website that's literally 10 lines of code or something like that, and we only need it so I can show you how to debug code in the code editor, so you should be good.
And in case you're wondering, who's this guy behind this course?
Hi, my name is Sebastian Witowski.
You might know me from some Python conferences, but if not, then it's nice to meet you.
I work as a Python consultant, freelancer and trainer, and I help companies fix their software architectures and improve the tools and practices that they're using.
And that's actually how I got the idea for this course.
I realized that there are so many people that already know how to use Python but still struggle with setting up their development environment, and there are so many great tools and good practices that can make your life much, much easier.
So that's what we're going to talk about in this course.
And just before we move on, I have a very short disclaimer.
Throughout this course, I will show you a lot of different tools.
Some of them are open source.
Some of them, like Digital Ocean or Heroku, are paid platforms.
So, I just wanted to say that I'm not affiliated with any of them.
I don't work for any of those companies.
I'm not involved in the development of any of those open-source tools.
And I was not paid to include any of them.
Actually, I don't think any of them know that they are part of the course.
So, everything that I will show you is here because those are the tools that I use, and I can recommend them.
Or at least I know that they are very popular and well received in the Python community.
Since this is a Python course, you probably think that you will need to have Python installed on your computer, right?
Well, actually, no.
I mean, obviously we're going to use Python, so if you have Python installed, that's great.
And I guess most of you already have Python, but I will show you a tool called Pyenv that you can use to easily install and switch between different Python versions.
So if you have some version of Python installed on your computer, that's great.
If not, then you will have some problems following the next chapter where I will talk about the code editor.
But in chapter four, I will show you how to install Python.
So, if you don't have Python installed right now, maybe start with the first few lessons of chapter four and then come back to chapter three. But otherwise, you are ready to go.
You will also need a code editor.
Unlike many other instructors, I am not using PyCharm.
But you are free to use whatever code editor you like.
So, if you have PyCharm and you're happy with it, that's great.
However, I will be using VSCode.
And in the next chapter, I will explain how to set it up and use it for programming in Python. VSCode is a great code editor, especially when you work not only in Python but also in other programming languages.
But, as I said, use whatever code editor you're most comfortable with.
All the source code that I will be using in this course is hosted on GitHub. So, if you don't want to type it by hand, just go to this GitHub URL, and then you can download it or clone it.
Or you can just click around and browse the code in your browser.
And with that, I think you are all set to start the course.
So, let's start by talking about the code editors in the next chapter.
Let's start this course by discussing one of the essential tools that every programmer uses.
A code editor.
No matter what kind of tools and workflows you use when building a new Python application, you will still spend most of your time inside the code editor.
It's a very sensitive topic for many programmers.
There's even this famous comic strip by XKCD, where different programmers argue about what's the code editor for real programmers.
Don't take it too seriously and use whatever code editor you find comfortable.
I, for example, really like VSCode.
Since this is a Python course, you might be wondering why I'm not using PyCharm, as many Python programmers do.
Well, there's absolutely nothing wrong with PyCharm.
It's an excellent code editor for both beginners and Advanced Python programmers.
I have not used it personally, but I watched some tutorials where the instructors were using PyCharm, and from what I saw, it works great out of the box.
It has much better refactoring capabilities than VSCode, and you really can't go wrong by sticking with it.
There is a free community version, and there is a paid version that offers more features, but the free version is perfectly capable of getting you started.
But the chances are that maybe Python is not your only programming language, and you need a more versatile code editor.
Maybe you want to be able to customize more things.
Or maybe you would prefer your code editor to be open source.
There are many different code editors out there.
Stack Overflow creates a survey every year.
Let's take a look.
In 2017, the most popular code editor was Visual Studio.
Then we had Notepad++, Sublime Text, Vim, and then Visual Studio Code.
As you can see, this survey is not limited to Python programmers.
There is PhpStorm or Android Studio, so the results are from programmers working in any kind of programming language.
So that was 2017.
In 2018, you can see that there are three code editors that are equally popular: Visual Studio Code, Visual Studio, and Notepad++. Let's check the next year.
In 2019, Visual Studio Code really dominated this survey.
Over 50% of respondents are using it.
And what about 2020? Well, we don't know, because in the survey from 2020 there was no question about your favorite code editor, so I can't really tell if VSCode is still that popular.
But I think it still is.
There was no new code editor that got as popular as VSCode.
And if you really want to customize every possible little detail of your code editor, or you want a code editor that works in your terminal, then there are editors like Vim or Emacs.
Those are text editors, not IDEs, although some people joke that Emacs is a whole operating system.
But when you install some plugins, you can turn them into pretty capable code editors.
The learning curve for both of them is very steep.
They don't take you by the hand and show you how to use them.
You don't configure them by clicking things in the interface; you modify a configuration file, so you need to first check the documentation to see what's possible.
But thanks to that they offer an incredible level of customization.
You can change basically any part of it.
For example, here is a Vim configuration that I use, and I swear I try to keep it to a minimum, but it's still almost 250 lines long.
I usually use Vim for very quick edits in a terminal because it's very fast to use it that way.
I don't have to context switch to a separate app.
I just run my edits in the terminal and then continue with what I was doing before.
From all those code editors that I mentioned, VSCode is my favorite.
I've been using it for a few years, and it works pretty well for Python and for any other programming language I'm using today.
What I really like about VSCode is that it strikes the right balance between productivity and beginner friendliness.
When you install it, you can start using it right away.
And as you go, it will try to suggest some plugins and settings that you might want to use.
Open your first Python file and it will suggest the Python extension.
When you continue editing that file, it will suggest installing a linter and a formatter.
On top of that, there is a very vibrant community around it.
Every month, a new version is released, and it usually brings a ton of new features.
So, in this chapter, I will show you how to install and set up VSCode for programming in Python.
I will start from a completely new installation of VSCode and then set up everything from scratch.
I will show you some of the most popular features but also some less popular ones that maybe you have never heard of.
If you don't use VSCode and you are not planning to use it, feel free to skip this chapter and jump to the next one.
To get the Visual Studio Code editor, just go to the code.visualstudio.com website, and you should see a big blue button for your specific operating system.
So, let's just click it and then VSCode will start downloading and you will be taken to a documentation website.
Once the download finishes, install it following the instructions specific to your operating system and let’s open it.
After you install VSCode and you open it, you are greeted with this welcome page.
It has some suggestions of what you can do at the beginning.
Since we'll be working with Python, let's click this Python link on the right side.
Let's click OK, and now VSCode is installing the Python extension for us. After it's done, it's going to reload the window, and you can see that Python is installed because it's no longer blue.
If for some reason you don't have this window, you can always click this icon.
This will take you to the extensions marketplace and here you just need to search for Python.
We already have it installed, so it was displayed there by default.
But let's say you want to install a different package, you search for it in the extension marketplace and you just click this install button.
So what is this Python extension?
Well, VSCode doesn't support Python out of the box.
I think it only supports TypeScript and JavaScript, so we have to use an extension.
With the Python extension, we get a bunch of additional features.
First of all, syntax highlighting for Python, but also additional tools like IntelliSense, which gives us code completion and code navigation.
We get linting, so we can use Flake8, Pylint, and other tools like that.
We get formatting, so we can use Black, autopep8, and other formatters. We also get different debuggers, and, for example, VSCode will automatically detect Python virtual environments.
Don't worry.
If you don't understand any of those features that I just listed, I will explain them in more detail as we progress through this course.
So, now that we have the Python extension installed, let's try to open a new Python file. If we save it with a .py extension, VSCode will automatically detect that it's a Python file and it's going to suggest some additional features.
So, first it's going to open this Python 'Get started' page, but it's also going to suggest that you install a linter.
A linter is a tool that points out some easy-to-fix problems with your code, for example, when you import a module or function and you don't use it, or when you try to use an undefined variable.
Since it's a very useful tool, it's worth installing. By default, we get the suggestion to install Pylint.
We can either click install or we can select a different linter.
I will go with pylint this time.
As you can see, the installation was successful.
If you run into some trouble when installing (for example, maybe you have an old Python version and VSCode can't install Pylint), don't worry.
In the next chapter, I'm going to show you how we can use some additional tools like virtual environments and pipx to install global packages.
For the time being, if you get some errors with Pylint, just ignore them.
So, let's see pylint in action.
Let's go back to our file and let's try to reference a variable that was not defined.
You can see we get this red squiggly underline and an error message saying undefined variable 'name'.
So, linters are very useful tools that can help you spot errors in your code.
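For reference, here is a rough sketch of the kind of code a linter would flag; it is intentionally flawed, and the exact warning names shown in the comments may vary between linters and versions:

    # example.py - intentionally broken code, just to show what a linter catches
    import os  # "os" is imported but never used - Pylint reports something like "unused-import"

    print(name)  # "name" was never assigned - Pylint reports "undefined-variable"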
The next thing you might want to do is choose a different Python interpreter, and if you click in this lower left corner, you can see there is a list of different interpreters.
This is the default Python 2.7 that comes preinstalled on macOS.
And here are some more up to date Python versions that I have installed on my computer.
If you have a brand-new MacBook, then you'll probably have only this Python 2.7.
And if you installed additional versions, maybe with conda, maybe with some other package manager, you might see them here as well.
To change which Python version is being used, just click one of them.
You might get some errors, especially if you're using Python 2.7.
The first warning says that well, you selected an old Python version, which is not recommended, and the other one says that linter is not installed because well, we installed pylint for the previous version of Python.
Now that we switched to 2.7, we would have to install it again.
And if you try to install it, I think you're going to get an error; let's give it a try.
Yeah, there is no pip, so if you get this error, just ignore it for now.
Let me go back to the more up-to-date version of Python.
If for some reason you don't see this bar (well, not a sidebar, just a bar) and you can't change the Python version by clicking here, you can do this from the command palette.
On Mac, press Cmd+Shift+P, and you will see a list of all the commands available in VSCode.
The command palette is the most useful tool in VSCode, and you will probably be using it often.
So to select a different interpreter, just type interpreter and select this command.
And here again, you can change which version of Python you are using.
Let's go back to this one and let's continue. One last step of the setup is to add the code command to your terminal.
That way you will be able to run code plus the name of a file in your terminal, and this will open that file in VSCode.
You have to follow different steps depending on what operating system you use.
For Mac and Linux, you can open the command palette and search for a command called 'Install code command in PATH'.
When you run it, VSCode will do everything for you.
However, if you're using Windows, this step is done during the installation, so make sure you check the 'Add to PATH' option when you install VSCode on Windows. After we have done this, we can go to our terminal and open any file from there.
So let's close it here.
Actually, let's close all that, go to the terminal, and we have the code command.
Let's restart the terminal just to be sure it's working:
code hello.py
And it opens the file in VSCode.
The Python extension that we just installed came with a lot of features.
For example, auto completion, jump to definition, and more.
They all come from a tool called the Python language server.
The Python language server is based on the Language Server Protocol.
This is a new thing that was created to stop duplicating work when people create plugins for specific programming languages for different code editors.
Without the Language Server Protocol, when someone was creating a Python plugin for VSCode, this plugin could only be used with VSCode.
You couldn't use it with Vim.
So, someone else had to create a Python plugin for Vim, and the same for other code editors.
So, each separate code editor needed a separate plugin for each programming language.
The language server protocol was created to separate programming languages from code editors.
So now we only need a Python server plugin for Python, a Ruby server plugin for Ruby, and a JavaScript server plugin for JavaScript, and then for each code editor, we need a plugin that will let it use the Language Server Protocol.
So instead of having a plugin per language and per editor, we only need one plugin per language and one plugin per editor.
So with the VSCode client plugin, we can use the Python server plugin, the Ruby server plugin, or the JavaScript server plugin, and the same goes for the Vim and Emacs clients.
Why am I telling you this?
Well, that's because there is a new Python language server plugin being developed for VSCode.
That's called Pylance.
If you're watching this video in the future, it might replace the default Python language server that is currently used with the Python extension. But for now, you can install it separately by searching for Pylance in the extension marketplace.
As you can see, we would have to install it by clicking this button, and when you install it, you can see that the list of features is pretty impressive.
For example, you have better docstrings in the auto completion, you get auto imports that will automatically import modules as you use them in your code, you also have faster reporting of errors, and so on...
If you want, you can check it out.
Just click this install button, and you will get a popup saying that you need to reload VSCode.
So, let's go ahead.
Now you can see we have Pylance installed here and we can check if it's actually being used by going to settings and searching for language server.
You can see here we have Python language server and, on this list, here we have Pylance.
If you want to go back to the default language server that comes with the Python package, you can select Microsoft.
And if you don't like any of them, you can also try the Jedi language server.
For the rest of this course.
I will stick with the standard Python language server, so I will select Microsoft.
But I definitely recommend that you check out Pylance when you have a chance.
Before we start talking about how to use VSCode with Python specifically, let's talk about how to use VSCode in general.
So, you can see on the left side we have the sidebar.
First icon is the file Explorer.
I have opened the Django project, so, we have some files here.
So, we can click a file to open it.
And for example, if you have a lot of files and you want to filter them, you can just start typing in the sidebar.
That way you can filter which file or folder you want to select.
Under the files, we have the outline tab, where you can see a list of all your classes, functions, and variables in the currently open file.
So, all those elements with this yellow icon are classes; inside, you would normally see functions, but unfortunately, I opened a file that doesn't have any functions.
So, let's try a different one.
So here, let's open the outline.
And here you can see that we have some classes.
Then we have some functions.
And finally, those are all the variables.
One nice thing that you can do with the outline is that you can select 'Follow Cursor'.
By default, it's disabled, but if you enable it and you move around the file, you can see that the location in the outline is changing.
So if you move around the file, you will see that the outline on the left side always follows your cursor.
Also, you can select how you want to sort all those elements; by default, it's by category.
So first you have classes, then you have functions and then variables.
But you can also sort by position, so it goes from the top to the bottom, or you can sort by name.
I usually like to sort by position. Under the outline, we have the timeline.
So, this is a simple Git integration that will show you all the changes related to this file.
So, you can see, three months ago, someone did this small change in this file, and also one down here.
One year ago, there was a different change and so on and so on.
So, this is a good tool to see the latest changes to this current file.
Next in the sidebar, we have the search. You can search for something, and you will immediately see the results; this is called fuzzy searching.
But you can also press Ctrl+Enter, or on Mac Cmd+Enter, and this will open the search results in a new tab.
Here you can further narrow down your search, but you can also select how many lines around the search results you want to see. By default, for each result, you will see one line above and one line below.
But sometimes maybe you want to see a bit more context.
So, we can change this number.
And then for each result, you will see two lines above and two lines below.
Next, we have the source control tab.
Right now we don't see anything.
But if we make a change to one of those files and we save it, we can now see this change in the source control tab.
We can open this file, we can discard changes, or we can make a commit from it.
Let's discard it.
Next, we have a Debugger.
Right now, we don't have any configuration, so let's skip it.
But I will show you later how to use the debugger.
And finally, we have the extensions tab that we are already familiar with.
Right now, we see five icons here, but actually more icons might be available when you enable more features.
For example, when you configure testing framework, you will see an additional tab for tests.
When you install a Docker extension, you will see an additional side menu for Docker. So that was the side panel.
But actually, we have yet another panel that we can trigger by running 'View: Toggle Panel'.
That's right, this one. Let's actually close those files.
So at the bottom of your code editor, you will see some additional tabs.
First up, it's problems.
If you have a linter enabled and there are some problems with your code, you can see a list of those problems here.
Next we have output.
This is where you can see output of various parts of VSCode.
You are probably not going to use it very often, mostly for some debugging purposes.
Speaking of the debugging, we have the debug console.
So, when you started debugging session, you can go to this debug console and you can execute some statements here.
And finally we have the terminal tab, which is basically your standard terminal.
It's nice to have it here, because that way you don't have to switch between VSCode and terminal.
You have everything in one window, so it's good to remember the shortcut to show and hide terminal.
Let's open some files and let's hide this thing.
Let's open this one, this one, and this one.
If you have multiple files open, sometimes you want to see, let's say two files side by side.
So you can easily do this by dragging your file around.
That way you can move this file to the right side and then you can move this file to the bottom.
You can also copy a file to the side.
So here we have the same file in two places, and actually, if we modify it here, we can see the modification on the right side.
Another useful feature not limited to VSCode is the multi cursor.
So you can put your cursor in multiple places by pressing option or alt.
And that way, you only need to type once to put the same text in multiple places.
Manually putting multiple cursors in multiple places is not that useful on its own, but, for example, if you want to rename a variable, you can select all the occurrences of this variable and simply rename it. Another cool feature related to files is when you want to create a new file, but you also want to create folders on the way there.
So one way would be to click this 'add folder' button and then name this folder.
Then inside, you would have to click 'add folder' again, and then you would finally add a file.
A much better idea is to simply add forward slashes (/) in the name of the file, and this will create the folders for you.
You can see we have foldera and folderb, and inside we have our file. By the way, when you have a folder that only contains another folder inside, VSCode will use this one-line notation to indicate that.
So this basically means that inside foldera we have only folderb, and then inside folderb we have our my_new_file.py. The last thing that I want to talk about is keyboard shortcuts.
There are a few keyboard shortcuts that you will be using very often, so it's good to remember them.
The most important one opens the command palette.
On Mac, it's Cmd+Shift+P, and in other operating systems, I think it's Ctrl+Shift+P.
From the command palette, you can run basically any function in VSCode.
So instead of clicking things around, it's much faster to open the command palette and run commands from there.
For example, if you don't know what the keyboard shortcut to start debugging is, you can open the command palette and type 'start debugging', and there we have it.
Next, we have go to file, which is command+P.
That way you can quickly open any file from the currently open folder or project. And since this is a fuzzy search, you don't even have to type the whole file name.
Next, we have go to symbol in file.
You can quickly jump to a function, variable, class, or even a module in the current file.
And if you want to group classes together and variables together, you can add a colon (:) at the beginning, so you can see here, we have one class, five methods, and 13 variables in this file.
You can also quickly go to a line in the file, and you can even combine go to file and go to specific line.
So, if we go to a file and then we specify the line number after the colon, we will go to line 15 of clickjacking.py.
The two final useful keyboard shortcuts that I want to show you are 'go to definition' and 'show all references'.
So, when you have a function, let's say this one, and you want to go to the place where it's defined... well, here it's kind of easy because it's in the same file, but normally it's going to be in a different file. Right-click and select 'Go to Definition', or use the keyboard shortcut.
And when you have a function definition and you want to find all the places where it's being used, you can select 'Go to References'.
This will show a little pop up with all the references to this function.
Those are just the most popular keyboard shortcuts that you will be using.
But VSCode is full of keyboard shortcuts that you can use, so as you go, you will probably learn more and more of them.
Basically, almost anything can be done with a keyboard shortcut.
That's a lot of keyboard shortcuts to remember.
So to make it easier, the VSCode documentation contains a nice cheat sheet with a list of the most common shortcuts for each operating system.
You can find this list in the VSCode documentation.
But to make it easier, I'm also linking to those cheat sheets from the modernPythonprojects.com website.
So, if you want to see the cheat sheet for macOS, just go to modernPythonprojects.com/vscode-mac. If you want to see the one for Linux, replace mac with linux, and for Windows, go to /vscode-windows.
And finally, if you're moving to VSCode from a different code editor, it might be hard for you to get used to having different shortcuts.
So, to make it easier, there are some plugins that will change the default VSCode shortcuts to the shortcuts from your previous code editor. So we can go to the extensions marketplace and search for the name of your previous code editor.
For example, I moved to VSCode from Sublime Text.
So, the first plugin that I ever installed in VSCode was Sublime Text Keymap.
Once you install it, some of the default VSCode shortcuts will be replaced with the shortcuts from Sublime Text, so it will make the transition to VSCode slightly easier.
Let's say, you wrote some code in VSCode and now you want to run it.
How can you do this?
Well, let's go back to our example with just a simple hello world. To run this code in the terminal, you can just press this green arrow in the upper right corner.
And if you don't like using your mouse, you can run the 'Run Python File in Terminal' command from the command palette.
So let's say we have two lines and we only want to run the second one. You can select the code that you want to run and run the command called 'Run Selection in Python Terminal'.
As you can see, this starts Python terminal and runs this line.
You can also use Shift + enter for the same effect.
So, that's how you would run a simple Python script.
Things get a bit more complicated if you want to run, for example, a Web application.
Of course, you can always run it from this building terminal, but maybe you don't like to switch between the code editor and the terminal all the time.
So here I have a very simple Flask web application, just a single file, and when you go to the main page, it's going to display a 'Hello from Flask' text.
Don't worry if you don't know Flask; this is just for illustration purposes.
As I said, we could go to a terminal and run the python -m flask run command.
As I said, we could go to a terminal and run Python -m flask run Command.
But that's not very convenient because, well, what's the point in switching to a terminal?
If there are so many buttons in VSCode, so for sure, one of them can be used to run flask server for us And indeed, we can go to the Debug panel and start our server from there.
So, first, let's kill this server.
Let's open the debug panel.
So first we have to create a debug configuration.
It sounds complicated, but actually, VSCode has some default configuration that we can use.
And as you can see, we even have one for flask, which is awesome.
So, let's select flask from this list.
Next, let's select the name of the file that is the main Flask file.
In our case, we only have app.py.
So, that's easy.
We press enter and that's it.
You can see a lot of things have changed, but that's because we started the debug session.
So, if we had some break points in our code, we would be able to actually stop there and investigate our code.
So at the top, you can see we have a debug panel that lets you stop the code execution, step in, step out, completely stop the debugger, or restart it, and we can see that our bottom panel has changed color to orange.
This is an indicator that we have a debugging session launched. If we stopped at some breakpoints and then went to the debug console, we would actually be able to see more things there.
So to verify that our server is actually running, let's open this URL in the browser, and we have our 'Hello from Flask' message.
Right?
So, in the next lesson, I will show you how we can use this debug panel more.
In the previous lesson, I showed you how to run a single Python file.
This time I open a folder with just a single file there.
But I want to show you the difference between opening a single file in VSCode and opening a folder that contains some files.
When we go to this debug menu, you see that when you open a folder, you now have an option to create a launch.json file.
That way, when we set up a launch configuration for our debugger, it will be persistent.
So, let's click this, and again we select Flask, and this creates a launch.json file inside our folder.
It's actually inside the .vscode folder, and here we have it. Here you can customize how you want your Flask application to be launched.
The default values are good enough.
But if, for example, you want to enable the debugger, then you would remove this argument.
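For reference, the generated .vscode/launch.json looks roughly like this; the exact contents depend on your VSCode version, so treat this as a sketch rather than a copy of the file from the video:

    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Python: Flask",
                "type": "python",
                "request": "launch",
                "module": "flask",
                "env": {
                    "FLASK_APP": "app.py",
                    "FLASK_ENV": "development"
                },
                "args": ["run", "--no-debugger", "--no-reload"],
                "jinja": true
            }
        ]
    }

Removing "--no-debugger" from the "args" list is the change mentioned above that enables the debugger.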
Now that we have this launch.json file, if we go back to the debug menu, you see that we have this launch configuration here, so we can simply press this green arrow to launch our debugger.
And again, this started the simple development server.
We can see the website by refreshing this page.
As you can see, I've added a few more things to this file.
So instead of displaying a static text, I'm displaying my name and then a random integer between 0 and 100 which will be our lucky number.
I've added those two variables, so we can test how the debugging works.
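A rough sketch of what the modified app.py could look like at this point (the variable names are assumptions based on what is described):

    import random
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        name = "Sebastian"                     # shows up under "locals" in the debugger
        lucky_number = random.randint(0, 100)  # random integer between 0 and 100
        return f"Hi {name}, your lucky number is {lucky_number}"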
You can add a break point by clicking this red dot on the left side.
So, let's put a break point on this line.
Now, if we start the server again and we refresh the page, the code execution should stop at this line when it gets there.
So let's try it.
We have the server, we go here, you can see it's still loading, and we go back to VSCode and you have this indicator that we stopped here.
On the left side, we can now see the local and global variables.
Only name is defined at this moment, but if we go one step down, you can see that we have the lucky number defined and then the name.
Under the variables menu, we have the watch menu. If you have a variable that you want to monitor, but it's not in the locals or in the globals, you can add it here to this WATCH menu.
So, let's add another variable to our code.
Let's re-run it.
Let's actually add an expression to WATCH.
Let's refresh the page and go back to our breakpoint, and now you can see we can check what's the value of this surname variable.
Under the WATCH, we have a CALL STACK.
So, if you have complicated code that calls multiple functions, you would see the whole CALL STACK from the main function all the way down to where the breakpoint is located.
We only have one function, so it's not really useful.
But if you have code with a lot of things going on, the call stack can be really helpful. And then finally, you have a list of breakpoints; you can see we only have one in our app.py.
But you can also insert breakpoints when an exception is raised or when an exception is uncaught, and you can also click here to deactivate all the breakpoints or click here to add a custom breakpoint from this menu.
And if you want to execute some Python code in your debugging session, you can go to this debug console.
Here, you can run any Python expression, so you can inspect some of the existing variables.
But you can also, for example, modify them.
So, if we check the value of name, it's Sebastian.
And if we try to change it, you can see that now, in the local scope, name has changed to ‘Steven’.
And if we continue the execution of this code and we go back to the browser you can see that our change has persisted.
VSCode, also has a nice side menu for running your tests, but by default, it's not visible.
So, to enable it, we have to open the command palette and run 'Python: Configure Tests'.
From this list, we have to select which testing framework we're going to use.
In this course, I will be using pytest because it's much better than the built-in unittest.
So, let's go ahead and select pytest right now.
And now we have to select the directory that contains your tests.
Normally, you would store them in a folder called Tests or something like that.
But here, I only have one test file, and it's in the root folder.
So, let's go ahead and select root.
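The test file itself isn't shown in the transcript; a minimal sketch of a file with three simple pytest tests could look like this (the names are made up for illustration):

    # test_example.py - a hypothetical test file with three passing tests
    def add(a, b):
        return a + b

    def test_add_positive():
        assert add(1, 2) == 3

    def test_add_negative():
        assert add(-1, -1) == -2

    def test_add_zero():
        assert add(0, 5) == 5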
If you don't have pytest installed, you might get a pop up saying that you should install pytest.
I went ahead and I installed pytest using pip.
As you can see, it's already installed.
If you already had pytest installed and you had no warning, or maybe if you selected a different testing framework, you should now see that we have a different icon on the sidebar.
This is the testing menu.
We only have one test file, and there is this gray question mark next to it because those tests have not been run yet.
So let's run them.
And as you can see, all three tests are passing, and we also have the check mark above each of them.
That's a very optimistic scenario, but usually you will have some tests that are failing. So let's try to modify one of our tests and see what happens.
As you can see, when one of the tests is failing, you have an indication right above the test,
but also in the sidebar.
We can now, for example, debug this test to see what's going on.
If we click here, we get the output from the debugging, but we can actually do a bit more.
For example, we can put break points in our tests.
Let's copy this line so the breakpoint isn't on the very last line of our file.
And let's put a breakpoint here and let's rerun it. Let's save it first.
And now let's rerun it.
So now we have this debug toolbar at the top, and if we go to the debug side panel, you can see that we have the same debug information as we had in the previous lesson, so we can investigate local variables and we can watch some of the variables.
We still have the surname from the previous lesson, so let's remove it.
So let's stop the debugging session and let's go and fix our test.
You can see it's still greyed out because we haven't run this test since we modified it.
So, let's run it.
And as you can see, our tests are green again.
There are a few other useful tools in the sidebar: you can run all the tests, you can debug them, and you can discover tests.
So, for example, if we create a new file, VSCode doesn't detect this test file automatically.
So, if we want to add it to the sidebar, we just have to run 'discover tests' again.
So, if we want to add it to the sidebar, we just have to run discover test again.
And now we have our new file.
So, let's rerun all the tests and all of them are passing.
I will talk more about pytest later in this course; we're going to have a chapter about pytest and testing in general.
But right now I can already tell you that pytest accepts a configuration file called pytest.ini. I'm telling you this now because if you create a pytest.ini file, VSCode will actually respect the options from that file.
So, let's give it a try.
Let's go to our folder.
Let's add the new file, and let's add a configuration option here.
So, first we need to add the [pytest] section header.
For example, we can add an option that will make pytest fail after the first failed tests.
So, we don't want to run all the tests if we know that one of them failed.
Let's make it stop after the first failure.
We can achieve this by adding --maxfail=1 to the addopts parameter.
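Putting that together, the whole pytest.ini file ends up looking like this:

    [pytest]
    addopts = --maxfail=1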
Let's go back to our testing tab and let's rerun all the tests.
Since they're passing, nothing has changed.
So, let's change some things here.
Let's change all those tests to make them fail. As you can see, after the first test failed, the other two didn't run.
So, when we fix it, we can rerun it. Now this one passed, this one failed, and since this one failed, the last one was not rerun.
So, that's how you can use the testing menu in VSCode.
As you start typing in VSCode, you will sometimes notice that there is a popup suggesting some different auto completions. So, if we try to create a new function, you can see that there are some strange, long auto completion options.
Those are snippets, and they usually come from different plugins.
For example, Python snippets come from the Python plugin.
They can save you some typing.
For example, this one for a function will pre-generate the scaffolding of your function. So let's write the name of the function; you can hit Tab to go to the next location in the snippet.
So now we need to provide some parameters.
Let's put the name, the docstring, and then finally, the code.
Let's print something and let's run this one.
The nice thing about snippets is that you can easily create your own, so let's try to make a new snippet for Python.
First, using the command palette, select 'Configure User Snippets'.
Let's search if we have Python snippets; yes, we do.
So, let's select this one.
And now we have a file where we can write our snippets.
Inside this file, we already have an example snippet.
Unfortunately, it's not a Python snippet; it's a JavaScript one, but it explains what the mandatory parts of each snippet are.
So based on that, let's try to create our own.
Let's say I want to create a snippet that will measure the execution time of a given piece of code.
So I want to start the timer at the beginning, execute some code, and then display how long it took.
So let's call it time measurement.
So first parameter is prefix.
This is the text that will trigger the suggestion for the snippet.
Let's say we want to trigger the snippet auto completion when we type time.
Next, we have a body.
Body is a list of lines that will be inserted as our snippet.
So, don't worry about the indentation.
VSCode will figure it out.
So here is the code of our simple snippet.
First we import the time module, then we save what time it is.
Then we have a placeholder called a tab stop. When you insert a snippet and you add $1, $2, and so on, these are the locations where the cursor will move when you press Tab. As you saw when we used the def snippet, each time I pressed Tab, I was moving first from the function name, then to the parameters list, then to the docstring, and finally to the pass statement.
So those are four different tab stops.
Here we only have one, because this is where we want to type some code.
And finally, we have a line that will subtract the initial time from the current time and print out the results.
The final parameter that we can specify is the description.
This is the description that will be displayed in the auto completion popup for your snippet.
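Putting those pieces together, the snippet in the python.json user snippets file could look roughly like this (the exact body lines in the video may differ slightly):

    "Time measurement": {
        "prefix": "time",
        "body": [
            "import time",
            "start = time.time()",
            "$1",
            "print(f'It took {time.time() - start} seconds')"
        ],
        "description": "Measure how long a piece of code takes to run"
    }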
So, once we have the snippet, let's give it a try.
Let's remove this and let's type time and you can see this is our snippet.
Let's add some numbers so it will take some time. Let's actually execute only this part in the terminal.
So I select the code and press Shift+Enter, and, as you can see, first I have the sum of the first one million numbers, and then I have a statement saying that it took 0.4 seconds. So that's how we can create a simple snippet.
If you want to learn more, go to the VSCode documentation.
There's a whole section on snippets there, but I suggest you take a look at 'Create your own snippets'. After it explains the basics, the important thing to look at are variables: you can use special variables in your snippets that will, for example, insert the currently selected text, the content of the current line, or the content of your clipboard.
And so on: dates and stuff like that. You can even perform some regular expression transformations, and you can even assign key bindings to snippets.
So to create more advanced snippets, I suggest you take a look at this documentation.
In a nutshell, VSCode is just a text editor.
So to turn it into a proper code editor, you'll have to install some extensions.
Right now we only have the Python extension and the Jupyter one that comes preinstalled with the Python extension. If Python is not the only language that you work with, you can also install plugins for other languages.
One way to do this is to go back to this welcome page that we see when we open VSCode.
And here you can select some of the most popular plugins, for example for JavaScript, Java, PHP and stuff like that.
Apart from language-specific plugins, there are also framework-specific plugins.
For example, if you're a Python developer, maybe you're working with the Django or Flask web framework, and VSCode has plugins for those two.
So, let's search for Django.
If we expand this panel, you can see how many downloads each extension has.
So usually when you install an extension and there are multiple ones, select the one with the most downloads.
This Django plugin will provide you with some additional features related to Django.
For example, you will get syntax highlighting for Django-specific files, that is, HTML files with template tags.
You will also get some additional snippets that you can use and so on.
So, let's actually try to install it, and I'm going to show you what I mean.
So now let's say we're working with a Django HTML file. As you can see, VSCode figured out that this is a Django HTML file.
But if for some reason you want to change how VSCode is interpreting this file, you can click here, or you can select 'Change Language Mode'.
This is very useful.
For example, when you have a file saved as HTML, but it's full of markdown code and you actually want VSCode to treat this file as Markdown file.
So, let's see what we have for HTML.
Right now we have the standard HTML file and Django HTML.
If we treat this file as a standard HTML file, we get no auto completion for the template tags from Django.
You can see there is nothing. If we treat it as Django HTML, now we have all those snippets that can make writing your Django code easier.
The next plugin that I want to show you is not specific to Python, but it's very useful for any programming language.
It's called IntelliCode, and what it does is try to predict which term from the auto completion list you are most likely to use in a given situation.
And it will put those auto completion suggestions at the top, marked with a star.
Behind the scenes, Microsoft used some machine learning to analyze millions of lines of code, and they try to predict which auto completion terms are the most suitable in a given situation.
So, I really like this extension because it works surprisingly well.
Next, we have a plugin that can help you a bit with writing documentation.
It's called autoDocstring, and when you install it, you can type three double quote signs and then press Enter, and this will generate the scaffolding for your documentation.
So, for example, when a function already contains some arguments, it will copy and paste those arguments inside the documentation.
Let's actually give it a try.
Let me open some file.
Let's remove all that and let's say I want to write another greeter.
You can see we have an auto completion to generate the docstring. I press Enter, and now we can quickly write a summary, press Tab, and here we can specify the type and the description of this argument, and we are done.
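The result looks roughly like the sketch below; the exact layout depends on the docstring format configured in the autoDocstring settings, and the function name here is just an assumption:

    def greeter(name):
        """Greet a person by name.

        Args:
            name (str): The name of the person to greet.
        """
        print(f"Hello, {name}!")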
Another plugin that is not specific to Python, but is very useful when you're working on a very large code base, is called Bookmarks.
So here again, I have the source code of the Django Web framework, and let's go and install this plugin.
With this plugin, you can put bookmarks in your code.
So, for example, when you're new to a project and there is a lot of code and you don't really know what's going on, I really like to use this plugin to make it easier to understand how different things work.
So, for example, if I'm looking into some tests and I want to see what they test, I might put some bookmarks in the test.
I might then dig deeper into some specific functions, put bookmarks there and so on.
So, let's try to put a few bookmarks here and there.
As you can see, they are marked in the gutter, and now we have a whole menu related to bookmarks, where we can jump to the previous and next bookmark, or list all of them.
So, here is jump to the next one.
Well, this file only has one bookmark.
So, let's put another one.
Here is how we can quickly jump between bookmarks in the same file.
Of course, it's much easier to use the keyboard shortcuts for that. We can also list all bookmarks from all the files, and you can quickly jump between them.
So, as I said, this is a very useful plugin when you want to jump around a large code base.
And speaking of large code bases and collaborating with others, VSCode has a source control tab.
But a much more useful extension that you can use is called GitLens.
This is one of the biggest extensions I have seen for VSCode, and it's really packed with features.
So, when you install it, you can, for example, show blame annotations per specific line, per file, in the status bar, or in the hover.
Now you can see this comes from GitLens.
We can see who was the last person to change a specific function, and you have a massive amount of different options available.
You can open file in GitHub.
You can compare it with the head.
You can copy the commit message.
Moreover, you even have a new sidebar menu related to GitLens.
And of course, you have some additional command palette options.
So check out the GitLens documentation, because this is a very massive plugin, and I don't think I'm using more than, like, 10 or 20% of what it can do.
And finally, I have three small extensions that can make your life a bit easier. The first one is indent-rainbow. In Python, we're not using brackets,
and sometimes if you have code that is nested by multiple levels of indentation, it's hard to figure out where one function or one loop ends and another one starts.
So we can install indent-rainbow, and then each level of indentation will have a different color, so it will be easier for you to see where a specific function ends.
And sometimes you might notice that when you're copying some code, let's say from Stack Overflow or from a different program, and you paste it into VSCode, it's not correctly indented, so you have to hit Tab a couple of times.
If you find this annoying, check out the Paste and indent plugin.
It will add a new command that you have to assign to a keyboard shortcut, and this command will paste code from your clipboard and try to indent it correctly.
And actually, in most cases, it does indent it correctly, so you no longer have to press Tab.
And finally, this is a completely personal preference.
But I really like the extension called Error Lens, because I like to see the errors and warnings next to the code, not at the bottom of VSCode or in a separate tab.
So with this extension, you can configure how the warnings and errors are displayed.
You can, for example, display an icon in the gutter and the whole text of a warning or an error next to the code.
So those are, in my opinion, the most important plugins to make writing Python code in VSCode much easier But those are not all the plug ins that I use.
So if you want to learn about some other plugins that I can recommend, I have a blog post about them, so you can go check it out.
And here I have some screenshots and also descriptions.
Some of the plugins I just covered.
And there are some others that I didn't, so maybe you'll find it interesting.
|
|
|
58:40 |
|
show
|
1:39 |
Before you can start working on your Python project, you need to first install Python on your computer.
Depending on your operating system, your computer might already come with some version of Python. If you're using macOS, then it comes with Python 2.7.
If you're using Linux, then the Python version depends on which distribution you are using. In Debian 10, it's also Python 2.7.
In Ubuntu 20.04, you have Python 3.8 under the command python3, but there is no python command.
And in Windows 10, you don't have any version of Python installed.
The Python version that you already have on your computer is often called System Python.
And no matter what system version of Python you have installed, I strongly suggest that you don't use it.
First of all, as we saw, it's terribly outdated.
Python 2.7 is no longer officially supported by the core developers, and hopefully you haven't been using it for a long time. You might be tempted to upgrade the system Python to Python 3, but you probably have some programs on your computer that require Python 2.7.
Even some parts of your operating system may need Python 2.7.
If you update system Python, those programs will stop working.
Let me repeat that because it's important.
If you change the Python version that comes preinstalled on your operating system, you risk that your computer will stop working altogether.
And that is not fun.
I did it in the past, when I didn't know much about programming, and I had to reinstall the whole operating system. So my advice here is to leave the system Python alone and pretend it doesn't exist.
|
|
show
|
1:10 |
So, no matter what operating system you have, you will need to install Python.
There are many different ways to do this.
You can go to the Python.org website and download the installer for any operating system.
You can use a package manager like Homebrew for Mac or apt-get on Linux.
Or you can even compile Python from the source files.
However, my favorite way of installing Python, which I've been using for a few years, is to use a tool called pyenv.
pyenv is a tool for managing Python versions.
You can use it to easily install new Python versions, but also to quickly switch between the different Python versions that you have installed.
It might not be a big deal if you only use one version of Python all the time.
But if you're working with multiple Python versions, this tool is a must.
While pyenv will work on macOS and Linux, if you're using Windows, check out pyenv-win.
It's a port of pyenv to Windows.
It might not have all the features that the standard pyenv has, but it has all the essential ones that I will be showing in this course.
|
|
show
|
3:20 |
Let's see how we can install pyenv.
If you go to the GitHub repo and scroll a bit down, you can see there is this installation section, and here you can see that if you're on a Mac, probably the easiest way is to use Homebrew.
Otherwise, you can check out the GitHub repository and just follow the steps you find there.
But probably the easiest way is to use the automatic installer.
This last option requires you to run one command in your terminal, so let's use it.
If you go to the pyenv-installer repo and scroll a bit down, you'll find the installation instructions.
So all you have to do is to run this command in your terminal.
If you want to see what the script actually does, you can open this file in your browser.
So let's install pyenv.
Copy this, and run it in my terminal.
So, as you can see, apart from installing pyenv, this installer also installed some additional plugins, for example pyenv-doctor that can be used to verify that the pyenv installation is working fine, pyenv-update that can be used to update pyenv, pyenv-virtualenv that can be used to manage virtual environments, and pyenv-which-ext that can be used to run commands installed outside of the current Python version, and things like that.
Those plugins are nice to have, but they are not necessary to use pyenv. To verify that pyenv is correctly installed, we just have to run the pyenv command.
If you see a list of available commands, it means that you are all set. If you didn't install pyenv using pyenv-installer, but you used Homebrew or you checked out the GitHub repo, make sure you follow the additional steps specified in the installation instructions.
So, after you clone the GitHub repo, make sure you export some environment variables.
And make sure that you execute pyenv init to enable auto-completion and things like that.
Once again, pyenv-installer does everything for you automatically, but if you follow those instructions, you should also be set.
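To put it all together, the installation and a quick sanity check look roughly like this in the terminal (the exact one-liner may change, so double-check the pyenv-installer README before running it):

    # install pyenv with the automatic installer (command from the pyenv-installer repo)
    curl https://pyenv.run | bash

    # verify the installation: this should print the list of available pyenv commands
    pyenv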
There is one important step that we need to do before we can install a new Python version, and it's to install Python build dependencies.
So, if we open this link, you will see that depending on your operating system, there are different build dependencies.
If you don't have them installed, pyenv will do its best to try to install them itself.
But sometimes it might fail.
So it's better to install them yourself.
If you are on a Mac and you're using Homebrew, just run this command in the terminal.
I have already done that, so I don't have to.
If you're using Ubuntu, CentOS, or Fedora, you have all the instructions here.
And once you do this, you are all set.
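As a rough example, on a Mac with Homebrew the build dependencies are installed with a single command like the one below; the exact list comes from pyenv's "Suggested build environment" wiki page and may change, and Linux users will find the apt/yum equivalents on the same page:

    # libraries pyenv needs to compile new Python versions (list from the pyenv wiki)
    brew install openssl readline sqlite3 xz zlib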
In the next video, we are going to install some new Python versions.
But just in case you run into some problems, there is a wiki page called Common Build Problems.
If you have some issues, just go here and maybe you will be able to solve them.
|
|
show
|
0:58 |
If you are a Windows user, then, as I said before, you should use pyenv-win instead.
If you go to the GitHub Repo, you will see the installation instructions there.
As you can see, again there are many different ways you can install it: you can use pip.
You can download the ZIP file.
You can use git, or you can use Chocolatey. After you finish the installation, make sure you check the notes in the GitHub repo, because there might be some important steps that you need to do after installation.
And if you want to verify that the installation was successful, run pyenv in your terminal, and if you see a list of commands, then everything is fine.
|
|
show
|
5:18 |
All right.
Now that we have pyenv installed, it's time to install some brand new Python versions. To see a list of which Python versions you can install, just run pyenv install --list.
You will see a huge list of available Python installations.
At the top of this list, you will see the standard CPython versions.
Those are the Python versions that we will be installing and using.
You can see we still have Python 2, and Python 3 all the way to Python 3.10-dev.
But apart from that, you can also see Anaconda, IronPython, Miniconda, PyPy, and so on.
So, all those different types of Python versions can also be installed with pyenv.
If you ever want to try, let's say, PyPy, this is the easiest way to install it.
So, let's say I want to install the latest stable version of Python.
Let's see, which one is that?
Okay, so we can use Python 3.9.0. So, let's copy it and let's run the pyenv install command.
This installation is going to take a while.
As you can see, pyenv is already using OpenSSL and readline that I have installed on my computer.
If you don't have those dependencies installed, pyenv will try to download them each time you install a new Python version.
So to speed up this process a bit, I suggest you go to the GitHub repo of pyenv and check out how you can install those additional dependencies.
So let's do this now.
If you go to the GitHub repo of pyenv and you scroll down a bit in the installation instructions, you will see point number 5: installing Python build dependencies.
This will take you to a wiki page where you can see how you can install those dependencies, depending on your operating system.
Since I'm using a Mac, I just have to run this one command. For Ubuntu, CentOS, or Fedora, you have different commands, and if you're a Windows user, I don't think there are any additional dependencies that you have to install.
At least I haven't found anything here. So, let's go back to our terminal, and it's done.
As you can see, it took almost four minutes, but at least there were no problems.
So, to see the list of available versions of Python, you can run the pyenv versions command.
As you can see, we have the system Python that I told you not to touch.
And now we have 3.9.0 that we can use.
This asterisk means that this is the current version we are using.
So if I run python --version, you can see I'm still using 2.7.
And if I want to switch to a different Python version, I just have to run pyenv global 3.9.0.
And as you can see, this doesn't work, which is actually great because I can show you how to troubleshoot pyenv.
So, when you install a new Python version with pyenv and you can't switch to it, it means that you should run the pyenv rehash command.
This command updates pyenv, so it's now aware that we have this 3.9 version and it's ready to use.
And if this still doesn't work, make sure that pyenv is actually correctly set up.
So, if we go back to the GitHub page, you can see that in step 3, we have to add pyenv init to our shell.
In my case, I forgot to do this, and I can see this by running the echo $PATH command.
You can see pyenv is not on this list, and my computer is using the default Python version.
So let's follow the instructions from GitHub.
I'm using zsh.
So I have to take this command and add it to my .zshrc file. Here I have some custom commands, so don't worry about them.
And here I am adding command to initialize pyenv.
I have to restart the shell and hopefully pyenv should work correctly now.
Yeah, it's working.
You can see that I have a different Python version.
That's because when I was debugging this issue, I installed yet another Python version.
So you can see I have already two pyenv versions and the system Python.
But now I can easily switch between those versions.
So, that's how we can easily install and switch between different Python versions.
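To recap, the commands from this lesson look roughly like this (3.9.0 is just the version used in the demo):

    pyenv install --list    # show every Python version pyenv can install
    pyenv install 3.9.0     # download and compile CPython 3.9.0
    pyenv versions          # list installed versions; the asterisk marks the active one
    pyenv global 3.9.0      # switch the global Python version
    pyenv rehash            # refresh the shims if the new version isn't picked up
    python --version        # confirm which Python is now active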
Next, we are going to talk about three different levels at which pyenv works.
|
|
show
|
3:47 |
When we use pyenv, we can choose one of three different levels at which we want to change the Python version.
First, we have the global command, that we used in the previous lesson.
This is the most common usage of pyenv.
It changes the Python version globally on your computer.
This is an equivalent of installing a different Python version.
Next, we have the local command. Running pyenv local will set a specific Python version for the current folder and for all its subfolders. You would commonly use this command if you're working on different projects and each of them requires a different Python version.
For example, one is using Python 3.8 and another is using Python 3.6.
Instead of changing the global Python version back and forth, you can just call pyenv local 3.6 in one folder and pyenv local 3.8 or 3.9 in another folder.
And you are all set.
pyenv will automatically switch Python versions when you go inside either of those folders. Let's see a short demo. As you can see, my global Python version is Python 3.9, and let's say I have another project that requires Python 3.8. If I go inside this directory and I run pyenv local and I specify a version of Python, you can see that now we're using 3.8.6. And if we run pyenv versions, you can see that this version actually comes from this directory, not from the global one.
How does pyenv keep track of what version it should use?
Well, it simply creates a file called .python-version and puts the version number inside.
Later, when you run the python command, pyenv checks: is there a .python-version file in the current folder?
If yes, use that version of Python.
If not, check the parent folder, and the grandparent, and so on, until it gets to the topmost directory.
If it doesn't find a .python-version file, it uses the global Python version. So, the pyenv local command overrides pyenv global. If we go out of this directory, you can see that our global Python version is still Python 3.9.
We didn't change anything. And finally, we have pyenv shell.
This changes Python version for the current shell session.
You might want to use it in a situation where you want to temporarily change which Python version you are using.
For example, maybe you want to run some code under Python 2.
So, here we are using Python 3.9 as a global Python.
But I can temporarily change the shell to 3.8, or even to the system Python, which in my case is Python 2.7. The pyenv shell command overrides the Python version set by pyenv global and pyenv local.
So, if we go to our Python 3.8 project, you can see that we are still using Python 2.7 that we set with pyenv shell. Unless you are working on multiple projects that use different Python versions,
pyenv global is the command that you will be using most often.
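Here is a small sketch of the three levels side by side, using the same versions as in the demo (the project folder is just a hypothetical example):

    pyenv global 3.9.0          # default Python for the whole machine
    cd ~/projects/client_app    # hypothetical project folder
    pyenv local 3.8.6           # writes a .python-version file used in this folder and its subfolders
    cat .python-version         # -> 3.8.6
    pyenv shell system          # overrides global and local, but only for the current shell session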
|
|
show
|
1:41 |
Most of the time, pyenv works without any problems.
But if you run into some issues like I did before, Here are some things that you can check.
It's possible that after you install a new version of Python, pyenv won't detect it right away. In this case, you can either restart your terminal, which will run the pyenv init command, or you can manually run pyenv rehash, and that should fix the issue. If something goes wrong when installing a new Python version, you can always uninstall it with pyenv uninstall and try to install it again.
And finally, if something goes wrong with pyenv itself, you can always uninstall it by following the "Uninstalling pyenv" instructions from GitHub.
And then you can install it again.
So here, as you can see, if you want to completely uninstall pyenv, you have to remove the pyenv directory and then run brew uninstall pyenv if you used Homebrew, or use the corresponding command for your package manager.
After you uninstall pyenv, everything goes back to how it was before: your operating system will go back to using the system Python or any other Python version that you had installed before.
This is all thanks to the fact that pyenv doesn't try to modify the existing Python versions, but uses a system of shims to insert its own Python versions before the system Python version in your PATH variable.
If you're curious how pyenv works, check out the next lecture.
If not, jump to the next part of this chapter where I will talk about managing Python dependencies.
|
|
show
|
2:39 |
In this bonus lesson, I will show you how pyenv works.
If you're not interested, you can just skip it.
So, to understand how it works, you first have to understand how the PATH variable works.
PATH is an environment variable, and it exists on pretty much any operating system.
It specifies where your operating system should look for programs and commands that you run.
When you display the content of the PATH variable, you will see that it contains different folders.
In this case, pyenv is the first directory.
But if we remove it, we go back to the default path variable that we have on our operating system.
So, for example, if I run python in the terminal, my operating system will check for a Python binary file in all the locations specified in the PATH variable.
So, first it will check /usr/local/bin. If it's not there, then /usr/bin, then /bin, /usr/sbin, /sbin, and so on.
If it can't find Python in any of those directories, it's going to throw an error.
If we run which python, you can see in which directory this binary is located; in my case, it was in /usr/bin.
When we use pyenv, it modifies the PATH variable: it adds the path to the current Python version installed with pyenv at the beginning of that variable.
Now, when you run python, your operating system will first check inside this pyenv/shims directory, and it will find a Python binary there, so it will run this one instead of using the system Python.
We can confirm this by running which python, and as you can see, it found the Python binary in the first directory, so it's not checking the other directories.
So, that's why it's important to put the pyenv folder at the beginning of your PATH variable.
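You can check this yourself with a couple of commands (the exact paths will look different on your machine):

    echo $PATH           # the pyenv shims directory should appear first
    which python         # e.g. ~/.pyenv/shims/python when pyenv is active
    pyenv which python   # the real binary behind the shim, e.g. ~/.pyenv/versions/3.9.0/bin/python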
To better reflect how pyenv works, the folder with the Python binary is called shims.
A shim is a small tool that sits between the python command in your terminal and the Python binary that is executed.
Thanks to that, pyenv is completely separated from any Python version that existed on your computer before you installed it.
If you ever mess up something and pyenv stops working all you have to do is to remove it and install it again.
This is very convenient because, for example, if you ever mess up something with the system Python version, your computer will probably stop working, and that's much harder to fix.
|
|
show
|
0:45 |
If you're using other programming languages, you might be happy to hear that there are similar tools to pyenv for those languages. For example, there is nodenv that you can use to easily switch between different Node versions, and there is goenv that you can use if you're using Go.
And there are other tools for other languages as well.
All of them work in a very similar way to pyenv.
And if you're using really a lot of programming languages, there is a tool called asdf-vm that you can use to replace all those separate tools, so you don't have to install nodenv and pyenv and goenv; you can just install this one and use it to manage different versions of different programming languages.
|
|
show
|
5:19 |
You often hear that Python comes with batteries included.
This means that it comes with plenty of packages already installed.
This is called the Standard Python Library.
You can go to the Python Module Index website, and there you can find the list of all the packages that come with Python.
So, without installing anything, you can just import them in your code.
But what makes Python truly amazing is the Python Package Index.
This is a list of over 260,000 packages created by other developers.
If you find a module that you want to use, all you need to do is to run pip install and the module's name, and this will download and install that package for you. Now you can import this module in your code and start using any function that it offers.
With so many Python packages, you can quickly build pretty advanced projects simply by combining different packages together.
However, pip has one big problem.
You can only have one version of a given package installed on your computer.
Whenever you ask it to install a specific version of a Python package, it will uninstall the previous version from your computer and install the one that you asked for.
Let me show you an example of what I mean.
Let's say you are a web developer and you want to build a Django website. So, you create a folder for the application and then you start by installing the latest version of Django with the pip install django command.
As you can see, this installs Django 3.
Everything works great.
You build an awesome website and it's actually so good that soon you get a customer coming to you and asking to fix their Django website.
Their Django website is still running on Django 2, so you go out of this folder, you create a new one, and then you install Django 2 by specifying pip install django==2.2. pip does what you asked for, and after a few seconds you start working on your client's website using Django 2.2.
So far, so good.
But later that day, you discover a bug on your personal website, the one that you created with Django 3.
You quickly fix the code, but when you want to test that it's working correctly, you get an error message saying that Django 3 is not installed.
What?
Wait.
But we just installed it a few days ago, where did it go?
Well, when we told pip to install Django 2.2, pip first checked if we already had Django installed, and we did.
But it was not the 2.2 version, so pip uninstalled that version and installed the correct one.
As you can see here, it says uninstalling Django 3 and successfully installed Django 2.
And if we check the list of packages installed on our computer, you will see that we no longer have Django 3.
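The whole scenario boils down to a few commands; here is a sketch with the versions from the example:

    # personal website
    pip install django           # at the time, this installed Django 3.x

    # client's website, a few days later
    pip install django==2.2      # pip uninstalls Django 3.x and installs 2.2 in its place

    pip list                     # only Django 2.2 is left; Django 3 is gone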
So, we just ran into a problem with dependency management.
All the Python packages that you install with pip are called dependencies, because the projects you are building depend on them.
pip installs those dependencies in the site-packages folder and puts each package in a separate folder named after the package.
So, when we installed Django 3, it was placed in the site-packages/django folder.
But when we tried to install Django 2, it had to be placed in the same folder, so pip had to first remove what's inside the django folder and then install a different version of Django.
If you're only working with one Python project on your computer, then you are probably not affected by this problem.
But sooner or later you will need to install a different version of a package, and you're going to run into issues with pip uninstalling some previous dependencies.
Dependency management problems are not specific to Python.
Basically, any programming language that allows installing external modules has to face this issue, and different programming languages solve it in different ways.
For example, developers who use Ruby on Rails install their packages with a tool called Bundler.
And when Bundler installs a new dependency, it puts it in a folder named after the package name and version.
So it's possible to have two different versions of a package.
For example, here you can see we have activesupport 5.2.3, 6.0.2.1, and so on.
JavaScript, or more specifically Node, the server-side version of JavaScript, solves this problem by installing all the dependencies inside a folder called node_modules inside of your project.
So, each project on your computer has its own folder where the dependencies are stored. Right now, in Python, pip doesn't have a good solution for this problem.
However, there is a proposal to use a local packages directory in the same manner as JavaScript developers do.
If it gets implemented, pip would be installing all the dependencies in a folder inside of your project, not globally.
This proposal is still a draft, but maybe in a future version of Python this dependency problem will be solved.
But for now, most Python developers solve this problem using virtual environments.
|
|
show
|
1:27 |
The problem with pip is that it installs all the packages in the same folder.
So, how about we tell pip to temporarily install packages in a different folder, and then we tell our Python interpreter to use that folder instead?
Well, that's exactly what a virtual environment does.
A virtual environment is a special folder that contains a Python binary and any additional packages that you install.
When you activate a virtual environment, two things happen.
First, you tell pip to install any new packages to that folder, and then you tell Python interpreter to use packages from that folder.
Let's see an example.
So, when we are not using a virtual environment and we tell pip to install Django 3, it's going to install it in the global site-packages.
And then, when we tell pip to install Django 2, it's going to uninstall the previous version of Django and install the new one, as we already saw.
If we use a virtual environment, first we activate a specific folder that we want to use.
Then we tell pip to install Django 3, and as you can see, it's going to be installed not in the global site-packages but inside this Django 3 application's folder. Then we activate a different virtual environment.
And when we tell pip to install Django 2.2 this time, it's not going to uninstall anything, because it's going to use site-packages from a different folder than previously.
|
|
show
|
2:13 |
Let's see virtual environments in action.
So, first we need to create one.
Python has a built-in module called venv to manage virtual environments, so we don't even have to install anything.
We create a new virtual environment with the command python -m venv.
And then we specify the name of the virtual environment.
The -m parameter tells Python to call the built-in venv module, and the venv module takes one parameter: the name of the virtual environment.
This created a folder called my_first_virtualenv in the current directory.
my_first_virtualenv is a silly name for a virtual environment.
But I just wanted to show you that you can name it however you want.
A much more common convention is to name this folder venv or .venv.
This makes it obvious that whatever is inside this folder is related to a virtual environment.
But it also has another benefit.
Some Python editors, like PyCharm or VSCode, will automatically recognize this folder as a virtual environment and will start using it in your project.
Okay, so we created a virtual environment.
But how do we use it?
Inside our folder, there is a bin directory, and there we have an activate script.
Since it's a bash script, we have to source it with the following command: source activate. If you are not using bash but, for example, the Fish shell, you have to source the appropriate file, in this case activate.fish.
After the activation, you can see that my prompt has changed.
And even though I'm using a custom prompt, if you're using, let's say, bash or just plain zsh, you should also see the name of the virtual environment displayed.
Let me show you.
Let's say I'm using bash, and you can see it's here. So it's easy for you to tell if you're inside the virtual environment or not. Let's go back.
And if for some reason you don't see the name of the virtual environment in the prompt, you can quickly check that you are inside the virtual environment by running pip -V.
If it's pointing to a pip binary inside the my_first_virtualenv folder, then it means that you are using the virtual environment.
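Here are the commands from this lesson in one place (bash/zsh syntax; Fish users source activate.fish instead):

    python -m venv my_first_virtualenv        # create the virtual environment folder
    source my_first_virtualenv/bin/activate   # activate it (notice the changed prompt)
    pip -V                                    # should point inside my_first_virtualenv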
|
|
show
|
1:28 |
So, we created and activated a virtual environment.
We know that it's active because the prompt in our terminal has changed.
But how do you work with this virtual environment?
Well, now we do everything that we would normally do when building a Python project. If you install a package with pip, it will be installed inside this virtual environment.
You can actually see that currently we don't have any packages installed by running the pip freeze command.
Let's install Django here.
When we do pip freeze again, we can see that we have Django installed.
If you want to stop using a virtual environment, you just need to run the deactivate command in your terminal.
When you call deactivate, it will revert all the changes that the activate command did.
So, it will go back to using the global Python version and global pip packages. If we now run pip freeze, you can see that we no longer have Django 3 that we just installed.
We are outside of a virtual environment.
So now we're back to the global packages and here we have Django 2 that we installed in the previous lessons and some other packages like requests.
Finally, if you ever want to get rid of a virtual environment, you can just delete the folder where the virtual environment was created.
In our case, if we want to remove my_first_virtualenv, we simply delete that folder.
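A quick summary of the workflow from this lesson, assuming the environment we just created:

    pip freeze                   # empty right after creating the environment
    pip install django           # installed only inside the virtual environment
    pip freeze                   # now shows Django and its dependencies
    deactivate                   # back to the global Python and global packages
    rm -rf my_first_virtualenv   # deleting the folder removes the environment completely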
|
|
show
|
2:27 |
Let's see how we would use virtual environments in our daily work.
Typically, you would create one virtual environment for each of your projects.
So let's come back to my example with two projects using different Django versions, I have created two folders one called my_django_app and second one called client_django_app.
We will go inside both folders and create a virtual environment there.
Here is a quick tip.
I told you that it's a common practice to name the virtual environment folder venv or .venv; that way your code editor will automatically detect it,
and also you will immediately see that this folder contains stuff related to the virtual environment. But if you have many projects and you switch between them, it can be very confusing to always see venv as the name of the virtual environment.
You might install the wrong packages in the wrong environment by mistake. So, to make it easier to immediately tell which environment is active, you can pass the --prompt parameter.
This will change the name of the virtual environment without changing the folder's name.
So, as you can see, we have a folder with the virtual environment called .venv, but when we activate it, the name of the virtual environment is actually mydjango3app. So, let's go and create a virtual environment for the other folder as well.
All right, we're all set.
We have two projects.
Each of them has its own virtual environment.
Now let's say I want to work on this Django 2 application for my client.
I go inside the project's folder and I activate the virtual environment.
I do some coding.
Maybe I install some packages and stuff like that.
At the end of the day, maybe I want to work on my personal Django project.
So I deactivate this one, and I switched to the other folder and I activate the other virtual environment.
Again, I do some coding here.
I install some other packages, and then the next day, I switch back to my client's application.
And if I forget to run deactivate, that's fine.
When I activate another environment while the previous one is still active, the previous one is automatically deactivated first. So, when you are switching from one virtual environment to another, you can skip the deactivate command.
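The setup for the two example projects looks roughly like this (the folder and prompt names are just the ones from the demo):

    cd my_django_app
    python -m venv .venv --prompt mydjango3app       # .venv folder, but a readable prompt name
    cd ../client_django_app
    python -m venv .venv --prompt clientdjango2app
    source .venv/bin/activate                        # work on the client's app
    cd ../my_django_app
    source .venv/bin/activate                        # activating this one deactivates the previous one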
|
|
show
|
10:04 |
venv module is perfectly fine for managing virtual environment, but I want to show you another tool that I have been using for a long time.
It's called virtualenvwrapper, and it comes with a lot of cool features that make working with virtual environments much easier.
virtualenvwrapper is available on Linux and macOS.
If you're on Windows, check out the virtualenvwrapper-win or virtualenvwrapper-powershell.
They're both ports of virtualenvwrapper to Windows.
One will work with the standard terminal and the other one will work with PowerShell. However, be aware that virtualenvwrapper won't work if you are using pyenv. Let me show you why.
So, only follow the next steps if you are not using pyenv. And for all the pyenv users watching this, I will show you what you can use instead in a moment.
So, if you're not using pyenv and you want to install virtualenvwrapper, we can do this, for example, with pip.
Let's copy this.
Okay, next we also have to add some environment variables to our shell. No matter if you're using bash or zsh, scroll down a bit and then copy those three lines and put them either in your .zshrc or .bashrc. Since I'm using zsh, I'm going to edit .zshrc.
I just put a note that this is for virtualenvwrapper. Okay, now we can actually restart zsh, and it should work, except that it's not working.
So I told you this is not going to work with pyenv, because virtualenvwrapper was actually installed in a different directory.
We can check it using the which command.
As you can see, virtualenvwrapper was installed inside our pyenv directory.
And even if we use this one, if we try to source it, it's actually going to fail.
Okay, the first time it works, and the next time it's actually going to crash our shell.
If you do this, you will no longer be able to start a new shell.
So, don't do this.
That's why I told you to not follow those steps if you're using pyenv.
So, if you are not using pyenv, virtualenvwrapper should now be installed for you. And if you are using pyenv, let me show you what you can use instead.
So, virtualenvwrapper is not going to work for you.
But there is a plugin for pyenv called, well, pyenv-virtualenvwrapper.
When you install it, you will get the same set of commands that virtualenvwrapper provides.
So let's do this now.
Installing pyenv plugins is easy.
You just have to clone the GitHub repository inside your pyenv/plugins folder.
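The installation is a single git clone; this is the command from the pyenv-virtualenvwrapper README (double-check the repo for the current URL):

    git clone https://github.com/pyenv/pyenv-virtualenvwrapper.git $(pyenv root)/plugins/pyenv-virtualenvwrapper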
Once this is done, let's scroll a bit down to see the usage.
So, we have to run the pyenv virtualenvwrapper command to initialize this plugin. If it's not working, just restart your shell. And this time it worked.
So now you have access to all the commands that virtualenvwrapper provides.
And speaking of those additional commands, let's see what you can actually do now.
To create a new virtual environment, you just have to call the mkvirtualenv command and the name of the virtual environment.
And as you can see, it's automatically activated.
You can create more virtual environments.
You can deactivate it.
To list all the environments, you can run lsvirtualenv.
You can see we have two virtual environments, and to remove them, just run rmvirtualenv, and that will actually delete both of them.
The difference between the venv module that comes with Python and virtualenvwrapper is that for virtualenvwrapper, it doesn't matter in which folder you run your commands.
It will create all your virtual environments in a special directory in your home folder called .virtualenvs.
So, we can go there. Here we don't have a virtualenv yet.
You can see we just created a new virtual environment, and that created the folder here. So your virtual environments are no longer stored together with your projects; they're all stored in one folder.
Let's go back to our example with two Django applications.
And as you can see, I'm in the Django 2 virtual environment, so I can go to client_django_app, and I can actually pip install some stuff and start working on it.
If I want to switch to this other Django app, I don't have to go to a specific folder.
I can run workon, for example, in this work directory, and as you can see, I automatically switched to the Django 3 environment. If I go to this other Django app, I don't have this .venv folder here.
All the virtual environments that I created with virtualenvwrapper are stored in my home directory, in this .virtualenvs folder.
I no longer have to remember the full path to the activation script.
I can just call workon and provide the name of the virtual environment, and I automatically switch between them.
If I want to quickly check something, I can create a temporary virtual environment, install some packages there, play a bit with it, and then deactivate it and it gets deleted.
Creating, activating, listing and removing virtual environments are the most popular commands that you will be using.
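Here they are in one place; by default virtualenvwrapper keeps the environments in a folder in your home directory (controlled by the WORKON_HOME variable):

    mkvirtualenv django3    # create and activate a new environment
    lsvirtualenv            # list all environments
    workon django2          # switch to another environment from anywhere
    mktmpenv                # temporary environment, deleted when you deactivate it
    deactivate
    rmvirtualenv django3    # delete an environment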
But if you go to the website of virtualenvwrapper, you can see that there are many more interesting commands.
Let's go to the command reference, and here you will see there is a command to make a temporary virtual environment.
It will be automatically deleted when you deactivate it.
You can see the details of a single virtual environment.
You can copy an existing environment to another one, for example, when you have a bunch of pip packages already installed and you don't want to reinstall them by hand: you can just copy the existing virtual environment.
There is the allvirtualenv command that you can use to run the same command in all environments, for example, when you want to update the version of some package in all the existing virtualenvs.
To see the list of available commands, you can run virtualenvwrapper in your terminal.
One last thing to keep in mind: if you're not using pyenv, then when you start a new shell, virtualenvwrapper commands will be available for you.
But if you are using pyenv, they won't be.
You always have to run pyenv virtualenvwrapper first to initialize it.
So if you find it annoying, you can, for example, add this command to your .zshrc, so it's automatically executed when you start your shell.
Let's take a look at the list of pros and cons of both venv and virtualenvwrapper. First, venv.
The biggest advantage is that it works out of the box.
It's a built-in module, so once you install Python, you can start using it right away, and it creates virtual environments in the same place where your project is.
So, when you delete your project, you automatically also delete the virtual environment.
And finally, if you name it venv or .venv, your code editor will automatically detect it and start using it.
On the other hand, if all virtual environments are named venv or .venv, it's very easy to confuse them.
So, you either have to use different names or use the --prompt parameter when you create a new virtualenv. And when you want to activate it, you need to remember the full path to a specific virtual environment.
On the other hand, for virtualenvwrapper, you don't have to remember in which location you put the activation script.
You just use this one command, workon, and the name of the virtual environment.
virtualenvwrapper will automatically detect where it's stored.
This makes it very easy to activate any virtual environment in any folder on your computer. As I said, you don't have to remember the whole path to the activation script.
You just run the workon command and virtualenvwrapper comes with a set of commands that makes managing virtual environments much easier.
On the other hand, if your virtual environments are not stored in the same folder as your project, you have to point your editor to the correct virtual environment; it won't detect it automatically.
Although, for example, VSCode is smart enough to know that virtual environments can live in the .virtualenvs folder in your home directory, so it will try to detect them and display a list of existing virtual environments.
So even though it's not as easy as automatically detecting which virtualenv to use, at least you don't have to type the whole path. You just get a list of the existing ones, and you select the one that you want to use.
And since your virtual environments are separated from your projects, if you delete a project folder from your computer, you also need to remember to remove the virtual environment.
This is not really a big deal, since virtualenvs don't really take that much space; I usually remove them when I forget what the purpose of a specific virtualenv was. In the end, it's up to you to select which one you want to use.
I'm using virtualenvwrapper because I find it much more convenient to not have to remember in which folder I placed a specific virtualenv.
But venv is much easier to start with, so it's very common to use it at the beginning, and when you feel comfortable with using virtual environments, then maybe you can switch to a separate tool.
|
|
show
|
2:53 |
So far, we solved the problem of how to install different Python versions on our computer and how to separate dependencies in our projects.
However, there are some Python packages that you would probably like to use globally on your computer.
For example, there are code linters like flake8 and pylint, or code formatters like black.
I will talk more about those tools in the next chapters. Or maybe you even want to install the virtualenvwrapper that we used in the previous lessons.
We want to use those Python packages across all our projects, or even outside of any project.
For example, we would use virtualenvwrapper to first create a virtual environment before we start coding.
Or we could use black to format some random Python script.
There is no point in installing those global packages inside of a virtual environment, because then, to actually use them, you would always have to first activate the virtual environment, then run the tool, and then deactivate the virtual environment.
I mean, you can do this, but it's a lot of typing, and generally it's a waste of your time. A common way to install a Python package globally is to just run pip install outside of a virtual environment.
This will work fine at the beginning.
But the more global packages you install, the bigger the chance that some of their dependencies will conflict with each other.
For example, black will install version 1 of some library, but flake8 will require version 2 of this library, so it will reinstall it.
And suddenly black is no longer working and you have a mess.
It's the same situation as when we tried to install Django 2 and Django 3 in the previous lessons.
This problem can be solved with a tool called pipx. pipx installs Python packages in separate environments, but at the same time, those packages act as if they were installed globally.
You don't have to activate any virtual environment to use them.
You can install pipx with pip, so let's run pip install pipx.
After you install pipx, make sure you check the installation instructions from the documentation, because there is one more thing that you have to do.
We have to run the 'pipx ensurepath' command.
If you want to have auto-completion in your shell, you can also run the pipx completions command.
It will show you what code you have to add to your bash, zsh, or any other shell to make sure that auto-completion is working.
And also don't forget to restart your shell or start a new terminal for the path changes to take effect.
So let's start a new zsh session and see if pipx is correctly installed.
Let's just run pipx without any command.
Yep.
If you get this list of commands, it means everything is ready to use.
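The setup steps from this lesson, in order (check the pipx documentation for the currently recommended install command):

    pip install pipx     # or: python -m pip install --user pipx
    pipx ensurepath      # make sure pipx's binary folder is on your PATH
    pipx completions     # prints instructions for enabling shell auto-completion
    # restart your terminal, then:
    pipx                 # should print the list of available commands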
|
|
show
|
5:11 |
Now that we have pipx installed, each time we want to install a Python package that should be available globally, instead of running pip install and the name of a package, we're going to run pipx install and the name of the package.
So, there is just one letter of difference in the command.
Let's say I want to install black.
I can do this with pipx install black and now I can use black.
You can see that after installing black, I have three new global commands: black, black-primer, and blackd.
So, now if I run the black command, it doesn't matter if I'm inside a virtual environment or not.
I will always use this global black package.
And if I want to install a different version of black, let's say inside of my virtual environment, I can still do this.
Let's create a temporary virtual environment and let's install a different version of black inside of it.
So, first I will activate virtualenvwrapper.
Then I can run mktmpenv.
This will create a temporary virtual environment that will be deleted when I deactivate it.
So, we are inside of it.
We have no pip packages installed.
Let's install black version 19.
So, there is no 19.
There is 19.3b0.
Oh, right.
So, now if we try to run black --version to see which black we're using, you can see that we're still using the global one. So, it's important to remember that if you have a problem with using a version of a package inside of a virtual environment, always run pyenv rehash.
So, now we will be using the black version installed inside of this virtual environment.
When we deactivate it, we are back to our global black version.
So, what else can I do with pipx?
Let's see the list of commands. We have install, which we just used to install a package.
To uninstall a package, there is uninstall, and if you want to uninstall or reinstall all packages, there are uninstall-all and reinstall-all.
upgrade is a useful command if you want to upgrade a single package, and if you want to upgrade all the packages installed with pipx, you can just run upgrade-all. Another really cool command is pipx run.
It will run a command from a pip package in a temporary virtual environment, and it will immediately destroy this environment after the command exits.
This is very useful when you know that you want to run a given command only once in your life.
So, instead of doing pip install, then running the command, and doing pip uninstall, you just do pipx run and you specify a command from that package.
Let me show you an example.
This is a silly example, but let's say we want to run a command from this cowsay package that prints ASCII art of a cow and some text you specify. As you can see, we just ran a command from the cowsay package, but if we check the list of installed packages, cowsay is not installed anywhere.
As I said, this is a very useful command if you want to run a one-off command without cluttering your computer with redundant packages. Another very useful command is inject.
This lets you install a pip package inside of an existing virtual environment.
If it sounds confusing, don't worry.
For a long time I didn't understand why would I install a package inside of an existing environment, if I actually want to isolate my packages, right?
Well, this is useful when you're using pip packages that have plugins, but those plugins are separate pip packages.
A great example here is pytest, the most common testing library for Python, which I will talk more about in chapter 8.
But right now you need to know that pytest has many plugins, and to use those plugins you have to first install them.
So, let me show you.
First, we need to install pytest.
And now let's say we want to install the pytest-cov plugin that will display the test coverage of your code.
How do we install this plugin?
Well, we can't run pipx install pytest-cov, because this will install our plugin in a separate virtual environment.
So, we need to install it in the same environment where we installed pytest.
We could figure out where this virtual environment is located, manually activate it, and then run pip inside.
But there is no need for that.
Since we have the inject command, we can run pipx inject pytest pytest-cov, where the first argument, in our case pytest, is the name of the existing package, and the second argument is the name of the package that you want to add to that environment.
As you can see, it said: injected package pytest-cov into venv pytest.
So, now we can run pytest with the --cov parameter that comes from this new plugin that we just installed.
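To sum up the commands used in this lesson (the package names are just the ones from the examples):

    pipx install black              # isolated environment, but 'black' is available globally
    pipx list                       # show everything installed with pipx
    pipx upgrade-all                # upgrade every pipx-managed package
    pipx run cowsay moo             # one-off run in a throwaway environment (cowsay's arguments depend on its version)
    pipx install pytest
    pipx inject pytest pytest-cov   # add a plugin into pytest's existing environment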
|
|
show
|
2:33 |
Do you remember, how we were setting up VSCode in the previous chapter?
And after we installed Python extension VSCode asked us if we want to install the pylint linter.
At that time we chose yes, but normally I don't like to do this.
I don't like to install packages that way, because VSCode will install them somewhere, but usually I don't know where.
So, I prefer to install packages myself and then point VSCode to use those packages.
Now that we're using pipx to manage our global Python packages, we can tell VSCode to use those pipx packages.
All we need to do is to install pylint with pipx and then find a path to this pylint binary.
So let's go to the terminal.
As you can see, I have already installed pylint, so I can run which pylint and this should give me the path to the pylint binary.
So, I take that.
I go back to VSCode and I go to the settings.
I don't remember what the setting name is.
So, let's search for 'pylint path', and that should be it.
Yeah, so you can see there is a section for Python linting, and you can specify paths to different linters.
Here's pylint.
So, instead of using the pylint command, I want to explicitly use the pylint from pipx.
We change this, we save our settings and we quit.
So, let's see if pylint will actually work.
Now let's make some mistakes.
Let's enable linting and nothing happened.
So, let's run linting. Actually, still nothing happened.
I'm running out of options here. Okay, so let's actually select the linter.
So, as you can see, we have the undefined variable 'surname', and this error comes from pylint.
I was actually expecting it to complain about having an unused variable and importing a module without using it, but maybe it requires some additional configuration.
Nevertheless, this is how you can use VSCode with packages installed with pipx.
All you have to do is to find the appropriate setting.
In my case, it was the path to the pylint binary, but it could be a path to some other binary.
And then you have to change the setting to point to the package from pipx.
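As a sketch, the whole setup is: install the linter with pipx, find its path, and paste that path into the corresponding VSCode setting (at the time of this recording the setting is called python.linting.pylintPath, but verify the exact name in your version of the Python extension):

    pipx install pylint
    which pylint    # e.g. ~/.local/bin/pylint - this is the path to put into the pylint path setting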
|
|
show
|
3:48 |
It was a very long chapter.
But if you're watching this video and you followed all the steps, you should now have a rock solid set up for your future Python projects.
Let's quickly recap what we did and how it's going to help us.
First, we installed pyenv, so we can easily install different versions of Python and switch between them.
So with pyenv, to install a new Python version, you can use pyenv install and the name of the version, and then you basically use one of three commands.
If you want to change the global Python version on your computer, then you're gonna use pyenv global.
If you want to change the Python version for the current folder and all its subfolders, you're gonna use pyenv local.
And finally, if you want to set a specific Python version for the current shell session, you're gonna use pyenv shell.
The main benefits of using pyenv are, first, you don't mess with the Python version that is already installed on your computer.
You should never update the system Python, because there are other programs that depend on this specific version.
If you update it, they will break.
Once you have pyenv installed and working, you are almost guaranteed that installing a new version of Python through pyenv will be successful. There might be some problems with your first Python version, because you might be missing some dependencies, but once you sort that out, it's going to be much easier to install new Python versions.
The biggest benefit is that you can install multiple versions of Python and instantly switch between them.
This is super useful if you're working on different projects that use different Python versions.
Because if you quickly need to switch, let's say, from Python 3.9 to 3.6, it's just one command away.
And if you mess something up, you can easily reinstall pyenv: you just remove the .pyenv folder and you install pyenv again.
Or you can completely uninstall pyenv, and you'll be back to whatever version of Python you were using on your computer before.
Next, we saw how to use virtual environments, so we will no longer have problems where installing one Python package uninstalls some other Python packages.
How you are going to use virtual environments depends on whether you're using the built-in venv module or one of the other tools, for example the virtualenvwrapper that I recommended.
But basically, in both cases, you first need to create a virtual environment.
Then you need to activate it to start using it.
And when you are done or when you want to switch to a different virtual environment you have to deactivate it.
Some of the benefits of using virtual environments: starting with Python 3.3, venv is a built-in module in Python, so it's the default standard for managing dependencies in the Python world.
Virtual environments isolate packages between your projects, so each Python project on your computer has its own set of dependencies, and they can all live in harmony.
And if you mess something up, you can always delete a virtual environment and create a new one.
Finally, we installed pipx, so our global Python packages are nicely isolated from each other.
We no longer risk that when we install a new Python package globally, it will mess up some other packages.
We could solve this problem by always using virtual environments, but it's very inconvenient.
So when you use pipx, the only difference is that now you have to install global packages with the pipx install command instead of the usual pip install.
Main benefits of using pipx are that it isolates global packages.
So you no longer risk that installing one global package will uninstall dependencies for some other global package.
And it's pretty transparent to use: when you install a package using pipx, you can use it as you would normally use that package if it was installed with pip.
With all those tools set up, we are ready to start building Python projects in the next chapter.
|
|
|
18:24 |
|
show
|
3:33 |
What do you do when you want to build a new Python application?
Do you stick with the same tools and techniques that worked fine in the past 10 projects that you build?
Do you use this new cool library or framework that you just found recently, or do you search for the best Python library for X?
Each of these options has some drawbacks.
If you stick with the same tools that you have been using for years, you are playing it safe.
If something worked for you 10 times, there is a huge chance that it will work fine this 11th time, and since you are using a framework or a library that you know very well, you will be more efficient and you will build it quite fast.
But you might be missing out on a new tool that could help you build your application even faster.
A library that has some features that you need, so you don't have to write them from scratch, or a web framework that is 100 times faster and could save you a lot of money, because you could deploy your application on a much cheaper server and still handle all the incoming traffic.
And maybe there are no tools and techniques that worked fine in the past 10 projects, because this is your first Python project, so you are not sure what to use.
So maybe you decide to try a new framework this time, something that you found on Hacker News or some other website where programmers share the cool things that they have built.
It looks very promising.
According to the benchmarks, it's faster than all the other frameworks.
It has beautiful documentation, and the GitHub repository looks very active.
Every new issue is addressed almost immediately.
It's a great framework, you think, and you start using it in your project.
Fast forward one year.
It turns out that the creator of the framework no longer has time to work on it, so no new version has been released for half a year.
You had to fork this framework because there were some security vulnerabilities that you had to fix.
At some point, you decided to finally give up and rewrite your project in a different framework, something that has been around for long enough so there is a much smaller chance that it will be abandoned.
If this was your personal project, then selecting a wrong framework will only cost you time.
But if that was a project at work, now you have to explain to your boss why instead of working on those new features, you will have to spend a few weeks rewriting this application in a different framework.
And that means losing money.
If there is one thing that I've learned about companies, it's that they don't like losing money.
Instead, we should look for a middle ground.
There might be some libraries or a framework that are much more suitable for your new project than what you have used in the past.
At the same time, you need to make sure that whatever library you choose will still work in the next few years.
So, what people commonly do is to search for recommendations from other programmers.
We look for phrases like 'fastest Python web framework' or 'best GraphQL library for Python'.
Then we start doing our research.
We read articles.
We compare the number of stars on GitHub for each recommendation, etc.
This is a good approach.
The more time you spend doing the research, the more likely you are to choose the best tools.
But sometimes the number of alternatives can be overwhelming, and it might be hard to choose one.
Some people recommend one framework.
Others say that it's bad and you end up confused.
Wouldn't it be better to have a recommended solution for a specific problem, like a recommended way to build a Python library or a recommended setup for a Django application with user authentication and some common features?
Well, I don't have a perfect solution.
I have a solution that worked well for me, so chances are that it will work well for you.
The solution is called cookiecutter.
|
|
show
|
0:36 |
Cookiecutter is not a tool to cut cookies.
We are still talking about programming.
Cookiecutter is a Python CLI tool that generates a scaffolding for a Python project from a template. This scaffolding will contain some files to help you start working on your project faster.
For example, it might install and configure pytest, the most popular testing library.
Or if it's a Web application, it might add the bootstrap library, so your website will look much nicer out of the box.
What features it adds depends on which template you use, so let's see an example.
|
|
show
|
8:00 |
Let's say we want to build a Django website.
Don't worry if you don't know anything about Django.
All you need to know is that Django is the most popular Web framework in Python.
So, first we have to find the right template.
We scroll down to the 'pantry full of cookiecutters' section of the GitHub page.
And here it says that the best place to start searching for specific and ready-to-use cookiecutter templates is GitHub search.
So, we click this link and we are presented with almost 5000 different GitHub projects related to cookiecutter.
You don't have to check all of them.
Usually, only the first few pages are useful, so we can sort this list by most stars.
And now let's look for a Django cookie cutter.
The first result is the cookie cutter package itself, but the second one is actually a cookie cutter for Django.
Since Django is a very popular framework, it's not surprising that Django cookie cutter template is the most popular one.
It was created by PyDanny.
He's the maintainer of the cookie cutter library and a very prominent Django developer who wrote the best Django books available on the market.
By using his cookie cutter template, we are taking advantage of all his experience and best practices, so that's a pretty great way to start a project.
Let's click on this cookie cutter.
At the top, we can see a list of features, and it's impressive.
It's using one of the latest versions of Django and Python, which is good.
It adds an SSL certificate.
It adds a very good Django package for user registration.
It uses Amazon S3 storage for all the images and other media files in our project.
It creates docker-compose files, so we can deploy our application with Docker, and a special file called Procfile if we want to deploy to Heroku.
So out of the box we have a bunch of tools and plugins that we would have to add to our projects at some point anyway.
Great.
But how do we actually use this cookie cutter template?
Let's scroll down to the usage section.
First, we need to make sure that we have cookie cutter installed on our computer. Cookie cutter is one of those tools that I want to install globally, so I will use pipx to install it.
Next, I need to run the cookie cutter command and pass the name of the GitHub repository with the template.
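In my case, the two commands look roughly like this (the repository URL is the one shown in the template's README at the time of recording):

    pipx install cookiecutter
    cookiecutter https://github.com/cookiecutter/cookiecutter-django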
Let me just copy this command from GitHub. And now the most important part: cookie cutter will ask you a couple of questions to customize your project.
Some of them are purely cosmetic.
For example, the name and email will only be added to the documentation, but some of them are important.
You can choose between different types of databases, enable some optional settings, etcetera.
All those questions were prepared by the Creator of this specific cookie cutter template, so each template will have a different set of questions.
Let's quickly go through them.
So first there is a project name.
Let's keep the default value of my_awesome_project.
Now the project slug.
This will be used to create the directory.
Let's keep it the same. Next is the description; let's change it.
This is one of those cosmetic changes that will go mostly into the documentation. Next is the author_name.
Well, this we have to change.
We are not Daniel Roy Greenfeld, unless this is actually Daniel following this tutorial, but probably not.
Domain name, I don't have a domain, so let's keep the default for now.
Email.
We have to change.
Here, we have to select the license.
Depending on which one you choose, a different license file will be created for you.
Let's go with the default MIT license.
The next option probably means whether you are working on Windows or not; if you are, there will probably be some additional settings.
By default, it's set to not use Windows, so let's keep the default.
If you're using PyCharm, select y, like yes. I'm not using PyCharm, so I will go with no. Let's say that we actually want to use Docker.
Now we have to select which version of the PostgreSQL database we want to use.
The latest one is the default, but maybe you want to use an older one.
If you want to run some JavaScript tasks, you can select a task runner here. I probably don't want to, so let's go with none.
Next is the cloud provider; let's go with AWS.
Here, you can select the mail service in case you want to send emails to your users. Depending on which option you select, this template will generate some initial configuration that you can use. Let's go with Amazon.
Next, whether or not you want to use async; let's use the default. Then, whether or not you want to use the Django REST framework; no.
As you can see, there are a lot of questions.
If you don't understand some of them or you're not sure what to choose, use the default option.
Okay, after we answer the last question, cookie cutter creates a folder with the scaffolding of our new project, and we can start working.
So let's see our new folder.
When you go inside the folder created by the cookie cutter, you might be overwhelmed with the number of files that were created.
And that's understandable.
Cookie cutter Django and other popular frameworks try to be as generic as possible, so everyone can find a configuration for their specific needs.
This results in a lot of configuration files for different tools, so the best place to start is usually the README file.
It explains how to use this template.
Let's open this one.
As you can see, the Read Me file is written in the RST format, and I don't have a plugin to correctly display it.
But let's not deal with that for now.
Let's see what we have here.
We have a section on settings.
We have a basic commands.
So here it explains that you will need to create a superuser to actually be able to log in.
Then if you want to run type checks, you can just use mypy.
If you want to run tests or check the test coverage, you can run all those commands.
And then there is an instruction that sends you to the Docker section explaining how to develop your application using Docker.
If, like me, you don't have the RST plugin, you can push this code to a GitHub repository and GitHub will nicely display this README for you.
Don't worry if you don't know what some of those things mean. You don't have to use all of them.
You can completely ignore those things and for now, start working on your Django website as you always do.
If you want to learn more about how to use this cookie cutter template, we can go to this documentation website.
Here we actually have the page about developing locally with Docker, but as you can see, there are many more pages: getting up and running locally, settings, linters, testing, documentation, and stuff like that.
If you want to start working on your Django website, you would now do everything that you would do in a normal Django project.
But because we use the cookie cutter template, we have the latest version of Django.
We have some packages for user authentication, and Bootstrap is already included.
Also, the setup for Amazon S3 is here, and we are using the latest pytest for testing.
And we have a coverage plugin to show the test coverage for our code.
So all those things that a lot of Django projects would have to set up anyway are already done for us.
It saved us a lot of boring work, and now we can focus on actually building our cool website.
|
|
show
|
1:18 |
I want to give you a quick tip on how I use cookie cutter templates.
I've noticed that it's much easier to remove things from a project than to write everything yourself.
So what I usually do is I take a very popular cookie cutter template and I generate a project with all the options enabled or disabled according to what I need.
If I don't know how to configure something, I leave the default value.
And then once the project is generated, I go through all the features and I start removing what I'm not going to use. For example, in this Django project, let's see what we have.
There are a lot of files, but, for example, if for some reason you are not going to use pytest, you can remove the pytest.ini file.
If you're not going to use GitHub Actions, you can remove the GitHub Actions folder. If by accident you selected Docker during the configuration, but you're not actually using Docker, you can remove all those Docker-related files:
.dockerignore, the compose folder, and so on.
And if you notice that you always remove the same parts of a cookie cutter template you might consider writing your own template.
That way you can customize what features your projects will have.
So, let's talk about writing our own templates in the next lesson.
|
|
show
|
2:23 |
So far, we saw how to use an existing cookie cutter template to generate a project.
But all those templates that we saw on GitHub were created by someone.
So if you want, you can create your own template too. Why would you write your own template?
You might do this when none of the existing ones seems like a good choice.
If you're always building very specific projects, maybe you need your own template from scratch.
Or maybe you like one of the templates.
But you want to change a few things.
For example, you want to use a different library here and there.
Or maybe you always remove the same feature from a given template, and you would prefer to have a template without those features in the first place.
You can't change the template that belongs to someone else, but you can always fork it, make the necessary modifications and use that version instead.
So how do we write a cookie cutter template?
If you go to the documentation of the cookie cutter, there is a section called Create a Cookie Cutter from scratch.
Unfortunately, it's very basic, and it has this to be continued sentence at the end, so maybe by the time you're watching this, it has much more detailed information.
But basically, the simplest cookie cutter template is just a folder with this weird name: {{cookiecutter.project_slug}}, in double curly brackets.
You put some files inside that folder and you publish it on GitHub.
Now when you run the cookie cutter command with the GitHub URL, it will replace this weird {{cookiecutter.project_slug}} folder name with the real name of your project, and inside you will have the same files that you put on GitHub.
But of course, cookie cutter can do much more.
You can replace the content of some files or file names with variables that you provide when generating the project.
You can also remove some files.
For example, when the template lets you choose between different database engines and you choose MySQL instead of Postgres, then there is no point in keeping the Postgres settings file, so you can safely remove it.
If you want to learn how to create your own cookie cutter template, I can recommend another course from the Talk Python catalog called "Using and Mastering Cookiecutter".
Apart from a very good explanation of different features of the cookie cutter library, one of the chapters shows you step by step, how to write your own template from scratch.
And as you can see, it takes over one hour, so there's a lot of things that you can learn.
|
|
show
|
2:34 |
Cookie cutter is one of the best answers to one of those How to build X in Python questions.
You want to build a Django website?
Great, use cookie cutter Django.
Flask or FastAPI? There are templates for those frameworks too.
Want to publish a package on PyPI, but you have never done this, so you are not sure how to start?
Well, there is a cookiecutter-pypackage template that you can use. Using cookie cutter can give you a head start when building a new Python project.
Instead of setting up everything by hand, you can use one of the existing templates.
One of the main benefits of using a cookie cutter is that it removes a lot of the boring work that you need to do when you set up a project.
When you start building a Django website, you probably need to add some plugins for authentication, maybe install pytest with some plugins, and set up continuous integration to make sure that your tests are running automatically on GitHub.
That's all boring work that you have to do for every project, and I'm pretty sure that, just like me, you would prefer to actually build that website instead of dealing with all this stuff, especially since it's easy to miss something and then waste even more time debugging some misconfigured settings.
Cookie cutter deals with this problem for us.
You answer some questions, and it immediately generates a project with all those things already set up for you.
And it's not only useful at the beginning of the project, but also at the end, when it's time to deploy your application or publish your package to PyPI. With cookie cutter, we usually have everything already set up for that.
It's also very easy to create or modify a cookie cutter template.
The simplest template is just a folder with some files.
But if you need something more advanced, cookie cutter got you covered with its simple templating language.
Cookie cutter helps you set some good and safe default settings for your project.
The most popular templates have been used and reviewed by thousands of developers.
If there was something wrong, or if someone noticed a security vulnerability, it has probably already been reported and fixed, so you can be sure that you are starting your project following the best practices. I like to think about cookie cutter templates as getting feedback from hundreds of developers who contributed to that template.
Free feedback from people who have been writing Python packages or Django applications for years.
And the best part is that you don't have to argue with them on what's the best way of doing something.
It has already been established, and by using a specific template, you just follow those recommendations.
|
|
|
43:02 |
|
show
|
0:34 |
In the previous chapter, we created the scaffolding for our project.
In this one, we will talk about how to manage your project as it grows.
If the phrase project management has a negative association for you, don't worry.
I'm not going to talk about gantt charts, milestones and deadlines.
Instead, I will talk about the technical aspects of managing your project.
I will focus on the following issues: how to structure your project, how to run some typical tasks like tests, builds and so on, and how to manage dependencies for your project.
|
|
show
|
3:27 |
Let's first talk about how to structure your Python project.
Unfortunately, there is no perfect way to do this.
What might work for one project won't work for a different one.
And since Python doesn't enforce any specific structure, you can organize your project however you want.
You might be wondering, Why does a good project structure matter?
First of all, a good project structure makes it easier to understand the project.
If you put files in random folders or throw hundreds of files inside one folder, it's going to be difficult to understand how this project works and what the connection between all those files is.
And it's going to be especially difficult for new developers on your team.
So as your project grows, you sometimes need to refactor the file structure.
For example, if you have a lot of scripts related to managing users of your application, maybe it's time to put all of them inside the folder called Users.
Understanding how the project works is very important, but a bad project structure can actually cause some import errors.
They usually result in a ModuleNotFoundError or an ImportError.
To understand why we get import errors, let's quickly see how imports work in Python.
When you import a module, Python will look in three places.
The first place is the folder with the Python file that you are running.
Let's say we have a Python file called start_server that is located inside the /my_module/scripts folder.
This file imports some functions from a module called utils.
When you run this file with the python command, the utils module has to be inside the scripts directory.
Otherwise, Python won't be able to find it.
It's slightly different when you don't run a script, but instead start an interactive Python session by running the python command in your terminal.
If you try to import some Python modules in this interactive session, Python will look inside the current folder.
If Python can't find the module that you want to import, it will then check the additional paths from the PYTHONPATH environment variable.
By default, this variable is empty, but you can use it to specify some additional paths for importing Python modules.
And finally, there are some installation dependent folders that Python will check.
Installation dependent means that they depend on what operating system you use, but also on how you installed Python.
Since we're using pyenv, it will contain some folders from the pyenv directory.
That's a lot of places to check, but there is an easy way to list all the places where Python will look for modules.
All you need to do is run import sys and then sys.path.
This will print the list of folders that Python will check if you try to import some modules.
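As a rough sketch, such a session could look like this (the exact folders will be different on every machine, so treat these paths as placeholders):

    >>> import sys
    >>> sys.path
    ['',
     '/usr/lib/python38.zip',
     '/usr/lib/python3.8',
     '/usr/lib/python3.8/lib-dynload',
     '/home/user/my_project/.venv/lib/python3.8/site-packages']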
As you can see here, we are not using a virtual environment, so Python will look for pip packages inside the global site packages.
But when we are inside the virtual environment, the last item on this list has changed to the virtual environment.
Let me show you a better comparison.
As you can see, first we have the current directory.
Then we have three folders related to my operating system, and then I have another folder that changes depending on whether I'm using the virtual environment or not.
This one is pointing to site-packages.
Checking the sys.path is often the first step of debugging import errors. And speaking of import errors, let's talk about the two most common ones in the next lesson.
|
|
show
|
2:34 |
When you try to import a module and Python can't find it in any of the three places listed in the previous lesson, it will result in a ModuleNotFoundError.
When that happens, the first thing that you should do is check the sys.path and see if it contains the directories that you think should be there.
If it doesn't, the best solution is to check the current structure of your project and move files around.
If you can't do this, you can also modify the PYTHONPATH environment variable to include your directory.
And if none of those works, you can also modify the sys.path list directly.
This is a very hacky solution, but it's usually the fastest one, especially if you want to add the parent directory of the current folder.
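As a minimal sketch, those last two options look like this (the paths are just placeholders):

    # Option 1: extend PYTHONPATH before starting Python
    #   export PYTHONPATH="/path/to/my_project:$PYTHONPATH"

    # Option 2: the hacky one - append the parent directory of this file to sys.path at runtime
    import os
    import sys

    sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))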
Another common problem is circular import.
It happens when two different modules try to import something from each other and Python gets stuck.
Let's see an example.
We start by executing module_a. module_a imports function_b from module_b, so Python starts executing module_b.
But it turns out that module_b needs a different function from module_a. Since module_a is still not fully loaded, Python fails to import this function and you get an exception.
If you're using Python 3.8, you will actually see a nice error message saying that it's probably a circular import error.
If you are using an older version of Python, you'll get a pretty generic error message saying "ImportError: cannot import name function_b".
To fix those circular import errors, you need to refactor your code and make sure that different files are not trying to import functions from each other.
Typically, this can be solved by adding another file.
So, here we have the main.py file.
It imports function_b from module_b and module_b imports function_a from module_a and that's it.
There are no more imports here.
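A minimal sketch of that refactored layout (the function bodies are just placeholders matching the diagram):

    # module_a.py
    def function_a():
        return "a"

    # module_b.py
    from module_a import function_a

    def function_b():
        return function_a() + "b"

    # main.py - the only file that imports from module_b
    from module_b import function_b

    print(function_b())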
Solving circular import errors here was easy, because we had only two files.
But usually solving circular imports in your projects is gonna be much more complicated, because it will involve multiple files.
So, they can be quite hard to fix.
We can avoid a lot of those problems by carefully structuring our projects.
A project consisting of only one Python file will be structured differently than a huge web application consisting of hundreds of files.
So, let's take a look at three different sizes of a typical Python project.
|
|
show
|
2:57 |
Let's start with a very small project that consists of only, one Python file and maybe some additional files, like Read me, requirements, license, etcetera.
If you're using cookie cutter that I showed you in the previous chapter, then you might have some other files.
If the cookie cutter template is good, then it will explain, what's the purpose of each of those files?
As you can see in this case, managing this project is easy.
You pack everything inside one folder, and that's it.
The only file that is actually required is the simple.py file that contains all our Python code.
We might not even have the other files, but they're useful, so let's see what each of them does.
A license, as the name suggests, contains the LICENSE text of your code.
If you want to let others use your project or build something cool on top of it, you need to choose a license that explains how they can use it.
You might be thinking, Well, I don't really know anything about licensing, and they all sound complicated, so I'll just not select any license, and people can do whatever they want with this code, but that's not how licensing works.
If you don't specify any license, the default copyright applies, and the default copyright law says that you retain all rights to the source code, and no one can reproduce, distribute or create derivative work from this code.
So, basically without specifying the license, no one can use your code.
If you're planning on open sourcing your code, make sure to select the correct license.
Next we have Makefile.
I know that Makefile sounds like some scary Linux magic, but they're actually not that bad.
Makefiles are a great way to organize tasks for your project.
For example, if you're building Docker Images, then instead of memorizing all the parameters and file paths to different configurations, you can turn them into a build task and save it in a Makefile.
Or, if you are running pytest with a lot of parameters, you can save all those parameters as the name of a specific task.
For example, a unit test.
I will show you an example of a Makefile later on.
The next file is the README file that contains all the information about this project. You can use a plain text format, or if you prefer nicer styling, you can use Markdown.
And if you're hosting your code on GitHub, it will format your README file accordingly.
The README should contain the most important information for everyone who is seeing your project for the first time, together with links to additional resources, like the documentation.
Next, we have requirements file.
If your project depends on some additional packages, you will need to specify them somewhere.
The most popular file format is a plain text file called requirements.txt.
simple.py is the file that contains the actual Python code of your simple project.
And finally, if you add some test for your simple application, it's common to put them in a file called tests.py
|
|
show
|
1:39 |
As you add more features to your project, the single Python file starts growing, so, at some point you will have to split it into separate files.
You might still use one main file, but you move all the helper functions into separate files and store them inside a folder, like here.
Or maybe you move all the Python files inside one folder, like here on the right. Also, as the number of tests grows, you will probably split your tests into separate files and move them inside a folder called tests.
So, now your project looks more like one of those two options.
They're both quite similar, so let's discuss the project structure on the left.
Now we have two new folders: medium and tests. The medium folder contains all additional Python files.
You will usually name this folder after your application. So, if my application is called medium, because this is a medium-sized project, then this folder is also called medium.
We still have a main.py file that we use to start our application, but this main.py imports additional functions from files inside the medium directory. I have also moved all the tests inside the directory called tests.
And here I have split all the tests into separate files, depending on what they are testing.
As you can see, we have a nice separation.
We have the main folder that contains our main Python file and all the additional files for our project.
We have the medium folder that contains all the Python files related to our application, and finally, the tests folder that contains all the tests.
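So the layout on the left looks roughly like this (the file names are just examples):

    my_project/
    ├── main.py
    ├── README.md
    ├── requirements.txt
    ├── medium/
    │   ├── __init__.py
    │   └── helpers.py
    └── tests/
        ├── test_helpers.py
        └── test_main.py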
Everything has its own place.
|
|
show
|
1:45 |
As your project grows bigger, you might need to create even more files.
And to organize those files, you will need to create even more sub folders.
So this time, let's take a look at the real world example.
Here we have an example Flask project from the Flask documentation.
Don't worry if you don't know what flask is, it's a Web framework, just like Django.
We don't need to know how flask works to understand its file structure.
As you can see, we still have a folder with tests.
We still have some top-level files like MANIFEST.in or setup.py, just like we had the requirements.txt and README files before.
The main difference is that flaskr, so the main folder with our application, now contains subfolders.
But again, it's nothing magical.
We simply put files that belong together inside folders.
We have a folder called templates, where we put HTML files for different Web pages of our application.
Every HTML file related to the blog is located inside the blog folder, and every HTML file related to authentication is stored inside the auth folder.
Also, all the static files are stored in the static folder.
That's a good structure that can take you a long way.
But of course, it's not the only possibility.
If you feel that you don't like it and it actually gets in your way instead of helping you, you might want to structure your project differently.
Those three projects that I showed you, so the simple one, the medium-sized one and this advanced one, are a typical progression of a Python project as it grows. Some cookie cutter templates might use a different structure, and you might like it better.
So, before you start writing code, take a moment to think and plan how different parts of your application will interact with each other.
|
|
show
|
5:51 |
When we talked about the simple project, I showed you a Makefile. A Makefile is a file containing a set of instructions for the make build automation tool.
For example, if you go to the CPython repository, you'll see a Makefile there.
You can use it to build CPython on your computer from the source.
And as you can see, it's huge.
Back in the day, it was more popular to build packages from source on Linux, but now we have other tools that make installing packages much easier.
Quite often, even though you don't see the make command being run, it's actually running in the background. For a long time, I was scared of Makefiles because I only used them to build packages on Linux, so the Makefiles that I saw were long and full of bash commands that I didn't understand.
On top of that, if some dependencies were missing, the make command would fail and I had to search on Stack Overflow for what random package I had to install to make it work.
So my first impression was that Makefiles are complex and scary and I don't want to use them. But then I worked on some projects that used Makefiles for much smaller tasks, and I realized that you can write a useful Makefile without knowing anything about bash scripting.
I don't want to go too deep into the topic of Makefiles, but I just want to show you that they're quite easy to use.
They are a perfect solution if you need to group some commands that you would run in your terminal.
They're also useful if you're passing a lot of parameters to a command: instead of remembering all those parameters, you can write them down in the Makefile.
Let me show you an example of a simple Makefile.
At the top, we have .PHONY.
This is an optional instruction, but it's quite an important one.
If by any chance you have a file called, let's say, init in your folder and you run make init, make will look at that init file instead of executing the init command from this Makefile, and it will most likely just tell you that init is up to date and do nothing.
So, to prevent that, every command that you specify in a Makefile should be added to this .PHONY instruction.
Then you define the make commands: you specify the command name followed by a colon.
And then in the next line, you write down what commands should be executed.
To run those commands in your terminal, you just need to type make and the name of the command.
Something that is very important, and probably confusing to Python programmers, is that Makefiles use tabs, not spaces.
So if you run into an error saying "missing separator. Stop.", it means that your Makefile contains spaces instead of a tab. That usually happens when you copy and paste code from somewhere, but when you write your Makefiles by hand and you press the tab key, you should be fine.
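A stripped-down sketch of the Makefile from this lesson could look like this (the docker and pytest commands are just placeholders for whatever your project needs, and the line under each target must start with a real tab character):

    .PHONY: help init build run test unittest

    help:
    	@echo "Available commands: init, build, run, test, unittest"

    init: build run
    	@echo "The application is up and running"

    build:
    	docker-compose build

    run:
    	docker-compose up -d

    test:
    	pytest tests

    unittest:
    	pytest -vv -m unit --no-cov tests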
Our first task is called help, and it's used to simply print the instructions on how to use this Makefile.
If I run make help, it will print those five commands, explaining what each of them does.
Next, we have the init command.
Its main purpose is to set up everything for your application.
So, if a new developer comes to your project, you can tell them: just run the make init command in your terminal, and you will have the whole application up and running in seconds.
As you can see, this init command is running two other make commands, build and run, but it's also running some other shell commands.
build and run both run some Docker commands.
Don't worry.
If you don't know what they do, I will cover some basics of Docker at the end of this course.
And finally, we have two commands related to tests.
The first one simply runs pytest on the tests directory.
So, if you run make test in your terminal, it will in turn run the pytest tests command.
If later you decide to use a different framework for your tests, you just need to modify the test command in the Makefile.
Your end users will still run make test, so you don't even have to update the documentation.
That's another great advantage of using Makefiles.
They standardize a set of commands between projects.
If I go to a random project on GitHub and I see that it has a Makefile, I'm like 99% sure that if I run make test, it will run tests for that project.
I don't even need to know what testing framework this project is using.
I have also put one more test command here.
This one is running pytest, but with a bunch of additional options and parameters.
So in this case, I want to run pytest and make the output very verbose.
That's the -vv parameter.
I also want to only run tests marked as unit, and I don't want to generate a coverage report. If that's a command that I run often, then instead of remembering all those options, I can just define a task called unittest and then run make unittest. It's much less typing and much less memorization.
If you want to try using a Makefile in your next project, you might be wondering how to get make.
If you're on Linux, chances are that you already have it installed, because make is often required to install some packages.
If not, you can easily find on the Internet how to install it. On Ubuntu, you just need to run apt-get install make.
If you are on a Mac, you can either install it through Xcode by running xcode-select --install, or you can use Homebrew and run brew install make. If you're on Windows, make will be available when you install the Windows Subsystem for Linux, which is a very popular tool if you're a developer working on Windows.
Otherwise, you can install Cygwin, which will give you a lot of Linux tools.
And if you use the Chocolatey package manager, you can just run choco install make, and this will also work.
|
|
show
|
0:32 |
We covered project structure and how to run some common tasks.
The last important part of project management is dependency management.
The easiest way is to have one file where you list all the dependencies for your project.
By convention, this file is usually called requirements.txt.
You pass this file to pip by running pip install -r with the path to the requirements file, and pip will install all the dependencies from this file.
Don't forget to activate the virtual environment before you do that.
|
|
show
|
3:16 |
There are three common ways to specify a dependency inside the requirements file.
First is to specify the name of the package, and which version you want.
This notation is using something called semantic versioning, where each package specifies three numbers.
First one is the major version.
Then we have minor version, and finally we have the patch version.
The major version usually means that there are some big changes in a package, and they can break some functionality.
So, you need to be careful when you update the major version.
Minor version is not that drastic, and usually it means adding new features without breaking the old ones, although it can happen that some features will be removed between minor versions.
But in this case, it's a common practice to display a warning, saying something like, Hey, this feature is going to be removed in the next version, so you better be careful.
And finally, the last number is a patch version.
Whenever a serious bug or a security vulnerability is fixed in a given package, a new version of this package is released and this last number is incremented.
It's not only safe to update your package to the latest possible patch version, but it's also recommended to do this as soon as possible, because it means that the previous version has a bug that someone might exploit.
Version numbers are quite flexible, and you are not limited to a specific version.
You can use the less than (<) or greater than (>) operators to specify that you want to have a Django version of at least 2.2 or newer.
Or you can say that you want to have a Django version that is less than 3.0, because you know that your application won't work with Django 3.
It's also very common to combine those two conditions.
For example, you want to install Django 2.2 or higher, but still less than Django 3.
So, you combine both operators.
Another common scenario is to stay on a specific minor version.
For that, you can use the compatible release (~=) operator.
It's equivalent to saying: use any version of Django 3.1, as long as it's higher than or equal to Django 3.1.2.
So, if Django 3.1.3 is released, this version will be used.
When Django 3.1.4 is released, then again, that version will be used.
But if version 3.2.0 is released, it won't be used, because we said to stay on Django 3.1. And finally, in a similar way, you can say that you want to use a specific major version.
So, when a new minor or bugfix version is released, it will be automatically installed.
But when a new major version of Django is released, it won't be automatically used.
If you want to always install the latest version of a package, you can skip the version number.
For example, here we want to always use the latest version of pytest.
And finally, not all packages are available on PyPI; maybe some of them are stored on a private PyPI server.
Or maybe you want to install the latest version directly from GitHub.
That's also possible.
You can specify a GitHub branch or a tag, and pip will install that specific version of the package from GitHub.
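To put all of those notations in one place, here is a sketch of the different forms a line in a requirements file can take (treat these as alternatives, not as one real file, since the version numbers are only examples):

    Django==2.2.4              # exactly this version
    Django>=2.2                # version 2.2 or newer
    Django<3.0                 # anything older than 3.0
    Django>=2.2,<3.0           # at least 2.2, but below 3.0
    Django~=3.1.2              # stay on 3.1.x (3.1.3, 3.1.4, ... but not 3.2)
    Django~=3.1                # stay on 3.x (3.2, 3.3, ... but not 4.0)
    pytest                     # always the latest version
    git+https://github.com/django/django.git@stable/3.1.x#egg=Django   # straight from a Git branch or tag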
|
|
show
|
1:15 |
Another common practice with requirements files is to have a separate set of dependencies required to run your application and a separate set of dependencies used only by developers.
For example, your production server doesn't need to run pytest, so there is no need to install pytest at all.
But you, as a developer, need to have pytest to run tests.
So you make two requirements files. One is called requirements.txt, and it contains the dependencies that you want to install everywhere, so on the production server, but also on your computer.
And then you create another one called requirements-dev.txt that installs all the dependencies from the requirements.txt file and some additional ones, like pytest and its plugins.
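As a sketch, the two files could look like this (the packages are just examples):

    # requirements.txt - installed everywhere, including production
    Django>=2.2,<3.0

    # requirements-dev.txt - installed only on developer machines
    -r requirements.txt
    pytest
    pytest-cov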
You use this file on your computer when you develop your application. With this separation of requirements, you only install the necessary packages on your production server.
There is no need for you to have pytest installed in production, and the fewer packages you install, the lower the risk that some of them will have a security vulnerability that hackers can use.
It will also be slightly faster to install packages on your production server.
|
|
show
|
1:02 |
In the previous lesson, we saw that we can tell pip to install whatever latest version it can find, or to install a specific version.
Installing a specific version is called Pinning Dependencies.
And always make sure to pin dependencies on your production servers, all of them.
If you don't, you might install different package versions in production than the ones you used during development, and those different versions might not work properly.
And you won't realize that they are not working properly, because when you ran your tests, you used different versions of the dependencies, so your application will be broken.
And even if you have 100% test coverage, you won't know that.
So always pin your dependencies on the production server to the exact patch version.
So no Django>=2.2 or Django<3. Be very specific: install Django==3.1.2, because this is the version that you tested and you know that it will work.
|
|
show
|
4:12 |
How do we pin our dependencies?
Do we go and do this by hand?
And then each time a new version is released, we go back to the requirements.txt and update all those versions by hand.
No, that would be a terrible waste of time.
There are automatic tools that can do this for us.
One of them is called pip-tools, and that's the one that I use most of the time.
pip-tools is a combination of two different tools, one called pip-compile and another one called pip-sync.
pip-compile is the important one, so let's talk about it first.
Basically, what pip-compile does is take an input file with dependencies and generate a list of pinned dependencies for you.
This input file can be either a setup.py or a requirements file. Notice that the requirements in the input file can be as specific as you like.
You can pin some dependencies, but you don't have to.
You can simply provide the name of a package, and it will work fine.
When you run the pip-compile command, pip-tools will take that input file and generate a new list of requirements, this time with each dependency pinned to the most specific version possible.
Let's see this in action.
Let's use a simple requirements.in file with just two packages.
I want to use Django, but it has to be at least version 2.2 and less than 3.
And I want to use the latest version of pytest. If we run pip-compile requirements.in, we get a new file called requirements.txt, this time with Django and pytest and all their sub-dependencies.
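For reference, here is roughly what that looks like (the pinned numbers in the generated file depend on when you run the command, so treat them as examples):

    # requirements.in
    Django>=2.2,<3.0
    pytest

    $ pip-compile requirements.in
    # generates requirements.txt with every package pinned, for example:
    #   django==2.2.16        # via -r requirements.in
    #   pytest==6.1.1         # via -r requirements.in
    #   pytz==2020.1          # via django
    #   ...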
So every single package is pinned to the most specific version that is available.
As you can see, most packages use semantic versioning with three numbers.
But, for example, pytz is using a different format, and pip-tools detected that. As an additional benefit, we can see in the comments where each package comes from.
We see that pytest comes from the requirements.in file, the packaging library comes from pytest, and finally, pyparsing comes from the packaging library.
So pip-tools has pinned all the possible dependencies. And if two different packages require the same dependency, like here, where importlib-metadata is required by both pluggy and pytest, pip-tools will try to find a version that satisfies both requirements.
If it can't find such a version, it will report an error.
And that's it.
Now you take this requirements.txt file, and you tell pip to use it.
That's the requirements file that you will use on your production server and on your test server, to make sure that everything works fine.
If in the future you see that the new version of Django was released, you can just rerun, pip-compile, and it will create a new set of dependencies.
It's a good idea to run pip-compile as part of your continuous integration pipeline.
That way, your dependencies will be up to date with the latest patch fixes.
But at the same time, your tests will be run on those new dependencies, so you can see if something breaks.
And if something breaks because, let's say, the latest version of Django has a bug, you can go back to the requirements.in file and pin the Django version, so you won't use the latest version until you notice that the bug was fixed.
This probably won't happen often, especially for such a popular package like Django.
But smaller Python packages can sometimes have bugs in the latest version, so keep that in mind.
pip-tools comes with another command called pip-sync.
You can use it to synchronize which dependencies are installed in your virtual environment, with the list of dependencies in the requirements.txt file.
Running pip-sync is equivalent to uninstalling all the pip packages and installing them again from the requirements file.
So, this tool is just for convenience and can save you some typing, but I usually don't use it.
If you want to see some more examples of how to structure requirements, check out the GitHub repository of pip-tools.
It has a lot of examples of how you can use it.
|
|
show
|
1:10 |
Working on a Python project involves a lot of different activities.
First, you need to create a virtual environment, and you need to remember to actually activate it, so you don't accidentally install pip packages globally on your computer.
Then you need to manage your requirements files and pin all your dependencies.
You also need to remember to regularly update those dependencies, but at the same time, you need to make sure that you use the exact same versions of packages both on your local computer and on the production server.
So, for that, you install a tool like pip-tools to pin your dependencies, and then you run pip to install them.
So far, so good.
But if by any chance you're creating a Python package that you're planning to publish on pypi, then there is a bunch of other tasks that you need to take care of.
You need to create a setup.py file.
You need to build your package and send it to the PyPI server, and all of that requires you to remember another set of commands.
"It would be nice to have one tool that can do all those tasks for us," thought some Python programmers, and so many different tools were actually created.
But two of them gained the most popularity: pipenv and poetry.
|
|
show
|
0:59 |
pipenv and poetry are project management tools.
They provide you with commands to set up a project, install dependencies and, in the case of poetry, to publish it as a package on PyPI.
They will also automate some of the tasks.
For example, they will automatically create and activate a virtual environment for you, so you don't have to remember to do this yourself.
Which one you should use is up to you.
I will show you how to use poetry, because it supports publishing packages on PyPI. That's a huge benefit, because it saves you from writing a setup.py file by hand.
I mean, writing a setup.py file by hand is not extremely difficult, but it's nice to have a tool that can automate this task for you.
Another benefit of poetry is that it's newer than pipenv, so it's built on top of some lessons learned from pipenv.
But of course, if you prefer to use pipenv, keep using it. I know people who are using it, and they are quite happy with it.
|
|
show
|
4:47 |
Let's see how to use poetry to build a Python project.
First, we need to install it.
Since poetry is one of those packages that you want to install globally on your computer, because you probably always want to use the latest version,
I will use pipx to install it.
Once we have it installed, we can run poetry new and the name of our project.
This will create a poetry demo folder with some files.
The most important one is the pyproject.toml.
This is the main configuration file for our project when we use poetry.
Now, let's add some dependencies to our project.
We could modify the pyproject.toml by hand, but that's not the point of using a tool like poetry; we installed it so that we don't have to manually edit files.
So instead, you can use the CLI command to add a new dependency.
Let's go with the same example that we have been using so far and Let's add pytest and Django.
First, pytest.
We want to use the latest version, so we can run poetry add pytest.
This created a virtual environment for us, and then it tried to install pytest, but we got an error. Based on the error message, we can see that there was a problem with resolving our dependency, because we are trying to install pytest 6.1, and at the same time something is requiring pytest 5.2. So what can we do?
Well, let's take a look at pyproject.toml to see what's inside. As you can see, when we create a new project with poetry, poetry adds a pytest dependency to this project, but it's version 5.2.
And we want the latest one, which at the time of recording is 6.1.
So we can, for example, change the version directly here.
Now that we have modified the pyproject.toml directly, we have to run poetry install to install and resolve those dependencies.
Oh, let me retype this.
All right, so this time everything worked fine.
And we have this nicely colored output saying which dependencies have been installed.
Having conflicting dependencies can sometimes happen, and the best way to solve this problem is to either pin or unpin some of the versions, depending on which version you actually need.
Now let's try to add Django.
This time I want to use the latest version of Django 2.2, but not Django 3, just like we did in the examples so far.
I forgot to add the quotes around the version specifier. This time we're luckier: poetry resolved all the dependencies for us.
When we run poetry install for the first time, a poetry.lock file is created.
It's a slightly more complicated version of the requirements.txt file that we were creating with the pip-compile command in the previous lessons.
This poetry.lock contains pinned dependencies, so the next time you run poetry install, poetry will install those exact versions of Python packages.
So, don't forget to add poetry.lock to your git repository.
Behind the scenes poetry is using a virtual environment for us, even though we don't really see it.
As long as you use the appropriate poetry commands, it will use that virtual environment, so we don't run pip install to install dependencies, but we run poetry install.
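To recap, the commands used in this lesson boil down to something like this (poetry-demo is just the example project name from the video):

    pipx install poetry
    poetry new poetry-demo
    cd poetry-demo
    poetry add pytest
    poetry add "Django>=2.2,<3.0"
    poetry install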
Now that we have installed Django, let's quickly create a simple Django website to see that it's working.
Don't worry if you don't know Django. The next few commands that I will run will create an empty Django project and start the web server, so we can open a browser and see that we have successfully created an empty Django website.
As you can see, I'm not running the django-admin command directly, but I'm running poetry run django-admin.
That way our command is run through poetry, and it's using the virtual environment, plus all the dependencies that we installed with poetry.
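The commands I ran are roughly these (mysite is just a placeholder project name):

    poetry run django-admin startproject mysite .
    poetry run python manage.py runserver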
We have a Web server running, so we can open it in the browser to see that everything is working fine.
And there we have it.
Great, it's working.
And if we run pip freeze, you can see that there are no dependencies installed globally.
Okay, I have some packages, but they are required by pipx.
But as you can see, there is no Django here and there is no pytest. Poetry is keeping everything in a virtual environment for us, and we don't even have to manually activate anything.
|
|
show
|
1:27 |
One last thing that I want to show you, is how to build a Python package with poetry.
Normally, when you want to create a Python package by hand, you go to the Python packaging User guide, and you follow instructions from there.
First you need to create a setup.py file.
Then you need to install setuptools and wheel, which you will use to generate the sdist and bdist_wheel distributions.
Then you need to install another tool called twine, and finally you use that tool to upload your package to PyPI. The instructions are very detailed and rather easy to follow, but there is no way I would remember all those steps off the top of my head, unless I built and published a lot of Python packages, which I don't.
Let's compare this to using poetry.
First we run poetry build.
Then we run poetry publish, and that's all. Two commands with the most obvious names possible: build and publish.
As you can see, the publish command asked me for a username and password for PyPI.
So if you want to publish your package, you need to first go to the PyPI website and create an account.
This example Folder contains some rubbish files, so I'm obviously not going to publish it.
But at the end of this course where we will be building a real Python application I will show you how you can build and publish a PyPI package, so stay tuned.
|
|
show
|
3:33 |
Apart from poetry, another good project management tool is pipenv.
It doesn't have the functionality to publish to PyPI, but it has all the other features.
It automatically creates virtual environments, and it resolves dependency versions.
If you are a data scientist, or if you work on Windows, you might already be familiar with the next tool.
It's called conda, and it's a Python project management tool combined with a Python package installer.
And as you can see from the documentation, you can use it not only for Python but also for other programming languages.
conda does not use pip, so it doesn't install packages from PyPI, the Python Package Index.
Instead, it installs packages from its own server, and those packages are always in the binary format.
What it means is that they're all bundled with their dependencies.
When you install a package using pip, it's not always in a binary format, so pip will try to build this package from the source files on your computer.
If you are missing some dependencies, and I don't mean Python dependencies but rather some Linux tools required to compile that specific package, pip will fail to install it, so that's a bummer.
On the other hand, if you use conda, it downloads a package, and that package contains all the dependencies, which means that it will always install. But it has to be first built by someone and pushed to the conda repository.
It's not a problem for the most popular packages, but some less popular ones might not be there.
So, you have to build them and push them yourself.
In general, if you're a Python programmer, I recommend that you stick with pip instead of conda.
But if your team is already using conda, or if you really struggle with installing packages using pip, conda can be a good alternative.
I'm not a Windows user myself, but I have seen a lot of people using conda on Windows because it makes installing packages much easier.
So there's nothing bad about using conda, as long as it gets the job done.
Just keep in mind that conda is maintained by an external company, not by the Python Software Foundation, so there is no guarantee that one day they won't simply disappear or that they won't make you pay to use their tool.
But that's something to keep in mind for each external tool and dependency.
Next, we have flit.
If you're looking for a tool just to help you publish your projects on PyPI, check out flit. flit does only that.
It provides commands to simplify publishing PyPI packages.
So first you run flit init.
That will generate a pyproject.toml, which is a replacement for setup.py.
And then when you run flit publish, it will generate all the necessary files and publish your package on PyPI.
So it's a good alternative to poetry if you don't really need a tool to manage your project, but just to publish it.
And finally, a bit less-known tool, at least at the time of recording this tutorial: dephell.
It's one tool to do everything.
Resolving, installing and locking your dependencies, managing virtual environments, building pip packages, running a security audit on your dependencies to show you the outdated ones.
It can even convert between different configuration files.
So, when you're moving from, let's say, pipenv to poetry, you can use it for that.
And it can even isolate CLI tools, just like pipx does.
On top of that, it can generate some files like license authors, etcetera.
So if you're looking for one mega tool to do everything, dephell might be a good candidate, but I have never used it personally.
|
|
show
|
2:02 |
I showed you a lot of different tools.
Maybe you already know and use some of them.
Or maybe you're still deciding which one to use.
So, you might be wondering which tools I use in my projects. Am I a poetry person, or do I prefer pipenv? And what about conda?
Well, I usually don't use any of them.
Well, I usually don't use any of them.
I manage dependencies with a simple requirements file, and I use pip-tools to pin them.
I mean, if I work on some hobby project, I might use poetry because it's fun to use.
But if I work with a client, I don't recommend any of those external tools to them, unless their team is already using one.
Why is that?
Well, if I use venv to create virtual environments and then install packages with pip, I use tools that come built in with Python.
On the other hand, when I'm using an external tool and this tool stops working or has a bug, I have a problem, because I can no longer work on my project.
I can't add new dependencies if my tool can't install them, and I can't run any scripts if there is a problem with the automatic activation of virtual environments.
If the tool gets abandoned and it's no longer updated, I have to move all my configuration files to a different tool.
Don't get me wrong.
I think the developers of every tool that I mentioned here are doing an amazing job.
And I'm grateful that those tools exist, because they can definitely make your life easier.
And I'm especially thankful since most of those tools are open source, so people put in their free time.
But before you decide to use one of them, think about the pros and cons for a bit.
If you think it's worth adding an external tool for the comfort of having one tool to manage your project, go ahead.
I know some people who use those tools, and they're happy.
But I also know a lot of programmers who manage their projects without any external tools.
So, just like with adding any new dependency to your project, do a bit of research to figure out if you really need it.
|
|
|
39:36 |
|
show
|
1:43 |
So far in this course, we have learned how to set up our development environment and how to start building a Python project.
Now we can finally sit down and start writing some code.
In this chapter, I want to talk about how to write good Python code.
I can't tell you exactly how to write the best Python code, but there are some rules that you can follow and some tools that can help you follow those rules.
I found this interesting Reddit comment explaining that basically, if your Python program doesn't have to throw any exceptions, then you can write it in a single line without any semicolons.
You can check it out for the explanation of how it's possible, but when we scroll down, you can see a code example here.
And as you can see, this whole code can be written in one line.
So, you are free to write a monstrosity like this.
But it's not gonna be the easiest thing to read.
And even if you actually split your code into multiple lines, nothing stops you from using a different number of spaces than four, or from using some other crazy indentation.
Take a look at this ugly code example.
The greet function uses one space of indentation.
The create_fullname function uses eight spaces, and when we call the greet function, we use completely inconsistent and hard-to-read indentation.
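The snippet looks roughly like this (the exact example in the video may differ slightly):

    def greet(name):
     print("Hello " + name)  # one space of indentation


    def create_fullname(first, last):
            return first + " " + last  # eight spaces of indentation


    greet(create_fullname("John",
              "Doe"
        ))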
What happens when we try to run this code?
Python runs it without any problems.
There is no compiler that would give us at least some warning that our code is an abomination; as long as there are no errors, Python won't complain.
But we can't write code like that, right?
That's why some guidelines have been published, and the two most popular ones are PEP 8 and PEP 257.
|
|
show
|
2:00 |
The best place to learn the fundamental rules on how to write your Python code is a document called PEP 8. PEP stands for Python Enhancement Proposal, and it's a document published on the Python website that proposes some changes to the Python language specification.
All the new features that came to Python were once proposed as a PEP document, and when they got accepted, they became a part of Python.
One of those Python enhancement proposals, Number 8, is the Style Guide for Python code.
It was written by Guido van Rossum, with the help of some other CPython core developers.
It's a long document specifying all the aspects of writing Python code.
For example, it tells you how many spaces you should use, what the maximum line length is, how to sort your import statements, and so on.
For each of those rules, you can also see some examples of good and bad code.
In addition to PEP 8, we also have PEP 257. This document explains how to write docstrings, so the documentation for your code. There are a few simple rules that you need to remember: one set of rules is for docstrings that fit in one line, and another set of rules is for docstrings that take multiple lines. For example, one-line docstrings should start and end with triple quotes, the closing quotes should be on the same line as the opening ones, and so on.
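As a quick illustration (a minimal sketch of just these two rules, not the whole document), a one-line and a multi-line docstring following PEP 257 could look like this:

def greet(name):
    """Return a greeting for the given name."""
    return f"Hello, {name}!"


def build_report(data):
    """Build a report from the given data.

    The summary line is followed by a blank line and a longer
    description, and the closing quotes get their own line.
    """
    return "\n".join(str(item) for item in data)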
Even if this is the first time you hear about those documents, if you are using a code editor that offers auto formatting, you are probably already following those rules anyway, because that's the standard in the Python community. So, unless you have a very good excuse, you should follow those rules.
The only common exception to those rules is the line length. PEP 8 recommends that you should not exceed 79 characters per line, but as our screens get bigger and bigger, I see more and more projects move away from this limitation and increase the maximum length up to 120 characters or even more.
|
|
show
|
4:21 |
The best way to apply PEP 8 rules to your code is to use an automatic tool.
There is no point in manually adjusting the indentation if your code editor can do this automatically.
One of the most popular tools that can format your code is called Black.
Black is very simple to use.
It takes your files and formats them according to PEP 8 and PEP 257. It also has some additional rules on top of that; for example, it will convert all single quotes to double quotes.
We can see Black in action here: on the left side, we have unformatted code, and on the right side, we have code formatted with Black.
As you can see, Black is mostly fixing the indentation.
But, for example, it's also removing the backslashes when they are not needed. It's adding the correct number of white spaces for inline comments, and so on. And if you want to preserve the custom formatting of some code, you can put a fmt: off comment above it and Black will leave that code alone.
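As a rough sketch of what that looks like (the matrix below is just a made-up example of code you might want to keep hand-aligned):

# fmt: off
identity_matrix = [
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]
# fmt: on
# Black leaves everything between 'fmt: off' and 'fmt: on' untouched.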
Let me show you how to set up Black in VSCode. When you open the command palette, you can search for Format Document. If you select Format Document With, you can choose the formatter. Right now we only have the Python one, so there is nothing else that we can select, and if we try to format with it, nothing really changes. So, what we have to do is first install Black, and then go to the settings and set Black as our formatter.
First, let's install black.
So for that, I will use pipx, because I want to have Black installed globally on my computer.
Not like that.
Okay.
And just to be sure that I will be using this Black from pipx, I will copy the full path to the black binary. Yeah, that's the file I want. That's the binary for the Black that we can use. Now we have to open the settings and search for the Python formatting provider or something like that. Yeah, by default it's autopep8, but we want to change it to Black, and we also need to specify the path to the black executable. Let's search for the black path setting. By default, VSCode will just call the black command, and that should work, but just to be safe, we can replace it with our pipx version.
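If you want to follow along, this is roughly what the whole setup amounts to; the setting names below come from the VSCode Python extension as it worked at the time of recording, and the path is just a placeholder for whatever pipx prints on your machine:

# in the terminal: install Black globally, isolated from your projects
pipx install black
which black          # copy the printed path, e.g. ~/.local/bin/black

// in settings.json
{
    "python.formatting.provider": "black",
    "python.formatting.blackPath": "/home/you/.local/bin/black"
}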
So, when we close it and try to format this document... Tadaah, now our ugly Python file has been formatted with Black; all the indentation is fine.
All the white spaces are fine, and it looks much easier to read.
Black offers almost no customization.
The only layout setting that you are able to change is the line length, if you don't like the default 88 characters. There are some other options, like skip-string-normalization, which disables replacing single quotes with double quotes, but according to the documentation, you should not use it. This option is meant for adding Black to an existing project, when you want to prevent it from changing almost every string in your code. So, Black is opinionated.
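If you do want to tweak those two knobs, a hedged sketch of the command line looks like this (my_module.py is just a placeholder file name):

# change the maximum line length from the default 88 characters
black --line-length 100 my_module.py

# keep single quotes; mainly useful when adopting Black in an existing project
black --skip-string-normalization my_module.py

Beyond these two flags, there is essentially nothing to configure, and that is the whole point.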
Most people like it that way.
Some people don't.
The main argument against Black for a long time was this single quote to double quote conversion.
I worked with people who wouldn't merge a pull request because they didn't like the code formatting; those discussions should not be part of the code review. You should install a code formatter, run it on everyone's code, and then focus on the important parts during code reviews. If you are looking for a code formatter that is more flexible and actually lets you modify some settings, take a look at yapf. yapf stands for Yet Another Python Formatter.
It allows you to write a configuration file where you can specify different options.
So, if you want, you can sit down with your whole team, and you can all agree on a common formatting style so everyone will be happy.
I personally use Black wherever I can. Sometimes I have some more complicated data structures, so I indent them in my own way to make them easier to read.
If I'm not happy with how black formats this code, I might disable formatting for that one line.
But in general I'm quite happy that I don't have to deal with the formatting configuration and everything works fine out of the box.
|
|
show
|
2:09 |
Another popular category of tools that can help you while you write code is called linters. Unlike the Black formatter that we saw in the previous lesson, linters don't modify your code, but they give you real-time feedback about it. The two most popular linters for Python are pylint and Flake8.
Let me show you how they work.
For testing, I wrote another ugly piece of code.
I swear this is not how I usually write code. As you can see, first, I'm importing a non-existent function that, well, doesn't exist in this module. Then I import the os module, but I actually don't even use it.
Then I create a class that looks OK, but it has some additional white spaces here and there, and it doesn't have empty lines between functions. Then I have a function that is supposed to create an instance of this class and print my name. But as you can see, first, I use a variable that is not even defined.
And then I don't use this me variable, but I use the class itself instead. So this code won't even work. By default, with only the Python extension installed, we already get some feedback saying that this variable is undefined.
You can actually see the list of all problems.
Yeah, only one problem.
And even if we change this undefined variable to something that actually exists, this code still won't work, because this non-existent function can't be imported.
And even if we remove this import, the code will work.
But it won't work as expected, because I wanted to print my name.
But I made a mistake.
And instead I will probably get None, so let's actually run it to see. Yeah, as you can see, nothing was printed, because me is not passed to my function.
I could actually go here and replace person with me.
And it still doesn't work, because it's not main, it's double underscore main (__main__).
And finally we get those results.
As you can see, one way to debug this code is to rerun it, and each time try to fix whatever error we have. But a lot of those errors can actually be spotted when you install a linter. So, let's install pylint and Flake8 and see what they can tell us.
|
|
show
|
6:59 |
So, I'm back to the original version of my ugly code, and I will try to install pylint and run it. So first we have to open the command palette and search for "select linter", this one: Python: Select Linter, and we want to select pylint. As you can see, nothing really happened, because we don't have pylint installed. So, let's go and install it with pipx, just like with Black. Let's copy the path to the binary and let's add it to the settings: search for the pylint path setting and paste it here.
As you can see, I already did that in the past.
It can also happen that when you select the pylint linter, you will get a pop-up here saying that pylint is not installed.
Do you want to install it?
You can press okay and VSCode will download and install Pylint for you and that will also work that way.
You don't have to modify any settings in the configuration file.
I usually don't install packages directly through VSCode, because that way I don't really know where VSCode installs them. If I want to go and update those packages, I usually can't find them later. So, I prefer to install packages first and then point VSCode to use them. But feel free to just click OK in the pop-up and let VSCode set up everything for you.
So, we have selected Pylint.
Let's now enable Linting.
So, we have Linting enabled and nothing really happened.
So, let's see what's wrong.
There are no problems.
What if we actually run Pylint on this code in the terminal?
Ah, great. My code was rated at -5 out of 10. What a great score.
So in the terminal we see a lot of warnings, but not in VSCode.
Let's see what's wrong.
VSCode has this Output tab with output from different extensions and different parts of VSCode. So, usually you can check here if everything is working fine or if there are some errors. I don't see any error about pylint not being found, so let's just try to add it one more time as a linter.
Okay, This time it worked.
We can see more red squiggly underlines marking errors. So, let's see what pylint is reporting; it correctly detected that the non-existent function we imported doesn't exist in itertools.
So let's remove that. Next, undefined variable; that's correct.
Let's try to define this variable.
unused variable here and here.
No value for argument in the call. So this already can point us in the direction of what might be wrong. With those two pieces of information, I can see: okay, I forgot to pass me here.
Let's try again.
No problems anymore.
So, it didn't detect that it should be double underscore Main(__main__).
But this is a kind of tricky error to spot. If we fix it, we can run it and it's working fine. So, as you can see, with pylint we were able to spot a lot of errors before we even ran our code for the first time. If you have used pylint outside of VSCode, you might be surprised that pylint is not complaining about some of the additional problems, like missing white spaces or additional spaces here and there; to show you what I mean, let's run pylint in the terminal again on this ugly2 file.
That, according to VSCode, has no more problems. As you can see here, there are still some other problems that we have to fix, like missing docstrings in the module, in the class, or in the function, too few public methods, and so on.
That's because when you check the documentation of VSCode, you can see that VSCode has some default Pylint rules that are supposed to be friendly to the largest number of Python developers.
The thing with Pylint is that it's quite strict.
It complains about missing documentation, about too few public methods, and even if you create a class without an __init__ method, it will still complain about that. So, it really tries to make you write perfect, well-documented code.
But sometimes you know better what you're doing.
If you're writing a simple script that you are going to use once, then maybe you don't feel like documenting every function.
On the other hand, if you want to enable all pylint warnings, you can go to the VSCode documentation, and here you can see that there is a pylintUseMinimalCheckers option that is currently set to true; when we set it to false, pylint will show us all the warnings by default. So, let's try that and set this option in the settings. And now you can see we have all the warnings: unused import, missing documentation, too few public methods, and so on.
But, for example, it's not complaining about additional white spaces here, so those are all the warnings that we will see with the default pylint settings.
But pylint also has some additional checkers that are disabled by default, and you can enable them. You can find them when you go to the pylint documentation, under the technical reference and then the optional pylint checkers. This page is quite well hidden in the documentation, so I'll put a link on the resources page. And here you can see some optional pylint checkers that you can enable. To enable those additional checkers, you will need to create a .pylintrc file and add them under the load-plugins parameter.
Some of the checkers here are very useful.
For example, there is one that will warn you when you use a deprecated built-in function, so a function that will soon be removed from Python, or there is another one that will complain when you use an else: if instead of an elif statement.
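As a sketch, a minimal .pylintrc enabling the two checkers I just mentioned could look roughly like this (I believe these are the module names of pylint's optional extensions, but double-check them against the documentation page linked on the resources page):

# .pylintrc
[MASTER]
# warn about deprecated built-ins and about 'else: if' instead of 'elif'
load-plugins=pylint.extensions.bad_builtin,pylint.extensions.check_elif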
So, those optional checkers can further help you when you write your code. And on top of the plugins that come with pylint, there are also plugins that you can install separately from PyPI.
For example, if you are using a Web framework like Django, you can install a Pylint plugin called Pylint Django.
It will suppress some warnings that don't apply to your Python code when you're writing a Django website, and it will also add some new checks specific to Django code. Unfortunately, I haven't found a single repository that contains a list of different pylint plugins, so usually you have to search for them on PyPI or just Google for them to find the interesting ones.
|
|
show
|
3:10 |
Another very popular Python linter is called Flake8. Flake8 is actually a combination of three tools: pyflakes, which gives you warnings about unused modules, undefined variables, and so on; pycodestyle, which gives you warnings about PEP 8 violations; and mccabe, which gives you warnings about too high cyclomatic complexity of your functions. In other words, it tells you if your functions are too complicated, so if they contain too many nested loops, too many if statements, and so on. But by default, the mccabe checker is disabled in Flake8, so we only get pyflakes and pycodestyle.
Let's go back to my ugly code and let's see what Flake8 thinks about it. So first, make sure you have Flake8 installed. Don't forget to copy the path to flake8, and now we can select Flake8 as a linter, and we have a lot of red underlines. So, let's look at the list of all the errors. Here, Flake8 complains that we imported the non-existent function but haven't used it, so actually this is kind of the wrong warning, because this function doesn't even exist.
So, in this situation, pylint was actually better, because it detected that this function doesn't exist.
Let's remove it.
os was imported but unused.
Great.
Let's remove it.
What's next?
White space before closing bracket.
Let's remove it.
Expected one blank line.
Let's add this blank line.
You know what?
Let's actually run Black on it, because I don't want to manually fix all the white spaces.
Perfect.
Some warnings went away. Here, we still have too many white spaces, but neither Flake8 nor pylint complains about it. Black also didn't change it, because this is inside a string, and Black doesn't touch white spaces in strings, even though it would be better to actually remove those additional white spaces. Here, it correctly detected that name is undefined and that me is assigned but never used.
So let's fix it.
pylint actually complained that we're using Person without the argument, but Flake8 is not. But we still have this error here, so it can point us in the direction that there is something wrong with this code.
Let's replace it and that's it.
We no longer have any errors here, so that's how Flake8 works. Compared to pylint, I personally find Flake8 much less strict. It didn't complain about the missing documentation or that our class had too few public methods.
But on the other hand, it didn't detect that invalid import at the beginning of our file.
So each linter has its own pros and cons.
|
|
show
|
5:20 |
One of the reasons for Flake8's popularity is its massive catalog of plugins that you can use to customize it.
There is a GitHub repository called Awesome Flake 8 extensions, that contains the list of most popular plugins, and I definitely recommend that you check it out.
Some extensions will modify the default behavior of flake 8 and others will give you some additional options and checkers that you can use.
I chose a few plugins and listed them under the resources page at modernPythonprojects.com/resources, with a short description for each of them.
Let's quickly take a look at what we have here.
First we have flake8-bugbear.
It adds some additional checks that can help you find some possible bugs and design problems in your code.
For example, it complains when you just do a bare except without specifying the exception type, or it complains when you write ++n, because that's not how you increment a variable by one in Python. So, that's a mistake that you can make when you move to Python from a different programming language.
You can see the full list of different warnings here; there are quite a lot of them, and there are even some opinionated warnings. Opinionated means that the author thinks they are useful.
But maybe other programmers don't agree, so those checks are disabled by default.
Next, we have flake8-builtins, and this one makes sure that you don't use Python built-ins as variable or parameter names. So, for example, when you use list as the name of an argument in a function, you're basically shadowing the built-in list from Python.
So that's something that you should avoid in your code.
Next, we have a Flake8 plugin that can help you write better list, set, and dictionary comprehensions. It will analyze your code and give you suggestions about in which cases you should rewrite your code as a comprehension and in which cases you should actually avoid using one. So this can actually be very useful if you're new to Python comprehensions and you're having a hard time trying to understand how they work and where you should use them.
Next, we have flake8-docstrings, and this will enforce the rules from PEP 257.
So, this is the PEP that tells you how to write documentation strings in your code.
Next, we have flake8-eradicate, and this plugin will try to find commented-out code and complain about it.
So if you forgot to remove some code that you commented out, you can easily spot it.
Next, we have flake8-isort. This plugin integrates the isort tool with Flake8, and isort is a tool that will check if your import statements are organized according to the PEP 8 guidelines.
Next there is flake8-broken-line.
This plugin will complain when you try to use a backslash for line breaks instead of using something better, like parentheses or triple quotes. So, in the first example, instead of using single quotes and a line break, we could use triple quotes. In the second example, instead of adding the line break, we could either put this if statement on one line or use parentheses. And when using method chaining, you actually don't need to add the backslash at all. Then we have flake8-rst-docstrings.
If you're writing docstrings, then you are probably writing them using the rst, so reStructuredText, format. It's kind of similar to Markdown, but it has some differences. So, with flake8-rst-docstrings, you can get Flake8 to check the docstrings of your functions and verify that the rst format is correct. It's quite easy to make a mistake and use, for example, the Markdown link style instead of the rst one, so this plugin can be quite useful.
Next, we have darglint, which is a Flake8 plugin that will check if the docstring description matches the function definition.
Here is an example.
If you have a function with self and arg1 arguments and then in the docstring you forgot to document the self argument, this plugin will complain. It might also complain if you forget to document the return value, as in the above example.
So this is a very good plugin because when you're modifying the code sometimes you forget to update the documentation.
Maybe you modify the name of a function argument, or you add or remove an argument from the function signature, and then you forget to update the documentation, which is quite a common mistake. So, with this plugin, you get a warning that your documentation doesn't correspond to what the function actually looks like.
And finally, we have flake8-mutable.
This is a nice little plugin that will warn you when you use a mutable default argument in a function.
For example, if you define a function like here, then each time you call this function, the b argument will always be pointing to the same dictionary. So, you might call this function five times thinking that each time you have a different b, but when you try to modify it, it turns out that you are always referencing the same dictionary across all five calls.
So that's something that can be tricky when you're new to Python.
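Here is a small, self-contained sketch of the pitfall this plugin catches, together with the usual fix:

def add_item(item, cart={}):        # the same dict object is reused across calls!
    cart[item] = True
    return cart


first = add_item("apple")
second = add_item("banana")
print(first is second)              # True: both names point to one shared dict
print(second)                       # {'apple': True, 'banana': True}


def add_item_safe(item, cart=None): # the usual fix: default to None
    if cart is None:
        cart = {}
    cart[item] = True
    return cart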
|
|
show
|
2:04 |
We saw that both Flake8 and pylint found different errors in our code, so you might be wondering which one is better. Well, the good news is that you can actually use both of them at the same time, but it's a bit tricky to enable in VSCode. As you saw, when we changed the linter using the command palette, the previously selected linter was disabled.
But if you go to the documentation of VSCode you can see that you can enable multiple linters.
But you have to manually edit settings.
And if you use the Select Linter command from the command palette, it will override your settings. So, when you set up multiple linters, make sure that you don't use the Select Linter command.
So let's go back to VSCode and try to enable both Flake8 and pylint.
Let's open the settings.
And now let's open the settings Json file.
This is basically the JSON file storing all your custom settings; whenever you change something in the settings interface, it gets saved in this JSON file. As you can see here, we have two options: pylintEnabled, which is set to true, and flake8Enabled, which is set to false. If you don't have those two options (let's try to remove them), all you have to do is first select the flake8 linter, then select the pylint linter, and this will add those options to the JSON file. So, let's go back here, and here they are again: flake8Enabled and pylintEnabled. Each time you change your linter, VSCode disables the previous one and enables only the new one, so all we have to do is set both linters to true.
Let's close this file, and here we have it: a lot of errors. Some of them come from Flake8, some of them come from pylint, so it might be annoying to see all those errors.
But on the other hand, if you fix all of them, then you can be sure that your code is actually pretty good.
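For reference, the relevant part of settings.json ends up looking roughly like this; the key names come from the VSCode Python extension as it worked at the time of recording:

{
    "python.linting.enabled": true,
    "python.linting.pylintEnabled": true,
    "python.linting.flake8Enabled": true
}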
|
|
show
|
2:56 |
Both pylint and Flake8 belong to a family of tools called static code analyzers. Static code analyzers check your code and give you some useful advice.
Let me show you a few other tools that you might also find useful.
Bandit is a tool designed to find common security issues in your Python code.
For example, it will complain when there is a possibility for an SQL injection, when you silently ignore exceptions, or when you use modules in an insecure way.
If we scroll down, we can see the list of possible warnings.
However, running bandit out of the box on a large project will give you plenty of false positives.
For example, it will complain about assert statements in your pytest files, even though pytest uses assert everywhere for testing and that's a normal thing. So you have to spend some time and configure it a bit to remove those false positives. But once you do, Bandit can be a very good tool to review your code.
And if you're using Flake8, there is a plugin called flake8-bandit that adds Bandit checks to your Flake8 checks. That way, you don't have to install a separate tool.
If you want to make sure that your documentation is written according to PEP 257, which is the style guide for docstrings, then you can install pydocstyle. Just keep in mind that it will complain about every function or module that you forgot to document, just like pylint did. Again, if you're using Flake8, there is a flake8-docstrings plugin that enables pydocstyle for you.
And if you think that pylint is not strict enough for you, then we also have the wemake-python-styleguide tool, which describes itself as the strictest and most opinionated Python linter ever. And in my opinion, it kind of is: if we go to the documentation, you can see that apart from their own checks, they also combine around 20 Flake8 plugins together. So, if you're looking for a very strict linter, you can check this one out. Another tool that combines different linters together is called Prospector. This one combines pylint, pep8 (which was actually renamed to pycodestyle), pyflakes, mccabe, dodgy, and pydocstyle.
There are even more optional tools, like pyroma, vulture, frosted, mypy, and Bandit. All of them come with Prospector, but they are disabled by default. So, if you're looking for one tool that combines basically every possible static code analyzer together, then you can use Prospector.
|
|
show
|
3:08 |
I want to mention one more tool that I found quite recently, and I think it could be very useful, especially for beginner Python programmers.
It's called Sourcery, and what it does is give you suggestions on how to refactor your code.
You can install it in VSCode or PyCharm.
And as you write Python code, Sourcery will sometimes show you pop-ups suggesting that your code can be refactored.
Something to keep in mind is that sourcery is not open source.
There is a paid plan, but for now, it only applies when you want to use sourcery as part of your continuous integration pipeline on GitHub.
But the plugin for VSCode is free to use.
So, let me show you how it works in VSCode.
First open the extensions marketplace and search for sourcery and then click install.
Once it's installed, you have to follow the installation instructions from here, so you will have to go to the Sourcery page and create an account to get a personal token.
Let's click this link.
You'll have to create an account.
You can either use Google or GitHub.
I usually use GitHub and once you log in, you will see your personal token.
I'm obviously not going to show you my personal token, but let's pretend you created an account, logged in, and copied the token from the next page. You go back to VSCode, open the settings, and search for Sourcery.
And you have to copy this personal token here.
Once you copy it, close the settings, and we are ready to go. If you set up Sourcery correctly, we should now see some hints about our code in the places where it could be refactored.
However, with the current version of sourcery, it doesn't seem to work outside of the workspace.
As you can see, I have this purple bar at the bottom of VSCode which means that I'm not in the workspace.
I just opened a single file.
So let's try to open the workspace.
Let me reuse one of the previous ones that I used for other lessons.
Let's go with Django.
It doesn't really matter.
So now I need to reopen my ugly3 file, actually close this, and here it is. Those blue squiggly underlines come from Sourcery, and here we can see the diff of how Sourcery would refactor our code. In this case, we have two identical print statements inside the if statement, so we can take them outside of the if; we just click the quick fix, and it's refactored for us.
In the second example, we are creating a list using a for loop, but Sourcery suggests that we use a comprehension instead, and it will work the same. We click yes, and we get code that does exactly the same thing, but in just one line. So Sourcery is a pretty great tool, and I really hope it will stay free to use with code editors like VSCode. Even though I have been writing Python for many years, I still find this plugin very useful. It can spot some refactorings that I don't easily see, so I can highly recommend it.
|
|
show
|
1:15 |
As you write Python code, you sometimes need to take it for a spin and do some interactive coding.
By default, you can start an interactive Python session by calling python in your terminal.
This will start a REPL.
REPL stands for Read, Eval, Print, Loop.
So, it's a program that will read your input, evaluate it, print the results and loop again to the first step.
The standard Python REPL is nice because it comes with Python out of the box, but it's very basic, and in the long run, it's not convenient to use. In older versions of Python, there was not even tab completion, so when I typed na and pressed tab, that would insert a tab instead of auto-completing the name variable. And if you want to write a for loop, you still have to add the indentation on the next lines by hand, instead of the REPL detecting it.
So, that's still quite inconvenient.
So, if you do a lot of this interactive coding, there are much better Python REPL alternatives.
We have IPython, bpython, and ptpython.
I will talk about each of them in the next 3 lessons.
|
|
show
|
2:39 |
IPython is probably the most popular REPL for Python. It's the same REPL that runs behind Jupyter notebooks, so if you're using Jupyter notebooks, then you will feel at home. Once you install IPython with pip, you can start it by typing, well, ipython in your terminal. Out of the box, we have syntax highlighting.
We have automatic indentation.
So when we start writing a for loop and we press enter, IPython will automatically detect that we want to insert four spaces.
We get the tab completion.
So, for example, when we want to import a function from some module, we can just type some letters and keep pressing tab to find the function that we're looking for. And one really nice thing is dynamic object introspection.
This is my single most favorite feature of IPython.
You no longer need to search for documentation of a function or a module.
You just append the question mark to the name of the function, and IPython will show you the docstring of that object.
And if you want to see the complete source code, just use a double question mark instead. It won't work in this case, because chdir is a function implemented in C inside CPython.
But if you're using some external pip modules, then it will work.
Let's try to import a function from IPython itself and see. As you can see, here we are importing a module from IPython, and then if we use the double question mark, we can see the whole source code of this module. Another cool thing is that you can run shell commands directly from IPython by prepending them with an exclamation mark: we can list the contents of the current directory, we can create a new file, and we can even start vim from IPython.
There are some helper functions called magic commands that you can use for some common things, like measuring the execution time of some code, running a Python script without leaving the Python terminal, and so on. One of them is, for example, %run, which you can use to run any Python file in the current IPython session. So let's try to execute the file that we just created, and it's working.
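To give you a rough idea of what such a session looks like (my_script.py is just a placeholder for the file we created a moment ago):

# appending ? shows an object's docstring
In [1]: import os
In [2]: os.chdir?

# a double ?? shows the full source of pure-Python objects
In [3]: import json
In [4]: json.loads??

# ! runs a shell command, %run executes a file in the current session
In [5]: !ls
In [6]: %run my_script.py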
There are many more things that you can do with IPython.
If you want to learn more, I actually have a presentation on IPython that I gave at some conferences.
It's a 45-minute-long, fast-paced talk where I go through various features of IPython. I will put a link to this presentation on the resources page of the course.
|
|
show
|
0:53 |
If you are looking for something more lightweight than IPython, then bpython is another great choice. It has fewer features than IPython, but it has all the essential ones.
Let's see this in action.
We get syntax highlighting.
We get smart indentation, auto-completion, and suggestions as you type. One really cool feature is rewind, which lets you undo the last command by pressing Ctrl+R. This is pretty useful if you made an error and you want to undo it. As you can see here, we got back to the time when a was still equal to 1. A pretty handy feature.
|
|
show
|
0:59 |
And finally, we also have ptpython. Just like bpython, it has the essential features, like syntax highlighting, multiline editing, auto-completion, and so on.
But it also has some cool features, so let's give it a try.
For example, it does syntax validation as you type, and it won't let you execute code that contains invalid syntax. You can also run shell commands or switch between vim and Emacs key binding modes, and it even has simple menus where you can configure it or run some special commands. So, for example, if we press F3, we go to the history menu, and from here we can run some commands from this or from previous sessions. We use space to toggle which lines we want to run.
So ptPython is yet another interesting alternative to the standard Python REPL.
|
|
|
39:27 |
|
show
|
0:41 |
So, we wrote some code, and now it's time to write some tests. Otherwise, when we want to add new features, we will have no way to tell if the old code is still working fine or if we introduced some bugs. Python comes with the unittest module that you can use for writing tests, but most people use pytest instead. It's a third-party library, which means that you have to install it with pip, but it's much easier to use, and at the same time, it's much more powerful in terms of additional features and command line parameters. Let's see an example of how pytest is different from unittest.
|
|
show
|
4:57 |
If we go to the unit test documentation, we can see a basic example.
Let's try to write the same test, but this time using pytest.
So, let me copy this for the reference.
Let's name it test_string_methods. And now let's create a second file, and let's name it pytest_example. Let's move it to the right side, so we can see both files at the same time.
Perfect.
So first, instead of importing unit test, we're going to import pytest.
Since this is a third party library, we have to install it with pip, before we can actually use it.
So let's actually do this before we forget.
First, let's activate the virtualenvwrapper and let's create a virtual environment that we're going to use.
Next, let's install pytest, and let's tell VSCode to use this virtual environment with pytest; otherwise, we're going to get this problem with the import not resolving. So here, let's search for pytest. Well, there is none, so let's reload the window. That way, VSCode will pick up any new virtual environments that we have created.
And as you can see, here we have the pytest-chapter virtual environment that we just created.
We select this and even though we still get this unresolved import, it comes from the Python language server.
so we can ignore it for now.
Okay, so we have pytest, and now we could create a class to group all our tests together.
But with pytest, it's not necessary.
In the case of unittest, you always have to create a class that inherits from unittest.TestCase, and then you have to create functions inside it. With pytest, all you have to do to create a test is to write a function whose name starts with the word test inside a file that starts with the test prefix, and pytest will automatically detect all of those as test cases. And if you don't like this convention, you can use whatever names you like and just change the pytest configuration to tell pytest how you're naming your tests. So, I'm not going to create a class.
Let's start with the first function.
And now we come to the next difference between unittest and pytest. unittest has a lot of different assertion methods.
If you want to check that something is true, you have to use assertTrue.
If you want to check that something is false, you have to use assertFalse.
If you want to check that two values are equal, you have assertEqual, and so on. You can go to the unittest documentation to see the list of all the available assertions. On the other hand, with pytest, we only have the simple assert statement: assert takes an expression, evaluates it, and checks if the resulting value is true. So with pytest, if we want to check that something is equal to something else, we just write assert 'foo'.upper() == 'FOO'. Next, we have a test for isupper.
If you want to assert that something is true or false, all you have to do is run assert something is true or assert something is false.
We're getting those warnings because Flake 8 is expecting two blank lines.
So, let's actually format our file with black, and that is fine.
We still get this warning from Flake8 because we imported pytest but we're not using it. Don't worry, we'll actually use it in the next test. And the Python language server is still complaining that the import is unresolved, but, well, this we can ignore.
And now we have the final test.
One thing that we can't do with a simple assert statement is to assert that an exception was raised. We can't do assert TypeError, because this is going to give us a syntax error. So instead, we have to use pytest.raises(), and that's how we can check that an exception was raised. And again, with pytest, we don't have to write this block at the bottom, because pytest will automatically detect that those are our test functions and it will call them.
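To recap, here is roughly what the finished file looks like; it mirrors the string-methods example from the unittest documentation, rewritten in pytest style:

import pytest


def test_upper():
    assert "foo".upper() == "FOO"


def test_isupper():
    assert "FOO".isupper()
    assert not "Foo".isupper()


def test_split():
    s = "hello world"
    assert s.split() == ["hello", "world"]
    # a bare assert can't express "this raises an exception",
    # so we use the pytest.raises context manager instead
    with pytest.raises(TypeError):
        s.split(2)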
|
|
show
|
3:50 |
So we have our pytest example file with some tests here.
Now, we actually have to run it somehow.
One way we could do this would be to configure VSCode to use pytest and then run the tests from VSCode.
But probably most of the time, you will be running your test from the terminal.
So, let's do this.
When you run the pytest command in your terminal, pytest will look inside the current directory and its subdirectories for files that either start or end with test, and inside of those files, it will select all functions that start with test, either outside of any class or inside of a class whose name is prefixed with Test. All this is called test discovery.
And by following those simple conventions, you can make sure that pytest will work out of the box.
So as you can see, two interesting things happened. First of all, pytest didn't run our pytest_example file, because the name doesn't start with test, so we have to either rename this file or point pytest directly to it. But another interesting thing is that pytest also ran our unittest tests, and it actually managed to run all of them and make sure they are passing.
So what's really cool about pytest is that it can also run tests written with unittest. If you have a large code base of tests written in unittest and you want to move to pytest, you will be able to run both the tests written in unittest and the new tests that you write in pytest. So a migration from unittest to pytest can be very easy, because you still keep your old tests while the new tests are written with the pytest framework.
Let's go back to actually running test from our pytest example.
The easiest way is to just write pytest and point to the file.
And now pytest ran all three tests from our file, and all of them are passing. If something was wrong with this file, let's say this should return false, we would get an error message like this. Out of the box, it's not super helpful, because it just says assert False is true. But assert accepts an additional parameter: the message that will be printed if the assertion fails. So we can add something like that, and here we have it: foo is not upper.
You can also specify a specific class or even a specific function from this file that you want to run.
So let's say we want to run just this test_split; we can do it like this: pytest, the name of the file, two colons (::), and the name of the function.
This will run only one function.
And as you can see, this one function is passing.
If we had a class here, so let's create one.
Okay, now we have to pass self everywhere.
And what's the problem?
Too many blank lines. So, let's format this document, and now the linter is happy. But we still have the failing test, so let's fix this. Now, we can either run all the tests from a class or we can specify a single test inside this class, and we always use the double colon to separate the file name, the class name, and the test name.
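In short, the selection syntax looks like this (TestStringMethods is just the hypothetical name of the class we created a moment ago):

pytest                                                   # run everything pytest can discover
pytest pytest_example.py                                 # one file
pytest pytest_example.py::test_split                     # one function
pytest pytest_example.py::TestStringMethods::test_split  # one test inside a class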
|
|
show
|
6:57 |
Let's see what else we can do with the pytest command.
If we run pytest --help, You can see a huge list of different parameters.
That's a lot of parameters.
But the most important stuff is at the top, a very useful parameter.
Is this -k?
It lets you narrow down which test you want to run.
Let's say you have a lot of tests, but you only want to run one of them, so you can specify the name of that test with -k and the name of the test.
So in our example, let's say we want to run this test_upper.
And as you can see, only one test was selected and two were deselected, because they didn't match this expression. And we don't even have to specify the full name of the test.
We can specify part of the name and all the tests that match this will be run, so we have two tests that have upper in the name.
If we want to run both of them, we just do pytest -k upper.
And now we have two tests selected and one deselected. We can even invert this match, so we can say we want to run all the tests that don't match upper in their name. So, in this case, we have to do -k "not upper". Okay, we have to provide double quotes here. Now, only one test was run.
You can add the -v parameter to get a more verbose output, so we can see which test was actually run. And as you can see, our test_split was run, but the other two tests that have upper in the name were not run.
Next, we have marks. We can mark some tests with, let's say, categories or tags, and then use those markers to run those specific tests, or to not run them. This works kind of like the -k parameter, but it can be used, for example, when different tests don't share a common part in their name. A typical case is, for example, to mark some tests as slow and only run them from time to time, because they are really slowing down your test suite.
So let's go back to our pytest example, and let's mark this test_split as a slow test.
So all we have to do is to decorate it with pytest.mark and the name of the mark.
So let's say it's a slow test.
Now we go back to the terminal and let's run only the slow tests.
As you can see, we ran only this one slow test. We also get this warning saying that pytest.mark.slow is an unknown mark. That's because pytest also has some built-in marks, so it's trying to tell you whether you're actually using an existing mark or you made a typo. And just like with the -k parameter, we can also run all the tests that are not slow by saying -m "not slow".
This is very useful when you have some tests that are slow and you don't want to run them on your local computer.
Maybe it's a full end-to-end test that tests your whole website, and it takes a few minutes to finish. So instead of wasting your time each time you run the whole test suite, you can mark those tests as slow, tell pytest not to run them on your local machine, and only run them on your continuous integration server.
We also have -x parameter that will exit after the first failed test.
So let's actually make a failing test.
Let's get rid of this unknown mark and let's rerun it.
And as you can see, only two tests have been run.
This guy was run.
This guy failed, and since this guy failed, this was not run.
We can also run pytest --pdb, and this will automatically start a debugger session when an assertion fails or if there is any other kind of error. This is one of my favorite commands, because you don't have to manually set a breakpoint. You just add this one argument, and whenever you have an exception, pytest will automatically start the debugger, so you can poke around the code and see what went wrong.
So when you have a failing test and you think you fixed it, you can run pytest --lf, which stands for "last failed". This will only rerun the tests that failed in the previous run. So this is a great command when you don't want to run your full test suite, but you just want to see if you managed to fix the failing tests.
There's also --ff, which is very similar to the previous command.
But this one will run the failed test first, and then it will run all the other tests.
So this one is basically like --lf, except that it runs all the other tests after the failing ones. And if you have created a new test file and you want to run this new test file first, there is the --nf option that will run all your tests, but first it will run the tests from the new files that you have just added to your test suite.
In this case, it doesn't make sense because we specify only one file.
So usually you would run it with just pytest on all your tests.
And one really cool option is --durations. It will report the slowest tests; you run it with pytest --durations=N, and this N specifies how many of the slowest tests you want to see. So in this case, I want to see the two slowest tests, but since all of my tests are faster than five milliseconds, I don't really get any output. This is a super useful option when you have a lot of tests, they become slow, and you want to actually see which ones are the slowest, so you can go and figure out how to make them faster.
So this list of pytest parameters is really huge, and you can easily customize many aspects of pytest.
So I recommend you take a look from time to time because each new release of pytest brings new features.
|
|
show
|
2:16 |
Instead of passing arguments each time you run pytest command, you can customize them by creating a configuration file.
pytest can actually accept a few different configuration file formats. You can use pytest.ini. You can use pyproject.toml, but this requires at least version 6.0 of pytest. You can use tox.ini, and you can also use setup.cfg. I will show you how to use pyproject.toml.
Why this format?
Well, that's because it's the same format that tools like pipenv and poetry use.
So chances are that if you're using Poetry or Pipenv, you already have this TOML file. Plus, there are some PEPs, like PEP 518, that recommend using it instead of the old setup.py.
So I think this is the format that's going to be widely adopted in the Python community in the future.
So here is the example pyproject.toml file that I got from the pytest documentation. To get this syntax highlighting, I had to go to the extensions marketplace, install the Better TOML extension, and then choose the TOML language here. Here we specify that we want to use at least version 6.0 of pytest; if we're using an older version, it won't even recognize this TOML config file, because it's only supported from version 6.0. But anyway, it's good to specify the minimal version, especially if we depend on some features that were added in a specific version of pytest. Next, we configure additional options that pytest is run with. That's probably the setting you will be using most often: instead of having to remember a bunch of flags and arguments to pytest, we can specify them here. So what we do here is, first, we're telling pytest that we want to get a nice summary of how many tests failed, succeeded, or how many warnings we got, with the -ra parameter. Next, we want to make the output a bit less verbose, with the -q parameter.
And finally we tell it to skip all tests marked as slow.
We also have this testpaths option, which specifies which folders we are using to store our tests. In this case, pytest will look for tests inside the tests directory, but also inside the integration directory.
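Putting it together, the file looks roughly like this; it follows the example from the pytest documentation, with the extra -m 'not slow' that I mentioned for skipping slow tests:

# pyproject.toml
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra -q -m 'not slow'"   # summary report, quiet output, skip slow tests
testpaths = [
    "tests",
    "integration",
]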
|
|
show
|
2:16 |
Let's talk about some useful features of pytest.
One of the most popular ones is called fixtures.
Fixture is a piece of code that you can run before a test function.
They can be used, for example, to create some test objects.
Let's say you have a website and you want to test that after a user logs in, they can perform certain actions. You might have a lot of different tests, and each of them requires a user who is logged in, like in this example. We have a test for buying an item, where we first have to create a user and log them in. And then we have a test for admin permissions, where again we have the same code to create a user and make sure they're logged in.
We can easily avoid this duplication by extracting the first two lines from each test into a separate fixture.
So let's do that.
Let me cut this, and then we are going to create a fixture called authenticated_user. To mark a function as a fixture, we have to add a decorator called pytest.fixture, and to make it work, we have to import pytest. Now, to make your test use this fixture, you just have to pass it as a parameter, and we can remove this code from here; this admin parameter is actually not needed.
And again we pass this fixture and we use it in our test.
If in the future we need yet another test that requires this authenticated user, we already have a fixture for that.
So, using fixtures in pytest is the standard way to extract the creation of some test objects from your tests. If a few tests share the same code at the beginning, you can probably move that code out into a separate fixture.
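Here is a small, self-contained sketch of that idea; the User class is just a stand-in for whatever user object your application really has:

import pytest


class User:
    # a fake user object, only for this example
    def __init__(self, name):
        self.name = name
        self.logged_in = False
        self.orders = []


@pytest.fixture
def authenticated_user():
    # the two lines that used to be repeated at the top of every test
    user = User("jan")
    user.logged_in = True
    return user


def test_buy_item(authenticated_user):
    authenticated_user.orders.append("book")
    assert authenticated_user.orders == ["book"]


def test_admin_permissions(authenticated_user):
    assert authenticated_user.logged_in is True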
And apart from creating your own fixtures, pytest comes with some built-in ones that you can find in the documentation. So here you have the list, and, for example, there is a fixture called tmpdir that you can use when you want to create a temporary directory in your tests.
|
|
show
|
1:17 |
Another common scenario is that you have a function that performs some action, and you want to temporarily disable this action in your tests.
Let's say you have an online shop, and you need to make sure that when the user buys something, you charge their credit card before you send them their order.
But you can't charge a random credit card each time you run your tests.
So instead, you can temporarily replace the piece of code responsible for charging a credit card. This is called monkeypatching. We replace the charge method of the stripe object with a mock method that always returns a dictionary with status == success. That way, when we call charge_customer later in this test, we no longer call the charge method from the stripe object, because that would normally charge a credit card; we call our mocked method, and all it does is return a dictionary with status == success.
Then we can continue testing the rest of our code to make sure that an order is processed correctly.
Notice here that we're passing monkeypatch as an argument to our test; that's because monkeypatch is a fixture that comes with pytest.
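Here is a self-contained sketch of that pattern; the stripe object below is a stand-in class, not the real payment library:

class FakeStripe:
    def charge(self, amount):
        raise RuntimeError("this would charge a real credit card!")


stripe = FakeStripe()


def charge_customer(amount):
    # the code under test talks to the payment provider
    return stripe.charge(amount)


def test_charge_customer(monkeypatch):
    # temporarily replace stripe.charge with a mock that always succeeds
    def mock_charge(amount):
        return {"status": "success"}

    monkeypatch.setattr(stripe, "charge", mock_charge)
    assert charge_customer(100) == {"status": "success"}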
|
|
show
|
3:38 |
Another useful feature of pytest is parametrization.
You can use it when you want to test some code with different inputs.
Let's continue our example with the online shop.
You want to test that adding different numbers of items to the cart will result in the correct cart size. So we have a first test where we add one book to the cart, and we want to make sure that the cart size is equal to one. Then we try to add 10 books, and we would expect to have 10 items in the cart. And then we also want to check that when a malicious user tries to add minus one item to the cart, our cart stays empty. All those three tests look very similar.
The only difference is the number of items and expected cart size.
We can use pytest parametrization to extract those two variables into parameters and turn those three tests into one.
So, let's do that.
First, we have to add the parametrize marker. Its first parameter is the names of the arguments that we will use in the function; I will explain that a bit later. So we have two variable things: the number of items in the order and the expected cart size. Then we pass a list of tuples. Those tuples will be assigned to the parameters that we just specified. So in one test, we want to see that adding one item to the cart results in an expected cart size of one, the same for 10 items, and adding -1 results in a cart size of 0. So now let's just modify this test, call it add_items_to_cart, and make it accept those two parameters as arguments; now we can take them and use them in our function. If we run pytest on this file, pytest will create three different tests, and each time, it will replace the number of items and the expected cart size with the values from the tuples in the list above.
Since I don't have the code for cart written, I made another simple example with the calculator.
So here we are just making sure that adding numbers in Python works correctly. We have three arguments, left, right, and output, and first we want to check that adding 1 + 1 equals 2, that adding 10 + 100 equals 110, and so on.
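Here is roughly what that calculator test looks like (I made all four cases correct here, unlike the deliberately broken one in the video):

import pytest


@pytest.mark.parametrize(
    "left, right, output",
    [
        (1, 1, 2),
        (10, 100, 110),
        (0, 0, 0),
        (-10, -10, -20),
    ],
)
def test_add_numbers(left, right, output):
    assert left + right == output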
So this file, we can actually run with pytest.
And as you can see, we have four tests.
The output from pytest looks different than before because we created this pyproject.toml; there, we specified that we want a different output, using the -ra and -q parameters.
So let's see what happens if one of those parameters is incorrect.
As you can see, we have three tests that passed and one that failed, and pytest is actually adding the values from the parametrize marker in square brackets here, so we can see that we have a failing test: test_add_numbers with the parameters -10, -10, -10.
So we know it's this guy that's failing.
|
|
show
|
3:14 |
We saw how to use marks in pytest to mark some tests as slow or to parametrize them. So let's talk about marks in more detail.
Marks are like categories for your tests.
Let's say that in your code you have two different types of tests: unit tests that test a very small piece of functionality and run very fast, and end-to-end tests that test your whole application but take quite a lot of time to run. Since your end-to-end tests are slow, you probably don't want to run them together with all the other tests on your computer.
It's much better to run them on a continuous integration server each time you create a pull request.
So you mark your end-to-end tests with the slow marker and then exclude them from running by passing the -m "not slow" parameter to pytest, as we saw in one of the previous lessons.
But if you use marks like that, you have to remember to register them.
Otherwise, you're going to get a warning from pytest saying: hey, you're using an unregistered mark, make sure this is not a typo. So if you're using pyproject.toml, you have to add an option called markers, and there you specify a list of markers that you want to use in your tests. You can just specify the name of the marker, like with serial here, or you can also add a description of what this marker is supposed to do.
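A sketch of that section of pyproject.toml, closely following the pytest documentation, could look like this:

[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow (deselect with -m 'not slow')",
    "serial",
]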
pytest also comes with some built in markers.
For example, we have this skip marker that you can use to disable a given test.
This is useful when you're in a hurry to fix something in production, but you have some tests failing.
So instead of removing them or commenting them out, you can just add this mark, and they will be displayed in the test report as skipped. That way, you still get a hint that you should fix them at some point in the future.
There is a similar marker called skipif, but this one lets you specify a condition when this test should be skipped.
For example, if you want to test a piece of code that will only work with Python 3.6, you can mark it like that. Now, when you run your tests, pytest will try to evaluate this expression, and if it evaluates to true, it will skip this test and display the reason in the test report.
If a test is expected to fail, you can mark it as xfail, and pytest will ignore it when it actually fails.
Why would you want to use a mark like this?
Well, maybe you have a test that is currently failing, but someone is already working on a fix, and it should be patched in the next few days.
So instead of skipping and removing this test, you mark it as expected to fail.
When the fix is implemented and your test starts working again, you will see a message from pytest saying that it was expected to fail but it actually passed. That way, you know that your test has been fixed and you can remove the xfail mark. All those three marks, so skip, skipif, and xfail, accept an optional parameter called reason, where you can write down the reason why a given test is skipped or why it's expected to fail, and this reason will be printed in the pytest report.
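Here is a small sketch of all three marks in one file; the tests themselves are made up just to illustrate the decorators:

import sys

import pytest


@pytest.mark.skip(reason="broken by the hotfix, re-enable after the release")
def test_old_feature():
    assert 2 + 2 == 4


@pytest.mark.skipif(sys.version_info < (3, 8), reason="math.isqrt needs Python 3.8+")
def test_isqrt():
    import math
    assert math.isqrt(9) == 3


@pytest.mark.xfail(reason="known bug, a fix is already in review")
def test_known_bug():
    assert 1 == 2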
|
|
show
|
1:21 |
One of the things that we usually forget to keep up to date is the documentation, mostly because we don't have a tool that will tell us when it's outdated and no longer valid.
pytest can solve part of this problem.
If you put some code examples in your documentation, it can evaluate them and tell you when they are no longer working.
Take a look at this function.
We have documentation that specifies how to use it, but actually the documentation is incorrect. But if we run pytest with the --doctest-modules parameter, it will check for parts of your documentation starting with the triple right angle bracket (>>>).
If it finds any line like that, pytest will treat the following text as Python code, and it will check that the result of this code is equal to the next line in the documentation.
If it's not, it will report an error.
So let's give it a try.
And as you can see, pytest is complaining that there is a problem with the docstring of the add_two_numbers function.
It was expecting 5, because 5 was specified here.
But actually the result of adding the two numbers was 4.
So that's a pretty cool way to test that the code examples in your documentation are up to date.
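As a rough sketch (the function body is my reconstruction, not the exact file from the course), the broken docstring could look like this:

def add_two_numbers(a, b):
    """Add two numbers together.

    >>> add_two_numbers(2, 2)
    5
    """
    return a + b

# Running: pytest --doctest-modules
# reports that the docstring expected 5, but the code actually returned 4.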
|
|
show
|
1:42 |
We saw that pytest offers a lot of functionality out of the box.
But if something is not there, there is a huge chance that there is a plugin that you can use.
There is a website called pytest plugin compatibility that gathers all the projects from PyPI matching the pytest- prefix in the name, which are considered pytest plugins. You can see there are over 800 plugins, but a lot of them are no longer maintained.
They are at version 0.1 and they don't even support Python 3.6.
But there are still plenty of very useful plugins, and in the next lesson, I will show you some of them.
To add a plugin to pytest, you need to install it in the same virtual environment where pytest is installed.
So let me show you how to add a plugin.
I found a silly pytest plugin called pytest-emoji that basically adds emoji to the test reports.
So let's copy this.
Here I have the pytest chapter virtual environment with pytest installed.
To add our plugin, we have to install it in the same environment, and now we can run pytest with the --emoji flag.
So let's see how the output looks without the flag.
I still have this one test failing because I'm using lower case foo, while my test is expecting uppercase foo.
And what happens if we run it with emoji flag?
Tadah, the boring dots have been changed to emojis.
Well, it's a funny plugin.
It's not the most useful one, so let's actually check out the useful plugins for pytest.
|
|
show
|
3:27 |
Let's take a look at some of the useful pytest plugins.
First, we have pytest-xdist, which is a plugin that lets you run your tests in parallel.
Once you install it, you can run pytest -n auto, and pytest will automatically split all the tests across all the CPU cores that you have, and this can significantly speed up your tests.
And if you have some crazy large number of tests that still take too long to run even when using all the CPUs of your computer, you can even use this plugin to execute your tests on remote machines.
Next, we have pytest-cov.
You can use it to generate a test coverage report at the end of each test run.
When you run pytest with the --cov parameter, you will see a summary of how much code is covered with tests in general.
Plus, you get nice, detailed information for each file, so you can see which ones don't have enough tests.
pytest-clarity is a very useful plugin that improves the readability of your pytest output. By default, if you have, for example, a large dictionary and some keys are different than expected, the output from pytest can be quite confusing, but with pytest-clarity, it's much more readable.
By default, when you run pytest, you will see the errors only after all the tests have been run.
But if you want to see them immediately, you can install the pytest-instafail plugin, and when you run pytest --instafail, you will see the errors and failed tests immediately in the output.
pytest-sugar is a pytest plugin that prettifies the output of pytest a bit.
You can see a nice progress bar and, just like with pytest-instafail, the errors and failures are displayed immediately instead of waiting until the end.
So if you're tired of watching those dots, you can install the pytest-sugar plugin and you will have a slightly different output.
If you notice that your tests are starting to get slow, you can install pytest-benchmark.
With this plugin, you can run some benchmarks on your tests and easily find which ones are the slowest.
Do you remember when we talked about monkey patching a few lessons ago? I showed you how to mock Stripe so it doesn't charge a real credit card when you run your tests.
But what happens when you manually test something using a real credit card, and then you forget to remove this code?
Well, that credit card will be charged each time you run your tests, and that can be an expensive mistake.
Or maybe you forgot to change the database settings for your test, and you accidentally dropped the production database.
I mean, those are extreme examples, but they can happen, so you can install pytest-socket plugin, and it will disable all network calls in your tests, so you won't be able to access anything outside of your local host machine.
So no access to stripe API or to your production database.
Last but not least, we have our lovely pytest-emoji plugin.
If you like emoji and find the default output from pytest boring, you can check it out. As we already saw, it replaces the output of pytest with some emoji.
|
|
show
|
1:49 |
I hope that by now I have convinced you that pytest is a great testing framework.
Don't get me wrong.
unittest is not a bad library.
If you prefer to write tests in an object-oriented way, it's perfectly fine to use it.
But pytest has many more features, and in the long run, it's a much better framework to use.
If you're starting your project from scratch, then it's not a problem.
You can start writing your test in pytest and everything works fine.
But what if you have an existing project full of tests written in unittest?
Well, as we already saw, pytest has backward compatibility with unittest, so out of the box it will run your unittest tests.
Now you can either slowly migrate them from unittest to pytest.
Or, if you don't have time, you can leave all the old unittest tests as they are and write new tests in pytest.
Let's take a look at the file that we used to start this chapter. This is a unittest test case that tests some string methods, so we have a class.
We have three tests inside.
If we now go to the terminal and try to run pytest on it, it will work.
Now let's say we want to add another test, but this time with pytest.
The best idea is to just create a new file and add the pytest tests there.
But you can also mix pytest and unittest in one file if you want.
Let's say we want to write a test for the lower method this time. Let's just take this out.
Remove the self parameter and replace the assertion. Back to the terminal.
We run pytest again, and now we have four tests passing.
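To give you a rough idea (the exact test names here are mine, not the course file), the same file can mix both styles like this:

import unittest

class TestStringMethods(unittest.TestCase):
    # the old unittest-style test
    def test_upper(self):
        self.assertEqual("foo".upper(), "FOO")

# a new test written in the pytest style, in the same file
def test_lower():
    assert "FOO".lower() == "foo"

pytest will happily collect and run both of them.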
|
|
show
|
2:02 |
In this chapter, I showed you how to use pytest, the most popular testing framework for Python.
Since we saw a lot of different features, let me give you a short summary.
The most important features of pytest are simple assertions, so you don't have to remember different types of assertions, and everything can be achieved with a simple assert statement.
Also, automatic discovery of tests: writing your first test is easy.
Just add a test_ prefix to the name of the function and put that function inside a file that is also prefixed with test_.
That way, pytest will automatically recognize that it's a test function.
So no need to create a test class if you don't really need it.
Plenty of CLI options: pytest has a very extensive set of parameters that you can pass when you run it in the terminal.
Easy-to-use fixtures: using fixtures in pytest is quite easy.
All you have to do is decorate a function with pytest.fixture, and that's it.
Now you can use this fixture in your tests, and pytest also comes with some built-in fixtures.
You can easily mark tests across all the files; marks are a great way to split your tests into different groups.
For example, if some of your end-to-end tests are slow and you want to run them only on the CI server, you can mark them as slow and exclude them from running on your computer.
Plugins are a true power of pytest.
If some feature didn't make it to the main pytest module, there is a huge chance that there is a plugin that you can use.
There are plugins to run tests in parallel using multiple cores, to generate the test coverage for your code, or even to print the test report using emoji.
And if you want, you can always write your own plugin to further extend the functionality of pytest.
And finally, pytest is compatible with unittest, so you can switch and your old tests will still work.
Then you can gradually replace the unittest tests with pytest versions.
|
|
|
23:54 |
|
show
|
0:42 |
In this chapter, we are going to write and display documentation for our code.
One of the most popular tools for managing documentation in Python is called Sphinx.
It's easy to use and comes with a lot of useful features out of the box. You can, for example, generate documentation in different formats like HTML, LaTeX, ePub, plain text, and so on, and with LaTeX, you can easily generate PDF files.
You can easily create hyperlinks between different functions and modules in your project.
You can automatically generate documentation for your API directly from the docstrings, and you can test code examples in the documentation, just like we did with pytest.
|
|
show
|
1:43 |
To write documentation, we first need to have some code that we can document.
So I have created a simple Calculator class with a few methods inside, and some of those methods have docstrings, so they explain what they are supposed to do.
The add method also has some examples of how to use it, and the other three methods don't have any docstrings.
I also have another file with a reimplementation of some basic math operators.
Just so we have more files that we can use in the documentation.
Here, every function is documented, but some of them are documented with more details and some of them with less.
The first two functions have very solid documentation, with both code examples and parameters documented.
Then I skipped the example for the multiplication, and for division I have just a one-line summary.
We will later see how those different docstrings are displayed in our documentation. If you want to see how this calculator works, we can run it in the terminal.
So I select all the code and run it in the terminal, and now I can create a new calculator instance; 10 is the initial value stored in the memory.
Now we can add five, and since each method returns the calculator instance, we can chain those methods.
So next, we are going to install Sphinx and generate the documentation for this simple folder.
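Here is a rough sketch of what such a chainable calculator could look like; the real file in the course may differ in the details:

class Calculator:
    """A simple calculator that keeps a running value in memory."""

    def __init__(self, value=0):
        self.value = value

    def add(self, number):
        """Add a number to the value in memory.

        >>> Calculator(10).add(5).value
        15
        """
        self.value += number
        return self  # returning self is what makes the methods chainable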
|
|
show
|
3:02 |
To install Sphinx, we're going to run pip install sphinx, but without this -U.
As always, we have to create a virtual environment.
I already have one called documentation-chapter, so let's copy this instruction.
Let's remove the -U; -U tells pip to upgrade the package if it's already installed.
Since we're installing into a fresh virtual environment, we absolutely don't need this parameter.
Okay, Sphinx is installed.
Now we can run the sphinx-quickstart docs command.
This command will ask you a few questions, and it will generate the scaffolding of your documentation.
So it's pretty similar to what cookiecutter does.
First, you need to select if you want to have separate directories for the source code of your documentation and for the generated files; by default Sphinx wants to put everything into one folder.
But just to show you what's the difference between the source and build directories, I'm going to select the separate option.
Next, we need the project name that we're going to use in the documentation, and the author's name.
Then you can specify the version of your project; let's say 1.0.
If you're writing the documentation in a language other than English, you can select a different language code.
I will leave the default English value, and that's it.
As you can see, Sphinx has created four files for us.
The conf.py that stores the configuration for Sphinx, index.rst, which is your first documentation page, and then Makefile and make.bat. make.bat is the makefile for Windows, and the Makefile contains the commands that you will use to build the documentation.
We now have everything set up to use Sphinx, but we don't have any documentation yet.
To actually generate some documentation from the source files, we have to run the make <builder> command, where builder stands for whatever format we want to generate, so it can be HTML, LaTeX, linkcheck, or whatever.
So let's generate the HTML documentation.
Make sure you are inside the docs folder and then run make html.
If there were no errors, you will see this message that the build has succeeded.
Your build might fail, for example, if you're linking to some files that don't actually exist and Sphinx can't find them. In this case, you will see a list of errors and the HTML pages won't be generated.
So now let's go inside the html folder.
And as you can see, we have index.html.
Let's open this in the browser and voila!
This is the documentation for your project.
There is not much going on here yet.
We only have some placeholders that Sphinx has generated.
So in the next lesson we'll actually add some real content.
|
|
show
|
2:01 |
In the previous lesson, Sphinx has generated the index.rst file for us.
So let's open it.
With Sphinx, we will be using restructured text to write documentation.
reStructuredText is similar to Markdown, but it has some custom syntax on top of it.
It's more capable than Markdown because it supports installing extensions.
And you can, for example, easily link to or include other files, which is quite important when you're writing technical documentation.
If this is the first time you are using reStructuredText, check out the reStructuredText Primer page from the Sphinx documentation.
It will give you a quick overview of how reStructuredText works and how you can use it.
In VS Code, by default, we get no syntax highlighting.
We can't even choose reStructuredText here, so we'll have to install an extension. Open the extension marketplace and search for restructured text.
Okay, this is what we need.
Let's install it.
Now let's go back to our file.
So now our extension is asking us how to generate the HTML file from the RST files.
This is needed to get the preview of RST files, so you can see we have this icon here, and when we click it, it will generate the HTML from the RST file.
Let's select Sphinx, because we are using Sphinx.
We don't have the preview engine installed, so we can install it, and in the meantime, you can already see that we have the syntax highlighting.
So this is a comment.
This is the main header of our page.
Next, we have the table of Content directive, which will generate the table of content from the files that we specify.
And then we have the content on the main page.
In the next lesson, we're going to change this file and add some custom files here.
If you want to use Markdown instead of reStructuredText, you can install the recommonmark extension for Sphinx.
|
|
show
|
2:02 |
With this reStructuredText extension that I have installed in VS Code, I can now preview how the reStructuredText will look when rendered, without actually going to the browser.
So when I click this button, this is how this page will look when rendered to HTML.
As you can see, we have some syntax highlighting for Python.
And, let's close it.
And I have also created this tutorials directory with two tutorials.
Although they're not very useful.
This one has just a simple list, and this one has a very long paragraph of some dummy text.
But let's pretend those are actually useful files in your documentation.
Now we have to add them to the index page.
So under the toctree directive, make sure that you add one empty line.
Otherwise it's not going to work correctly.
And here we can add the names of the files to include, so first quickstart.rst and then both tutorial files.
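The resulting index.rst could look roughly like this; the toctree entries are document names, and the tutorial file names here are placeholders:

Welcome to the documentation!
=============================

.. toctree::
   :maxdepth: 2

   quickstart
   tutorials/tutorial1
   tutorials/tutorial2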
Refreshing this page doesn't change anything; we have to regenerate the documentation first.
So we run make html, and there are no errors.
Let's refresh, and as you can see, we have our table of content.
We have the quick start guide.
You can go back and we have the tutorials.
And Google actually thinks it's Latin.
Well, it is Latin.
We also have this simple search.
I have no idea why I get those results here, but okay, well, not very helpful.
So from here, you can continue building your documentation.
You can add more folders like this tutorials folder.
You can add files in the main folder and so on.
In the next lesson, we'll see what to actually include in good documentation.
|
|
show
|
2:03 |
Now, how do we actually write good documentation?
Do we dump everything on one page, or do we split it into separate pages?
If yes, then how do we split it?
If you don't know how to structure your documentation, you can use the following documentation system.
It splits documentation into four categories.
First, we have tutorials.
Their purpose is to teach new users how to use your project.
A good example is the quick start guide that explains how to install all the dependencies of your project and how to get your application up and running on someone's computer.
In our case, a tutorial could explain how to install this calculator module with pip and how to start using it in a Python terminal.
Next, we have how-to guides. They are goal-oriented, and they explain how to do a specific task with your project.
With our calculator, we could write a how-to guide on how to add a bunch of numbers together.
It's not a very useful how-to guide, but I hope you get the point.
Third category is explanations.
They explain how your project works behind the scenes and how different parts interact with each other.
We could explain how the calculator class works.
For example, we can explain that we can chain method calls because we return the calculator instance.
Finally, we have reference category, reference Guides are like a Wikipedia page for your project.
They should describe every part of your application, all the classes, all the functions, all the methods, what parameters they take and what they return.
So all the API documentation falls into this category.
In our case, we could take the API documentation from the docstrings and turn it into a reference guide.
I didn't come up with this classification; I got it from Daniele Procida's excellent talk on how to write documentation.
Of course, if you like to write your documentation in a different way, that's great.
There is no one perfect way to document every project.
But if you don't know how to start, this system is pretty good.
|
|
show
|
4:09 |
In our simple calculator module, we documented some of the functions.
Now it would be nice to display those docstrings in our documentation without having to manually copy and paste them into those RST files.
Luckily Sphinx has a plugin that can extract documentation from your modules, classes and functions.
So let me show you how to do this.
First, we have to open the Sphinx configuration file and add autodoc to the list of enabled extensions.
Next, let's create a new file called api.rst that will store the API documentation.
Inside, we have to specify which modules we want to show here.
So first I want to show the calculator class.
Let's make this a header, and now I need to add the directive that will auto-generate the documentation from that module.
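The api.rst file could start roughly like this, assuming the module is simply importable as calculator:

Calculator
==========

.. automodule:: calculator
   :members: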
Now let's see if this works.
Let's go to the index.rst and let's add the API here.
Let's generate the documentation, and we get an error: no module named calculator.
So let's see. What we put here has to be a module that can be imported from Python.
So when we start Python session, we should be able to import calculator.
Sphinx can't find this calculator module because it's generating the documentation from inside the source directory, and calculator lives outside of this directory.
So what we have to do is we have to go to the configuration file and we have to add the parent folder to the list of system paths.
Here is a short explanation of this problem.
Well, let's uncomment this and make sure we are adding the parent folder.
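The uncommented snippet in conf.py looks roughly like this (it's the path-setup block that sphinx-quickstart generates, pointed one directory up):

import os
import sys
sys.path.insert(0, os.path.abspath(".."))  # so Sphinx can import calculator from the parent folder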
Let's save it and let's try one more time.
Perfect.
Now it's working.
Let's check if we actually see the API documentation here; let's refresh.
And here we have the calculator.
I should probably name it differently, but perfect.
As you can see, we have the documentation for the calculator class.
We have the documentation for the add method, but we don't get the documentation for the other methods.
That's because only the add method has a docstring.
If we add docstrings to the other methods, they will also be automatically displayed in the documentation.
Let's also add the documentation for the math operations file.
I go to api.rst and let's copy this.
If, for example, you want to document only one function from this file, you can copy the name of the function, like I did here.
So let's regenerate the documentation and let's go back to the browser.
So, as you can see, we have the calculator and math operations here, although I'm not happy that they are displayed like that.
So let's change it.
What we need to do is add one higher-level header in the api.rst file.
And now I get a nice link to the API documentation.
And underneath I have the calculator and math operations.
Displaying those docstrings here is pretty cool, but we can actually have the source code included with yet another extension.
|
|
show
|
0:43 |
If we add the viewcode extension to the Sphinx configuration file, we should be able to also see the source code of our functions.
So, let's go here and let's add it.
That's it.
Let's regenerate the documentation.
And let's refresh the page.
You can see we can click here and we automatically get the source code of our function.
We can go back to the documentation.
We can also check the math operations, and it's pretty useful to have this as well.
|
|
show
|
2:41 |
So we have our documentation with some code examples, and I'm not sure if you noticed, but I forgot to change the name add to subtract in those two examples.
So, actually, if you run this line in the terminal, it's going to return 4 instead of 0.
So far, when we were building the Sphinx documentation, Sphinx wasn't checking if those code examples are correct, but we can easily change that with yet another extension.
So back to the conf.py.
And we have to add the doctest extension.
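At this point, the extensions list in conf.py would look something like this:

extensions = [
    "sphinx.ext.autodoc",   # pull documentation from docstrings
    "sphinx.ext.viewcode",  # add links to the highlighted source code
    "sphinx.ext.doctest",   # test code examples with make doctest
]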
Now, to test the code examples in our documentation, we have to run make doctest, and we get 6 failures.
That's because Sphinx is not smart enough to figure out that this add function is the function that we are actually documenting right now, so you have to be very explicit in your code examples.
If you want to use the add function, you have to actually import it first.
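So a docstring example has to spell out the import, roughly like this (assuming the module is called math_operations, as in this chapter):

def add(a, b):
    """Add two numbers.

    >>> from math_operations import add
    >>> add(2, 2)
    4
    """
    return a + b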
Let's go back to our docstrings and let's fix them.
So first we have to import add function from math operations.
The same here.
Let's leave this one for now, and I think we're good.
Let's go to the calculator as well.
Yep.
Same thing here.
I think we're good.
Let's run the doctest again.
So we have 9 tests and 3 failures, and this time none of them is about a missing function or method.
So the first error comes from here.
When we assign an expression to a variable in a Python terminal, we don't get any output.
But I accidentally put the output of Calculator(5) here.
So let's delete this line and then we have a problem with this add function.
So here we have to actually replace the add function with subtract.
Perfect: nine tests, no failures.
The build was successful, so running doctest is a great way to make sure that your documentation stays up to date.
But you also have to remember to explicitly import all the functions and classes that you're going to call in your code examples.
|
|
show
|
0:57 |
Once you finish writing your documentation, you need to figure out where to publish it.
If your project is open source, then the easiest way to host your documentation online is to use the website called Read the Docs. It's free hosting for open source projects, and it supports Sphinx out of the box.
You'll have to log in and connect your GitHub project.
But then Read the Docs will automatically build and publish your documentation each time you update your project on GitHub.
As you can see, you can sign up directly with GitHub or GitLab or even Bitbucket if you're still using it.
And once you publish your documentation, this is how it will look.
Here is an example of the Read the Docs documentation for the requests package.
It looks very similar to our Sphinx documentation from the previous lessons because, well, it is using Sphinx.
|
|
show
|
1:16 |
But what if you don't want to use sphinx?
Maybe you don't like the reStructuredText format, or you're looking for something else.
So another popular tool to manage documentation is called mkdocs.
MkDocs uses the Markdown format and, similar to Sphinx, it has a configuration file, but this time in YAML format.
Otherwise, it works very similarly to how Sphinx works.
So first you have to install it with pip.
Then you generate a new MkDocs project, and inside you just create Markdown files with the documentation for your project.
Then you just need to run the mkdocs serve command, and this will generate the documentation.
You can go to the localhost URL to see it in the browser.
What I really like about mkdocs is how the search feature works.
It's very easy and fast to find whatever you're looking for, you can also customize it and use a different theme if you like.
And if you want to publish your documentation somewhere, Read the Docs also supports MkDocs.
|
|
show
|
2:35 |
Nowadays, a lot of projects are built based on microservices, so there is a chance that at some point in your career you will be building a REST API. A REST API supports a set of standard HTTP requests.
For example, you can send a GET request to get a list of users, you can send a POST request to create a new user, or a DELETE request to delete that user.
When you are building a REST API, you can use a tool like Swagger or ReDoc that will automatically generate documentation for your REST endpoints.
So in your code, you just write the GET, POST, PUT, DELETE, and so on methods, and those tools will automatically create a page that lists those endpoints.
Let's see an example.
So if you go to the Swagger UI website, there is a live demo page, and this is how the REST API documentation could look for your project.
All those things here are endpoints where you can send REST requests, and here you can see which parameters are required and which are optional,
what the response codes are, and so on.
There is even an option to try it out.
So you can edit this JSON and then execute the query.
And here we have the response.
It's the same with ReDoc.
We have the same pet store example, and here we can see the same thing:
documentation about the parameters, example JSON that we can send to our REST endpoint, and the responses that we can get back. If you don't know what a REST API is, all that might look confusing.
But when you actually build one, those tools will be super useful.
For example, if you are working with some frontend developers who are using React or Vue.js and they want to see how they could get data from the backend, you can just point them to those URLs, and they will be able to see all the documentation for themselves.
Some frameworks, like FastAPI, include those tools by default.
So when you build a FastAPI application, you can go to the /docs URL, and there you will see the REST API documentation generated for you, and the same with ReDoc.
With other frameworks, like Django REST framework, you can install them as a plugin. For me, it's super convenient to use a tool like that, because each time I change something in my code, all this documentation will be automatically regenerated for me.
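As a tiny illustration (the endpoint and data here are made up), a FastAPI app gets this documentation for free:

from fastapi import FastAPI

app = FastAPI()

@app.get("/users")
def list_users():
    """Return a list of users."""
    return [{"id": 1, "name": "Alice"}]

# Run with: uvicorn main:app --reload
# Swagger UI is then served at /docs and ReDoc at /redoc, with no extra configuration.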
|
|
|
50:44 |
|
show
|
0:36 |
When you work on a Python project, there are other things that you have to take care of apart from just coding.
Based on what we learned so far, those things could be running tests, rebuilding the documentation each time you change the docstrings in your code, or running some static analysis tools like Black or flake8 to make sure that your code looks good. Those things are probably not as fun as writing code, but you still have to take care of them from time to time.
But since those are mostly repetitive tasks that require you to run the same set of commands, you can automate most of them.
|
|
show
|
2:30 |
One way to automate those tasks is to use a tool like Tox.
Tox is a Python automation tool.
I have seen it most commonly used by developers who create Python packages.
That's because they need to test if their package works with different versions of Python.
Let's say you need to make sure that your Python package works with Python 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, and 3.10. That's seven versions of Python.
If you want to test it manually, it will take you ages.
You would need to create seven different virtual environments, one for each version of Python, then go inside each of them and run the tests.
But you can instead automate all of that with Tox.
You first have to write a configuration file called tox.ini, and in that file you specify which versions of Python you want to use and what commands you want to run.
The most common one is to run pytest to make sure that all tests are passing under each version of Python.
But you can also run any additional command.
For example, you can run a static analysis tool like bandit to make sure that your code doesn't have any security vulnerabilities under any of those Python versions.
Once you write this file, you just run the tox command and check if there are any errors. Here we have a very simple example, but tox is actually quite powerful.
One of the really cool features is that you can specify a test matrix.
So you can say that you want to test different versions of Python, but also, for each version of Python, you want to use different versions of a specific dependency.
For example, here we say that we want to use Python 3.6, 3.7, and 3.8, and then for each of those Python versions, we want to use Django 2.2 and Django 3.
And then for each of those Django versions, we want to use SQLite and MySQL as the database engine.
This one line of configuration creates 12 different testing environments.
The first one will have Python 3.6, Django 2.2 and SQLite.
The second one will have Python 3.6 and Django 2.2 as well, but this time we will use MySQL, and so on. Once we have all that prepared, tox will run tests inside each of them. Trying to recreate and maintain this kind of setup by hand would take you a lot of time.
And to be honest, it would be boring as hell.
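As a hedged sketch, that matrix can be expressed with tox's factor syntax roughly like this; the version pins and the DB_ENGINE variable are only illustrative, not the exact file from the slide:

[tox]
envlist = py{36,37,38}-django{22,30}-{sqlite,mysql}

[testenv]
deps =
    pytest
    django22: Django>=2.2,<3.0
    django30: Django>=3.0,<3.1
setenv =
    sqlite: DB_ENGINE = sqlite
    mysql: DB_ENGINE = mysql
commands = pytest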
|
|
show
|
0:29 |
I want to show you how to use Tox.
And for that I need to have some code to actually run and test.
So I will reuse the simple calculator from the previous chapter.
I've added the file with some tests, so we have three tests that we can run.
And one thing that is important about this code is that I am using an f-string in this function, and f-strings were introduced in Python version 3.6.
So you will see that when we run it on an older Python version, this code should fail.
|
|
show
|
6:59 |
First, we need to create a tox.ini file, so let's go to the documentation and copy the basic example. What this code does is create Python 2.7 and Python 3.6 virtual environments, install pytest, and then run the pytest command.
Since I don't want to use Python 2.7, let's remove it and let's use a more up-to-date version of Python.
Let's see if this works.
So first we have to actually install Tox.
I am inside of virtual environment, but you might as well use pipx to install it globally.
I will use pip. Let's run the tox command, and we get an error. The thing with tox is that it requires you to have either a pyproject.toml or a setup.py file.
If you're not building a Python package, then you probably don't have a setup.py.
And when you manage your dependencies with a simple requirements file, you might also not have the pyproject.toml.
So, instead of creating a dummy file just to make this error go away, we can set a configuration option to tell tox that we are not going to use any of those files. I don't actually remember the name of this option, so let's search for it.
Yeah, here it is.
It's skipsdist, and we have to set it to true.
Let's try one more time.
Cool.
So, now we have a different error.
So, the thing with tox is that you have to first install the different versions of Python if you want to use them.
And my computer doesn't have a python3.6 or python3.8 command, because I'm using pyenv.
The easiest way to make tox work with pyenv is to enable multiple local Python versions.
But before we do that, I need to make sure that I actually have Python 3.6 and Python 3.8 versions installed.
So let's see the list of all Python versions installed with pyenv.
So I have Python 3.8, but I don't have Python 3.6, so let's quickly install that one.
Let's use the latest version.
This is going to take some time.
You can really go grab a coffee in the meantime, so now we have both version 3.6 and 3.8.
If we switch to version 3.8, we have the command Python 3.8.
If we switch to version 3.6, we have the command Python 3.6.
But, for example, we don't have a command Python 3.8 when we are using Python 3.6.
You can solve this problem by using a plugin like tox-pyenv.
But there is a much easier solution.
Let me go back to my global Python 3.9.
We can now run pyenv local and specify two versions of Python, and both versions will be available locally.
And now Tox should work.
As you can see, first tox is creating a Python 3.6 virtual environment, installing pytest, and running pytest.
Then it creates a Python 3.8 environment and again installs pytest inside and runs our tests.
And we got a success message, meaning that tests are passing in both Python 3.6 and Python 3.8.
Let's add Python 3.5 to the mix, and that way we will see that our tests will fail because f-strings should not work.
So, let me go back to the tox.ini.
Here we add py35, and again I have to install Python 3.5.
Let's add Python 3.5 to our local list of Pythons.
We could run the pyenv local command again, but all it does is create this .python-version file, so we can go ahead and modify it directly.
And here we just add our new version of Python.
If we run tox again, this time it should run with three Python versions.
So a few things happen.
First, tox had to create a new environment for Python 3.5, and then it tried to run the tests.
We have the syntax error because f-strings are not supported.
So, that's good.
And then for both Python 3.6 and 3.8, we already had the environment created, so it was much faster to run tests again.
So we know how to run tests.
But I told you that you can actually run any command you want with tox. So, let's also add Black and flake8 to our configuration to make sure that the code looks nice and doesn't have any errors.
And we also need to add black and flake8 to the list of our requirements. We could pin a specific version of each of those tools, but I want to use the latest one, so I will leave them without a version specifier.
And now let's run our tox command.
Ah, one more thing, I forgot to remove Python 3.5.
I removed Python 3.5, so we can keep using the f-strings.
All right, so pytest, Black, and flake8 are all happy with our code.
Let's just go back for a moment and make sure that if I violate some flake8 rules, they will be picked up by tox.
I'm going to import a module and then not use it; flake8 should complain about that.
And as you can see, it's complaining that choice was imported but not used.
So, that's how you would use tox to automate some mundane tasks on your computer, like running tests or running static analysis tools.
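Putting the whole lesson together, the final tox.ini looks roughly like this; it's reconstructed from the steps above, and running black with --check is just one way to do it:

[tox]
envlist = py36,py38
skipsdist = true

[testenv]
deps =
    pytest
    black
    flake8
commands =
    pytest
    black --check .
    flake8 .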
|
|
show
|
3:50 |
Another tool that can automatically run some checks on your code is called pre-commit.
Its name actually comes from Git.
When you open any git repository, you will see a folder called hooks, so git has a set of hooks that can execute some scripts in different situations.
For example, before you try to push your code to the repository after you pull your code or before you create a new commit.
If you open any of those files, you will see that they're full of bash scripts, and they also contain instructions on how to actually turn them into working hooks.
You have to remove the .sample from the end of the file and then git will execute this file in a specific situation.
But we are not going to write shell scripts by hand.
We're going to use a tool called pre-commit.
This tool lets you automate what checks you run on your code when you create a new commit in your git repository.
So first you have to install it with pip or pipx.
Then you need to create a configuration file.
In this file, you specify what tools you want to use or what commands you want to run.
All those tools are called hooks.
Then you install those hooks by running pre-commit install command.
And, as you can see, this will create a pre-commit file inside our git repository, and then you are set.
Those hooks will be automatically run when you create a new commit.
Also, when you're running pre-commit for the first time, you can optionally run it on all the existing files.
A very popular way of using pre-commit is to run some tasks that will format your code to make sure that it matches the code style that the rest of your team is using.
So, you might set up plugins like black or isort that will check if your code has the correct style and correctly sorted import statements.
If it doesn't, they will prevent you from creating a new commit, and they will modify your code.
Then you can inspect your code to see if everything still looks fine, and then you try to commit again, and this time it should be successful.
Some other plugins, like flake8, will report errors, but they won't automatically fix them, so you have to fix those errors yourself and then try to commit again.
You can also run tests, but unlike Black or flake8, running tests usually takes longer than a few seconds.
So, I usually don't run tests in my pre-commit hooks because it takes too long.
I run my test by hand, either using Tox or directly using pytest.
And then I configure my CI server to run tests too. You might be thinking: why should I run Black and flake8 using pre-commit if I already configured my code editor to run them when I write code?
That's because using pre-commit standardizes the settings for Black and flake8 between all the people that you work with. You might have configured flake8 in your VS Code,
but your colleague might have a different configuration, or someone doesn't know how to configure their code editor at all.
So, they don't use Black, and they write their Python code in whatever fancy style they like.
Pre-commit uses a single configuration file that you keep in the git repository of your project, and once you set it up, you can forget about it. If your code is correct, then pre-commit will allow committing it to the repository.
If it's not, it will complain.
So, it's an automatic gate that prevents people from committing code that doesn't meet the standards of your team.
That way, you don't have to waste time during code reviews arguing about incorrect code style or some easy-to-spot problems that can be found with flake8.
|
|
show
|
5:34 |
Let's add pre-commit to our calculator project.
First, I need to make sure that I have pre-commit installed, and now we can generate a sample configuration file by running pre-commit sample-config.
Okay, so this doesn't actually create the file.
We have to create the file ourselves.
Let's do this in the VSCode and let's copy this.
So, what do we have here?
We have a hook to remove trailing-whitespace.
This is good.
And the end-of-file-fixer; this is also probably good because I think this will fix the file endings depending on whether you're using Linux or Windows.
check-yaml: we don't really need this, and we don't need that one either.
What I want to have is Black and flake8.
So, let's search for those hooks.
For Black, we can actually get it from this example.
And to find Flake 8, we have to search in the list of all available hooks.
Let's go here.
So here is a page that contains all the hooks that you can use in your code, and it's actually quite huge.
So let's search for flake 8.
So here it is, but it says to use the pycqa flake8 repository on GitLab instead.
So let's do that.
So, let's just copy this line. And I don't remember what the latest version of flake8 is.
So let's actually check.
Let's copy this. Okay, 3.8.4, and hopefully that should work.
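The assembled .pre-commit-config.yaml looks roughly like this; the rev values are just the versions that were current at the time and will differ for you:

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/psf/black
    rev: 20.8b1
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.8.4
    hooks:
      - id: flake8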
What's next?
Next, we have to run pre-commit install to set up all the git hooks that we just configured, and that's it.
Let's try to create a new commit.
We have some changes, so we can commit them.
And here you can see some differences between using git without pre-commit and with pre-commit.
As you can see, it's now setting up all the pre-commit hooks that we have just configured.
It might take a moment.
Let's add the message.
And for some reason, all the hooks were skipped.
I think it's probably because this was the initial run.
So, let's modify something in the files and try to create a new commit; let's comment this thing out.
Okay, that's interesting.
And now our hooks were finally run.
As you can see, both the Trim Trailing Whitespace and Fix End of Files have failed, and they have modified some files.
So if we do git status, you can see that two files were modified.
We can see what changes were made by the pre-commit hooks by running git diff.
It was mostly adding a newline at the end of those files. Well, let's add them again, and let's commit.
And now everything is passing. Great.
Now, let's make some modifications that will make black fail.
Let's remove some white spaces here and there, and let's try again.
This time, Black is complaining.
But as you can see, it has reformatted the file, and we again have to add it by hand.
If we go back to the file, you can see that we have our white spaces back.
Let's now try to make flake8 unhappy by importing a module and not using it.
So, again, Black has detected some issues, but it has automatically fixed them.
However, flake8 has detected some problems, and it can't fix them on its own.
So we have to actually go back and remove this line.
And now again, all our hooks are happy and we can create a new commit. That's how you would use pre-commit in your project. As you can see, we have this pre-commit config file.
So, when we push it to the git repository, all the other team members on your project can download it and use it on their own local computers. Let's talk about tools that you don't have to run on your local computer in the next lesson.
|
|
show
|
1:30 |
Both tox and pre-commit require you to do some manual work to use them.
You have to run tox manually, and you have to set up pre-commit to use it with your local git repository, which is great because everything happens on your computer and you don't really depend on any external services.
But if you work with a team of people, there are always some problems with this approach.
Some people forget to run Tox.
Others don't use the latest pre-commit configuration, so they use an outdated set of checks, and stuff like that.
So a different approach is to run checks on the server that stores the code from every team member, on servers like GitHub and GitLab.
Basically, each time someone creates a new pull request or merge request, you run the same set of checks that Tox or pre-commit would do.
If they pass, you accept that pull request, and if not, you report an error and you ask that person to fix it.
This whole process is called continuous integration, and, as you might have guessed, it can happen mostly automatically.
Two popular tools to implement continuous integration are GitHub Actions for GitHub and GitLab CI for GitLab.
But there are also many other paid external tools that you can use.
You can even set up your own CI server using an open source tool like Jenkins.
But in this course, I will focus on GitHub Actions and GitLab CI, since they are free and built into the most popular code hosting platforms.
|
|
show
|
3:02 |
GitHub actions is a continuous integration tool built into GitHub, and it's pretty easy to set up and use.
All you need to do is to create a configuration file with a few settings, first a name that will be displayed in the GitHub interface, and then you need to specify when this task should run.
In this example, it will be run each time someone pushes a new commit to this repository, and finally you define jobs that you want to run.
In this case, we have one job called Super Lint.
Since jobs run in containers, you need to specify what image you want to use. In this case, we want to use the latest Ubuntu image.
I will talk more about containers in the last chapter of this course where we will see how to use Docker to deploy our application.
Each job will run a set of steps, so under the key steps, you need to define what your job will do.
As you can see here, in the first step we check out the latest version of the code, and in the second step we run the Super-Linter command.
Both steps use a predefined action, so a plugin that was written by someone else.
checkout is an action that is built directly into GitHub Actions, and Super-Linter comes from GitHub.
You can search for this name, and that way you can see some examples of how to use it.
As you can see, this tool is a combination of various linters that you can use in different programming languages.
So this simple GitHub action will check out our code and run linter on it.
If there are some errors, it will report them, and this job will fail.
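A minimal version of that workflow file could look something like this; the version tags on the actions are illustrative:

name: Lint Code Base

on: push

jobs:
  super-lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Run Super-Linter
        uses: github/super-linter@v3
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}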
Of course, you can build much more complicated workflows where you run some custom commands.
For example, if you're working on a Django website, you will need to set up a database before you can run tests, and you're not limited to running tests or linting your code.
You can execute any type of a shell command.
You can install Linux packages when you use a Linux container and then run tools from them.
You can, for example, create a file in one of the steps and use that file in another step. You could probably even run some cryptocurrency mining script, although it would quickly get you banned, so please don't do that.
But my point is, you can build as complicated a pipeline as you want.
You can even deploy your code to some test server so you can then check manually that everything works fine.
Or, if you're not scared by automatically deploying stuff, you can deploy your code to production each time a new commit is merged.
This practice is called continuous delivery, and it sometimes goes hand in hand with continuous integration.
But I personally don't trust it enough to automatically deploy things to production.
|
|
show
|
10:14 |
Let's see GitHub actions in action.
I have pushed my calculator project to GitHub, and now I want to add some GitHub actions to it.
Setting up a basic workflow is quite easy.
You can go to the Actions tab and select the workflow most suitable for your project. As you can see, GitHub has automatically detected that this is a Python project, and I have some suggestions at the top.
But down there I have even more options.
So, here we have to select which workflow we want to start with.
Don't worry if none of them looks perfect, you can always edit it by hand.
So, just like with a cookie cutter template, select something that looks decent and then we will modify it.
Let's go with the Python application.
Now we have a chance to modify this configuration file and then we can click, start commit to commit it to our repository.
Name is how this GitHub Action will be displayed in the interface, so we can change it to whatever we want. Next, we have to define when we want to run this action.
With this configuration, we will run it each time we push a new commit to the master branch, and also each time we create a new pull request to the master branch.
And that's actually what I want to have.
So let's leave it like that.
Next we specified that we want to use the latest version of Ubuntu image and then we have our steps.
In the first step, we check out the code, then we set up Python 3.8.
If you want, you can specify multiple Python versions, but 3.8 is good enough.
Then we install some dependencies.
So first we install and update pip, then we install flake8 and pytest that we're going to use later.
And then, if we have a requirements.txt file, we also install the dependencies from that file.
So again, this is a very good setup, so I'm going to leave it.
Next, we run flake8 twice, each time with different arguments.
I will leave it for now as it is, to see how it works.
And then in the last step, we run pytest.
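For reference, the workflow we just walked through looks roughly like this; it's a reconstruction of GitHub's Python application starter workflow, so your generated file may differ slightly:

name: Python application

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: |
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: pytest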
So actually, this is a very good setup, so let's use it.
And once we're happy with it, let's add Black later on.
So I click Start commit, I add a commit message, and we're all set.
Let's go back to our code.
Let's modify some files and let's try to create a new pull request to see that our GitHub action is working.
So first, let's do git pull to get the latest code from GitHub.
Now let's go to the code editor.
And let's, for example, modify one of the tests.
Let's change it, so it fails.
Now let's try to commit this code.
As you can see, I'm not running Tox, and I have pre-commit disabled.
So nothing stops me from pushing a broken test to the code repository.
When we refresh the repository, you can see that there is this yellow dot suggesting that there is something happening in the GitHub Actions.
When we click it, you see that we have one check in progress.
Let's open the details, and it's broken; our GitHub Action has failed during the pytest phase, and here we have the error message.
You can click around to see the interface, and once you're done, you can go back to the code and you can fix it.
This time, let's create a new branch and let's open a pull request with this new commit.
So we have our first pull request.
As you can see, the GitHub Actions are still running.
It should take around half a minute.
We can click the details.
Okay, Now it's starting.
So as you can see, those are all the steps that we set up.
So, first we check out our code.
Then we set up the Python version.
Next, we install dependencies.
We lint with flake8.
We run the tests, and now we have this green check, which means that we have just fixed our test. If we open it,
there is not much happening here because there were no errors this time, so we can go back to our pull request.
You can see this green check mark here, which means that the GitHub actions are passing.
So now we know that we can safely merge this pull request and it's going to fix our tests.
Great.
So one last thing that I want to set up is to use Black to format our code.
Let's do this directly in the code editor.
Let's open our GitHub workflow configuration file, and we have to figure out how to add Black.
So let's go back to the browser; the best place to search for available GitHub Actions is the GitHub Marketplace.
So here we can search for Black; you can see I already searched for it in the past.
And there are some results.
None of them seems to be the official Black action from the PSF.
So, I will just select the first one, and this one looks pretty easy to use.
So, actually, if we want to run it with default settings, we just need this one line.
We add a new step here.
Actually, I can skip the name and I can do this like that.
Let's actually run Black before running the tests.
Okay, let's see if it works.
Okay, just this one file.
I'm in the wrong branch.
Okay, I can push it directly to master.
I shouldn't have done git push first, but I'm just too lazy to untangle this mess.
Okay, We should be set up.
Let's see if our GitHub actions are still working.
This might take a bit longer because we just added a new plugin, but in the future, GitHub should use the cache, so it's going to be faster.
Okay, everything looks fine.
So one last thing.
Let's go back to our code and let's try to break it, so Black has something to complain about.
Again, let's remove some whitespace.
That's the easiest way.
That's what happens when you don't set up your git repository correctly.
As you saw, it took a moment before the GitHub actions kicked in.
So, we saw this green button saying that it's ready to merge.
But after a few seconds, it actually disappeared and our GitHub action started.
So, let's wait for it to finish.
Let's see the details, so we actually have something to look at.
So using Black as a separate action seems to be kind of slow, because it takes one minute to set it up.
So maybe a better idea would be to go back to your GitHub Actions file and, in the install dependencies step, just add black here, and here you just run the black command,
just like flake8 does.
Let's go back to our GitHub Action to see. And it failed.
For some reason I don't get the output, but when you refresh, you can see the full output from this command.
And then Black says one file would be reformatted, two files would be left unchanged.
And we have this huge red icon indicating that there is something wrong with this build. So now you know that you actually have to go back to this pull request and fix it.
I will leave this task to you because this lesson is already becoming quite long.
|
|
show
|
0:40 |
GitHub Actions and other types of CI can be harder to set up than a tool like tox or pre-commit.
They run on someone else's server.
So, if something goes wrong or if you misconfigure it and it doesn't work properly, you have very limited debugging possibilities.
If the error message doesn't tell you what's wrong, you can't just log in to GitHub's servers and poke around, so you might need help from your DevOps colleagues to sort it out.
But it's definitely worth spending time to set it up, because from now on it will work for your whole team, no matter how big this team is.
|
|
show
|
0:36 |
If you are using GitLab instead of GitHub, then you can use the GitLab CI instead.
The whole idea here is very similar to GitHub actions.
We have a pipeline that contains different stages, and those stages can contain different steps.
So, again you have to write a configuration file where you will define those different stages of the build.
And then, for each of those stages, you will define a set of steps to perform.
If any of the steps fails, you get an error and the pipeline stops.
Let's see a real world example.
|
|
show
|
11:29 |
This time, we'll set up a GitLab CI pipeline for our calculator project.
So again, I have a GitLab repository with just four files, and we don't have any GitLab CI set up. One way we could set it up is to click this Set up CI/CD icon,
and this will generate a .gitlab-ci.yml file, which is the configuration file for GitLab CI. By default, it's empty, but we can search for Python here, and this will add a lot of different options.
So as you can see, we could use the latest Python version,
define the cache for pip so it will speed up our builds, then install virtualenv, set it up, source it, install tox and flake8, and run tox under Python 3.6.
Then we could build a binary wheel distribution of our application, which makes sense if you're building a PyPI package but doesn't really make sense if we just have a few Python scripts, and then in the final step, we would be building the Sphinx documentation.
So all that is cool, but we don't really need most of those steps. We don't need a binary distribution because we are not building a Python package, and we don't need the Sphinx documentation because, well, we don't really want Sphinx documentation for now.
So, instead of using this, let's go back to our code and let's write a simple configuration for GitLab CI from scratch.
So I'm going to discard those changes.
And let's go to the code editor. Here we need to create a new file called .gitlab-ci.yml; I think that's the correct name.
And now let's copy and paste some code here, so I can save you from watching me type all this stuff.
Okay, so what's happening here?
First, we define which image we want to use.
I want Python 3.8, because that's the same version I used on GitHub.
And then I define three stages: first build, then static analysis, and then test, and each stage can contain different steps.
So the first step is install, and it belongs to the stage called build. Here we simply make sure that we have an updated pip version, and we install flake8, black, and pytest.
This first step of updating pip is not really necessary, because we are using a Python image and it should already have a recent pip version.
You might need this step if you are using, let's say, an Ubuntu image, but I still have it here to show you how you can execute multiple commands per step.
Our stage called build will just install all the dependencies.
Next, we have a stage called static analysis that has two steps: first we want to run flake8 and then we want to run black. Pretty simple. And when all that is successful, we want to run our tests.
So we define a pytest step belonging to the test stage, and here we just run pytest.
Okay, so let's commit this file and see what happens.
As you can see, we have this icon here, which means that there is a pipeline running.
So Gitlab has detected that we have a CI configuration and it's trying to use it.
And here we have a nice UI that shows the different stages and the different steps for each stage. We can go inside the "install" step to see what's happening.
The first run is usually the slowest because it's not using any cache, and we have this message saying that the job was successful.
So let's go back and let's refresh.
All right, so this is green now.
Those two are running in parallel.
They should be very fast.
And we have a problem because black is not found. Actually, both steps should fail.
Yeah, and as you can see, pytest was not run.
So what went wrong here is that I forgot that whatever we build in this build stage is not going to be preserved for the static analysis or for the test stages.
So we have to modify our configuration file.
We have to take those two lines, and we can either put them at the beginning of each script, or we can add a config option called before_script and put them there. Let's go for that.
Whatever you put inside before_script will be executed before each of the scripts: before flake8, before black, and before pytest.
And that way, we can actually get rid of the install step and the build stage, because we are not using them anymore.
So, one more time, we have another build running, and this time we no longer have this build stage.
Let's check out black.
Great, This time the job was successful.
Let's check out the other job.
Not here, here.
So both Black and Flake 8 were successful.
And now the pytest is running.
So again, GitLab CI is pulling the Python 3.8 image.
Then it will run pip install flake8 black pytest, and then it's going to run our tests.
This pipeline is quite inefficient because we don't need flake 8 and black in our test stage, but it's much easier to write it like that, and the job was successful.
If we go back to the main page, we can see that now we have this green mark, saying that the pipeline was successful great.
So, as always, let's go back to our code and try to break something. First, we're going to mess with black again: I remove some whitespace, and let's also break the tests.
While we're here, let's switch to a new branch and let's create a new merge request.
If we refresh, we get this, create merge request button and we can submit it.
Now we can see there is a pipeline running.
So first the static analysis stage is running. If we go here, we can, for example, go to flake8.
Actually, the Flake 8 will be successful.
So let's go to Black.
Let's go to jobs, and here is black, and as you can see, it failed.
Let's open the pipeline.
It's actually flake8 that failed, not black.
That's interesting.
Flake8 is failing for the same reason that black would complain about: it's expecting two blank lines, but we have zero.
So let's fix only this part.
Okay, so our static analysis tools should be happy.
But we still have the failing test, so the pipeline should fail.
Let's add a new commit to our branch.
Okay, I made a typo that's going to bother me, so let me fix it before we push. This will start another pipeline.
Let's go back to our merge request, and we have the pipeline running.
So let's wait for a moment.
All right, so the static analysis stage is happy, but our tests are failing.
So finally, let's go back and fix them.
So we have 5 + 15.
That's 20, 20 - 15.
That should be 5.
Okay, One last run of our pipeline, but this time everything should be fine.
And now everything is green, so we can go back to the merge request and actually merge it.
This pipeline that we just built is very simple.
But if you're looking for some more complex examples, you can go to the GitLab CI examples page and search for Python.
So, here we have Python on Heroku, and here we have a standard Python example.
So, the first one can be used to automatically build and deploy your code to Heroku.
And the second one is actually the same example that we have when we try to create a CI configuration from the Web interface.
So, as you can see, it will run tests using Tox and then generate documentation with Sphinx for every commit that you push to the master branch. You can use a setting called artifacts to preserve some files after the pipeline is finished.
In this case, it keeps the Sphinx documentation that was just generated inside the public folder.
|
|
show
|
3:15 |
So, we have tools like Tox to automate some tasks on your computer.
We have pre-commit to automatically run some checks before you commit code.
And finally, we have CI tools like GitHub Actions and GitLab CI.
Do we use all of them?
Do we use each of them in a different situation?
What's the rule of thumb here?
Nothing stops you from running all of them.
Each tool requires a different amount of effort, and their feedback can span from instant to whenever the build finishes.
Tox can be run any time on your computer, and it gives you immediate feedback.
It's especially useful if you need to make sure that your code works under different versions of Python.
So it's the best friend of people writing Python packages. You set it up once, and then you have an easy way to test your code under various Python 3 or even Python 2 versions. pre-commit is mostly run when you finish writing a piece of code and you want to add it to the git repository.
I mean, you can run pre-commit in your terminal whenever you want, but people mostly set it up as a pre-commit git hook and use it like that.
And just like with Tox, you get instant feedback.
So you know if your code meets all the requirements imposed by your team and if it can be merged. A continuous integration service is the easiest to use, because you don't have to set up anything on your computer.
Usually, when you work in a team, someone will set it up for you in the git repository, and then it will automatically run on everyone's code.
But here, the feedback time depends on when the server finishes processing your code. Especially since the free tiers often offer only one concurrent build, if a lot of people submit pull requests to the repository at the same time, the CI server will run the checks one by one.
So it might happen that you will have to wait for a few minutes or even a few hours, depending on how big your team is.
So if I were to choose only one tool from those three, I would go for the CI server.
Setting up tools on your computer can be error-prone: someone might not know how to properly set up pre-commit, or might not have all the dependencies for Tox, like the additional Python versions. A CI server doesn't require any setup on anyone's machine.
When a new person joins your team, there is nothing for them to set up. You just sit down, decide what tools you are going to run on your code so everyone on your team is happy, set up the CI, and then let it run.
Everyone will get automatic feedback when their commit has a problem. That way, not only does everyone have to follow the same rules, but it also speeds up the code review process.
A lot of low-hanging fruit, like failing tests or incorrectly formatted code, will be reported automatically by the CI tools.
Of course, if you're not working as part of a team, but it's just you writing the code, then setting up a CI server might be an overkill.
If all you need to do is run tests and maybe run black on your code each time you create a commit for your hobby project, then using a tool like Tox or pre-commit is perfectly fine.
|
|
|
44:21 |
|
show
|
0:37 |
In the past 10 chapters, you learned a lot about different tools that you can use when building Python projects, and now it's time to put all this new knowledge into action.
In the next three chapters, we will be building different types of Python projects.
We will start with a CLI, so a Python program that you can run in your terminal.
In the next chapter, we will take that CLI, turn it into a Python package and publish it on PyPI.
And then in the third chapter, we will build an executable application that you can send to someone, and they will be able to use your project even if they don't know anything about Python.
|
|
show
|
0:40 |
One popular family of programs is command line tools, that is, programs that run in your terminal. Even though we are used to clicking things in a graphical interface or in a browser, CLI tools are still very popular, especially among programmers.
A perfect example of such a tool is Git.
There are many different graphical interfaces to manage your git repositories, but a lot of people prefer to run commands in the terminal.
Other good examples of CLIs are black, flake8, and pylint.
You can run the black command in your terminal, and it will format all Python files in the current directory according to the PEP 8 rules.
|
|
show
|
2:28 |
You are probably wondering now what kind of program we're going to build.
Is it going to be a yet another calculator?
No.
We are done with writing calculators in Python.
Let's actually build something useful.
I've decided that I'm going to show you how to build a tool for uptime monitoring.
Uptime monitoring means checking at regular intervals that a specific website is up and running.
You can use it, for example, when you have a personal website on the Internet and you want to get a notification when it goes down, so you know that you need to go and fix it. Some of the existing uptime monitoring tools are, for example, Pingdom or a free alternative called UptimeRobot.
So if you have never seen an uptime monitor, you can go check them out to see how they work.
But basically, they give you an interface where you provide which URL you want to monitor, and the tool will check every few minutes or every few seconds if your website is still running.
Then you can get, for example, an email notification, a Slack notification, and so on. There are even applications that will send you a push notification.
Our uptime monitor will be simple.
In the first version, we want it to accept a URL, send a HEAD request, and report what status code was returned.
A HEAD request is just like a GET request, but when you perform a HEAD request, the server doesn't actually return the body of the page; it only returns the headers and the status code.
And since we only want the status code, we don't need the full content of the page.
So using a HEAD request will make our checks a bit faster, because less data will be traveling through the network.
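If you want to see what that looks like in code, getting just the status code with the requests library (assuming it's installed) is a one-liner:

```python
import requests

# HEAD returns only the headers and the status code, not the page body.
response = requests.head("https://example.com")
print(response.status_code)  # e.g. 200
```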
And once we get that response, we will display it using some colors in the terminal.
A status code of 200 means everything is fine, so it will be green.
A status code in the 3xx range means a redirect, so we will display it in yellow, just to give you a hint that this URL should probably be updated, even though your website is still accessible.
And any status code that starts with the digit 4 or 5, like 404 or 543, means an error, so we'll display those in red.
Once we have that, we will extend our tool so that it can accept multiple URLs and keep pinging them at regular intervals.
That way, we can constantly monitor multiple URLs, just like a proper uptime monitor would do.
|
|
show
|
2:04 |
We are not going to write everything from scratch.
Now We will use a few existing Python libraries to make it faster.
The most important choice to make is which Python CLI library we will use. We could decide not to use any, but those libraries are actually very helpful.
They let you easily turn a simple Python script into a CLI application by adding a few decorators here and there.
I have made a very conservative choice, and I will use one of the oldest CLI libraries out there.
By oldest, I mean, it was one of the first Python CLI libraries created.
Not that it's no longer updated.
Click is doing perfectly fine, and I often use it when I need to create a CLI tool.
There are many other alternatives, like argparse, fire, or typer, so if you want an additional challenge, you can follow this chapter using a different library than Click.
Next, we'll use the requests library to send our HEAD requests, and we will also use Colorama, which is the most popular Python library for adding colors to your terminal output.
If you know the Click library a bit, you might be wondering why we are adding Colorama if Click already has support for colorized output. That's because it's needed for people who are using Windows.
If you go to the documentation, you will see that if you are on Windows and you don't have Colorama installed, you probably won't get colorized output.
So, just to make sure that my code works on everyone's machine, I will install Colorama.
And then, of course, I will use Pytest for testing and sphinx for documentation.
I will also use poetry as our project management tool.
It might be an overkill for such a small project, but I just want to show you how you can use it when building a Python project.
In the next chapter, we won't use Poetry, so you will have a comparison of how to work with and without a project management tool.
Okay, let's start coding.
|
|
show
|
13:35 |
So I start with an empty folder, and as you can see, I already have Poetry installed.
To start the project using Poetry, we would have to run poetry new and the name of the project.
I'm going to call my uptime monitoring tool "uptimer", but before we do that, let's talk about where Poetry stores the virtual environments.
By default, Poetry stores the virtual environments outside of the project folder.
If we want Poetry to store them inside the project, we can modify the Poetry configuration and set the virtualenvs.in-project option to true. So let's do that.
If you see that error message, it means I used the wrong config option; it should actually be without the "settings" prefix.
Okay, now we are good.
So now we can run poetry new, and Poetry will create the virtual environment in the current folder.
Later, when we use VSCode, it will automatically detect this virtual environment, so it will be more convenient to work like that.
Well, let's create a project.
And now let's add our dependencies.
I want to use the latest version of each of those tools.
So I'm not providing a specific version, and we get an error because Poetry has already added pytest to the project.
So let's fix that: let's go to the pyproject.toml and remove pytest from there.
And let's rerun.
Okay, we have all the dependencies installed and pinned.
Now we can open VSCode and start writing some code.
The default convention with Poetry is that inside your project, you will have a folder with the same name as your project, and that folder is the place where you should put all your Python files.
Well, let's go inside the uptimer.
We have the __init__.py file that tells Python that this is a Python package.
Inside, we only have the version number, so it's not very important.
Let's actually go one folder up and open this folder in the VSCode.
Okay, so, inside our uptimer, let's create a file called uptimer.py, And this is where we are going to store all the code for our application.
First, we need a function that will send a HEAD request and return the status code.
Let's write the standard Python snippet that will execute some code when you call this file with Python, and let's provide some random URL; I know at least one that should work. We get some errors because we have incorrect indentation, so we can run the "Format Document" command.
And now we are good.
To test if this code works, we have to actually print something.
Let's run this code in the terminal to see if this actually works.
Perfect.
So we got a status code 200 which means that my modernPythonProject website is still up and running.
Let's change it so that it can accept a URL as an argument.
Let's write a new function that will check a given URL.
Now we have to mark this check function as a Click command, which we can do with a decorator.
And of course, we have to import click.
We also have to tell Click that this URL parameter should be provided by the user, so we add a click.argument decorator, and that should work.
Let's try again. If we don't provide an argument, we get this error message.
So let's specify the URL here.
Let's add colorized output now.
Since the status is an integer, we have to check whether it falls in the 100, 200, 300, 400, or 500 range.
One way to do this is to write a nested if statement, so let's start checking from 200.
First we check if the status divided by 100 and rounded down equals 2; this will be true for all the numbers from 200 to 299, so that's what we need.
So if the status is in the 200 range, we want to print it in green.
Let's make sure that still works.
Perfect.
We got some colors, so let's take care of the other statuses. And just in case we get a status that doesn't match any of them, let's also display it with a different color.
We can either go and try to find a website that is broken, or we can use this website called httpstat.us; it contains example URLs that return different response codes.
So let's take one with the status code 301. Okay, that's not very useful.
Copy link address.
I'm using 302, but it doesn't matter.
Perfect.
It's yellow.
Let's try a broken page.
Okay?
And now 543 Cool.
So it's working.
But what happens if we provide a URL that doesn't even exist? I really hope no one will go and register this domain to break my code.
So please don't do that.
Oh, now we have an exception.
So we're not handling the case when the website doesn't even exist.
So let's take care of that.
Let's simply catch this exception.
And let's return an error message saying that this is not a proper URL.
That should work.
No, it doesn't.
Like that? Still wrong. Actually, let's do this here.
Okay, now it's fixed, and the other websites are still working.
We got a 301 because I forgot to add the "us" at the end of the URL.
Before we move on, let's clean up our code a bit.
I'm not a big fan of these huge if-else statements, so let's try to refactor it a bit.
Now we have a dictionary where the colors correspond to a key that we get from dividing the status by 100 and rounding it down.
We also need to add the magenta color that we should get when none of the other colors fit, so let's add it as the default value, and that should work.
We can remove all that and let's reformat.
Okay, this dictionary is now kind of ruined but let's leave it like that.
Let's make sure it still works.
Okay, Cool.
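To summarize where we are, here is a minimal sketch of what uptimer.py might look like at this point. The function names (check_url, colorized_status, check) match what we'll refer to later in the tests, but the exact message format and variable names are my assumptions:

```python
# uptimer/uptimer.py -- a rough sketch of the first version
import click
import requests

# Map the "hundreds" part of the status code to a terminal color.
COLORS = {2: "green", 3: "yellow", 4: "red", 5: "red"}


def check_url(url):
    """Send a HEAD request and return the status code."""
    return requests.head(url).status_code


def colorized_status(url, status):
    """Print the URL and its status, colored by the status code family."""
    color = COLORS.get(status // 100, "magenta")
    click.secho(f"{url}: {status}", fg=color)


@click.command()
@click.argument("url")
def check(url):
    """Check if a given URL is up and running."""
    try:
        status = check_url(url)
    except requests.exceptions.ConnectionError:
        click.secho(f"{url} is not a proper URL", fg="red")
        return
    colorized_status(url, status)


if __name__ == "__main__":
    check()
```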
One more thing that I don't like is that we have to call "poetry run python" and the name of the file, and that doesn't really feel like a proper CLI application.
I would prefer to call "uptimer" and the URL of the website, not this long command.
And we can do that very easily: we can go to the Poetry settings in pyproject.toml and define a script there.
While we are here, we can also move pytest and Sphinx to the dev dependencies, just to indicate that they are development dependencies, not part of the dependencies of our final package.
The difference is that if you want to install this uptimer on a production server, you can call "poetry install --no-dev", and it will install everything from the dependencies section but nothing from the dev dependencies.
So let's add a new section with our script. When you type it, make sure you don't make any spelling errors.
For example, when I was first recording this part, I made a typo: instead of writing "tool", I wrote "tools".
I got no error from Poetry saying that this config value is incorrect, and when I was trying to run my script, I was getting some weird "file not found" error.
I spent like half an hour trying to fix this, so make sure you don't make the same mistake.
Under [tool.poetry.scripts], we can define scripts and map them to functions in the code.
So let's say I want to have a script called uptimer, and it should go inside my uptimer package and call the check function from my uptimer.py file.
Once you have that, go back to the terminal and run poetry install to make sure that poetry recognizes the script.
And now we can run "poetry run uptimer".
Great.
We still have to add this "poetry run" because that's the way to run our command inside Poetry's virtual environment.
But if we enter the Poetry virtual environment, which we can do by running "poetry shell", from there we can just call uptimer.
Now it really feels like a proper CLI command.
And that's what we wanted to achieve.
So we have the first version of our uptimer ready.
In the next video, we're going to add more features to it.
|
|
show
|
3:44 |
Okay, so we have the first version working.
Now let's change it so it can accept multiple URLs and ping them regularly.
First, we will add a daemon mode so our program can keep running indefinitely.
A daemon is a type of program that runs in the background of your computer.
Since our Python code will still run in the terminal, it's technically not a daemon but simply a loop.
But I like the daemon name because it better reflects what our program is supposed to do.
So we have to add an option flag to our check command. Here we can specify a --daemon option, or the shortcut -d (I always make a spelling mistake in the word "daemon", so this -d is going to help me a lot).
By default, we want it to be set to False, and we also want to use it as a flag, so we don't want to write --daemon=true; the behavior should depend on whether or not we specify this option.
So we set is_flag=True. Now, inside our function, we want to start a while loop.
We want to check the status code of the URL.
And if it's not a daemon mode, we want to stop.
I forgot to pass the daemon here.
But if the value of daemon flag is true, we want to sleep for a few seconds and then run again.
So let's sleep for five seconds.
This is undefined.
So we need to import sleep from the time module and let's sort the imports.
Well, let's check if this works. Cool, it seems to be working: it's pinging my website, then it sleeps for around five seconds, and then it pings it again.
Now, how can we accept multiple URLs? The click.argument decorator accepts another parameter called nargs, where you can specify how many arguments it can take.
If you want to accept an unlimited number of arguments, you just provide nargs=-1.
So let's rename URL to URLs.
And then inside our while loop, we want to check all the URLs, and that should work.
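Here is a sketch of how the updated command might look with the --daemon flag and multiple URLs (again, the exact message format is an assumption):

```python
import time

import click
import requests

COLORS = {2: "green", 3: "yellow", 4: "red", 5: "red"}


@click.command()
@click.option("--daemon", "-d", is_flag=True, default=False,
              help="Keep checking the URLs every few seconds.")
@click.argument("urls", nargs=-1)
def check(urls, daemon):
    """Check the given URLs once, or in a loop when --daemon is set."""
    while True:
        for url in urls:
            try:
                status = requests.head(url).status_code
            except requests.exceptions.ConnectionError:
                click.secho(f"{url} is not a proper URL", fg="red")
                continue
            click.secho(f"{url}: {status}", fg=COLORS.get(status // 100, "magenta"))
        if not daemon:
            break
        time.sleep(5)


if __name__ == "__main__":
    check()
```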
Let's go back to the terminal and try some more URLs.
Nope.
That's not going to work.
What did I break?
What if we use only one?
No.
Still nothing... ah, okay.
Yeah, okay, one URL is working, and all of them are working too.
And let's try the daemon mode.
Perfect.
So everything seems to be working fine.
We have our uptimer script that we can use to check if a specific website is still up and running or if it's returning some error code.
|
|
show
|
11:06 |
Now that our uptimer application is ready, it's time to add some tests.
If we go to the test uptimer file, you can see that poetry has generated a scaffolding for our test.
And right now, it's simply testing that the __version__ from the __init__.py file is equal to 0.1.0.
It's not a great test, but at least that's something.
I have prepared some tests that I'm going to copy and paste here, and we will go through them one by one and discuss what they do.
They look complicated, but we are testing an application that is supposed to perform requests to external services, so even the basic tests have to be a bit more involved.
But don't worry, I will try to explain them as best as I can.
So, first of all, we have to install one more package called pytest-mock, and this will make mocking with pytest much, much easier than using the default tools.
Let's go here and add pytest-mock.
Okay, now that we have it, let's go back to our tests and discuss them one by one.
Let me hide the sidebar so we can see the whole code.
Ignore this mock response object function.
We'll talk about it in a moment.
So first we have a test for the check_url function.
You can see that the check_url function takes a URL, performs a HEAD request, and returns the status code.
So here we have the first problem.
Our code is performing an external request.
So what happens if we don't have the Internet connection?
Or maybe that website is down.
We don't want our test to rely on sending a real http request.
So instead we have to mock it.
So in the first line, we patch the head method from the requests library and tell it to return whatever this function returns.
Now, if we go back to this function, you can see that it takes a "code" parameter, then it creates a response object, assigns this code as the status code that the response will have, and at the end, it returns this response object.
So all this code does is make sure that when we perform a HEAD request, we get back a response object with the status code that we specify here, but no real HEAD request is sent over the network.
So, we test that when we get back a response object with a status code of 200, we can correctly get the status out of the response.
Here we call check_url with a dummy URL, because it really doesn't matter which URL we use; we are not going to make a real HTTP request.
Then we check that if the HEAD request returns a response with 200, we correctly get this 200 back, and we do the same thing with 404. Finally, we check that calling check_url with no arguments raises a TypeError.
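As a reference, here is a minimal sketch of what this mock-based test might look like, assuming the module layout described earlier (uptimer/uptimer.py with a check_url function); the helper and test names are approximations:

```python
import pytest
import requests

from uptimer.uptimer import check_url


def mock_response_object(code):
    """Build a fake Response with the given status code, so no real request is sent."""
    response = requests.Response()
    response.status_code = code
    return response


def test_check_url(mocker):
    # Patch requests.head so it returns our fake response instead of hitting the network.
    mocker.patch("requests.head", return_value=mock_response_object(200))
    assert check_url("http://dummy_url") == 200

    mocker.patch("requests.head", return_value=mock_response_object(404))
    assert check_url("http://dummy_url") == 404

    # Calling check_url without arguments should raise a TypeError.
    with pytest.raises(TypeError):
        check_url()
```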
So, let's comment out the other tests and run this one.
And as you can see, it's passing.
We have two tests.
One is this dummy version, and second one is this one.
Next, we have a test that checks that if we call colorized_status, it's going to call the click.secho command.
So let's go back to the code and see what colorized_status does.
Basically, based on the URL and status, it's supposed to call the click.secho command.
So here we first mock the click.secho command.
And when we do that, we can make assertions about whether this command was called, or, in the second example, we can assert that it was called with specific parameters.
So this is a simple version of a test that only checks that calling colorized_status in turn calls click.secho.
But we can comment it out and use this more advanced version, where we actually check that calling colorized_status calls click.secho with the expected message and fg="green".
So let's run it.
And it's not working.
It's not working because I commented out the mock.
If we don't mock this function, we can't call the assertion methods on it. And now everything is fine.
Next, we have an example of a parametrized test.
Basically, this one test will take five different configurations of parameters and create five different tests.
What @pytest.mark.parametrize does is take all the occurrences of "code" and replace them with the first value from the tuple, so this, this, and this will be replaced with 200; then it takes the second value, "color", and replaces it with the string "green".
So, to make it easier to follow, let's actually remove the parametrization and use the values 200 and "green".
So let's look for code.
We have code here and here.
Look for value color.
This is not the value.
This is a parameter, so we only change this one and we can actually remove this.
We can also remove this, and that way it's no longer an f-string.
So let's try to run it to see if it works.
It does.
So let's see what actually happens here.
First, again, we mock requests.head so that we get a response object with status 200 instead of actually sending the HTTP request.
Then we create a CliRunner. CliRunner is a way of testing Click applications: because Click applications are supposed to run in your terminal, there is no easy way to test what's happening in the terminal when you invoke them.
So CliRunner is a little helper that you can use. You can call its invoke method to pretend that you are calling a command from the terminal and providing some arguments, and then you can check the output from the result to see that whatever was printed to the terminal is equal to what you expected.
So here we invoke the check command, pass the dummy URL, and tell the runner to use colors in the terminal, because we want to see if we get the correct color.
Then we prepare the message that we expect to see.
Basically, we call click.style, which is like click.secho just without the echo: it takes whatever text you provide and whatever color you want to use, and it returns a string containing the ANSI escape codes for those colors, as they would appear in the terminal.
Then we take this expected message and compare it with the actual output.
We also add the newline character, because whatever we print to the terminal has a newline character at the end.
Now, let's go back to the parametrized version.
Instead of having one test, we now have five tests.
The first one checks that if the mock response object returns status 200, this status will be printed in green.
The next one checks that 304 is printed in yellow, and so on, until we get to the weird status of 1, which should be printed in magenta.
Actually, this one test helped me find a bug in my code.
As you remember, before we had the -1 key here, mapped to magenta, but then I realized that this is not going to work, and the magenta color should go here instead, as the default value.
So as you can see, tests are a perfect way to find bugs in your code.
Who would expect that?
So, let's go to the terminal and let's run them.
You can see here we had four tests and now we have eight.
So pytest.mark.parametrize is a great way to simplify tests that run the same steps with different parameters.
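Here is a rough sketch of what such a parametrized CLI test could look like; the exact message format produced by colorized_status is an assumption:

```python
import click
import pytest
import requests
from click.testing import CliRunner

from uptimer.uptimer import check


def mock_response_object(code):
    response = requests.Response()
    response.status_code = code
    return response


@pytest.mark.parametrize(
    "code, color",
    [(200, "green"), (304, "yellow"), (404, "red"), (500, "red"), (1, "magenta")],
)
def test_check_status_colors(mocker, code, color):
    # Fake the HEAD request so the CLI sees exactly the status code we chose.
    mocker.patch("requests.head", return_value=mock_response_object(code))
    runner = CliRunner()
    result = runner.invoke(check, ["http://dummy_url"], color=True)
    expected = click.style(f"http://dummy_url: {code}", fg=color)
    assert result.output == expected + "\n"
```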
With just a few lines of code, we're testing five different scenarios.
Finally, we have one last test that checks multiple URLs. Another new thing here is the side_effect. In all the previous cases, when we called requests.head, we wanted it to return the same response object.
But this time, we want it to return 200 the first time we call it and 500 the second time.
side_effect is a way to assign different return values to an object that you're mocking: when you assign an iterable to the side_effect parameter of mocker.patch, each call to the head method returns the next item from that iterable.
Then, in our test, we check two URLs, dummy_url1 and dummy_url2, and we expect to get 200 in green for the first one and 500 in red for the second one.
So we prepare two expected messages, concatenate them with the newline character, and compare the result to the output from the runner.
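In code, that last test might look roughly like this (same assumptions about the message format as before):

```python
import click
import requests
from click.testing import CliRunner

from uptimer.uptimer import check


def test_multiple_urls(mocker):
    """The first call to requests.head returns 200, the second returns 500."""
    responses = []
    for code in (200, 500):
        response = requests.Response()
        response.status_code = code
        responses.append(response)
    # side_effect with an iterable: each call returns the next fake response.
    mocker.patch("requests.head", side_effect=responses)

    runner = CliRunner()
    result = runner.invoke(check, ["http://dummy_url1", "http://dummy_url2"], color=True)
    expected = (
        click.style("http://dummy_url1: 200", fg="green") + "\n"
        + click.style("http://dummy_url2: 500", fg="red") + "\n"
    )
    assert result.output == expected
```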
And when we run this test in the terminal, it's passing.
I know this was not an easy chapter, but this is how you usually write tests for real applications.
You don't want to interact with external services, so you have to mock some of them.
But you just have to be careful to not mock too much.
For example, if we don't mock requests.head, but instead mock the check_url function and make it always return a specific number, then there is no way to check whether the check_url function is actually extracting the correct status code from the response object.
So make sure that you only mock functions that come from third-party packages, for example packages installed with pip; don't mock functions in your own code.
|
|
show
|
7:00 |
One last thing that we have to do is add some documentation to our code.
So I went ahead and documented all the functions in my uptimer file.
They're quite simple: a one-line summary, and then I document the parameters; I do this for all the functions here.
So next we're going to call Sphinx and generate the scaffolding of our documentation.
We run the sphinx-quickstart command.
And then, as always, we have to answer some questions.
I use separate directories for source and build.
Okay, let's go back to the code editor.
And now we have two folders source and build.
Build is empty, and source contains the usual stuff.
So the index.rst file and the conf.py, let me replace that with some text that I wrote before.
So, basically, I have two sentences explaining what this tool does, a quick start explaining how to install Poetry and how to use Poetry to run our uptimer, and then I have a table of contents that will contain api.rst.
So, we have to add this file, and again let me copy some text here. I used the automodule directive to automatically extract the documentation from all the functions in the uptimer module and display them on this page.
To make this work, we have to go to conf.py and add autodoc to the list of extensions.
So, we go here and add autodoc, and also viewcode, so we can click a link to see the source code directly in the documentation.
And one last thing that we have to do before we leave this file is to go to the top and add the parent directory to the system path.
That's because the configuration lives in the source directory, while our code lives in the uptimer directory.
So currently, if we run the make command, Sphinx won't be able to find the files that live in the uptimer directory.
Let's just change this one dot to two dots (..), so that the parent directory of source, the whole project folder with our uptimer package, is on the system path.
And I think we're good to go.
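The relevant lines in conf.py end up looking roughly like this (a sketch; the rest of the file is what sphinx-quickstart generates):

```python
# docs/source/conf.py -- only the parts we changed
import os
import sys

# Make the project root importable so autodoc can find the uptimer package.
sys.path.insert(0, os.path.abspath(".."))

extensions = [
    "sphinx.ext.autodoc",   # pull documentation from docstrings
    "sphinx.ext.viewcode",  # link to the highlighted source code
]
```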
So, let's go to the terminal and let's generate the documentation.
As always, we have to use "poetry run make html", not just "make html".
So, we have the HTML files in the build/html folder.
Let's go there and let's open index in the browser.
Cool.
So we have the basic documentation of our uptimer.
We have the quick start section and we have a link to the API documentation.
If we go here, you can see the documentation of two functions and the source code for them.
Only two functions, so what happened to the last one?
If we go to the uptimer module, we have this check function, but apparently it's not documented here.
That's because functions decorated with the click.command decorator are not documented by default.
So we have to add this function manually to the documentation.
So we go back to api.rst, we use the autofunction directive, and we specify that we want to document the function called check.
Let's rebuild the documentation and now we have the check command.
One last thing that we can do is use a plugin called sphinx-click to extract the Click documentation for our command.
You will see in a moment how it looks.
So, first we have to install it.
Now we have to enable it.
And now, in the index.rst, I want to show how to use my uptimer from the CLI, and this can be achieved with the sphinx-click extension.
I want to explain here how to use the CLI command.
So I add a header, and then I add the directive that you can find in the sphinx-click documentation.
Let's recreate the documentation.
And if we go back to the main page of our documentation, you see that we have a CLI command section and here we have uptimer and all that is automatically extracted by Sphinx.
So, we have the documentation for our function, and we also have the example of how to use it in the command line.
And we also have a bug here, because URL should not be an optional argument, so let's fix that.
If we go to the uptimer, when we specify nargs, it turns the URL argument into an optional one. I think we have to add required=True to make it required again.
So let's actually try to change something in the documentation because I think if you only change the parameters in the code, not the documentations sphinx has some caching that doesn't get busted.
Or maybe I'm using wrong parameter.
Nope, I'm using the right argument required, Let's remove it and added again.
Okay, so there was some issue with the cache.
But now you where else is required argument.
So, we have a starting point for our documentation.
You can add more sections.
You can add more documentation to the API functions that you write in the future, and they will be automatically added to the api.rst page. You can also write some tutorials or how-to sections explaining how to use your uptimer and its new features.
|
|
show
|
3:07 |
This is where we stop adding new features to our tool.
But if you find it interesting and you want to actually use it, there are many more improvements that you can add.
First of all, we can make this code asynchronous.
Otherwise, if there is a website that takes a long time to respond, it will block all the other URLs.
Pinging multiple URLs is a textbook example of where asynchronous code helps.
You can also send an email when a website is down.
Python has a module called smtplib that you can use to send emails.
You can provide credentials, for example for your Gmail account, and then add a function that will send you an email when one of the websites is down.
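A minimal sketch of how that could work with smtplib, assuming a Gmail account with an app password (the addresses and the send_alert helper are placeholders, not part of the project):

```python
import smtplib
from email.message import EmailMessage


def send_alert(url, status, sender, password, recipient):
    """Send a short email saying that a monitored website is down."""
    message = EmailMessage()
    message["Subject"] = f"{url} is down (status {status})"
    message["From"] = sender
    message["To"] = recipient
    message.set_content(f"Uptimer got status {status} when checking {url}.")

    # Gmail's SMTP server; other providers use different hosts and ports.
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(sender, password)
        server.send_message(message)
```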
You can also add an option to display a desktop notification when the website goes down.
Right now, when we run uptimer in a daemon mode, we still have to check the output in the terminal from time to time.
So, it would be cool to get a pop up notification on your computer.
This is not an easy task because every operating system uses a different tool to send desktop notifications.
So if you're looking for a challenge, check how to display notifications on your operating system from Python and try to add it to our uptimer application.
You can also add the script to a Cron job and have it run every minute or every five minutes.
Cron is a task scheduler on Linux, so if you have never used it before, you can check it out.
When you use it, it will run your application at a regular interval, so you don't have to keep it running in a terminal all the time.
You can add a database or a configuration file to store the list of websites that you want to monitor.
That way, you don't always have to provide the full list of URLs that you want to monitor when you restart your application.
You could store the URLs in the database and add some functions to add, list, and delete URLs from it.
You can use a fancy database like PostgreSQL or MySQL, but Python comes with a SQLite module built in, so you can just import sqlite3 and create a database in the current folder, and that's enough to get you started.
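For example, a tiny sketch of storing the monitored URLs with sqlite3 might look like this (the table layout and helper names are just illustrations):

```python
import sqlite3

# Create (or open) a database file in the current folder.
connection = sqlite3.connect("uptimer.db")
connection.execute("CREATE TABLE IF NOT EXISTS urls (url TEXT UNIQUE)")


def add_url(url):
    with connection:  # commits automatically on success
        connection.execute("INSERT OR IGNORE INTO urls VALUES (?)", (url,))


def list_urls():
    return [row[0] for row in connection.execute("SELECT url FROM urls")]


def delete_url(url):
    with connection:
        connection.execute("DELETE FROM urls WHERE url = ?", (url,))
```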
And if you want something really advanced, you can add different types of requests, not just the head request.
All those tools that I showed you at the beginning, UptimeRobot and Pingdom, let you, for example, specify a username and password, and then they will try to log in to your website.
That way, you can check not only if the front page of your website is running, but also, for example, if users can log in, so whether the database is working fine.
This will require a lot of changes: you need to add additional parameters to accept the username and password, you need to set up some SSL configuration to send those passwords securely, and you need to add support for POST requests to the main function.
But when you do this, you'll have a really powerful uptime monitoring tool.
In the next chapter, we will take the uptimer and publish it on pypi.
|
|
|
39:38 |
|
show
|
0:40 |
Now that we built a CLI tool to check the status of websites, we are going to turn it into a Python package and distribute it to the PyPI server.
We could do this with Poetry and be done in a few seconds, simply by running "poetry build" and "poetry publish".
But I want to take a different approach this time, since not everyone wants to use a tool like Poetry.
This time, we will rebuild our project from scratch without Poetry. Of course, I will copy most of the code, tests, and documentation, so you don't have to watch me typing the same stuff again.
But the way we create and manage our project will be different.
|
|
show
|
0:39 |
This time we will start with a cookie cutter template.
We will search for one that creates a PyPI package.
This will be our starting point.
Next, we will modify the setup.py file, and we will add an entry point, so the uptimer can be used directly from the command line.
Entry points are similar to the Poetry scripts that we defined in the previous chapter.
They let you connect CLI commands to Python functions in your code. And then we'll follow the usual steps to publish the package on PyPI.
So, we will build it and we will publish it using the standard steps defined in the documentation.
|
|
show
|
2:36 |
The first step in our project is to find a cookie cutter template that we're going to use.
Okay, there is actually a step 0, where we have to install cookiecutter on our computer.
But as you can see, I have already done that in past chapters.
If you don't have cookiecutter, you can run "pip install cookiecutter" or "pipx install cookiecutter" and you're all set.
So now let's go to the GitHub page of cookiecutter and find the list of available templates.
We go here and search for "Pantry".
From there, we can see all the repositories tagged with cookiecutter. Let's search for "package", since building a Python package is a popular thing to do.
It's one of the very first cookie cutter templates that we see, and it actually has a lot of stars, so that's probably the one we want to use.
Let's go inside and see the list of features that it offers. Out of the box, we'll have pytest, which is good;
Travis CI, which I don't really care about; Tox, if you want to make sure that your package works with different Python versions; and Sphinx, which we're going to use.
So, that's good.
There's bump2version, a tool that you can use to bump the version when you want to release a new version of your package.
There is also a feature to auto-release to PyPI when you push a new tag to master, which is quite interesting, but it's a more advanced feature, so I'm not going to use it.
And there's a CLI interface using Click, which we already have, so that's neither good nor bad. All in all, it looks like a pretty solid and standard Python package cookiecutter template, so let's use it.
So here I have an empty directory, and I'm going to run cookiecutter here. Again, we have a set of questions that we have to answer.
|
|
show
|
5:14 |
This cookie cutter template has created a lot of files here, so let's quickly go through them, and remove those that we don't necessarily need.
So let's open VSCode and let's go through them one by one.
So, first we have the .editorconfig file.
This is actually useful because it defines settings that apply no matter what editor you're using.
So whether you use Vim, Emacs, or VSCode, they will all respect this configuration file.
Then we have the .gitignore with a lot of Python-specific entries, which is good.
Then we have the .travis.yml.
I'm not planning to use Travis, so I can remove this file.
We have a file where we can list all the AUTHORS.
I'm going to leave it for now.
Then we have a contributing guide that explains how people can contribute to this project. So if they want to report a bug, they can go to the GitHub repository, which doesn't actually exist yet, but we can always create it later.
And here are all the steps that people should follow when they want to submit a pull request.
This is actually a very useful file; let's leave it.
HISTORY is a file where you specify all the changes that happened in each release, so again, another useful file.
LICENSE, since we chose the MIT license, contains the whole text of it.
Here we have the Makefile.
It looks a bit complicated, but it actually contains the typical stuff.
When we run "make test", we're going to run pytest.
When we run "make test-all", we're going to run Tox, which will test our package under different versions of Python. flake8 is set up as our linter, and we can remove test artifacts, compiled Python files, and stuff like that.
So it's actually a pretty useful file.
If you're following this video, I suggest you take a look and see what useful commands you can find here.
Then we have the MANIFEST.in file, which will be useful when we're building the Python package.
A typical README with some badges from Travis, from PyPI, and stuff like that.
requirements_dev.txt: here we have the requirements that I will use as a developer, not necessarily the final requirements of the end user of the package.
So, for example, I have pytest here, but if people want to use my uptimer, I don't expect them to install pytest.
Then we have a setup.cfg with some configuration and setup.py, which is the main file of the Python package.
Although more and more projects are now using pyproject.toml, so setup.py might be replaced in the future, for now this is the file that we will be using.
Then those are the requirements for the end user of our package: whenever people install uptimer from PyPI, they will also have to install everything specified in this list.
And finally, we have the tox.ini file where we specify all the Tox settings. Here we say that our project should support Python 3.5, 3.6, 3.7, and 3.8.
Apart from that, we have some folders here, so let's start with the uptimer folder.
Here we have the __init__.py file that tells Python that this is a package, the cli.py file where we can specify the CLI commands, and the actual uptimer.py that should contain the main code of our project.
Next, we have the tests folder: again, the __init__.py file and some examples of fixtures and tests, and here we actually have an example of how to test a CLI application.
So that's actually pretty useful boilerplate code.
Next, we have the docs folder, so we don't have to generate the Sphinx documentation from scratch.
As you can see, there is a lot going on here, so we will have to go and modify all those files, but I will leave that for the end.
We have a folder with the VSCode settings that was just created.
And finally, we have the .github folder that contains the issue template.
So this is a template file written in markdown that will be used when someone wants to create a new issue in your github repository for this project.
So, again, this is a very useful file when you want to give people a skeleton of how they should document an issue with your code.
Otherwise, people will just write whatever comes to their mind, like "it doesn't work, fix it" or whatever.
Here, at least, we tell them to specify which version of the package they are using, what their Python version is, what the operating system is, and stuff like that.
So, again, very useful stuff.
So apart from removing the .travis.yml file, the rest of the files can stay.
|
|
show
|
6:00 |
Now that we have the cookie cutter template set up, let's copy code from the previous chapter.
So I have one VSCode instance open with this new Python package that we are creating.
And I have another one with this code that we wrote in the previous chapter.
This one is not in full screen mode, so we can easily distinguish them when switching.
So let's start with the main code.
Since the cookiecutter template suggests that we use a separate file for the CLI, we'll go with that.
This will provide a nice separation between the CLI functions and the other helper functions.
And one important piece of advice that I have for you here: don't name any file with the same name as the package.
We have the package called uptimer, and inside we have uptimer.py. We got away with that in the previous chapter, but if we have a package and, inside it, a Python file of the same name, it will be confusing when you try to import things from there.
When you type "from uptimer import something", do you mean from the uptimer file or from the uptimer package? Especially since we have both relative and absolute imports in Python, it will be confusing to some people, or even to you in the future.
So let's rename this file to better reflect what it's supposed to do; we're going to call it helpers.py. Okay, so let's go to the old code.
And from here, we will copy the first two functions and put them into the helpers file.
We don't need this.
And then let's take the last function and let's add it to the CLI file.
Let's keep the name main for the main function here. Actually, let me reuse this code, because I like the sys.exit call.
What I will do is copy this, rename it, and add the imports I need.
I don't need requests, and let me sort the imports, because I'm pretty sure they are wrong.
No, they actually are not.
We also have to import the functions from helpers; let's sort the imports again. Now let's test manually if this works; let's go back to the terminal.
First things first.
We have to create a new virtual environment.
If we go back to the VSCode, it will detect this virtual environment.
So we want to use it, now back to the terminal.
So now we actually have to install our dependencies: the click package, requests, and so on.
How do we do this?
Well, we could create a requirements file.
But when you are creating a Python package, you will also have to specify dependencies inside the setup.py file.
So, by specifying them both in setup.py and in requirements.txt, you're basically duplicating your work.
To avoid that, we're going to skip creating the requirements.txt file.
But we'll keep requirements_dev, because those are the requirements that I will be using when working on the package; the end users of this package don't really need them.
So inside the setup.py, you can see that we have the requirements variable, and here we have to add any other requirements that are needed to run my uptimer command in the terminal.
So let's go back to the old code and let's see what other requirements we had there.
Let's go back here, copy all of that, and temporarily paste it here.
Python obviously doesn't have to be specified here.
Click is okay; let's add requests and colorama.
The last two we don't really need here, but we're going to need them in requirements_dev.
So let's take care of that now. In requirements_dev, I'm specifying the exact versions of my dependencies.
That's because I want to have reproducible builds.
So when I switch to a different computer and I want to work on my uptimer package, I want to have the exact same versions of the packages that I used before.
On the other hand, in the setup.py, I'm specifying the minimal requirements for my package.
If there is a new version of Click released, I will test it anyway before I publish my package on PyPI.
So there is no risk that updating Click to version 8 will break my uptimer for the end users, because I will first try it out, and only if it works will I build and publish the package with Click 8 included.
|
|
show
|
2:24 |
Now that we have that, we're going to install our package.
But instead of using pip, we are going to install it from the setup.py file.
If you go to the Makefile, you can see that there is this install command that will run "python setup.py install".
If you use it through the Makefile, it will also run the clean command first, which is pretty nice because it will remove all the cache files from Tox, coverage, and so on.
But just for now, in case you don't want to use the Makefile, let's run "python setup.py install" manually.
Nope, I'm not in a virtual environment, so let's activate it first.
That's why Poetry is actually useful, because it activates the virtual environment automatically.
Now we can go ahead; as you can see, the output is slightly different. If we do "pip freeze", you can see that apart from all the dependencies, we also have our uptimer installed.
So now we could, for example, open the Python REPL and import some functions from the cli or helpers files, and they would be imported from the package that we installed, not from the source files. But we want to run our tool by simply calling uptimer and the name of the website.
And to get that behavior, you have to create an entry point in the setup.py.
So, if we go here to setup.py and scroll down to the setup() call, there should be a parameter called entry_points.
Here we have console_scripts, and here we define that the uptimer command will call the main function from the cli file inside the uptimer package.
All that was automatically generated for us from the cookie cutter template.
But if you're not using cookie cutter, you just have to write all that by yourself.
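Roughly, the relevant part of setup.py looks like this; this is a sketch of what the template generates, and the exact version pins are an assumption:

```python
# setup.py -- only the parts relevant to the entry point
from setuptools import setup, find_packages

# Runtime requirements for end users (dev tools like pytest live in requirements_dev).
requirements = ["Click", "requests", "colorama"]

setup(
    name="uptimer",
    version="0.1.0",
    packages=find_packages(include=["uptimer", "uptimer.*"]),
    install_requires=requirements,
    entry_points={
        "console_scripts": [
            # "uptimer" on the command line calls main() from uptimer/cli.py
            "uptimer=uptimer.cli:main",
        ],
    },
)
```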
So this should actually work, and it does. Pretty cool.
So when we publish our package to PyPI and someone installs it, they should be able to call uptimer and the name of the website, and everything will work.
But before we publish it, let's also add tests and documentation, and make sure that everything is nicely polished.
|
|
show
|
3:49 |
Let's go and add some tests.
Here we have the test file with some tests that were created by the cookiecutter template, so we can remove all of that.
Let's go back to the old project and copy the tests from there. We have to change a few things, because we use different file names.
So from the cli module we import the main function, and from helpers we import the rest.
It's imported but unused for now; okay, we'll use it in a moment.
So let's search for the check function because we renamed it.
Not check.
Let me check.
Okay, this is how we call it, so we no longer have a check function.
We have a main function and here as well, and I think we should be ready to go.
Let's go back to the terminal and run pytest. And nothing happened, because we actually don't have pytest installed; pytest is listed in this file, but we never actually installed it.
First, let's remove this old pinned version of pip, because it's way too old, and let's try to install those packages. Make sure you are inside the virtual environment.
And we have some version conflicts: pytest-mock requires pytest higher than 5, but I have pytest 4 specified somewhere.
What we can do is to change the version in this file.
But a much better idea is to repin all those dependencies to the latest versions, so let me quickly do that.
We are going to remove all those versions here, and I'm going to rename this file to requirements_dev.in.
And now I am going to quickly install pip-tools and run it so it pins the dependencies.
And now we have the requirements_dev.txt with all the dependency versions pinned.
So now we can tell pip to use this file. No more errors, and we get the additional benefit that we are using the latest versions of our packages.
Now we should be able to run pytest.
Great.
So all nine tests are passing, but we got some warning here.
Unknown config option: collect_ignore.
If we search for this configuration option, you can see that in setup.cfg we're using an option that is probably no longer supported, so you can either check what it's doing and maybe update it, or you can simply remove it.
And this will make the warning go away.
Great.
So we have all the tests passing, and now let's actually work on the documentation.
|
|
show
|
6:34 |
Next, Let's take a look at the documentation.
So, we have the docs folder.
We have a lot of sections set up by default, which is good.
So we have the readme file, the installation page, the usage page, and the modules page.
I'm actually going to rename modules to API, just because I like this name better.
There is actually no modules file here, so let me create one.
I have too many tabs open. Okay, back to the index.rst.
We have the API, the contributing guide, the list of authors, and the changelog.
That's good.
So let's actually try to build the documentation.
We have to go to the docs folder, and we can run "make html" as we always do.
And we have a lot of errors.
Let's start from the top.
First, there is a warning that the static path does not exist. If we go here, you can see that indeed there is no _static folder, but in the configuration we are specifying that all the HTML static files should be in it.
So let's quickly create this folder.
And while we are here, you can see that we should also create a _templates folder for the template files.
If you want to use some HTML templates, where you would store the general structure of a page and then only change the main text, you would put those files inside this _templates folder.
On the other hand, if you are sure that you don't need the templates and the static files, you can simply remove those two settings from the configuration file. Let's check the other warnings.
The rest of them complaints that the API file doesn't have a title.
So, let's add one.
Let's copy one of the existing ones.
Okay, this one is just including other files.
Let's pick one... oh, usage.
Let's leave the rest empty for now.
One more try.
Okay, This time it worked.
Let's see how the documentation looks by default, and it looks pretty fine.
So what do we have here?
We have the main page, where we specify what uptimer is.
We have a list of features that still has to be filled in, and we give credits to the cookiecutter-pypackage template.
Next, we have the installation guide, which is nice.
Again, We get all this text from the cookie cutter template, so we don't have to write it by hand.
Next we have the usage, which is kind of incorrect because we want to show people how to use it in the command line.
So we will fix it. The API page is empty for now, but we will fill it with the auto-generated documentation.
The contributing guide is again very detailed.
I'm very happy that I don't have to write it by hand.
Credits: oh, I'm the development lead.
That's nice.
And then we have the changelog. All right, so we have to change a few things.
First, let's add some documentation to the API Page.
So let me copy that.
And let's rename the files, that should work.
Let's check, cool.
So we have the helpers documented here.
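For context, the API page is just an RST file with autodoc directives; a minimal sketch, assuming the package layout from this chapter (an uptimer package with a helpers module) and that sphinx.ext.autodoc is enabled, looks like this:

    API
    ===

    .. automodule:: uptimer.helpers
       :members: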
We also want to document how to use the main function.
But that part we will add to the usage instructions. Let's rename it to quickstart.
To use uptimer in your terminal...
I'm not even sure if that's a real word, but it doesn't really matter.
I hope people will get it.
Let's copy the index RST from the previous version.
This we don't need, because we already have that in the installation instructions.
Okay, you know what?
Let's actually add click to the API as well.
That looks like the most appropriate section.
And again we have to rename some things.
It's not uptimer, it's cli, and the function is called main, but the command is still called uptimer.
So we leave it like that, back to the terminal.
Oops!
Unknown directive click.
Okay, so we actually forgot to add the click extension (sphinx-click) to this configuration file.
Let's go here and let's add it.
Perfect.
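If you're following along, the two pieces are the extension in conf.py and the directive in the RST file. The extension that provides the click directive is sphinx-click; depending on the version it's registered as sphinx_click or sphinx_click.ext, so check its docs. Roughly:

    # conf.py
    extensions = [
        'sphinx.ext.autodoc',
        'sphinx_click',        # provides the ".. click::" directive
    ]

and in the API page:

    .. click:: uptimer.cli:main
       :prog: uptimer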
Under the helpers, we have the documentation of how to use the uptimer.
You can see this is how you call it in the terminal.
There are some options you can pass, and the list of URLs.
So, I think it's good enough to start.
For proper documentation, there are still a lot of things that you could add, but it's really up to you what information you want to include.
I will leave it like that because I don't want you to waste time watching me write some text in the RST file.
So let's actually move to something exciting and let's finally publish this package on pypi.
|
|
show
|
8:58 |
I think we have most of the stuff ready to publish our package on pypi.
Let's quickly check the setup.py file.
I will leave the version 0.1 because this project is not yet polished.
Some parts of the documentation are still missing, but it's good enough to use.
And we have the quick start guide explaining how to use it.
So, we should be good.
What else do we have here?
We have the entry points set up.
We have the classifiers. If your project is actually more mature, you might want to change the development status.
You can search for pypi classifiers and you will see a list of available different statuses.
We also specify which versions of Python we support.
We could drop 3.5 and replace it with 3.9, but that's not really necessary.
So let's keep that. We have the requirements here.
Yeah, I think we're ready to go.
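To recap what we just looked at, the relevant parts of a setup.py look roughly like this; the metadata values are placeholders, so keep whatever the cookiecutter template generated for you:

    from setuptools import setup, find_packages

    setup(
        name='uptimer',
        version='0.1.0',
        packages=find_packages(),
        install_requires=['click', 'requests'],   # runtime dependencies
        entry_points={
            'console_scripts': ['uptimer=uptimer.cli:main'],
        },
        classifiers=[
            'Development Status :: 3 - Alpha',
            'Programming Language :: Python :: 3.9',
        ],
    )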
So let's search for the instructions on how to publish this project.
If you Google for "Python package guide", you will find this page, and we are actually interested in this website. Here is the whole guide on how to publish packages on PyPI.
So we are going to quickly go through all the steps here. If you more or less know how publishing packages works already and you don't want to go through all the steps, there is a much faster way.
If you go to the make file, you will see that there is a release command that also calls dist.
So those two commands will basically prepare your project for distribution and publish it on PyPI. So we could simply call make release in the terminal and have everything done for us with one single command.
But since this is a tutorial, let me guide you through this process.
So, we have a simple project.
We have the package files.
We have set up the setup.py file. (And you can go here to find the list of classifiers.)
We have the readme.
We have the license.
We probably already have setuptools and wheel installed.
But just to be sure, let's rerun those commands.
Yeah, we're good.
So, now we have to run this command. Not pytest, silly muscle memory. And wrong folder.
So what just happened is that Python has created the distribution files for our project.
This has created two files: a .whl and a .tar.gz. The wheel (.whl) is more or less a prebuilt binary distribution.
So whenever pip can install your package from a wheel, it's going to use it, because it doesn't have to build anything.
If you don't provide a wheel, or for some reason pip can't use the wheel, it will try to build your project from sources, from the .tar.gz file.
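In case you want the commands in one place, at the time of recording the packaging guide suggests roughly the following (newer versions of the guide use python -m build instead, so double-check the current page):

    pip install --upgrade setuptools wheel
    python setup.py sdist bdist_wheel    # creates dist/*.tar.gz and dist/*.whl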
But here pip will actually try to build your project, so you might need to have some build dependencies installed. In the next step we will use a package called twine to upload our project to PyPI. Once again, I think we have twine installed, but let's check just to be sure. Not like that.
Yeah, we're good.
So now we run this command. Oops, not like that: python -m, because we want to call a module.
One thing to notice is that we are using the test PyPI server here, not the real PyPI.
This is perfect if you want to test publishing Python packages.
That way you won't be cluttering the real PyPI with some dummy package. Later on you can simply remove this repository parameter, and you will publish to the real PyPI. And voila!
We can see our package here, and now I am a proud parent of a little uptimer package on the TestPyPI server.
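For reference, the upload step is roughly this; twine will prompt for your TestPyPI username and password:

    pip install --upgrade twine
    python -m twine upload --repository-url https://test.pypi.org/legacy/ dist/*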
I can copy this instruction to install it, so let's see if this will work.
Let's go back to the terminal and let's create a new virtual environment to make sure that we're not using the local uptimer package.
So let's go up one folder.
Let's deactivate the previous environment, and let's create a new one. As you can see, I'm inside a new virtual environment and I don't have a single package installed.
If I run uptimer, I should get an error, and I do.
And when we run this, we get an error. pip complains that it can't find requests matching version 2.25, which is kind of weird, because when we were testing this package locally, there was no problem with requests.
Well, the problem here is that we told pip to use the test pypi server and not all packages are published on testpypi Server.
As you can see, colorama was there.
But if we try to search for requests, you can see that the last version that was published here was 2.5 and that was in 2015.
So to fix that, we have to tell pip to also look for dependencies on the standard PyPI server. So instead of telling pip to install our uptimer from the test PyPI, we will tell it to use the standard PyPI server, and if it can't find a package there, to use this extra index URL of the test PyPI.
That way all our dependencies will be installed from the standard PyPI, and our uptimer, which can't be found on PyPI because we haven't published it there yet, will come from the test PyPI.
So we run that instead, with test.pypi as the extra index, and now everything is fine. Here we have our uptimer in version 0.1 installed, so let's test it without parameters.
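The command we end up with looks roughly like this; the package name is the one we published in this chapter, so adjust it to whatever you uploaded:

    pip install uptimer --extra-index-url https://test.pypi.org/simple/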
We got the error message saying that we have to specify some URLs, which is good.
So it means something is working.
Perfect.
So we have just successfully installed uptimer from the TestPyPI server. Since everything works fine, let's now publish it on the standard PyPI server. Once we test our package with the test PyPI server and we know that everything is fine and we can install it, we can go and upload it to the real PyPI server.
So I'm back in the folder with our uptimer, and instead of running twine upload against the test PyPI repository, I'm going to remove this parameter, and this will upload our package to the normal PyPI server.
Again I have to provide the username and password, and if I go to PyPI, you can see that our package is there.
Let's try to install it and it's working.
Great.
So we have successfully published our simple package on pypi and now everyone can use it.
If you're following this tutorial, you can actually run pip install uptimer in your terminal and then you can give it a try.
|
|
show
|
1:35 |
It took us quite a few steps to publish our package on pypi.
We had to create a setup.py file, and we had to make sure that everything is correctly set there.
And then we had to run some commands manually.
So if for some reason you don't want to write your own, setup.py file.
But you also don't want to use poetry.
Another great tool is called flit.
All it does is it simplifies publishing packages on pypi.
You can go to the flit Read the Docs page to see the installation and usage instructions, but they are rather simple.
First, you have to install it.
Then make sure your module has a __version__ variable where you specify the version of your package.
Then you can run flit init, and this will create the pyproject.toml file.
Here you have to specify similar options as we did with setup.py. So, for example, under the build-system section you have to specify the build requirements for your package, and under the metadata section you have to specify some information about your project.
So who's the author?
What's your email?
What's the homepage, and stuff like that.
The style is much simpler than setup.py.
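As a rough illustration, a flit-style pyproject.toml (the older metadata format, which is what flit init generated at the time) looks something like this; the values are placeholders:

    [build-system]
    requires = ["flit_core >=2,<4"]
    build-backend = "flit_core.buildapi"

    [tool.flit.metadata]
    module = "uptimer"
    author = "Your Name"
    author-email = "you@example.com"
    home-page = "https://github.com/you/uptimer"
    requires = ["requests"]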
And once you have that, you just have to run flit publish command.
It will also ask you for your pypi user name and password and then it will publish this package.
And if you want to install it locally, you can run flit install command, and that way you can check if your package is working fine.
So that's a very good alternative to writing a setup.py file by hand.
|
|
show
|
1:09 |
And one last little tip from this chapter.
In case you're wondering, what if I have some private code that I can't share publicly, but I still want to make a Python package so I can use it in my other projects?
Well, there is a tool called pypi Server, that you can use to create your own private pypi server.
You can go to the GitHub page, and when you scroll down, you will see what it does.
It basically starts a web server where people can publish packages and others can download those packages using pip.
It's a perfect tool, for example, when you work in a company that can't open source their code, but it still wants to share it with other teams.
So you can set up your own private pypi server and then use it in your code.
Once you have it up and running and you want to install packages from there, you just have to add the extra index URL that we used two lessons ago.
It's also possible to add some authentication, like a username and password, in case you want to protect this private PyPI server from unauthorized users.
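In practice that means pointing pip at your server in addition to the public index, for example (the hostname and port here are made up):

    pip install mypackage --extra-index-url https://pypi.internal.example.com:8080/simple/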
|
|
|
31:38 |
|
show
|
2:23 |
We published our uptimer package on pypi.
So now everyone can install it from there and use it.
But what about people who don't have Python installed on their computer, or, more importantly, who don't even know what Python is?
When you install software on your windows or Mac computer, all you have to do is click the installer and it does all the magic for you.
You don't have to install dependencies manually.
The installer has all the dependencies inside.
It would be cool to turn our uptimer into such an application.
That way we can send it to someone, and they will be able to use it even if they don't know what Python is.
The good news is that this is possible.
There are tools like pyinstaller that let you create a standalone executable.
You can then send this executable to someone, and they will be able to use it without installing anything.
pyinstaller lets you create packages for Windows, Linux, and macOS.
If you only want to create packages for one of those operating systems, you can check out py2exe, which lets you create executable applications for Windows, or py2app, which lets you create applications for macOS.
That was the good news.
The bad news is that they are not cross compilers, which means that to build a package for Windows, you have to build it on Windows; to build a package for macOS, you have to build it on macOS, and so on.
There is no tool that will let you build a standalone Python executable for Windows from a macOS computer.
That's simply because macOS doesn't have the libraries required to run Python programs on Windows, so you have to use a Windows computer, install all the required dependencies yourself, and then pyinstaller will take them and wrap them together with your Python application.
It's not the most convenient process because you need to use multiple operating systems if you want to support multiple operating systems.
But that's pretty much the only way to make it work.
And it's still better to do this that way than trying to explain to hundreds of users what Python is and how to install Python on windows, so they can use your application.
So in this chapter, I will show you how to use pyinstaller.
|
|
show
|
6:13 |
We will continue using our uptimer.
But this time I have turned it into a very simple GUI application.
So instead of using it from a terminal, we will have a simple graphical interface that will be more user friendly.
Let me quickly show you what this application looks like.
I have removed all the additional files from the previous chapter and I left only the setup.py and two Python files.
First one is called helpers and this is where I store some helper methods.
Here, I have left only the check URL method because I no longer need to use click if we are not going to run this application from the terminal.
That way I will also have less external dependencies because I no longer have to install Click.
But if you like, there is nothing wrong with leaving all the files from the previous chapter and then just adding this new file with the GUI.
That way you will be able to run this module either from the terminal or from the graphical interface.
Speaking of graphical interface, I also have the second file called gui.py.
Here I've added some functions to create a very simple GUI.
I've decided to use the tkinter Library.
There are other libraries, but tkinter is a built-in module, so that's the easiest one to use.
Let's first start this application so you can see the interface, and then I will talk you through the code.
So let's go to the terminal.
I already have some venv activated, but I need to create a new one.
As usual, we start by creating a new virtual environment, and then we activate it.
Next, we have to install some dependencies.
So if we go to the setup.py, you can see that I still depend on the requests library.
So let's install it.
As you saw in the previous chapter, to install dependencies from the setup.py, we have to run python setup.py install.
This will install both requests and also our application, which I'm now calling guptimer because it's a GUI application.
And as you can see, we have the entry point that is now pointing to the main function from the gui file in the uptimer module, so we should be able to run guptimer in the terminal, and that will start the graphical interface for us.
Let's give it a try.
So, that's what it looks like.
Depending on how you install tkinter, you might get some duplication Warning.
I think I'm using tkinter that comes built in with macOS, and it's some kind of old version, but as long as it works, we don't have to worry about that.
So this is our simple application.
And if we try to put some URLs here, we can click check.
And now we have this green 200 saying that everything is fine with this url.
Let's see the code behind this.
Let me move this over here. In the gui.py we have the main function; when we run the file, this function is called.
At the top we have check_urls(), but we can skip it for now, and then we have a function that creates this window.
It changes the color to a light gray so we can see where we have a text area and where we have just a background.
Next, we add the label here.
We add the textbox and we add another textbox.
But this time we disable it, so users can't modify it by hand.
Then we create some tags, and we can later use those tags to change the color of the text.
And finally, we place the button at the bottom and we connect this button to the check_urls command.
And just before the end of this function, we start the main loop, so our program will actually respond when we click the button. So when we click the button, the check_urls() command is called; it grabs the list of URLs from the textbox, strips them, and creates a list from them.
Next, we have to change the state of this response box, and that way we can actually edit it.
When the state is disabled, you cannot modify the text that is there.
Next, we go through each URL in our list and we call the helper function check_url to get the status code.
If we get the status code, we write it to the response box.
Then we grab the color corresponding to that status code, and we colorize this text. And if we don't get a status code, we just write "wrong URL" and use the default magenta color.
And finally, we again disable this response box so users can't edit it.
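To make the structure easier to follow, here is a stripped-down sketch of the same idea; the check_url import is an assumption based on our helpers module, and the real gui.py has more widgets, labels, and the color tags:

    import tkinter as tk
    from uptimer.helpers import check_url  # assumed helper returning a status code

    def check_urls(urls_box, result_box):
        urls = [u.strip() for u in urls_box.get('1.0', tk.END).splitlines() if u.strip()]
        result_box.configure(state=tk.NORMAL)      # enable editing so we can write results
        result_box.delete('1.0', tk.END)
        for url in urls:
            status = check_url(url)
            result_box.insert(tk.END, f'{url}: {status}\n')
        result_box.configure(state=tk.DISABLED)    # lock it again so users can't edit it

    def main():
        window = tk.Tk()
        urls_box = tk.Text(window, height=10)
        result_box = tk.Text(window, height=10, state=tk.DISABLED)
        button = tk.Button(window, text='Check',
                           command=lambda: check_urls(urls_box, result_box))
        urls_box.pack(side=tk.LEFT)
        result_box.pack(side=tk.RIGHT)
        button.pack(side=tk.BOTTOM)
        window.mainloop()

    if __name__ == '__main__':
        main()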
Let's test it with a few more URLs.
At the bottom of this page, I have a few more test URLs that will return different HTTP codes.
So let's copy them.
Let's run them here.
Correct.
So this is how it works.
We can modify things, and you can see the changes are reflected on the right side. So this is the simple GUI for the uptimer that we will be using in this chapter.
|
|
show
|
0:57 |
Building an executable application is probably the toughest part of this course because depending on the specific set up of your computer, you might get different errors than I do.
When I was preparing this material, I ran into multiple issues.
Some dependencies were missing, or I ran some commands wrong and got weird output.
I worked through all those problems, and I tried to prepare the easiest way to build a package.
But be prepared that you might encounter different issues.
You might be missing some global dependency that I accidentally had installed.
Or maybe some environment variables are set differently on your Mac Os than on mine.
So if something goes wrong and you get different errors, first try to remove the virtual environment, reinstall the guptimer package, and try to build it again.
Otherwise, I'm afraid you will have to Google how to fix some of the problems.
|
|
show
|
7:13 |
Since I'm using a Mac, let's use pyinstaller to create a Mac application first.
Later, I will switch to Windows and we will do the same for a Windows application.
First, make sure you have created and activated the virtual environment and that you have run python setup.py install.
I have done this in the previous lessons, so I don't have to do this, but there is no harm in running it again.
Next, we have to install pyinstaller.
The safest bet is to Install it in the same virtual environment where we keep our application.
So inside the virtual environment run pip install pyinstaller.
If you go to the pyinstaller documentation, you will see that the basic usage is quite simple.
You just run pyinstaller command and you provide the path to your main script.
So, in our case, it's the gui.py file.
So let's try this.
You will see a lot of output and it's probably going to fail.
That's because we are using pyenv, and pyenv by default doesn't install some development packages that pyinstaller requires. But luckily we have this helpful message saying that we should rebuild our Python with the --enable-shared option on Linux or the --enable-framework option on macOS.
So we have to either reinstall one of the existing Python versions or install a new one using this --enable-framework option.
I've decided to install a new one, and since I already have 3.9.0 and 3.8.6, I will install 3.8.5.
To install a Python version using pyenv with --enable-framework, we have to run a command like that.
This will set the Python configure options for the current command to also include the --enable-framework flag.
So let's give it a try.
Remember, if you're on Linux, just replace enable framework with enable-shared.
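The command is along these lines (the version number is just the one I picked in this lesson):

    # macOS: build Python as a framework so pyinstaller can use it
    PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install 3.8.5

    # Linux equivalent
    PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install 3.8.5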
Now that we have this new Python version, we have to switch to using it and recreate our virtual environment and reinstall all the packages again.
Let's run pyinstaller again.
This time it should work. No errors, that's a good sign.
So now if we look inside our current folder, you will see that we have two new folders.
We have the build folder, where all the building happens, and we have the dist folder that contains our application.
pyinstaller has also created this gui.spec file where we can specify some options for building, but we're not going to touch it. First, let's go inside the build folder.
There is a bunch of files that pyinstaller is using when building your application.
But one file here is actually quite important.
There is this warn-gui.txt file that contains all the warnings from the build process.
In our case, we got a bunch of warnings that some optional dependencies are missing.
Hopefully, it won't cause any problems.
But if in the future you run into some problems and you can't build your application this is the first file, that you should check before you start debugging.
Let's go back, inside that dist folder we can find the final product of pyinstaller our Python application.
It's simply called gui.
Let's try to open it from Finder.
We can find it on this list because it's one of a few files in this folder that contains the terminal icon.
It takes a few seconds, but it seems to be working.
Let's test it with some URLs.
Cool, so it seems to be working fine.
One thing that I don't like is that, by default, we have all those files here in this folder that we have to ship together with our application.
So we can rerun pyinstaller with the --onefile option, and it will pack all those files together into one file.
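That rerun is just the same command with one extra flag, roughly:

    pyinstaller --onefile gui.py    # produces a single executable in dist/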
Let me first remove this dist and build folders.
And also the gui.spec, wrong folder.
dist.
Okay.
And now, inside dist folder, we have a single gui binary that contains all the dependencies.
Let's give it a try.
Oops, without the dot... actually, with the dot.
Now it's working, so we can run this simple program on our computer. But we could actually do that even before. So let's test it on a brand new macOS installation that shouldn't even have the latest Python installed.
|
|
show
|
1:35 |
So, here I have this brand new macOS installation.
It's still using macOS Catalina because I don't like to be an early adopter of new operating systems, but anyway, it's a pretty empty macOS system and I haven't installed anything, so it's a perfect place to test our GUI application.
So here I have mounted my main operating system.
If we go inside Chapter 13, this here is our GUI.
So let's move it to the desktop and let's try to execute it.
Cool.
We have the window, and if we try to ping some website, it seems to be working.
That's awesome.
And just to show you that I don't have any Python version installed, let me run python --version.
So here I still have the Python 2.7 that comes with macOS.
I don't have a Python 3 installed.
No, I don't want to do that.
Stop it.
OK, so yeah, all the Python binaries and all the pip packages like requests, everything was bundled together inside our gui application, and it seems to be working fine.
So that's how you can use pyinstaller on macOS. In the next two lessons I will show you how to build the same application on Windows and how to test it.
If you're not a Windows user or if you're not interested, you can skip those lessons.
|
|
show
|
9:08 |
In this and the next lesson we're going to build the gui application using the Windows operating system.
I'm running Windows 10 on a virtual machine, and it's a brand new installation, so I haven't actually configured anything. It's going to be fun to use because I haven't used Windows in ages and I'm not really sure how things work here, but, well, let's give it a try.
First, we need to install Python, so let's open the browser.
Last time I used Windows, it was still using Internet Explorer, and you mostly used it to download a different browser.
But I've heard that edge is now much better.
This is going to be a bit slow.
As I said, it's a virtual machine, so I will try to speed up some things. To get a Python installation, we are going to go to the python.org website and download the installer from here.
You might be wondering why I'm not using pyenv. For a GUI application that uses tkinter, the official installer will also install some additional bindings for tkinter, and I'm not sure if pyenv-win would do that for you.
You can, of course, still use pyenv-win.
There is nothing wrong with installing one Python version with the official Python installer and then using pyenv-win to install different Python versions.
So that's what I would recommend you do: the first time you start using Windows, you install the official Python version, probably the latest one available, then you use it to install pyenv-win, and then you use pyenv-win to install any additional Python versions.
So make sure you check "Add Python 3.9 to PATH", and I will go through the customized installation so we can see what's happening.
Documentation is not really necessary, but you can leave it.
This part is very important.
Those are the additional dependencies that we will need for tkinter, so make sure this is clicked and the rest is up to you Okay, once installation is finished, we can probably also click that because it looks important.
And now let's grab the files from the host folder.
So, I go here and let me just copy the whole folder.
I have so many files because I have the virtual environment there, and we are copying all the installed packages.
I could remove it to make this copying a bit faster, but it's too late now, so I'm just going to speed up the video. Okay?
Once we have that, let me drag this to the middle.
Come on.
Oh, man.
This machine is so slow. Let's go inside and clean up a little bit.
We don't need build.
We don't need this.
Oh, wrong key.
I keep pressing the command key instead of control.
No, I don't want to start menu.
Go away.
How do I close it?
Okay.
Come on.
Stop doing that.
One by one.
No Not this button.
Sorry.
This this and this goes away.
not this button.
And inside here, this goes away and the rest stays.
And I want to see the hidden files to remove the venv.
So this goes away.
You can see there's a lot of files here.
This goes away, this and this goes away, and here everything looks fine.
All right, so now we have basically the same files that we had when we started this chapter. Now I have to create a virtual environment and install all the additional packages.
So, how did I do this on Windows?
I guess we have to find a command line.
Ha, there it is... I hope so.
Yes, there it is. Let's move our files here.
So now we should be able to run Python command and have something here.
Yes, we do have Python.
So let's create a virtual environment.
And now I have to activate it.
So I need to find the equivalent of source command on the windows.
Let's go inside this folder and see what we have there.
I guess I can run the activate.bat. Yes.
Perfect.
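For reference, the Windows equivalents of the commands we've been running look roughly like this in cmd.exe (venv is just the folder name I use here):

    python -m venv venv
    venv\Scripts\activate.bat
    python setup.py install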
All right, let's go up and let's install all the dependencies and let's see if we can run our guptimer.
Okay.
Perfect.
It's working.
Great.
That was actually a pretty smooth experience so far.
Okay, let's install pyinstaller and let's try to create a package.
Once we have that, let's try to build our package.
Oops, Different kind of slash.
I forgot.
We're using backslash here.
Looks like everything was successful.
Awesome.
We have the gui.exe file, and it's starting our uptimer.
Let's make sure that it still works and it does work indeed.
Great.
Let's run pyinstaller one more time to create one file with everything bundled just as we did for macOS.
And then in the next video, I will use yet another clean Windows virtual machine, and we'll try to run this package.
Cool.
I have the gui.exe, and it even has the pyinstaller icon.
That's so nice.
So now I'm going to take this exe file and move it to a clean new Windows installation that doesn't even have Python installed, and then we'll check if it still works. So I will see you in the next lesson.
|
|
show
|
1:32 |
I have created another Windows virtual machine, and I have moved our gui application here.
Let's try to run it.
Unfortunately, it will take a bit of time to start, so I'm speeding up the video here.
I'm not sure if it's a problem with the fact that I'm using a virtual machine or that pyinstaller, is slow by default.
But you can check out the documentation and see if you can optimize pyinstaller a bit.
After a long wait, we can finally see our familiar graphical user interface.
Let's test some website, and as you can see, even running the Python code here is slow.
It took around five seconds to test this URL. But even though running our application was super slow, at least we didn't have to install anything.
If I run the python command, you can see that nothing happens, and I get this pop-up from the Microsoft Store to install Python.
So yeah, we don't have Python installed, yet we managed to run our Python application, which is pretty great.
So, as you can see, pyinstaller is a really great tool that you can use to package and distribute your Python application. That sums up our lessons about pyinstaller on Windows.
If you get some pop-ups about viruses, check out the next lesson, where I will explain what the problem is.
|
|
show
|
1:16 |
When we were building a Python application using pyinstaller on Windows, you probably noticed this pop-up about a virus.
It turns out that some antivirus software will detect programs built with pyinstaller as viruses.
In my video, I managed to take this executable file and move it to a different Windows machine, and there it ran without any issues.
But it might happen that when you send this executable file to someone, Windows or other antivirus software will complain, and they won't be able to run it. When you Google for pyinstaller and virus, you will probably find some answers.
One of them is that pyinstaller comes with some pre-compiled bootloaders for different operating systems.
And when your Windows machine is complaining about that, you will have to compile your own bootloader, so we can go to the pyinstaller documentation.
Here you will find all the information about building your own bootloader.
So if you're building an application with pyinstaller and you get this virus warning, I suggest that you go to the pyinstaller documentation, search for building the bootloader, and then follow the steps to build your own.
That hopefully should solve it.
|
|
show
|
1:21 |
In this chapter, we took an existing Python application and we turned it into an executable application that we can send to someone who doesn't even have Python installed.
It happened to be a gui application, but it might as well be a CLI terminal application.
If you want to build a GUI application from scratch and you know that you want it to work on multiple operating systems, there are two other solutions that you can use.
There is Kivy, and there is beeware.
They both let you build Python applications for different desktop and mobile operating systems.
Kivy uses its own widget styling, and BeeWare uses the native elements.
So an interface built with Kivy will look similar on different operating systems, while an interface built with BeeWare will look more native.
That is, it will use the typical Windows style of buttons or select boxes on Windows, and the macOS styles on macOS.
Check them out if you're thinking about building a cross-platform application with Python, although don't expect them to work flawlessly.
Python is still not the best choice to build, let's say, Android or iPhone applications, and choosing a different tool like React Native is probably a much better idea.
|
|
|
54:52 |
|
show
|
1:27 |
In the final chapter, I want to talk about deploying your application.
Depending on what you build, it will be easier or harder to deploy it.
Also, the meaning of deploy will be different.
If you have a Python package, all you need to do is publish it on pypi, and people can use it.
If you have an executable application, then you can either send it directly to someone or put it on GitHub or GitLab, and people can grab it from there.
If it's a large file, then you can put it in Dropbox or in Amazon s3 bucket or any other type of file hosting.
So that's also rather straightforward.
Now the problem starts when you have a website.
You can't just serve websites from your computer.
Okay, technically, you can.
But what if you need to reboot your computer or if your IP changes?
And if your website gets a lot of traffic, your computer might not have enough resources to handle it.
Not to mention the fact that you don't want people constantly hammering your computer with their requests and wasting your computer's resources to serve a website.
No, there are better ways.
So, in the rest of this chapter, I will talk about deploying a Web application.
I will start with comparing virtual private servers, platform as a service and container solutions.
And then I will show you how to deploy a simple website to Heroku and how to build and deploy a docker container.
|
|
show
|
1:14 |
Unlike the Python package that you can just put in the Python package index or unlike the GUI application that you can just put on some file hosting server.
If you're building a website, you need to host it somewhere.
And there are surprisingly many different options that you can choose from.
The three most popular ones are a virtual private server, platform as a service, and a solution based on containers.
For example, kubernetes.
There are some main differences between each of those solutions: the level of control that you have, how scalable it is (for example, spinning up 100 more nodes in Kubernetes is just a matter of one configuration setting in one file, while setting up another 100 virtual servers by hand is a nightmare), and obviously the price.
Usually the more you have to do yourself, the cheaper it is.
This is especially visible with the platform as a service type of hosting.
They will take care of everything securing your application, making sure it's up and running, updating packages on the server and so on, so it saves you a lot of time, but it will cost you some extra money.
Let's discuss each solution in more detail.
|
|
show
|
0:58 |
Using a virtual private server is probably one of the oldest ways of deploying your application. You rent a server from a company and you use it to serve your website. The most popular solutions are DigitalOcean, Linode, or AWS.
If you're using Linux, then basically, in the same way as you set up everything on your computer, you now have to set it up on a remote server.
Compared with other tools, this one gives you the most control.
You can really do whatever you want on your server, but it also means that you're the only person responsible for your server.
You need to make sure that it's secure, so you have to enable a firewall.
You need to run periodical updates and so on.
It's perfect if you have experience with setting up Linux servers and if you have time to actually do everything yourself. With that, you can set up a few websites that should be able to handle light to moderate traffic.
|
|
show
|
1:23 |
If you don't have the skills required to manage your own server, or if you simply don't have time to take care of that and you prefer to pay a little extra to have someone else handle this for you, you should check out the platform as a service solutions.
The most popular one is Heroku.
Heroku will take your code from GitHub or GitLab and deploy it to their own servers.
If you need a database or a Redis server, they also have you covered.
They will create and manage a database for you, and all you need to do is to modify the configuration of your website and point it to use that database.
They will also make sure that servers are up to date and their configuration is secure, but you don't get control over their servers.
To deploy your application, you need to write a special configuration file that Heroku will use.
If something goes wrong, it's a bit harder to debug.
In the past, there was no way to SSH to the server with your application. Now there is, but you still don't get as much freedom as owning your own virtual private server.
Also scaling up your application with Heroku is much easier than with the virtual private server, because all you have to do is to increase the number of Web servers in the user interface or in the command line tool that they offer.
|
|
show
|
3:20 |
Yet another way to deploy your application is to turn it into a container.
This way got very popular with the introduction of docker, and now a lot of containerized applications are running under the kubernetes orchestration.
This is a great solution if you have a massive application that you need to quickly scale up or scale down depending on traffic spikes, or if you deploy your application multiple times per day, so you need to set up some continuous delivery pipeline. On the other hand, learning Kubernetes only to set up a cluster for a website that has fewer than a few thousand visitors per day is a terrible overkill. But the idea of containers is pretty great, and I recommend it even for medium and small projects.
Basically, if you can build a container with your application and make it work on your computer, then you can take this container, deploy it on a server, and it will also work there.
I saw this joke on the Internet where one developer was saying, "it works on my machine," and his boss responded, "okay, then we will ship your machine to the customer," and the caption was: that's how Docker was born. I think it pretty accurately shows how Docker and containers work.
I use the word container a lot, so if you're new to Docker, you might be confused.
A container is basically a package that contains your application and all the dependencies that it requires, so it's very similar to the executable Python application that we built in the last chapter.
It's like an executable application that docker can run.
It can be a Django or flask website.
It can be a command line application, or it could be a GUI application.
It can be basically anything that you can run on your computer.
So, once we have such container, how do we deploy it?
One way is to spin up a virtual private server, install Docker, and then run this Docker container.
Some hosting providers can further simplify this process.
For example, DigitalOcean has those one-click applications that let you create a server with some preinstalled software, and as you can see, they have one for Docker.
So when you click this create docker droplet, it will create a server with docker installed and ready to use.
You can also search for the phrase "managed Docker hosting".
This will return a list of different services that work like Heroku, but for Docker containers: they will take care of setting up everything for you, and all you need to do is give them your container.
And if your application is really big and you need a lot of machines, you can use kubernetes.
The main difference between kubernetes and Docker is that kubernetes works across multiple machines.
That's why we often say Kubernetes cluster. On the other hand, Docker runs on a single machine.
You can create and manage a Kubernetes cluster yourself, but you can also search for managed Kubernetes platforms.
This will return a list of companies that can set up a Kubernetes cluster for you and give you some interface to easily manage it.
As you can see, we have a managed Kubernetes from DigitalOcean, something from OVH, and so on.
|
|
show
|
7:35 |
Before we can move on to deploying our web application, we need to have some kind of web application to start with.
We were not building a Web application in this course, so we can't use the uptimer as it is.
I went ahead and I turned our uptimer into a website.
Let me walk you through the code.
The main file of our application is called main.py, and I've decided to use FastAPI to build the website.
Initially, I wanted to use Flask, but then I decided that FastAPI is very similar to Flask, and we can also benefit from the asynchronous support that it has.
So if you don't know FastAPI, don't worry.
It's a very simple website and I will try to explain everything.
If you want to learn FastAPI, there is already a course at Talk Python that will walk you through the basics of it.
So, what do we have here?
First we create the FastAPI application.
Then we define the folder with our templates.
So, here we have only one, template with some HTML code.
Next, I have a dictionary that maps some information; I will come back to it later. And here we have just two endpoints.
First we have the home page, and all it does is display the home.html template.
So let's finally open the template, and here we have pretty standard HTML stuff.
We import Bootstrap to get some nice styling, and then we create a simple two-column layout to display the form and then the responses from the website. Looking at the HTML code probably doesn't tell you much.
So, let's start the web server so we can see how it works.
Go to the terminal, make sure that you are using a virtual environment, and then install the requirements from the requirements.txt file.
I already ran this command before, so I will have all the requirements ready.
Once we have the requirements installed, we have to start the Web server, and now we can go to the browser.
So, that's what our simple application looks like.
We have the text area here where we can put some URLs to check.
Then we can click this button and we'll get the responses here.
Let's test it with some URLs.
I have some of them stored in the main file.
So let me copy this.
So when we click check, we go to the /check URL, and it still displays the same form.
But this time we get the responses color coded on the right side.
Since Bootstrap has only one red label, I'm using the Orange label for 404.
Okay, so now that we see how it works, let's go back to the code.
So, that was our home template.
The only complicated part is the special handling.
So if the status is zero, we output "wrong URL".
And if the status is -1, we output time out.
Otherwise we output the status.
Let me show you how it works.
Let's go back here, to the HTTP status website.
We can add the sleep parameter, and here we can specify for how many milliseconds we want it to sleep.
The library that we use to ping those websites has a five-second timeout by default.
So if we set the sleep to seven seconds, it should get a timeout.
And let's also put a website that doesn't exist.
And as you can see here, we have two errors.
The first is the wrong URL, and the second gives us a timeout.
Go back to the main.py.
So the home page returns just the form, and from the form we can send a POST request to the /check URL. This function accepts the URLs sent in the form.
That's how we have to declare that we're receiving form data with FastAPI. And then, since the URLs field is basically a block of text with newline characters, we have to split it to get a list.
Then we create a list of tasks using the get_status function from the helpers file, which I will show you in a moment.
Once we have the list of tasks, we use asyncio.gather to execute all the tasks in an asynchronous manner, and then we get a list of statuses.
Then we return this list of statuses together with the color dictionary, and that's basically what we display here.
So, this color dictionary is used to map Bootstrap classes to statuses. For example, if we get status 100, we divide it by 100, which gives us 1, and then we map it to the background-primary class; the same for any status starting with 200, 300, 400, or 500.
And then we have two special cases: for a wrong URL I'm returning status 0, and for a timeout I'm returning status -1. This is not very pretty handling of the special cases, but since we only have two, I will go with that.
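Here is a rough sketch of that endpoint; the parameter names, template variables, and the get_status helper are assumptions based on this walkthrough, so the real main.py will differ in the details:

    import asyncio
    from fastapi import FastAPI, Form, Request
    from fastapi.templating import Jinja2Templates

    from helpers import get_status  # async helper, shown in a moment

    app = FastAPI()
    templates = Jinja2Templates(directory='templates')

    # map "status // 100" (plus the special cases 0 and -1) to a Bootstrap class
    COLORS = {1: 'bg-primary', 2: 'bg-success', 3: 'bg-info',
              4: 'bg-warning', 5: 'bg-danger', 0: 'bg-dark', -1: 'bg-dark'}

    @app.post('/check')
    async def check(request: Request, urls: str = Form(...)):
        url_list = [u.strip() for u in urls.splitlines() if u.strip()]
        tasks = [get_status(url) for url in url_list]
        statuses = await asyncio.gather(*tasks)   # run all checks concurrently
        return templates.TemplateResponse(
            'home.html',
            {'request': request, 'statuses': statuses, 'colors': COLORS},
        )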
So the last part is the helpers.py.
And here we only use the httpx library to perform the HEAD request in an asynchronous manner.
httpx is basically a replacement for the requests library that can perform asynchronous requests.
So if you're familiar with the requests library, just keep in mind that httpx is basically a drop-in replacement.
It has the same functions and it accepts the same parameters.
So, as you can see, I have the asynchronous functions basically everywhere.
One of the reasons I decided to use FastAPI is that it supports asynchronous code out of the box and for this type of website that we are building asynchronous code is actually a good idea.
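The helper itself boils down to something like this; the 0 and -1 return values follow the convention described above, and the exact exception handling is an assumption:

    import httpx

    async def get_status(url: str) -> int:
        """Return the HTTP status code, 0 for a wrong URL, -1 for a timeout."""
        try:
            async with httpx.AsyncClient(timeout=5) as client:
                response = await client.head(url)
                return response.status_code
        except httpx.TimeoutException:
            return -1
        except (httpx.HTTPError, httpx.InvalidURL):
            return 0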
Let me show you an example.
So let's say I have those four URLs, each of them has a time out of 3 seconds.
If I don't use asynchronous code, then it would take around 12 seconds to check all those websites.
If we check the developer console.
No, no, I wanted to go to the downside.
Want to move them?
Okay, let's go to network and let's click check again.
I'm not doing any caching or anything. As you can see, it's sending, and it takes around 4 seconds.
So, it's much less than the synchronous code would take.
And if we rerun it, then 3.5 seconds.
So 3 seconds was the delay from the HTTP status website, and half a second was some handling on our side.
So that's how our simple uptime monitoring works.
Let's now try to deploy it.
|
|
show
|
8:23 |
Let's deploy our application to Heroku.
As I said, Heroku is more expensive than other options in terms of how much you pay for a Web server.
But it's a platform as a service.
So if you're not very familiar with Linux and setting up servers, it's a perfect solution because it takes away all the difficult stuff.
They have a Web interface that you can use to connect your application from GitHub.
They have a really good documentation.
And for advanced users, they have a command line interface app that you can install and use to deploy or manage your application from the terminal.
And even though I said they're a bit more expensive than other solutions, they also offer a free tier that is perfect for a small project.
So let's dive in.
Once you create an account with Heroku and you log in, you will see this interface, or a similar one if you're watching this course in the future. If you scroll a bit down, you can see the documentation for your language.
But if we go for Python, you will see a getting started guide that explains how to basically prepare and deploy your application.
If you go to the setup, you can see that this guide is using the Heroku CLI app, and later on you can, for example, use the command line interface to view the logs, to scale your application, and so on.
But for a simple project, you can actually connect Heroku to GitHub and use the web interface to deploy your application.
So, let's do that.
So here we have to click Create new app.
We can name it somehow, and we can choose the region.
Since I'm based in Europe, I will go for Europe, just to make it a bit faster for me.
Just make it a bit faster for me.
Here you can click this option, Add to pipeline, and this pipeline is used for continuous delivery, so you can set up different steps for different parts of your CD pipeline.
The best example here would be to have a staging server where you automatically deploy code for your application, and once you see that everything is working fine, you have a different production server where you manually deploy stuff.
Since I don't have a CD pipeline, I'm going to skip this step and I just click create app.
Now we have to specify how we're going to deploy our application.
We have three main options: first, to use the Heroku CLI app that I showed you a few moments ago.
Then we can use the GitHub.
So this is what we are going to use.
And we could also deploy a Docker image, again using the Heroku CLI.
So, let's connect our GitHub account.
If this is the first time you're using Heroku, it will ask you for permission from GitHub.
So you have to give access, and then you should be able to search for a repository.
I'm using a personal repository, and I called this project uptimer-website.
So here it is, I click connect.
And now we have two options.
We can enable automatic deploys, so each time we push a new commit to a specified branch, it's going to automatically deploy our application.
Or we can do a manual deploy, so we will have to click this button every time we want to deploy a new version.
But how does Heroku know how to deploy our application?
I mean, what kind of web server to use, or whether to use a database or not?
Well, all that is specified in a file called Procfile that you have to add to your repository.
So let's do that now.
We create a new Procfile, and as you can see, it even has the Heroku icon here. Here we specify that we want to have a web worker, and this web worker will start a gunicorn server with three workers using the uvicorn worker class.
And we're pointing it to the app from the main.py file.
If you're curious, I basically copied this piece of code from the documentation.
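For reference, the line in the Procfile is roughly this; the module and app names match this project, and the command itself is adapted from the uvicorn deployment docs:

    web: gunicorn main:app --workers 3 --worker-class uvicorn.workers.UvicornWorker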
So, now we can save it and commit this file to our repository.
And then we go back to the Heroku, UI to deploy our application.
So, now we can click Deploy branch, and underneath you can see the progress.
It will take a bit of time to install all the dependencies, but we should be done soon.
Cool.
So, as you can see, here we have our uptimer-website.herokuapp.com, and "Your app was successfully deployed."
We can click this view button and here it is.
And as you can see, it's working cool.
What if I ping it?
Yep, it's working.
So, that's how easily you can use Heroku to deploy your application.
And just for fun, let's enable the automatic deploys and make some changes to see how they get automatically applied to the Heroku app.
So we click this and we go back to the code.
Let's change something in the template, and if we go back to the interface, we should see somewhere that it's building our application.
Let's refresh. If it's not here, let's go to the Activity tab.
And here you can see the build automatically started.
Cool, we can click View build progress and we'll see the same output.
If something goes wrong and you're not using the Heroku CLI app to view the logs, you can click View logs here to see the logs from your web server.
As you can see, we have the same output as we had in the terminal, so our app should be up and running.
Let's go back here. Deployed, cool.
We can click Open app, and as you can see, now the "online" text is uppercase, so that's really cool.
So basically we are using the free tier to deploy our FastAPI app to Heroku. I didn't even have to put my credit card anywhere. One last thing.
So, our application is pretty simple.
We don't have a database or anything, but as your application grows, you will probably have to add more things: well, first of all, a database, but then maybe a Redis server and stuff like that.
So Heroku offers a lot of add-ons.
You can go to this Find more add-ons page, and here you can see basically any kind of service that you might need for your application.
You have different versions of Redis, different kinds of databases.
Here there is the Postgres version offered by Heroku, and most of those add-ons have a free tier.
If you go down here, you can see that for free you get a database with 10,000 rows, which is not really much, but enough to test your application.
But then, as you can see, the pricing goes up very fast.
So, that's basically how you would use Heroku to deploy your application.
You can use the web UI or the command line tool, and then you can enable more add-ons as you go.
In the next lessons we will compare this to using a Docker image.
|
|
show
|
1:42 |
The second way of deploying your Web application that I want to show you is to use Docker.
When you first start using Docker, especially for such a simple application as we have here, you might feel that we are overcomplicating things: instead of just running pip install -r requirements.txt and then running the uvicorn command, now we have to write a Dockerfile that will download something, build something, move stuff around, change some permissions, and so on.
Sure, that's a bit more work.
But once we have that, deploying our application is going to be much easier.
We can either find a managed Docker hosting provider that will take this image and run it for us.
Or we can just spin up a virtual server, install Docker, and tell it to run our image.
We no longer have to do all the installation steps by hand.
In the long run, it will save you time, especially if you have a more complicated setup that requires multiple services, like a database, a Redis server, and other external tools, talking to each other. To follow the next lessons, make sure that you have Docker installed on your computer.
If you don't, just go to the docker.com website and click this Get Started button.
Here just download the Docker for your operating system.
Ah, I clicked Windows, but if you're on a Mac, click this Download for Mac.
If you're on Linux, you have to use the Linux engine.
I already have Docker installed, so I'm not going to do this.
And if at some point you are asked to log in, just go to the Docker website, click Sign In, and create an account there; it's free.
Once we have Docker up and running, let's move on to writing a Dockerfile.
|
|
show
|
4:00 |
Docker uses a configuration file called a Dockerfile.
Inside of it, you have to define all the steps to create your image.
So let's create this file, and let me copy-paste some code. Let's see what we have here.
First, we have to define what image we want to start with.
You could start with a completely empty image and then install everything yourself.
But that's too complicated.
On the other hand, since we are using FastAPI, you could search for an image that is more appropriate for FastAPI.
If you go to the documentation, you will see a section on deploying with Docker, and here you can see that there is an official image that includes uvicorn, gunicorn, and FastAPI, which is pretty much what we want to have.
But maybe you're not using FastAPI, so instead I want to show you how to start with a generic basic image.
That's why I decided to use a Python image.
It's built on top of Debian, and it has Python installed inside. In general, when you're building a Python project, using the official Python image is a good idea.
So first we define the image we want to start with.
Then we copy just the requirements files inside of it.
That's because we want to leverage docker caching.
Each instruction that you define here creates an intermediate layer, and if nothing changes, this layer is reused thanks to Docker's caching.
So if the requirements.txt file remains unchanged, Docker is going to reuse the cached layer that contains all the installed packages.
Installing Python dependencies will usually take a couple of seconds, or maybe even a couple of minutes, so it's a really good idea to leverage caching here.
Next, we create a new user called app.
This step is completely optional, but it's a good practice from the security point of view.
By default, if you don't define a different user, Docker will run your container as the root user.
If by any chance your container gets hacked and someone can SSH into it, and they have the root user, they will also be able to compromise other containers in the same network and in general do bad things.
If you add a new user and you run your Docker containers as this user, you really limit what a potential hacker can do with your Docker container.
So once we have this app user, we tell Docker that the working directory is now the home directory of this app user. That way, when we copy something, we don't have to add the full path /home/app/whatever.
And then finally, we tell Docker to use this app user from now on.
So when we run the entrypoint command at the end, we will be doing this as the app user.
Next we expose port 80.
This step is actually optional, because we'll have to expose some ports anyway when we run the docker run command.
But it's a good idea to list which ports your Docker image is going to expose, so people who will be using this image can immediately see that.
And then finally, we copy the rest of the code inside the container.
At this point, usually the caching will stop working.
So whenever you change, something in one of the files, inside this chapter and you run Docker build again, we will use the cache containers all the way up to the step and then from now on Docker will be rebuilding the containers again.
And the final instruction in our Dockerfile is the entrypoint, where we specify that we want to run the gunicorn command with three workers. We want to use the uvicorn worker type, we want to bind to the IP 0.0.0.0 on port 80, and then we want to run the app object from the main.py file. In the next lesson, we will build this image.
|
|
show
|
5:05 |
Before we can build our Docker image, make sure that we actually have Docker Desktop running.
And if we do, we run the docker build -t uptimer-website . command. What this command does is build an image, give it the name uptimer-website, and use the Dockerfile from the current directory. That's why we have this dot at the end.
Now we have to wait for a bit. Depending on the speed of your Internet connection, it might take longer to pull the base image from the Docker registry, but sooner or later we will be done. Since I was playing with Docker a bit before recording this part, you can see that I'm using the cached layers in steps 2, 3, 4, and 5, and then, since I made some modifications, the last step, the copying, is actually being executed.
But since copying files doesn't take that much time, it's actually going to go very fast.
So now that our image is built, we can run it with the docker run command.
This command will run our uptimer website image and it will map port 80 from the container to port 80 on our computer.
If you forget to add this -p 80:80 parameter, your Docker container will still be running. Your application will be running inside the Docker container, and it will be listening on port 80. It's just that the container won't be exposing port 80 to you, so you won't be able to access that website from your computer.
You would have to somehow log into the container, and then you would be able to see it.
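To recap, the two commands from this lesson look roughly like this (run from the folder that contains the Dockerfile):

    docker build -t uptimer-website .
    docker run -p 80:80 uptimer-website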
Once you run it, you can see a few things. First, we are using the IP 0.0.0.0 on port 80 from inside the container. Due to the nature of containers, it's usually a good idea to bind to 0.0.0.0 instead of 127.0.0.1. Next, as you can see, we're running three workers, which is more than needed for our localhost example.
But I just wanted to show you how we can use more workers to handle more traffic in the future.
So now we should be able to go to the browser and access port 80 on localhost.
Perfect.
So that's the uptimer running in Docker.
If I stop it, you can see that it's no longer accessible.
And just to show you what happens if we forget this parameter, let's remove it. Everything looks the same, because this is the output from inside the container, but if we try to access this website in the browser, it's not accessible.
Okay, so we have a Dockerfile, and in the next lesson, we will try to deploy it somewhere.
But before we wrap up this lesson, I just want to show you what happens if you don't separate the requirements.txt from the rest of your code. So if we go back to the Dockerfile, I mentioned that we first copy the requirements.txt, we install the dependencies, and then we copy the rest of the code.
Now, let's go and change one of the files; the template is usually the easiest one to modify. And after each change, we have to rebuild our Docker image, so let's run the build command again.
And as you can see, we are using caching all the way until the copy step.
Now, if we moved things around, for example, if we just copied all the files here first, then ran pip install, and then ran gunicorn (let's also remove this line), everything would still work fine. We would still be able to build our Docker image. It's just that each time you change one of the files, you would be rerunning the pip install step over and over again.
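Just as a sketch, the reordered (and cache-unfriendly) top of the Dockerfile described here would look something like this:

    FROM python:3.9-slim
    # Copying all the code first means this layer changes whenever any file changes...
    COPY . .
    # ...so the pip install step below can no longer be served from the cache.
    RUN pip install -r requirements.txt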
Let me show you what I mean.
If we go back here and rebuild the image, you can see everything is cached until the copy step. But now we are running pip install again. So as you can see, it's already at 20 seconds, and previously it took less than 4 seconds.
Okay, I will stop this and revert my changes.
So using caching correctly, running as a non-root user, and things like that are some of the things that you have to remember when you use Docker. If you want to learn more, I suggest that you read the Docker documentation. They also have a section with best practices, where you can learn the basics of how to write good Docker images. But for now, let's move on and publish our Docker image on Docker Hub.
|
|
show
|
3:02 |
The easiest way to use the docker image that you have created is to publish it on Docker Hub.
Docker Hub is a place where you can find different Docker images. So, for example, if we search for Python, we can see the official Python image. And here, if we look at the tags, here is the image that we used to build our own uptimer-website image.
So to publish an image on Docker Hub, make sure that you have a Docker account, because first we'll have to log in. The first step is to run the docker login command, and it will ask you for your username and password.
So I'm gonna type that behind the scenes.
Now make sure that you actually have a built image. As you can see, last time I cancelled the build, so I will have to build it again.
And now we have to tag our image. First, you specify the local image; as you can see, when we were building, we used the name uptimer-website for the image. Then you specify your username on Docker Hub, a slash, the name of the repository, and the tag that you want to use. latest is one of the most common tags, and it suggests to users that this is the latest version of your image. But you can also use things like version 1, version 2, or whatever you want. And once you have tagged your image, we just have to run docker push and specify that tag. Well, I forgot to copy the latest tag, but as you can see, by default it will use latest.
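Put together, the publishing steps look roughly like this (replace the placeholder with your own Docker Hub username):

    docker login
    docker tag uptimer-website <your-dockerhub-username>/uptimer-website:latest
    docker push <your-dockerhub-username>/uptimer-website:latest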
So while this is running, let's talk about the images on Docker Hub. Basically, everything that you publish on Docker Hub is a public image. So if you're in a situation where you want to publish a private image, because you don't want to share the code with anyone, there are some alternatives. For example, GitLab has a container registry. So if you create a private project on GitLab, you can also build a Docker image and store it there; that's one solution that you can use for private Docker images.
Another idea would be to just pay for a paid Docker account, and with that you can have unlimited private repositories. I think even on the free account you get one free private repository where you can push some images without sharing them with the rest of the world.
Okay, so this is done.
Let's actually see how it looks in my profile. If I go here... yeah... yep, here I have it: uptimer-website, updated a minute ago. So once you have your image published on Docker Hub, you can just use this command to pull the image on any other computer. So let's try to deploy our image somewhere.
|
|
show
|
3:53 |
Before we move on with this lesson, I had to make one change to the Dockerfile. I have changed the part where we bind to port 80, so now we use port 8000. In the EXPOSE instruction we are exposing port 8000, and here we are also binding to port 8000. So if you're following this on your computer, make sure that you change the port in those two places and then rebuild, retag, and republish your image.
So, basically, run docker build, docker tag, and docker push.
I had to do this because, since we are using a non-root user, there are some problems with using ports lower than 1024. So, instead of adding some additional commands to our Dockerfile and making it more complicated, the easiest way is to just use a higher port.
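For reference, the two changed lines in the Dockerfile would look roughly like this (assuming the entrypoint sketched earlier in this chapter):

    EXPOSE 8000
    ENTRYPOINT ["gunicorn", "main:app", "--workers", "3", "--worker-class", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000"]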
Now let's talk about the Docker playground. Before we move on to a paid Docker hosting, I want to show you a website called Play with Docker. This is a playground for Docker, where you can get a virtual machine for four hours and set up your Docker images there. It's very simple, but the most important thing is that it's free.
So, let's get started.
Play with Docker.
Yeah, Not here.
Yep.
Or you can go directly to the URL labs.play-with-docker.com. Again, you will have to log in with your Docker username and password.
You will be redirected, and when you log in, you will get the start button.
So once we click start, we can see this new interface.
So here we have a clock that is counting down from four hours; after four hours, everything that we have here will be gone. So the Docker playground is not a place where you can run a persistent Docker application.
But if, for example, you have problems installing docker on your local computer, this is a perfect place to go.
Also, when you want to practice using Docker outside of your local computer, you can use it.
So let's add a new instance, and this will create a small virtual machine for us.
And here we have to run the same command that we were running on our local computer, but this time we'll be using the image from Docker Hub. So keep in mind that this time we're exposing port 8000. And since Docker can't find the uptimer-website image locally, it's downloading it from Docker Hub.
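The command is essentially the same as before, just with the Docker Hub image name and the new port (the username placeholder is an assumption):

    docker run -p 8000:8000 <your-dockerhub-username>/uptimer-website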
Once this is done and everything seems to be working fine, we can click this button to open port 8000, and here we can see our uptimer. Great. So even though we have a terrible URL here, we actually have our uptimer somewhere on the Web, not on our local computer. And all we had to do was run this one single command. So, as you can see, once you have the Docker image ready, deploying it is much, much easier.
Let's test it to make sure it's still working.
Cool, it's working. So that's a URL that you can actually take and send to someone. And if they are fast enough and your virtual machine doesn't disappear after four hours, they will be able to see your uptimer website up and running.
How cool is that?
So next, let me show you how to deploy a Docker image to a more production-ready setting.
|
|
show
|
3:27 |
In order to deploy your Docker image on a production-grade server, you have to rent a server from somewhere. You might use Amazon, Linode, or DigitalOcean and just create a virtual machine, install Docker, and do everything manually. But some of those companies provide you with one-click apps. For example, DigitalOcean has a droplet image that has Docker preinstalled, and I think Linode has something like that too. But let's go with DigitalOcean.
So when you go to the DigitalOcean marketplace and you search for Docker, you will see this Docker one-click app, and here we have a Create Docker Droplet button. When you click it, you will probably be taken to a page where you can log in, and then you'll be taken to the DigitalOcean control panel. We can also click Create Droplets, go to the Marketplace tab, and search for Docker, or just click this one here. Then we select the cheapest VM type.
And I will select Frankfurt as the location of my droplet, since I'm based in Europe. Then here I will select all my SSH keys, and that's it, I can create the droplet.
And once this is ready, I can SSH into it. Once we have the droplet up and running, we can copy the IP and SSH there. And I cannot. Is it still not ready? Now it is, and we get permission denied, because we should not log in as the tutorials user, we should use root. So let's try one more time. Okay, now it's working.
And as you can see, if we run docker -v, we have Docker available, so everything seems to be working fine. To deploy our Docker image, we basically have to run the same command as we did with the Play with Docker playground. So docker run, then expose the port and specify the name of our image. Let's actually use port 80 this time. So first we specify the port on the host, and then we specify the port in the container, and everything seems to be working fine.
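That command looks roughly like this, mapping port 80 on the host to port 8000 inside the container (the username placeholder is an assumption):

    docker run -p 80:8000 <your-dockerhub-username>/uptimer-website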
So now we should be able to go to the browser, open this IP, and we should see our simple form. Let's give it a try... and perfect. Now our uptimer website is up and running on the DigitalOcean server. We are still using an IP, but you could connect a domain name here and use it as any other website.
Even though it's up and running, there are still two simple improvements that we can apply. First, we have to handle the situation when our droplet is rebooted; we want to have Docker up and running each time we restart the server.
|
|
show
|
6:39 |
One of the problems that we have with our Docker application right now is that if we reboot the droplet, or if this container stops for some reason, Docker won't automatically restart our container.
So let me show you what I mean.
Let's start the Docker container in detached mode, so I can still see my terminal.
We can see that our container is running, and if we go to the browser, it's here.
So if I reboot this machine right now, Docker won't restart our container. Let's give it a few seconds. If we check for running Docker containers, we will see that there are none, and our website is down; it keeps loading and then it's going to crash.
This problem can actually be easily solved by providing one additional parameter. If we run docker run with the restart parameter and tell it to restart unless we explicitly stop the container, Docker will automatically restart this container whenever we reboot the droplet or whenever something goes wrong and the container goes down.
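A sketch of that command, combining detached mode with the restart policy (the port mapping and image name are assumed from the previous lesson):

    docker run -d --restart unless-stopped -p 80:8000 <your-dockerhub-username>/uptimer-website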
So let's do another test.
Let's go to the browser, it's working and let's reboot the machine.
And if we list the running containers again, you can see that after the reboot, Docker has restarted our container.
Cool.
So with this simple trick, we told Docker to automatically restart our container if something goes wrong. Another thing that we could do is to set up some kind of very simple continuous delivery mechanism. For example, whenever we push a new version of the image to Docker Hub, we want to automatically deploy it. And this we can do with the Watchtower tool. Watchtower is basically a container that will monitor other containers, and if there is a new version of an image available, it will restart them.
All we have to do is take this piece of code and run it inside our droplet. I will, however, make some modifications. First, I also want to run it with the restart policy, so whenever our droplet goes down, we will also restart Watchtower. And I also want to change the default interval at which Watchtower polls for new images. By default, Watchtower will check every five minutes if there is a new version of an image on Docker Hub, which is a good default, because right now Docker Hub has some limits on how many images you can pull per hour. But I don't want you to sit here and wait for five minutes to see if our image was updated, so I'm going to change the poll interval to 15 seconds.
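A sketch of a Watchtower command with those two modifications; the exact image name and flags may differ from what the Watchtower documentation shows, so treat this as an approximation:

    docker run -d --restart unless-stopped \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --interval 15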
Okay, so we have Watchtower running. Let's just restart our uptimer website. So, let's see if our uptimer is running right now, and as you can see it is. Now we have to go back to our initial code, change something, and rebuild, retag, and republish a new Docker image.
So, let's change this "online" text to lowercase again. And that's it. Ah, it wasn't like that before; I was experimenting with something in the background. Let's rebuild, retag, and push.
Okay, now we go back to our server, and we have to wait for a bit. We can use the docker ps command to see when each of the containers was last restarted. And as you can see, our uptimer website was restarted less than a second ago. So yeah, that was Watchtower: it detected that there is a new image, it pulled this image, and it restarted our container. So now, if we go back to the browser and refresh, we should see the new version.
Cool.
Yeah, so it's working.
So that's how you can have a very simple continuous delivery mechanism, with Watchtower pulling new images. It's far from perfect, because when Watchtower is updating your container, it actually stops it. So if a visitor comes to your website while the new image is being deployed, well, your website will be down.
But to properly deal with this problem, you would need two different servers and a load balancer. When you're updating one of those servers, the load balancer stops sending traffic there; then you update the other server, stop the load balancer from sending traffic there, and then you should be fine. Which sounds like a lot of work, and it is a lot of work. You can do this with Kubernetes, but for our very simple website, using Watchtower is perfectly fine.
Yeah, that's how you can build and deploy Docker images. In the next lesson, I will talk about building something more advanced that requires, for example, a database or some other kind of external service.
|
|
show
|
2:44 |
We've only really scratched the surface of what Docker can do.
Our usage was very basic because all we had to do was to start a gunicorn server and that was it.
We didn't have to connect to any database.
We didn't have to store files in an Amazon S3 bucket or set up Redis for caching.
However, once your application starts growing, you will need to add more things.
And this is where you will need to use something like Docker Compose.
Docker Compose lets you configure different services like a Web server or a database and make them work together.
So you would have one Dockerfile for, let's say, gunicorn, another Dockerfile for a Postgres or MySQL database, and then you would use a docker-compose file to connect them together, open some ports, and let them talk to each other. Covering Docker Compose is material for a whole new course, so I'm not going to get into that, especially since this chapter is already long. But if you go to the Docker documentation, you will see that there is a section dedicated to Docker Compose, and one of the things there is an example of how to use Docker Compose with Django.
So if you go here, you can see that we have a Dockerfile, and then we have the docker-compose file, where we specify that we have two different services. One is a database that is using the postgres image; it defines some environment variables for the username and password. And then we have the web service, which basically starts a web server, mounts some volumes, opens some ports, and depends on the database.
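A stripped-down sketch of that kind of docker-compose.yml (the service names, credentials, and the Django run command are placeholders based on the documentation example, not the exact file shown on screen):

    version: "3"
    services:
      db:
        image: postgres
        environment:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
      web:
        build: .
        command: python manage.py runserver 0.0.0.0:8000
        volumes:
          - .:/code
        ports:
          - "8000:8000"
        depends_on:
          - db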
So, obviously, the Docker documentation is the best place to get started, but usually what people are looking for are some good examples of existing docker-compose configurations. And for that, you can go to the awesome-compose GitHub repository. This is a repository that contains examples of different Docker Compose configurations. The good news is that it contains a lot of different configurations, and each of them has a nice README explaining how to use it. The bad news is that for Python, you only have the NGINX, Flask, and MongoDB or MySQL combinations. So I suggest you take a look at how an existing docker-compose file might look, and then you can make the necessary changes. And that concludes our deployment chapter.
|
|
|
17:57 |
|
show
|
0:20 |
Hey, are you still watching?
Just kidding. Although I really hope you didn't fall asleep in the middle of the course.
Congratulations.
You have made it to the end of the course.
I hope you learned something new, and I hope that you are now ready to build some Python projects using the modern tools that I showed you.
Let's quickly review each chapter.
|
|
show
|
1:03 |
If you're planning to use VSCode to work with Python, the only thing that you really need to set up is the Python extension. You can also install Pylance to get better suggestions as you type, and you can add some additional extensions, like Django or Flask snippets if you use those frameworks, indent-rainbow to easily see different indentation levels, Python Docstring Generator to help you quickly write documentation for your functions, and so on. Once you have that set up, configure a linter like flake8 or pylint, set black as your code formatter, and start coding. When you want to run some code, the easiest way is to just press the green arrow in the upper right corner.
But if you're working with a Web framework and you need to start a server, go to the debugging panel and create a launch configuration, then you can start a Debugger.
Put some break points here and there and poke around your code.
And finally, don't forget that you can also test your code directly from VSCode. When you select a testing framework like pytest, you will get this nice sidebar for running your tests.
|
|
show
|
1:01 |
When you install a new operating system, it might come with some version of Python preinstalled, but quite often it's an old version.
So, no matter if you have an old version of Python or if you don't have any version of Python installed, I suggest that you install pyenv and use it to manage Python versions on your computer. If you are on Windows, install Python from the python.org website to get all the dependencies and then install pyenv-win. With pyenv, you can easily switch which version of Python you are currently using. You can change the Python version globally, locally (so for a specific folder and all its subfolders), or even just for the current shell session.
So it gives you a lot of flexibility.
And if you mess up something, you can simply uninstall Pyenv and you will get back to whatever version of Python you had before.
That's because pyenv doesn't touch the system Python that comes preinstalled with your operating system; it installs everything in a separate location.
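For example (the version numbers here are just placeholders):

    pyenv install 3.9.1   # install a specific Python version
    pyenv global 3.9.1    # use it everywhere by default
    pyenv local 3.8.6     # use a different version in the current folder (and subfolders)
    pyenv shell 3.7.9     # or only for the current shell session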
|
|
show
|
0:36 |
Since pip installs all Python packages in the same folder, we can't have two versions of the same package installed at the same time.
For example, when you have Django 2 installed and you want to install Django 3, pip will first uninstall Django 2.
That's not nice, because if you work on multiple projects, you probably need to use different versions of different packages.
That's why we're using virtual environments.
When you activate a virtual environment, you tell pip to use a different folder to install Python packages, and you tell Python interpreter to use modules and packages from that folder.
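On the command line that typically looks like this (the folder name .venv is just a common convention):

    python -m venv .venv          # create a virtual environment in the .venv folder
    source .venv/bin/activate     # activate it (on Windows: .venv\Scripts\activate)
    pip install django            # packages now go into .venv, not the global folder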
|
|
show
|
0:58 |
Sometimes you want to have a Python package that is available globally on your computer.
For example, the code formatter black or the code linter flake8.
Those are packages that you want to use in all your projects or even outside of any project.
So there is no point in installing them inside a virtual environment, because then you would always have to activate the virtual environment, run the black or flake8 command, and then deactivate it.
I mean, you can do this, but that's very inconvenient.
So a much better way is to use a tool called pipx.
pipx will install each package in a separate virtual environment, but at the same time they will be available globally.
So for you, there will be no difference in how you use them.
You will just call black or flake8 in your terminal, but at the same time, because they're installed inside separate virtual environments, pipx will make sure that their dependencies don't conflict with each other.
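A quick sketch of what that looks like:

    pipx install black    # black gets its own virtualenv, but the command is on your PATH
    pipx install flake8
    black --version       # works from anywhere, no activation needed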
|
|
show
|
1:19 |
When you want to build a Python project, it might be difficult to start.
First of all, you have to figure out what's the best way to write a specific project.
But there will also be a lot of mundane tasks that you will have to do at the beginning: writing a README file, maybe writing a setup.py if you're building a Python package, and things like that. So a much better way to start a Python project is to use a cookiecutter template. For example, if you're building a Flask or Django website or a Python package, there are very good templates that will help you start.
First, you will have to answer a few questions, like: What's your name? What's the name and the description of your project? What database are you using? And so on.
And then cookie cutter will generate a scaffolding for your project.
You will have a lot of things already setup.
For example, you might already have a setup.py, a tests folder with some example tests, or a contributing guide for people who want to contribute to your project, and now you can start building your cool Python application.
I really like cookie cutter because it makes starting a new Python project much, much easier.
Even though I sometimes have to remove some things from the files that it generates for me, it's still much faster to use it and remove the unnecessary files than to write everything from scratch.
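As a quick sketch, using one of those templates boils down to something like this (the PyPI package template shown here is one popular example, not the only option):

    pip install cookiecutter
    cookiecutter https://github.com/audreyfeldroy/cookiecutter-pypackage
    # answer the questions, and the project scaffolding is generated in a new folder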
|
|
show
|
0:56 |
Once your simple project starts growing, you have to think how you are going to structure it.
When you have only one Python file, it's rather simple.
You put everything into one folder.
Then at some point you will have to split your functions into separate files.
And when the number of those files grows, you can put them inside various subfolders.
For example, you can have tests in the test folder and all the other Python files inside the folder named in the same way as your project.
And as your project keeps growing, you will start adding sub folders here.
For example, if you have a few files related to user authentication, you can put them inside the auth folder or when you have some static files, you can put them inside the static folder.
Having a good folder structure will make it much easier for new developers to understand what's going on in your project, but it will also help you avoid some Python issues, like circular import errors.
|
|
show
|
1:08 |
One very important thing to keep in mind when building a Python project is to pin your dependencies.
If you only specify the name of a package that you want to use, pip will install the latest version, and it's possible that this latest version will have some changes that break your project, or it can simply contain a bug.
So a much better idea is to always specify which version of a package you want to use.
So don't just say that you want to use Django, or that you want to use Django 2; always say that you want to use, for example, Django 2.2.4, because this is the version that you tested and you know it will work with your project.
So always pin versions of all the packages that you use in your project.
And the best way to do this is to use a tool called pip-tools. With pip-tools, you just create a file with your direct dependencies and you run pip-compile. This will create a new file, but this time with the versions of all the third-party packages that you use in your project pinned. Then you just pass this file to pip when you want to install the dependencies of your project.
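In practice, the workflow looks roughly like this:

    pip install pip-tools
    # requirements.in lists your direct dependencies
    pip-compile requirements.in        # produces requirements.txt with every version pinned
    pip install -r requirements.txt    # or use pip-sync to match the environment exactly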
|
|
show
|
0:39 |
Working on a Python project involves a lot of different activities.
You need to create a virtual environment and remember to activate it each time, you need to pin your dependencies and you need to update them from time to time to make sure that you have the latest security fixes.
Luckily, a lot of those repetitive tasks can be automated with a tool like pipenv or poetry. They will automatically create and use a virtual environment for you, and they will provide you with commands to easily pin and update dependencies.
And with poetry, you can even publish your package to pypi with two simple commands poetry build and poetry publish.
|
|
show
|
1:59 |
When you write Python code, there are two guidelines that you should keep in mind. The most important one is PEP 8, which is a style guide for Python code. It specifies how many spaces to use for indentation, how to sort your import statements, and things like that. There is also PEP 257, which explains how to properly write documentation strings for your functions, classes, and modules.
The best way to implement those guidelines in your code is to use an automatic code formatter.
The most popular one is called black, and when you run it, it will automatically format your code according to the PEP 8 rules.
So you don't have to fix anything manually, and you can even configure your code editor to automatically format your code each time you save it, which is pretty convenient.
Another category of useful tools are linters, like flake 8 or pylint.
They will tell you when you make some mistakes in your code.
For example, when you try to use a variable that is not defined, or when you import a module but don't use it. Of course, they won't detect all kinds of bugs, but having a tool that automatically monitors your code and tries to spot at least some of the errors is very helpful.
You can extend flake8 with many useful plugins, and if you want something more, you can use other static code analyzers like bandit, prospector, or Sourcery. They all have different purposes, and they will report different kinds of errors, so if you want, you can use a few of them at the same time. Last but not least, when you want to take some Python code for a spin, you can use the default Python interpreter. But there are other interpreters, like IPython, bpython, or ptpython, that have many more features.
For example, they have syntax highlighting, auto completion, automatic indentation and so on.
So if you spend a lot of time in the Python terminal, using one of them will make your life much easier.
|
|
show
|
0:46 |
Once you write some code, you probably want to add some tests to make sure that it keeps working when you add more features in the future.
Python comes with the unittest module that you can use for writing tests, but a much better alternative is to use pytest.
It's the most popular testing library for Python.
It has a very simple assertion system, so you don't have to remember a bunch of different assert statements.
It can automatically discover your tests.
It comes with a lot of configuration options and plugins, so you can configure basically any aspect of pytest. And it's also compatible with unittest, so if you have some existing tests written in unittest, you can start using pytest right away and slowly convert them.
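As a tiny illustration of that assertion system (a hypothetical test file, not one from the course project):

    # test_example.py -- pytest automatically collects files and functions starting with "test_"
    def add(a, b):
        return a + b

    def test_add():
        # a plain assert is all you need; no special assertion methods
        assert add(2, 2) == 4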
|
|
show
|
1:04 |
When you are ready to share your Python project with the rest of the world, you need to find a way to tell others how to use it.
So you will need to write some documentation.
And the best way to write documentation in Python is to use the library, called Sphinx.
Sphinx will generate the documentation from RST files, and then you can output it in HTML, LaTeX, and a lot of other formats. One of the best features of Sphinx is that it comes with a bunch of plugins that make writing documentation much easier.
You can automatically extract the doc strings from your code.
You can add links to the source code, and you can even generate the CLI documentation when you are using libraries like Click.
When you are ready to publish your documentation, you can visit the Read the Docs Website and host it there.
And if you want to use a different tool than Sphinx, there is also MkDocs, which works very well, and Swagger or ReDoc, which will automatically generate the documentation for your REST API.
|
|
show
|
2:18 |
Building a Python project is not just coding.
You will be constantly running tests, rebuilding documentation from time to time, and running black or flake8 to make sure that your code doesn't have any easy-to-spot problems. Instead of doing all that by hand, you can use tools to automate a lot of those tasks.
One of those tools is Tox.
Tox will automatically run a set of commands in different virtual environments.
The most common use case is to run tests under different Python versions.
This is especially useful for developers who build Python packages: instead of manually testing that your package works with Python 3.6, 3.7, 3.8, and so on, you just write a Tox configuration file where you specify which Python versions you want to test and what commands you want to run, and Tox will take care of everything for you.
Another tool is pre commit.
This one will generate a git hook, so a script that runs each time you create a new git commit. This hook can run some tools like black or flake8 on your code. That way, you can make sure that your code meets the quality standards set by your team before you push it to the code repository.
Both Tox and pre-commit are great, but they require you to configure something on your computer. So a much better alternative, especially for a large team, is to set up a continuous integration server. Both GitHub and GitLab come with a continuous integration solution built in, but there are also plenty of other external services that you can use. A CI tool will monitor your code repository, and each time there is a new commit or a new pull request, it will automatically run some checks on it.
You can run tests, you can run black or flake 8, and you can even run any kind of Linux command that you want.
If your code can pass all those checks, it will get a green light and it can be merged.
The main advantage of using a CI server is that you set it up once, and then it works for everyone on your team. No one has to install anything on their computer, and the same checks run on everyone's code.
|
|
show
|
0:29 |
After all this theory, we finally got our hands dirty and we built a CLI application.
We used click and requests to create a simple uptime monitoring tool that checks what HTTP response code we get from a given website.
We built this project from scratch and we used poetry to manage all the typical tasks.
We also added some tests and documentation to see how all those parts of a Python project come together nicely.
|
|
show
|
0:55 |
Then we turned our CLI application into a proper Python package. Doing this with poetry would require us to run just two commands, and we would be done. So, just to make it more interesting, I decided to start from scratch without using poetry.
So we found a cookie cutter template for a Python package.
We generated the scaffolding.
We added the code and tests, and we wrote some documentation.
The cookie cutter template took care of a lot of things.
So actually, a lot of documentation was already written for us.
And the setup.py file required just some cosmetic changes.
We also learned the difference between using the requirements file and setup.py for specifying the dependencies.
And then we published our uptimer package to pypi following the instructions from the packaging guide.
|
|
show
|
0:38 |
The last thing that we built was a GUI application for our uptimer.
That way, people who are not programmers and who don't know how to use the terminal application can use it.
We took that GUI application and, with the help of PyInstaller, we first turned it into a Mac application. Then we switched to a Windows machine, where we did the same steps to build a Windows application, and it actually worked on a different Windows computer that didn't have Python installed, which is great, because it means that you can now send it to your Windows-using friends and they can use your cool uptimer tool.
|
|
show
|
1:26 |
In the final chapter, we investigated how we can deploy our application.
And for that, we used a very simple FastAPI website version of our uptimer. For deploying a website, one of the very common solutions is to rent a virtual private server, which means that you basically pay to use someone else's Linux server, and you need to set it up and configure everything there, just like you would on your own Linux or Mac computer. Of course, you need to add some extra security settings, because now the server is exposed to the Internet.
If you don't want to do that and you prefer to pay someone to set up everything for you, there are also platform-as-a-service solutions like Heroku. They will take care of managing the server for you; you just have to select which code repository you want to deploy and add a small configuration file that explains how to run your application. And finally, something that has gained more and more popularity in the past years: container solutions like Docker. With Docker, you just pack everything that is needed to run your application together, and you create an image.
Then you send this image to a server that has docker installed, and docker can take it and run it for you.
It's a bit more complicated, especially at the beginning, but this is the best solution when you want to easily scale your application using tools like Kubernetes.
|
|
show
|
0:22 |
Thank you so much for watching this course and I hope you enjoyed it.
If you have any questions or comments, you can contact me through my website, switowski.com, or you can find me on Twitter, where I'm @SebaWitowski. If we ever meet at a Python conference, come say hi, and hopefully I will see you in some other courses in the future.
|