Building Data-Driven Web Apps with Flask and SQLAlchemy Transcripts
Chapter: Deployment
Lecture: Deployment overview and topology
0:00
Now that we've built our web app it's time to share it with the world right? It's great to have a little app that we built but it's basically useless
0:08
if we don't put it on the internet. It is a web app, after all. At least put it on an intranet for your internal company, right?
0:15
So that's what this chapter's all about. We're going to see how to deploy our web application onto a standard Linux server in the cloud.
0:24
I want to be clear that this is not the only option. There's certainly other ways to put our web app out there. We could go use something like Heroku
0:32
and just configure Heroku to grab our stuff out of our GitHub repository and launch it into their system. Those to me seem easier.
0:41
They're less flexible and often more expensive but they're easier. So, what I want to do is show you how to deploy to a standard Linux server
0:49
run it in some cloud VM somewhere, and you can adapt that to DigitalOcean, Linode, AWS, Azure, wherever you want to run it.
0:57
So we're going to do that in this chapter. And that brings us to our overall architecture and topology.
1:04
One of the things we're not going to focus on here is setting up and configuring a database server. I consider that a little bit outside the scope
1:12
of this course and you can pick the database server that you want and then configure it. So you'll have to fold that in here.
1:18
Right now, we're choosing SQLite which means as long as that file comes along we have our database. So with that caveat, here's how it's going to work.
1:26
We're going to go get a box in the cloud, which is going to be an Ubuntu server, probably 18.04; that's what that little icon in the bottom left means.
1:35
On here, we're going to install Nginx. Nginx is the thing that people will actually talk to. It listens on port 80 for regular HTTP
1:48
and on port 443 for HTTPS encrypted traffic. A request is going to come in here, but Nginx does not run our Python code.
1:56
It serves up static files and it delegates to the thing that actually runs our Python code. That thing is called micro WSGI, uWSGI
2:05
muWSGI, I guess it should be. Now uWSGI, when we run it will handle our Python requests. However, we don't want to just run one of 'em.
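To make that concrete, here's a minimal sketch of what the Nginx side of this setup might look like. The domain, file paths, and the port uWSGI listens on are all placeholders, not values from the course:

```nginx
# /etc/nginx/sites-enabled/ourapp -- hypothetical path and names
server {
    listen 80;
    server_name example.com;              # placeholder domain
    # Send plain-HTTP visitors over to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;  # e.g. from Let's Encrypt
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Nginx serves static files directly...
    location /static/ {
        root /apps/ourapp;                # hypothetical deploy directory
    }

    # ...and delegates everything else to uWSGI over plain HTTP
    location / {
        proxy_pass http://127.0.0.1:5000; # assumed local uWSGI port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Note that uWSGI can also speak its own binary uwsgi protocol via Nginx's `uwsgi_pass`; the plain HTTP proxying shown here just matches the description in this lecture.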
2:15
Remember, you may or may not be aware that Python has this thing called the GIL or Global Interpreter Lock
2:20
which inhibits parallelism within a single process. And a lot of the ways people get parallelism in Python is to actually create multiple processes.
2:30
This also has great benefits for failover: if something goes wrong with some process running one of our requests
2:36
we can kill it and have other processes deal with it. It's not the only way to get parallelism but it's one really nice way.
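A tiny illustration of why processes are the go-to workaround: CPU-bound code running in threads serializes on the GIL, but each separate process gets its own interpreter (and its own GIL), so they can run on multiple cores at once. This standalone sketch uses a process pool from the standard library:

```python
# CPU-bound work split across processes; each process has its own GIL,
# so the four workers can genuinely run in parallel on multiple cores.
from concurrent.futures import ProcessPoolExecutor

def count_down(n: int) -> int:
    # A purely CPU-bound loop; threads running this would take turns
    # on the GIL, while separate processes would not.
    while n > 0:
        n -= 1
    return n

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(count_down, [5_000_000] * 4))
    print(results)  # each worker counted its own copy down to 0
```

This is exactly the trick uWSGI applies to our web app: rather than one process juggling every request, it forks several and spreads the requests across them.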
2:41
So we're going to do that and have uWSGI spin off a whole bunch of itself. This will actually run our Python code.
2:49
It's going to host the Python Runtime and it's going to launch and initiate a bunch of copies of our website running in parallel.
2:57
Here we have it configured to run six worker processes. So here's what's going to happen. A request is going to come in, hopefully over HTTPS
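A minimal uWSGI config sketch matching that picture, six worker processes hosting a Flask app, might look like this. The file location, module name, and port are assumptions for illustration, not values from the course:

```ini
; /apps/ourapp/uwsgi.ini -- hypothetical location
[uwsgi]
; Speak plain HTTP on a local port that Nginx proxies to
http = 127.0.0.1:5000

; Import app.py and use the module-level Flask object named `app`
chdir  = /apps/ourapp
module = app:app

; Run a master process managing six workers, as in the diagram
master    = true
processes = 6

; Shut down cleanly when systemd sends SIGTERM
die-on-term = true
```

The master process is what gives us the failover behavior described here: it monitors the workers and replaces any that crash or hang, while the survivors keep handling requests.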
3:06
right, you want to set up some sort of encrypted layer here right, that's the way the web's going these days. And once we're inside the server
3:14
we no longer need encryption, so we'll just do a regular HTTP request over to uWSGI itself. uWSGI will decide which of its worker processes
3:22
is ready to handle it. This time that one is. Maybe next time, this one's free, or maybe that one. It'll find one that's not busy, or less busy
3:30
and then pass that request off and then return the response back through Nginx back out over HTTPS to the client. So this is what we're going to set up
3:40
an Ubuntu Server with these things along with Python 3 and our web app ready to roar.