Modern APIs with FastAPI and Python Transcripts
Chapter: Modern language foundations
Lecture: WSGI and ASGI servers

0:00 Have you ever wondered how you can write
0:02 a web application in one Python framework,
0:05 whether that's Flask or Django or Pyramid or even FastAPI?
0:09 And then you get to choose where you run it. You could put it on Heroku and
0:13 they run it somehow. Who knows?
0:14 You could run it under Gunicorn.
0:16 You could run it under uWSGI.
0:18 All of these different options are available to us because these Python Web
0:23 frameworks plug into a general hosting architecture.
0:27 For the longest time,
0:28 that architecture was called "Web Server Gateway Interface" or WSGI, and the WSGI servers,
0:34 well, those are the ones I named: Gunicorn, uWSGI,
0:37 and a whole bunch of others.
0:39 And they have a specific contract: you hand them a function,
0:44 a request comes in, the function is called,
0:47 the return value is sent back as the response,
0:49 and then the server takes the next request and goes with it.
0:52 So here, literally, is what the WSGI interface looks like.
0:56 It basically handles this single request,
0:59 and somehow that plugs into the Web framework, like Flask or
1:03 FastAPI or whatever, starts a response.
1:04 Then it gets into Flask and does all its work,
1:06 and then it returns that response over the network.
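To make that concrete, here's a minimal sketch of the WSGI contract; it's an illustration, not the exact code from the slide.

```python
# A minimal WSGI application: the server calls this one function per request.
def application(environ, start_response):
    # environ is a dict describing the request; start_response is a callable
    # used to set the status line and the response headers.
    start_response("200 OK", [("Content-Type", "text/plain")])
    # The framework does all of its work synchronously in here; there is
    # nothing to await. The return value is an iterable of byte strings.
    return [b"Hello from a WSGI app"]
```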
1:09 But what you don't see is any mechanism to handle concurrency,
1:14 to kick off some sort of async call and then respond to it later, and so
1:18 on. So because these servers were built in a time when Python literally did not
1:23 have support for async and await, and
1:25 asyncio, they obviously didn't factor that in.
1:28 And if you change it, it's a breaking change,
1:31 right? So we don't want to change how WSGI works.
1:33 So in order for us to run an asynchronous Web framework like FastAPI
1:39 to its full advantage, we have to use what's called ASGI, the
1:42 "Asynchronous Server Gateway Interface". Now, there are some servers that support this, right?
1:48 Uvicorn is one of them.
1:50 There are others as well, and they have an implementation
1:54 that looks a lot like this,
1:55 and here's a little arbitrary implementation that I threw in here.
1:58 Maybe we're gonna call receive to get the incoming request,
2:00 but we're going to await that and potentially handle other calls while this one's pending.
2:04 Then we're gonna, who knows what other middleware we're applying, whatever,
2:08 and then we're gonna work on sending the data back, okay?
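For comparison, here's a minimal sketch of the ASGI contract; again, an illustration rather than the slide's exact code.

```python
# A minimal ASGI application: the server calls this coroutine with a scope
# dict plus awaitable receive and send callables.
async def application(scope, receive, send):
    assert scope["type"] == "http"  # ignoring lifespan/websocket scopes for brevity

    # Wait for the request; while this await is pending, the event loop is
    # free to service other connections.
    await receive()

    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({
        "type": "http.response.body",
        "body": b"Hello from an ASGI app",
    })
```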
2:11 Because ASGI fundamentally bakes in asynchronous capabilities,
2:17 we can work on many requests at the same time.
2:19 If we have 100 outstanding requests,
2:22 all waiting on a database or some external web service or microservice or something,
2:25 we can handle another request, because we're actually not doing anything at all.
2:28 We're just awaiting them, right?
2:30 So that's really, really awesome.
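Here's a rough sketch of the kind of endpoint that benefits; the route and URL are made up, but while the handler is parked on the await, the server can pick up other incoming requests.

```python
# A hypothetical FastAPI endpoint that awaits an outbound call with httpx.
import fastapi
import httpx

app = fastapi.FastAPI()

@app.get("/report")
async def report():
    async with httpx.AsyncClient() as client:
        # Stand-in for whatever database, web service, or microservice
        # this endpoint actually depends on.
        response = await client.get("https://example.com/api/data")
    return {"upstream_status": response.status_code}
```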
2:31 The reason I bring this up is many of these asynchronous frameworks will run under
2:36 standard WSGI servers, but they will only run in their standard synchronous mode.
2:42 They don't actually take advantage of the asynchronous capabilities,
2:45 even if they have them. So if you were to run some sort of framework
2:49 under a WSGI server and test its scalability,
2:51 well, potentially you're not actually doing any async and await at all.
2:55 So what we're gonna do is make sure that we work with
2:58 an ASGI server, and the one that we've seen so far,
3:00 and the one that we're going to use for the rest of this course, is
3:04 Uvicorn. And that's a pretty awesome logo.
3:07 Come on. You can see it's a lightning fast ASGI server and it's
3:11 actually built upon uvloop and httptools. uvloop is implemented in
3:17 Cython on top of libuv, so it's very low level,
3:19 very fast, and so on. This has lots of good support for many of the
3:23 things that you might want to do,
3:24 and it's a solid production server from the same people that built Django REST Framework,
3:29 the same people that built API Star and Starlette itself, which FastAPI
3:34 is based upon. So here's Uvicorn; we're gonna be using that. This is one
3:38 recommended possibility. But down here,
3:40 notice I've given you some resources. If we come over here to awesome-asgi,
3:46 this is just one of these awesome lists.
3:47 It shows you all the servers,
3:49 the frameworks, the apps and so on that fit into this space.
3:52 So, for example, under application frameworks,
3:54 we have Channels, Django, and, of course, FastAPI.
3:57 But there's also Quart, Responder, Sanic, Starlette itself, which FastAPI is built upon, and so
4:03 on. And coming down here, we have some monitoring and
4:07 realtime stuff; these are not super interesting
4:11 until we get down to here.
4:12 So we have Uvicorn,
4:13 that's the one we talked about.
4:14 There's also Hypercorn and Daphne.
4:17 All of these are looking quite neat. Right now,
4:20 Uvicorn only supports HTTP/1.1, not HTTP/2. Hypercorn looks pretty good.
4:25 I haven't done anything with it, but maybe it's worth checking out. Actually, looking at this,
4:28 right, it'll show you
4:30 how you test: they talk about using httpx to test
4:33 some of these things and so on.
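As a rough illustration of that testing style, assuming a recent httpx (which ships an ASGITransport for calling an ASGI app in-process, no server required):

```python
# Testing a small, made-up FastAPI app in-process with httpx.
import asyncio

import fastapi
import httpx

app = fastapi.FastAPI()

@app.get("/ping")
async def ping():
    return {"ok": True}

async def main():
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.get("/ping")
        print(response.status_code, response.json())  # 200 {'ok': True}

asyncio.run(main())
```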
4:35 So if you're looking for stuff that fits into this realm,
4:37 you can check out awesome-asgi. So we're not really gonna run anything exactly here,
4:41 but I did want to talk about the difference between WSGI and ASGI,
4:46 and that it's super important,
4:49 if you plan on writing code that looks like this, to run
4:53 FastAPI under an ASGI server.
4:56 And since FastAPI is built on Starlette,
4:58 you might as well run it on Uvicorn,
5:00 which is built by the same people who built the Starlette foundation of FastAPI anyway.
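To put that in code, here's a minimal sketch of the setup the rest of the course builds on; the file name and route are placeholders, and the "standard" extra (which pulls in uvloop and httptools) is how recent Uvicorn releases package their optional speedups.

```python
# main.py -- a minimal FastAPI app served by Uvicorn, an ASGI server.
import fastapi

app = fastapi.FastAPI()

@app.get("/")
async def index():
    return {"message": "Running under an ASGI server"}

# Install and run:
#   pip install fastapi "uvicorn[standard]"
#   uvicorn main:app --reload
```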