Async Techniques and Examples in Python Transcripts
Chapter: Why async?
Lecture: Concept: Visualizing a synchronous request

0:00 Let's visualize this concept of scalability around a synchronous series of executions. Now, many web servers have built-in threaded modes
0:11 for Python, and to some degree that helps, but it just doesn't help as much as it potentially could. So we're going to employ other techniques
0:20 to make it even better. So let's see what happens if we have several requests coming into a system that executes them synchronously one after another.
0:28 So a request comes in, request 1, request 2 and request 3 pretty much right away. In the green boxes request 1, 2 and 3
0:35 those tell you how long those requests take. So request 1 and request 2, they look like maybe they're hitting the same page.
0:43 They're doing about the same thing. Request 3 is actually really short; theoretically, if it were moved to the front, it could return much quicker.
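The queuing effect described here can be sketched with a tiny simulation. The timings below are made up for illustration; the point is only that a synchronous server adds each request's work to everything queued ahead of it.

```python
# Hypothetical request durations, made up to match the picture in the video.
durations = {"request 1": 2.0, "request 2": 2.5, "request 3": 1.5}  # seconds

clock = 0.0
response_times = {}
for name, work in durations.items():
    clock += work                      # each request waits for everything ahead of it
    response_times[name] = clock
    print(f"{name}: {work}s of actual work, but responded after {clock}s")

# request 3 only needed 1.5s of work, yet it waited 6.0s total: 4x slower.
```

With these made-up numbers, request 3 responds four times slower than its own work requires, which mirrors the slowdown described in the lecture.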
0:50 So let's look at how this appears from the outside. So a request 1 comes in and it takes about that long for the response.
0:59 But during that time request 2 came in, and before it can even get processed, it has to wait for most of request 1 to finish.
1:06 So that's actually worse because we're not scaling. Okay, we can't do as much concurrently. And finally if we look at 3, 3 is really bad.
1:15 So request 3 came in at almost the same time as request 1 started, but because the two longer requests were queued up in front of it, it took about
1:25 four times as long as it should have to respond, because this system was all clogged up. Now let's zoom into one of these requests
1:33 into request 1 and if we look deep inside of it we can break it down into two different operations. We have a little bit of framework code
1:42 I'm just going to call that "framework" because you can't control that; you can't do much with it, right?
1:45 This is the system actually receiving the socket data, creating the request and response objects, things like that. And then right at the beginning
1:53 maybe the first thing we want to do is say I would like to find out what user's logged in making this request.
1:59 So when I go to the database, we'll maybe get a user ID or some kind of cookie from their request, and I'm going to use that cookie
2:06 to retrieve the actual object that represents this user. Then maybe that user wants to list, say their courses.
2:13 So then we're going to do a little bit of code to figure out what we're going to do there. Okay, if the user's not None, we're going to make
2:19 this other database call, and that database call might take even longer because it involves more data. And then that response comes back.
2:26 We do a little bit of work to prepare it, pass it off to the framework's template engine, and then we just hand it off to the web framework.
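The flow just described might look roughly like the sketch below. Every name here is hypothetical (this isn't any particular framework's API), and time.sleep stands in for real database latency; the key point is that each call blocks the whole thread while it waits.

```python
import time

def get_user_by_cookie(cookie):
    time.sleep(0.05)                 # simulate a blocking database round trip
    return {"id": cookie} if cookie else None

def get_courses_for_user(user):
    time.sleep(0.10)                 # the bigger query blocks even longer
    return ["Async Techniques", "Python for Entrepreneurs"]

def list_courses_view(cookie):
    user = get_user_by_cookie(cookie)          # thread is stuck waiting here
    courses = get_courses_for_user(user) if user is not None else []
    return courses                             # handed back to the framework/template

print(list_courses_view("abc123"))
```

While either sleep is running, this thread can do nothing else; that idle time is exactly the red "waiting" zone described in the lecture.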
2:33 So this request took a long time, but if you look at why it took a long time, it's mostly not because it's busy doing something.
2:40 It's because it's busy waiting. And when you think about web systems, websites web servers and so on, they're often waiting on something else.
2:50 Waiting on a web API they're calling, waiting on a database, waiting on the file system. There's just a lot of waiting. Dr. Seuss would be proud.
2:58 Because this code is synchronous, we can't do anything else. I mean theoretically, if we had a way to say well we're just waiting on the database
3:05 go do something else, that would be great. But we don't. We're writing synchronous code, so we make this function call, the database query function,
3:12 and we just block 'til it responds. Even in threaded mode, that's a tied-up thread over in the web worker process, so this is not great.
3:23 This is why our scalability hurts. If we could find a way to process request 2 and request 3 while we're waiting on the database
3:30 in these red zones, we could really ramp up our scalability.

