Modern APIs with FastAPI and Python Transcripts
Chapter: Modern language foundations
Lecture: Non-async web scraper
0:00 Async and await is definitely one of the most exciting features
0:03 added to Python in the last couple of years and FastAPI
0:06 makes it really easy to use.
0:08 It actually handles most of the juggling of all the asynchronous stuff;
0:12 you just have to make your code asynchronous for it to be able to work
0:16 with it. So let me paste the program in here,
0:19 and it's gonna have two versions,
0:20 a synchronous and an asynchronous version. So we're going to start with the synchronous version. Now
0:25 notice it has a couple dependencies,
0:27 those are all listed over here.
0:29 So we're just gonna go into the async, sync version folder,
0:37 and install its requirements.
0:42 So it's using the usual characters here.
0:45 It's using Beautiful Soup to work with the HTML,
0:48 and it's using requests to make the HTTP requests.
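As a sketch, a minimal requirements file covering just these two packages might look like the following (unpinned; the course's actual file may pin versions and include more):

```text
# requirements.txt for the synchronous scraper (assumed minimal set)
requests
beautifulsoup4
```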
0:50 So what is this thing going to do,
0:52 anyway? It's going to go out to "talkpython.fm",
0:55 pull up the HTML page associated
0:58 with a given episode, download the HTML, and use Beautiful Soup to get
1:03 the header and use that to grab the title,
1:06 okay? The way it works, just like you'd expect,
1:09 it just goes one at a time, goes from 270,
1:11 271, 272, and so on and
1:14 it says, give me the HTML for that one, process it, print it. So, easy
1:18 right? Let's go do that. Notice there it goes, one, and then the next,
1:24 and then the next. The Talk Python server is
1:26 super fast, so it only takes five seconds to do that for 10 requests.
1:30 Run it a few times just to see where it lands.
1:35 Another five seconds, pretty stable.
1:40 Even under five seconds. Pretty fast.
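The synchronous version being described can be sketched roughly like this. The URL pattern and the h1 selector here are assumptions for illustration, not the course's exact code:

```python
import requests
from bs4 import BeautifulSoup


def episode_title(html: str) -> str:
    # Parse the page and pull the text out of the first <h1> header.
    soup = BeautifulSoup(html, "html.parser")
    header = soup.select_one("h1")
    return header.get_text(strip=True) if header else "(no title found)"


def fetch_title(episode: int) -> str:
    # Hypothetical URL pattern -- the real site's episode URLs differ.
    url = f"https://talkpython.fm/{episode}"
    resp = requests.get(url)
    resp.raise_for_status()
    return episode_title(resp.text)


def main() -> None:
    # One request at a time: each call blocks until its response arrives.
    for episode in range(270, 280):
        print(f"Episode {episode}: {fetch_title(episode)}")


# Offline demo of the parsing step, so the sketch runs without a network:
sample = "<html><body><h1>Talk Python #270</h1></body></html>"
print(episode_title(sample))
```

The key point for what follows is that `requests.get` blocks: while the response is in flight, the program does nothing at all.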
1:42 But here, let me ask you this question:
1:43 Where are we spending our time?
1:46 What exactly are we waiting on for this response?
1:48 I can tell you the server response time for these pages is like 50 milliseconds.
1:53 But the ping time from here to the server
1:56 is at least 100. So we're not even just waiting on the server,
2:00 we're mostly just waiting on the network,
2:03 right? Like the request making its way all the way over to the East Coast
2:07 of the US from the West Coast,
2:08 where I am. Could we do more of that at once? The Internet's really scalable.
2:12 It would be great if we could send all these requests out at once and then
2:15 just get them back as they get done.
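The arithmetic here is worth making concrete: when each request spends most of its time waiting, sequential waits simply add up. A small simulation, with `time.sleep` standing in for network latency and made-up numbers, shows the cost the async version will attack:

```python
import time


def fake_request(latency: float) -> str:
    # Stand-in for a blocking HTTP call: nothing happens but waiting.
    time.sleep(latency)
    return "<html>...</html>"


LATENCY = 0.05  # 50 ms, in the ballpark of the transit times discussed
REQUESTS = 10

t0 = time.perf_counter()
for _ in range(REQUESTS):
    fake_request(LATENCY)
elapsed = time.perf_counter() - t0

# Run one at a time, ten 50 ms waits cost roughly 500 ms total.
print(f"{REQUESTS} sequential requests took {elapsed:.2f}s")
```

If the waits overlapped instead of queuing up, the total would be close to the slowest single wait rather than the sum, which is exactly the promise of async and await.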
2:17 So what we're gonna do is we're gonna convert this from running in this traditional synchronous
2:22 way to using the new async and
2:24 await language features and the libraries that we'll actually make use of later as well.