MongoDB with Async Python Transcripts
Chapter: Performance and Load Testing
Lecture: Running Locust for Max RPS

0:00 Working with Locust is pretty easy. Let's go ahead and make a final chapter folder here, chapter 12. And notice I still have chapter 11 hanging around
0:15 and ready to run. So what we're gonna do is just run the code out of here, but then write the load testing code on the side.
0:24 So I'm gonna add a new Python file called locustfile.py. It doesn't technically have to be named this, but you can type less to start
0:34 and control the tests if you do name it that. So let's call it that. Keep life simple, right?
0:40 Okay. So in order to do this, it's a little bit class-based. We're going to have import
0:45 locust, which we don't have. PyCharm will suggest we install it, but let's be a little more thorough here and add it
0:52 to our requirements, and I'll generate that requirements file for you. All right, looks like a bunch of stuff got updated.
1:07 Also a whole bunch of things about Flask got installed. And Flask is used for Locust to show us real-time interactive reports
1:17 as well as some of the green threads and gevent stuff. So that is all good to go. So what we're gonna do is we're gonna create a class
1:26 and we'll just call this APITest. It's gonna derive from Locust's HttpUser. Now, the way this works is we create a function and then we give it a name.
1:41 And typically you wanna think about this as a scenario. So if we go back and we run that file, you can see it's ready to go up there.
1:49 Remember, we were playing with the static files when we made the colors kind of insane. Well, let's not worry about that.
1:57 But what we want to think about is there are four different ways that people can interact with our app here.
2:03 One of them is to get the stats, right, this thing. The other is to get the recent packages. We could get details about the packages.
2:14 This one turned out to be pretty intense 'cause in here we're returning everything. Now I said four, that looks like three.
2:22 The fourth one is this page itself. So although this is not really interacting with the API, let's go ahead and bring this in.
2:29 We'll see how this works in a minute. So we're gonna call this homepage. And the URL is gonna be just forward slash to make it obvious.
2:42 Now the way we tell Locust what this is, we say @locust.task. And here we're going to say self.client.get.
2:55 Now we don't want to put, you can see there's a bunch of options, we don't want to put the host actually in here because that might change.
3:02 Are we testing in production, or are we testing locally? So all that's left is slash.
3:09 And what we can do, I'll put this off for just a minute, we can say host equals this.
3:15 The other thing it wants us to set in here is we'll set the weight to one. Okay, we'll come back to that in a minute. So here's our homepage.
3:23 And maybe that's not the only thing that users do. The users might also, coming back here, they might want to get the stats and say maybe
3:31 the stats, so this will be slash API slash stats, maybe that will be kind of common. We'll say stats. And this is what goes here.
3:43 What else might we have them do? Well, they're going to be surely looking for some recent packages, aren't they? And we'll just have them get five.
3:53 And let's start with just these three here. So we'll just call this one recent.
3:58 Now nothing in this page, nothing in this file here tells you anything about what a typical user does.
4:06 Does the user mostly visit the homepage and just rarely get the recent packages? Is this mostly what they're doing? And then I almost never come here.
4:15 Also, how long, how quickly do they switch from task to task? Do they click around a lot? Are they thoughtful?
4:24 Is there a lot of content that they need to interact with or not very much? Is it a game where every key movement
4:30 is some kind of API call, or is it a magazine? Right, so we're gonna come back to that. The takeaway here is that what we've built so far
4:39 measures kind of that maximum requests per second, not how many concurrent, realistic users we can
4:45 handle. How do we run it? A couple of ways we can do this. We could go to the terminal
4:52 and CD into code, chapter 12. And here we could just say locust, I'll go ahead and do
4:58 it: we just run locust, and that might do it because of the name of the file. So this is running. And look at this, if we click this link, what do we get?
5:09 Awesome, we get this locust file. It says how many peak users do we want? Let's say we wanna have 20 users and they come in at one per second.
5:18 And here's the host, which we had to set. What was it? So let's come back, put the host in here now, and bail out. Make sure we hit save.
5:29 And run it again. Refresh, notice it automatically loads that. Perfect. What do we want? 20 and one at a time. It is ready, there are zero users.
5:42 And importantly, over here our app is running, so it's gonna be processing the requests there. Not in debug mode; you don't want debug mode.
5:54 Here we go, let's see what happens. All right, it looks like it's working. These are our different endpoints right now. The number of requests.
6:03 This is useful, but what looks better are the charts. I'm gonna try to shrink these down so you can see them a little better. But look at it growing.
6:11 You can see we're adding more and more users. Where are we? We're up to, pretty much up to the max. And at this point, we've got, you can see 20 users.
6:23 How's the response time looking? So this is in milliseconds. The average response time is 33 milliseconds. Oh my gosh, that is awesome.
6:33 And up at the top, we have 543 requests per second, zero failures, both of those are good numbers.
6:40 You would see a red line growing up if there were failures. So, it's varying as you move around, but 500 requests per second, that's pretty good.
6:49 You can also go up here real quick and look at my iStats menu, and keep in mind during this whole process, I'm using OBS to record the screen,
7:01 to do green screening to cut my head out of whatever's behind me to just like have the minimal overlay of me, record the screen,
7:10 do a bunch of color correction. There's a lot going on here. Okay, so it's really, really busy
7:16 and that's gonna take a chomp out of what my computer can do. That PyCharm number,
7:22 I think, represents running both FastAPI and Locust, although Locust shouldn't be putting too much of a hurt on things.
7:31 And then MongoDB is running at about a hundred percent. And if you're unfamiliar with the Mac percentages here,
7:37 this chip is a M2 Pro, and I think it has 10 cores, eight or 10 cores, whatever it is, these numbers represent how much of one core.
7:48 So a hundred percent CPU usage would be either 800% or a thousand percent, depending on whether it's eight or 10 cores.
7:55 Yeah, I just checked, it's 10 cores. So 100% here represents 10% of the total CPU. It's been running for a while, let's see how it's done.
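To make that per-core convention concrete, here's the arithmetic, assuming the 10-core count mentioned above:

```python
# macOS monitors like iStat Menus report CPU as a percentage of ONE core,
# so a fully loaded 10-core chip reads 1000%, not 100%.
cores = 10
whole_machine = cores * 100          # 1000 (%) means every core is busy
one_core_reading = 100               # one saturated core
share_of_machine = one_core_reading / whole_machine
print(f"{share_of_machine:.0%}")     # one busy core is 10% of the machine
```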
8:04 There was a weird blip here where it jumped up to, what is this, 43 milliseconds? Not terrible. And that probably happened when I was messing
8:13 with those tools and performance stuff, right? It all takes away from the system. But it looks more or less stable like with a little variation
8:22 so what we're not seeing is we're not seeing it fall apart yet. So we can actually add more users to this run.
8:28 Let's stop, we can start over and do new test. And it'll actually keep the same report, but I wanna start a new report. So I'll go back down here.
8:38 We'll hit stop. I'll show you one way also that we can just put a thing you can run up in the top here, which is kind of cool.
8:47 So let's go over here and say we're gonna run Python. So notice if we switch this to module, we can just say run locust and it doesn't take
8:59 any parameters. You saw me just type it to tell it to go. What we need to do is just set the working directory to where the locust file is. Excellent.
9:09 So let's go ahead and run it. Now notice we can just press this. You can see we have that running. We also want to have our main running.
9:18 They can both run at the same time in PyCharm. Excellent. So here we have main and we have locust running. So if you want, you can control it up here,
9:26 not just the CLI, it's up to you. Let's go back and say this time, that was good, but let's go to 100 and we'll add 'em a little quicker
9:37 just so you all don't have to wait as long. We'll do it like that: go to 100, add 'em two at a time, every second. Switching over to the chart.
9:47 You can see we're adding the users. I'll go ahead and zoom that back. Just look, the users are right here, 28, 32.
10:01 Now notice something, as we're adding more and more users, we're not really improving, are we?
10:08 So we kind of guessed, and it was a pure guess, that around 20 users was about the max
10:15 that makes any sense for this setup running right here, because just about there was our
10:21 peak, right? That was 28 users and 550 requests per second. And we're still handling it, but
10:29 notice the response time: the scalability is starting to fall down. It's like 170 milliseconds of
10:35 response time. Not horrible, but it is worth noting that the service starts to degrade
10:43 pretty hard around, what is that, around 48 users. And if you're just thinking in
10:48 requests per second, 550, or 547, is really the number where we're kind of at the peak here.
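If you want the peak number from data rather than eyeballing the chart, Locust can write its stats out with the `--csv` flag, and the history file can be mined afterwards. A sketch, assuming the `Requests/s` column name that recent Locust versions use in the `*_stats_history.csv` output (check your own file's header):

```python
import csv
import io


def peak_rps(stats_history_csv: str) -> float:
    """Return the highest requests-per-second value in a Locust
    stats-history CSV (column name assumed to be 'Requests/s')."""
    reader = csv.DictReader(io.StringIO(stats_history_csv))
    return max(float(row["Requests/s"]) for row in reader)


# Tiny made-up sample standing in for results_stats_history.csv:
sample = """Timestamp,User Count,Requests/s
100,10,310.2
101,20,547.1
102,48,505.8
"""
print(peak_rps(sample))  # 547.1
```

In a real run you'd open the file Locust wrote (e.g. the path you passed to `--csv` plus `_stats_history.csv`) instead of the inline sample string.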

