MongoDB with Async Python Transcripts
Chapter: Performance and Load Testing
Lecture: Introducing Locust
0:00
The tool I've chosen for load testing and performance testing of our application, our data layer, and our database is called Locust.
0:11
'Cause, like a swarm of locusts, you send a bunch of simulated users in and they all attack the application, right? And Locust is notable,
0:18
not just because it's a cool load testing tool, but because it's a Python-first tool, where we program it, interact with it, and control it using Python
0:28
in fantastic ways. It's a really fun tool and well put together. We're gonna get some good answers from it. It also has great graphs.
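To make "Python-first" concrete, here's roughly the smallest locustfile you can write. This is just a sketch: the route is a placeholder, not anything from our app.

# locustfile.py -- a minimal sketch; the route below is a placeholder
from locust import HttpUser, task

class WebsiteUser(HttpUser):
    @task
    def index(self):
        # Each simulated user repeatedly issues this request.
        self.client.get("/")

You'd start it with something like locust -f locustfile.py --host http://localhost:8000, and Locust's web UI gives you exactly the graphs we're about to look at.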
0:36
Let's talk through a couple of scenarios. There are two ways to think about the performance of our application. The first way might be a realistic way.
0:47
Let's suppose it's a web app, or maybe users are interacting with this API through a mobile application.
0:55
In that case, the user is not gonna be going reload, reload, reload, as hard as they can, as fast as their browser will let them.
1:04
What they're gonna do is they're gonna click, interact, that looks interesting, click. I wanna search for something, type, type, type, search.
1:13
Right, there are delays, a natural slowness. And the question is, under kind of normal usage,
1:20
normal usage patterns, as best as we can predict them, how many users can we actually support? Locust lets us model that think time directly.
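Roughly, a realistic user might look like this in Locust; the routes and task weights here are made up for illustration.

# Sketch of a "realistic" simulated user; routes and weights are hypothetical.
from locust import HttpUser, task, between

class RealisticUser(HttpUser):
    # Pause 1-5 seconds between actions, like a person reading and clicking.
    wait_time = between(1, 5)

    @task(3)  # browsing happens three times as often as searching
    def browse(self):
        self.client.get("/api/episodes")

    @task(1)
    def search(self):
        self.client.get("/api/search", params={"q": "python"})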
1:28
So in this picture here, you can see the load just going up steadily. The important parts, for understanding how it builds up, are the top and the bottom graphs.
1:36
In the bottom graph, you can see we're just linearly adding more and more users. On the top, we're measuring requests per second.
1:44
And when we're in a really scalable mode, as we add more users, it doesn't really affect the other users, right?
1:52
If we add 20 users where there used to be 10, well, you know, we'll do about twice as many requests per second. Throughput tracks the number of users
2:01
pretty linearly there. But at some point, the system is gonna get overwhelmed. And as it gets overwhelmed, as we add more and more users,
2:10
instead of being able to do more requests, it's gonna just start to fall apart. And you can see that graph just curve off
2:16
right where I put this dotted line, around 765 requests per second, which is 5,100 users. It was pretty stable up to there; after that, it's gone.
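By the way, a steady ramp like the one in the bottom graph doesn't have to be clicked together in the UI. Here's a sketch using Locust's LoadTestShape, which would sit alongside a user class like the one above; the numbers are illustrative, and the simpler path is the --users and --spawn-rate command-line flags.

# Sketch: script a linear ramp-up instead of configuring it in the UI.
from locust import LoadTestShape

class LinearRampShape(LoadTestShape):
    max_users = 5000   # illustrative ceiling
    spawn_rate = 10    # users added per second

    def tick(self):
        run_time = self.get_run_time()
        if run_time > 600:
            return None  # returning None stops the test (here, after 10 minutes)
        users = min(self.max_users, int(run_time * self.spawn_rate))
        return (users, self.spawn_rate)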
2:25
And in the middle graph, everything's been great until this same fall-off point. And then, look at that: the median response time
2:36
and the 95th percentile response time just explode to like 20 seconds, and it completely falls apart. So using this, we can understand how many users
2:47
under a typical scenario we can handle, right? So you're like, all right, well, if we think we're only ever gonna have 2,000 users
2:55
concurrently interacting with the system, whatever scenario we have here, whatever infrastructure and hardware we have applied,
3:02
we're plenty fine, no problem. On the other hand, you might look at it differently and just ask: how many requests per second can we handle?
3:11
We just wanna go as fast as we can, no delay, just more and more things clicking and refreshing as hard as they can.
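In Locust terms, that just means dropping the think time. A sketch, with a placeholder route:

# Sketch of the "hammer it" scenario: zero think time, placeholder route.
from locust import HttpUser, constant, task

class HammerUser(HttpUser):
    # No pause between requests, so each "user" acts like a testing thread.
    wait_time = constant(0)

    @task
    def hit_api(self):
        self.client.get("/")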
3:19
So in this case, we applied that story, and we added just 75 users. They don't really represent users, 'cause they're going completely crazy;
3:29
they're more like testing threads. In that regard, you can see that pretty quickly we can ramp up to around 1,000 requests a second,
3:38
but no matter how many more requests we send at it, 1,000 requests a second is really all it's able to tolerate. Right at this dotted line,
3:48
we have a 35 millisecond response time, which seems awesome, but as you push much farther past it, things start to slow down,
3:56
even though we're not doing more requests per second. So if you say, well, I can't really conceptualize
4:01
how a user might use this system, then the other metric you can look at is, well, just how many requests per second can we handle?
4:09
Well, it looks like a thousand a second is a pretty good number for this system here. So these are the two perspectives you might want to take,
4:16
and they both tell you interesting things. And with Locust, we can do both of those.