Effective PyCharm Transcripts
Chapter: Performance and profiling
Lecture: Optimizing the database access code
0:00 We've optimized our compute analytics; where to next? Well, let's look at this search. It's not very fast. It did take 1.4 seconds.
0:10 You know, I think the internet is just being slow right now. Let's run it again and see what the time is going to be
0:17 like. If we look into it over here, you'll notice there's not a lot we can do: we call get on requests and then, you know, that's it.
0:27 We wait. Let's just go see if there's any sort of difference over here. Now it's 448 milliseconds. That's a huge difference. But what are we gonna do?
0:36 There's not a whole lot else we can do. I'll go and keep this as our baseline for now.
0:41 But we've got search, where there's not a lot we can do; compute analytics, which we've made about
0:46 10 times faster; and now we've got get records. And over here in get records,
0:50 we have create connection being called once, which is 12%, and there's run query, which is
0:56 really running read row. So send query might be the place to focus on, but notice those numbers: 889 and 888. So read row is actually the real challenge.
1:07 Let's go and focus on this section down here. Again, we're simulating:
1:13 we're not creating a real database with huge amounts of data that would make a slow
1:17 query. But imagine we're doing a query against a database here, and because it
1:22 has a lot of data, it's somewhat slow. When we're working with databases and we have a lot of data, the thing you can do more than almost anything
1:31 else to make it faster is to add an index. So let's imagine that on this one we had no index and we decided,
1:39 you know what, we're going to go look at the query performance stats over in the database and realize, oh,
1:45 there's no index on this. That's a problem. Adding an index when there's tons of data can easily make a query
1:53 1,000 times faster. Let's just imagine this one makes it 100 times faster. So put a couple of zeros there. Again,
2:01 somewhat contrived, but this is real-world type of stuff. You see a database query that's slow, and you optimize
2:10 it in the database first. The other thing that's often done in this case, if you need to run a query, is to use something like Redis or an in-
2:18 memory cache where we cache the results, so we don't even go to the database again.
2:22 But in my mind it's better to make the database faster if that's at all possible,
2:27 because once you add a caching layer, all of a sudden you end up with cache invalidation and staleness and all kinds of challenges.
2:34 So start by making the database faster. That's what we're simulating by adding the index. Running it again... we did it. Oh, look at that.
2:42 This thing is starting to zip. So we can't really speed up the queries or the web request.
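The course only simulates the index with a shorter sleep, but the real idea can be sketched with SQLite from the standard library. The table and column names below are made up for illustration; the point is that the query plan changes from a full table scan to an index search once the index exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE records (id INTEGER PRIMARY KEY, keyword TEXT, payload TEXT)"
)
conn.executemany(
    "INSERT INTO records (keyword, payload) VALUES (?, ?)",
    [(f"kw{i % 100}", "x" * 10) for i in range(1_000)],
)

query = "SELECT * FROM records WHERE keyword = 'kw7'"

# Without an index, SQLite has to scan every row in the table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before)

# Add the index, and the same query becomes a direct seek.
conn.execute("CREATE INDEX ix_records_keyword ON records (keyword)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after)
```

On a table with millions of rows, that scan-to-seek change is exactly where the "100x or 1,000x faster" numbers come from.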
2:47 I'll make a quick comment before we wrap up this video on what we could do some of the time, but in this situation we can't make it any faster.
2:55 The database is faster now, and we already saw that this part is much faster. Let's profile it and get a real answer, pull up the call
3:02 graphs side by side, and see how we're doing. Our program is still red because, well, it's the slowest thing here; that's always going to be true.
3:11 But look at how much faster we got on get records. Now it's 20% of the time, whereas before it was 55%. It goes from 1.1 seconds down
3:23 to just 0.26 seconds, or 265 milliseconds. That is a huge improvement. All right, well, I feel like we've massively optimized this app.
3:36 We've gone... what was the very original time? It was something like six seconds. We've gone from six seconds down to 1.2, and we've done that
3:43 being guided by what we got in this picture right here. Make a small change. Run it in the profiler. See if it got better. If it got worse,
3:52 don't keep it; roll it back. Make another change. Figure out what's slow now in this new context. Optimize that, and keep going and keep going.
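That measure-change-measure loop doesn't require the IDE; cProfile from the standard library produces the same kind of call statistics. Here's a minimal sketch, where `get_records` and `read_row` are toy stand-ins named after the demo app's functions, with sleeps in place of real database work:

```python
import cProfile
import io
import pstats
import time


def read_row():
    # Stand-in for fetching one row; sleep simulates database latency.
    time.sleep(0.001)


def get_records(count=20):
    # Mimics the demo app's get_records: one read_row call per record.
    for _ in range(count):
        read_row()


profiler = cProfile.Profile()
profiler.enable()
get_records()
profiler.disable()

# Sort by cumulative time so the hot path (read_row) floats to the top,
# just like the red boxes in the call graph.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(10)
report = buffer.getvalue()
print(report)
```

Run it before and after each change and compare the reports; that's the same baseline-and-compare workflow, just in text form.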
4:01 My final comment is on this search here. What we're doing is passing a search text
4:08 over to Talk Python and asking for the answer, and it's got to go through
4:12 the internet, all the ping times and whatnot, from the west coast to the east coast of the US. That's at least 100 milliseconds of
4:19 ping time, plus the server has to respond, and we have to process and print out the results. Okay, so we can't really do anything about that.
4:27 I mean, we could skip printing the results, but we kind of want to see them. We could skip the query, but we need the answer.
4:33 Right? So on one hand it feels like, well, we can't do a whole lot. But if this
4:39 was more interactive, if it was longer running, we could actually go up here and use functools and add an lru_cache to this as well.
4:48 At least that way, if we ever search for the same text twice, it's going to be twice as fast. The second time we ask for the search,
4:56 we're going to assume that the data set over there hasn't changed. It does sometimes,
5:01 but not that frequently, so we're assuming it hasn't changed for the lifetime of this app running, and we're just going to give back the same answer.
5:08 We can add this, and it'll make zero difference in performance here because we're calling
5:11 it only once. But if we were calling it multiple times with potentially the same input, we'd have a much better response time.
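Here's a sketch of that caching idea. `search_talk_python` is a hypothetical stand-in for the real function (which calls the Talk Python API with requests); the sleep simulates the ~100 ms network round trip:

```python
import functools
import time


@functools.lru_cache(maxsize=None)
def search_talk_python(search_text: str) -> str:
    # Hypothetical stand-in for the real HTTP search call;
    # the sleep simulates the cross-country network latency.
    time.sleep(0.1)
    return f"results for {search_text!r}"


search_talk_python("profiling")  # first call pays the full network cost
search_talk_python("profiling")  # same input: answered from the cache

info = search_talk_python.cache_info()
print(info)  # hits=1, misses=1
```

One caveat the decorator implies: the arguments must be hashable, and the cache lives for the lifetime of the process, which matches the "assume the data set hasn't changed while the app runs" assumption above.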
5:19 All right, final time: let's go and run our app and see how it works. Boom, it still works, and it's much, much faster. Pretty awesome, right?