Mastering PyCharm Transcripts
Chapter: Performance and profiling
Lecture: Concepts: Profiling
0:01 You've seen the profiler in action
0:03 and you've seen our technique for making our fake little set of functions faster
0:08 so let's go back and talk about some of the concepts that we saw.
0:11 So we can go to the Run menu and choose to profile our program here
0:18 or we could go up and just press this to profile it,
0:22 so there's also a way in the project to do it,
0:26 so there's a lot of ways to start profiling our run configuration
0:29 and like I said, this can be a unit test,
0:31 this can be a proper program like the one we did,
0:34 it could be a web app, whatever,
0:36 any run configuration you should be able to just profile it.
0:39 Now, you run it and it runs down here, like this
0:43 notice it's starting the cProfile profiler
0:46 and then after it runs, it pops open with the stats,
0:50 by default it opens sorted by own time
0:55 and I think that's really not the right place to be,
0:57 you want cumulative time so you can work your way down,
1:00 so go and sort by Time (ms), the cumulative time, not Own Time, right here.
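Outside PyCharm, the same sort order is available straight from cProfile and pstats. Here's a minimal sketch, with made-up functions standing in for the course's program:

```python
import cProfile
import io
import pstats
import time

def read_data():
    time.sleep(0.05)  # stands in for slow I/O

def compute_analytics():
    return sum(n * n for n in range(200_000))  # stands in for heavy CPU work

def main():
    for _ in range(3):
        read_data()
        compute_analytics()

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Sort by cumulative time so callers like main() bubble to the top,
# then work your way down to the functions you control.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())
```

Cumulative time charges a function with everything it calls, which is why it's the right starting point: it points you at the branch of the call tree worth digging into, rather than at whichever leaf happens to burn the most raw CPU.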
1:05 Notice that compute analytics was probably the worst thing
1:10 that we are in control of, it's called 9 times,
1:13 it took 7.6 seconds, that's really a problem.
1:18 So we should probably go look at that and analyze it.
1:22 We also have learn, we also have read data,
1:26 those are the different parts that we've written that look especially bad,
1:29 time.sleep we didn't write that, I can't do anything about it
1:33 maybe we could call it fewer times or with a smaller value,
1:36 okay we also have get records
1:38 and so these are the places we should probably be looking
1:41 and that's what our analysis here is telling us,
1:45 probably starting with compute analytics.
1:48 We're also creating the connection
1:51 and you might think well there's nothing you can do
1:53 to make talking to a database faster
1:56 you have to open the connection to talk to it
1:58 but you could implement connection pooling
2:00 or at least make sure what you're doing
2:02 is leveraging the built in connection pooling of your database provider.
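Connection pooling can be as simple as handing out already-open connections from a queue instead of paying the open/close cost on every request. Here's a minimal hand-rolled sketch using SQLite; in practice, prefer the built-in pooling of your database driver or ORM:

```python
import queue
import sqlite3

class ConnectionPool:
    """Keep a fixed set of open connections and reuse them."""

    def __init__(self, size, factory):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())  # pay the connection cost once, up front

    def acquire(self):
        # Blocks until a connection is free, so the pool also caps concurrency.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(2, lambda: sqlite3.connect(":memory:"))

conn = pool.acquire()
try:
    value = conn.execute("SELECT 1").fetchone()[0]
finally:
    pool.release(conn)  # hand the open connection back instead of closing it
```

The class and its method names here are invented for illustration; real pools (e.g. the one built into SQLAlchemy) also handle thread safety across drivers, stale connections, and overflow.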
2:06 While the statistics are cool, I think the graphical version is much better
2:11 so here we can dig into the individual functions,
2:14 we have program, we have main, we have go
2:16 and after go it gets interesting,
2:19 those are the three heavyweight things that go does
2:21 and that's really all the program is doing.
2:24 So search is the least bad of the three options
2:27 compute analytics is the worst.
2:30 So the way to read this is we start here in program.py
2:35 it calls main, from main that calls go
2:38 and from go we call this one, and then this one, and then this one
2:42 so we're calling these functions sort of in this order
2:45 so you can follow the flow until you get to a point like
2:47 okay, this looks bad and like something we can optimize.
2:50 And remember, color matters, so we've got green for search
2:55 it's pretty fast, relative to the other things
2:59 we've got orange for compute analytics,
3:02 and we've got red for main,
3:04 so this is a percent of time and you can actually see the percent there,
3:07 like search is 3.4%, compute analytics is 70% and main is 96%
3:13 so it's kind of a gradient from green to red with a little yellow in the middle,
3:19 and search, relative to everything else, is probably fast enough.
3:23 Compute analytics, this could be faster, right,
3:27 but the color is kind of telling you it's not the worst you've seen,
3:30 but it could be better; this is slow, right,
3:32 this is pretty much as bad as it gets in this particular program.
3:35 We could also navigate so we could right click in the tabular version
3:39 and say Navigate to Source, or actually jump over to the call graph,
3:44 so if you click on Show on Call Graph, it will take you over here,
3:48 but if you right-click over here on the graph, you can only navigate to the source,
3:53 so there's not this bi-directional take me to the graph, take me to the table.
3:57 So here we can navigate down to the source and see what's actually going on.
4:02 So those are the techniques and tooling that we use,
4:06 I want to leave you with one quick warning though
4:09 be aware of the effects of profilers
4:12 so profilers and their friends, the debugger,
4:15 these can have non obvious effects
4:18 so you might have two functions, one which is called one time
4:23 and one is called a hundred thousand times,
4:26 and without the profiler, maybe they take exactly the same amount of time,
4:30 but because the profiler is in the way, collecting data about every call,
4:34 the one that's called a hundred thousand times looks way worse in the profiler
4:37 than the other one, which just goes down into the system
4:41 where the profiler is not doing much.
4:43 so you can think of these as having a little bit of quantum mechanics effects
4:46 kind of Heisenberg uncertainty principle
4:49 the more precisely you measure it,
4:51 you might actually be changing how it's behaving.
4:55 While cProfile is pretty good
4:58 and the debugger with the Cython speed ups are pretty good,
5:02 just keep in mind that this is not exactly the real runtime behavior,
5:07 this is the runtime behavior while it's being deeply observed.
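This observer effect is easy to see for yourself: time a cheap function called a hundred thousand times with and without cProfile recording each call. A minimal sketch, with invented function names:

```python
import cProfile
import time
import timeit

def tiny():
    # Trivial work: under a profiler, each call still pays
    # the per-call bookkeeping cost.
    return 1 + 1

def many_calls():
    for _ in range(100_000):
        tiny()

# Wall-clock time with no profiler in the way.
plain = timeit.timeit(many_calls, number=1)

# The same work while cProfile records every single call.
profiler = cProfile.Profile()
profiler.enable()
start = time.perf_counter()
many_calls()
profiled = time.perf_counter() - start
profiler.disable()

print(f"without profiler: {plain:.4f}s  with profiler: {profiled:.4f}s")
```

The profiled run is noticeably slower, and the slowdown lands almost entirely on the hot call path, which is exactly the distortion the lecture is warning about.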
5:10 Okay, it's still super, super helpful for tracking down these issues,
5:16 and it's more important to look at the differences across time, I'd say,
5:19 than it is to look at the exact numbers and say
5:23 well, now it is a tiny bit faster,
5:25 because that could just be the profiler affecting it.