Effective PyCharm Transcripts
Chapter: Performance and profiling
Lecture: Concept: Profiling

0:00 Let's review the core concepts around profiling with PyCharm and Python. Once you have a run configuration set up in PyCharm,
0:10 you just go over and hit the profile button. You can right-click on the thing you would normally run, in this case program, and
0:16 say profile 'program', or you can hit that little stopwatch-looking icon next to the play and debug buttons that you're familiar with.
0:25 Once you click profile, you'll see that PyCharm automatically starts up what's called cProfile. This is a profiler built into Python, implemented in C.
0:35 It's pretty low overhead and works really well for understanding where our program is spending its
0:40 time. Once it's done running, whether you exit the program or it's the kind of program that just runs and then stops,
0:49 you'll get this .pstat file opened up here, and it opens to the
0:54 statistics page, where you can sort through and look at the different elements. It starts out sorted by own time.
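If you're curious what PyCharm is doing behind the scenes, you can produce the same kind of snapshot from the command line with Python's built-in cProfile module (the file and script names here are just examples):

    python -m cProfile -o program.pstat program.py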
1:01 I find that not to be very useful. I would focus instead on the overall time spent in any given part of our program.
1:08 For our program, we saw compute analytics, run query, search, and get records. All of these were the things that were interesting for us to look at.
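As a side note, the snapshot appears to be a standard pstats file (the .pstat extension suggests as much), so if you prefer the terminal you can sort it by overall time yourself; the filename below is just an example:

    import pstats

    # Load the saved snapshot and sort by cumulative (overall) time rather than own time.
    stats = pstats.Stats("program.pstat")
    stats.sort_stats("cumulative").print_stats(10)  # show the top 10 entries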
1:17 My personal favorite way to get a quick overview of where we're spending our
1:22 time is actually to go over to the call graph and have this visual representation. I love how the colors and the flow
1:29 show where you're spending your time. For example, we're calling go, and go is calling get records, and get records
1:35 is calling run query. That, right now, is the path that we really need to be worried about. When you're back over in that spreadsheet-looking view,
1:42 you don't see that nearly as obviously as you do here. So here we have program: it starts up, program calls main, main calls go, and go calls
1:51 three functions: compute analytics, get records, and search. And again, the color matters. When we see green,
2:00 well, that means this is fast enough. When we see yellow, relatively speaking,
2:05 it's not necessarily horrible compared to the other parts of the program, but it's something you should pay attention to. When it's red,
2:12 you definitely should, you know, focus in on that and see what's going on. Look at this graph: the colors and the flow are really,
2:19 really good for letting you just jump right in and figure out what's going on. We can also navigate from both the statistics and call graph views.
2:29 If we go to the statistics one, we can right-click and say show on call graph and jump right into that
2:34 picture where we should be, or we can navigate over to the source, or even double-click get records right here
2:41 and it will just take us there. When we're over in the call graph, we can select and
2:46 then right-click one of these boxes and say navigate to source.
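Outside the IDE, the caller/callee arrows in the call graph have a rough text equivalent in the standard library's pstats module; this is just a sketch, and the file and function names are examples:

    import pstats

    # Show the relationships the call graph draws as arrows.
    stats = pstats.Stats("program.pstat")
    stats.print_callees("get_records")  # everything get_records calls
    stats.print_callers("run_query")    # everything that calls run_query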
2:53 A final word of warning as we close out this section, this chapter on profiling: profilers, and debuggers too, but profilers maybe even more so, can have significant effects that actually alter how our program
3:04 behaves. In quantum mechanics, we have this really weird principle that if you observe something,
3:09 you may actually change it. The act of observing it may be changing it, and
3:14 profilers have this sort of quantum mechanics feel to them. There are certain parts of your
3:20 program that, when run outside of a profiler, might be medium slow, but if you
3:25 run them in the profiler, they can be a lot slower, so the profiler says "this is really slow" when, you know,
3:31 in practice that may not always be true. One of those scenarios is where you're doing many,
3:36 many, many loops in Python code versus calling something external like a database query or
3:42 calling requests. The overhead the profiler adds to a single requests call or database query is nearly zero,
3:47 but if you're calling a function in a loop thousands and thousands of times, there's a little bit
3:52 of overhead from the profiler for each function call, and that can add up. So just be aware that this might not be 100%
4:00 the truth of how much time is being spent in each function, but it's still really valuable information to help you dive in and speed up your app.
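You can see this distortion for yourself with a quick experiment; everything below is an illustrative sketch, not code from the course:

    import cProfile
    import time

    def small_step(i):
        return i % 7

    def tight_loop():
        # Many tiny Python function calls: the case where
        # profiler overhead is most noticeable.
        return sum(small_step(i) for i in range(200_000))

    # Time it without the profiler.
    start = time.perf_counter()
    tight_loop()
    plain = time.perf_counter() - start

    # Time it again with cProfile enabled.
    profiler = cProfile.Profile()
    start = time.perf_counter()
    profiler.enable()
    tight_loop()
    profiler.disable()
    profiled = time.perf_counter() - start

    print(f"without profiler: {plain:.3f}s, with profiler: {profiled:.3f}s")

The same experiment wrapped around a single network or database call would show almost no difference, because the profiler only pays its cost at Python function-call boundaries.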

