Python for the .NET developer Transcripts
Chapter: async and await in Python
Lecture: Async for speed

0:00 Maybe you don't care about scalability,
0:02 handling many requests at the same speed you'd handle one.
0:05 You care about raw speed, you want to take the one thing
0:09 you're doing and make it go faster.
0:12 And threading and parallelism
0:14 are definitely powerful for that, let's see why.
0:17 Here is my MacBook, as viewed from the virtual machine
0:21 that I was running Windows 10 on, and if we ask it
0:24 how many CPUs it has, it says I have 12 CPUs.
0:28 By the way, how awesome is that?
0:30 It's actually a Core i9 with six cores
0:33 and those cores are hyper-threaded
0:34 so the OS sees them as 12 CPUs.
0:38 Look in the background, I've launched the Python REPL,
0:40 the read-eval-print loop.
0:43 You just type python, and then I wrote
0:44 a really simple program.
0:46 I said, x = 1, and while True, increment x.
0:49 Look at the CPU level of the system.
0:52 That program is absolutely pinned
0:54 working as hard as it possibly can, right?
0:56 It's while true adding one just 'til you stop it.
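The REPL session described above boils down to just a couple of lines. Here it is as a small script, with the loop bounded so it terminates (the video's version is an infinite `while True` that you have to stop yourself); either way, it can saturate only one core:

```python
import os

# Ask the OS how many logical CPUs it sees (12 in the video:
# six Core i9 cores, each hyper-threaded).
print(os.cpu_count())

# The busy loop from the REPL. In the video it was an infinite
# `while True: x += 1`; bounded here so the script terminates.
x = 1
while x < 10_000_000:
    x += 1
print(x)  # 10000000
```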
0:59 But if we look at the CPU level,
1:01 even with Camtasia recording,
1:05 the entire operating system plus this program is at only 9.57%.
1:11 This program at max can only do 1/12th
1:14 of what the operating system believes it could do.
1:17 That is not great, so how do we solve that problem?
1:20 Well, we have to do more stuff concurrently.
1:23 This algorithm obviously doesn't lend itself
1:26 to concurrency, but many algorithms do.
1:29 So if we were able to break up our code,
1:32 our algorithm, into multiple concurrent steps
1:35 and run those say on threads, then we would be able
1:38 to take advantage of all of these cores
1:40 and make our code truly fly, that's the promise.
1:45 On Python, there's a limitation
1:47 it's not as bad as it sounds, but there is this limitation
1:49 that really affects thread performance
1:52 and computational speed.
1:54 You probably have heard of this thing called the GIL
1:56 or the Global Interpreter Lock.
1:58 And we talked about reference counting, right?
2:00 When we create an object or we de-reference an object
2:02 we increment or decrement that counter
2:05 that reference count.
2:07 Now, Python had a choice to make.
2:09 Could it support concurrent interaction
2:12 with all of those objects where it would
2:14 have to lock on every one of those references?
2:17 Or it could say, You know what, we're not going
2:18 to allow more than one interpreter operation
2:21 to run at any given moment.
2:23 In which case, we don't need to lock that
2:25 because only one reference or de-reference
2:27 could possibly be happening there.
2:30 That's the path they went, and it's really about protecting
2:32 memory management consistency, not a threaded sort of thing.
2:37 But the effect is that Python can only run
2:39 a single bytecode instruction at a time
2:43 no matter how many threads you have.
2:45 So that means for pure Python
2:47 threads do not really do anything other than add
2:50 overhead for computational speed, that's a bummer.
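To make that concrete, here's a small sketch (not from the video) that times the same pure-Python, CPU-bound work done sequentially and then split across two threads. Because of the GIL, the threaded version typically takes about as long, or slightly longer; the work size is just for illustration and the exact times will vary by machine:

```python
import time
from threading import Thread

def count_down(n):
    # Pure-Python, CPU-bound work: only one thread at a time
    # can execute this bytecode, because of the GIL.
    while n > 0:
        n -= 1

N = 5_000_000

# Sequential: one call does all the work.
t0 = time.perf_counter()
count_down(N)
sequential = time.perf_counter() - t0

# "Parallel": two threads split the work. Expect roughly the
# same wall time, or worse from lock contention overhead.
t0 = time.perf_counter()
threads = [Thread(target=count_down, args=(N // 2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - t0

print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```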
2:54 Well, see, there are actually a couple of ways
2:55 that we can get around that and make our Python
2:57 take advantage of all those twelve cores and fly as well.
3:00 One of them is something called multiprocessing
3:02 which is where you take what would be a thread
3:05 and you kind of copy it into a separate copy of Python
3:08 along with the data it's using, run it over there
3:11 and then just get the return value.
3:13 That way it's a separate process which does not share
3:15 the GIL, right, the GIL is a per-process type of thing.
3:18 So that's Python's main way of getting around
3:20 this GIL situation and we'll see a little bit
3:23 about that right at the end, okay?
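Here's a minimal sketch of that idea using the standard library's `multiprocessing.Pool` (the function name and work sizes here are just for illustration, not from the video):

```python
from multiprocessing import Pool

def count_up(n):
    # Pure-Python, CPU-bound work.
    x = 0
    while x < n:
        x += 1
    return x

if __name__ == "__main__":
    # Each chunk of work is shipped to a separate Python process
    # with its own GIL, runs there in parallel with the others,
    # and only the return value comes back (arguments and
    # results are pickled across the process boundary).
    with Pool() as pool:
        results = pool.map(count_up, [2_000_000] * 4)
    print(results)  # [2000000, 2000000, 2000000, 2000000]
```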
3:25 But if you want to read more about this
3:27 there's a great article over at realpython.com/python-gil.
3:32 What you'll find is that even with this GIL in place
3:35 if the two threads are working in Python
3:38 and one of them is talking over, say a network
3:40 or doing certain types of C code down at a lower level
3:43 it can actually release the GIL
3:45 and regain some concurrency so it's not as bad as it sounds.
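A small sketch of that effect, using `time.sleep` to stand in for a network wait (sleep releases the GIL while blocked, the same way a blocking socket read does):

```python
import time
from threading import Thread

def fake_request(results, i):
    # While this thread is blocked waiting, the GIL is released,
    # so the other threads get to run -- just like a socket read.
    time.sleep(0.2)
    results[i] = f"response {i}"

results = {}
threads = [Thread(target=fake_request, args=(results, i)) for i in range(5)]

start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The five 0.2-second waits overlap, so the total is about
# 0.2s, not 5 x 0.2 = 1 second.
print(f"{len(results)} responses in {elapsed:.2f}s")
```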
3:48 But it is something you definitely need to be aware of
3:50 and it's something very different from C#.