Async Techniques and Examples in Python Transcripts
Chapter: Why async?
Lecture: Why threads don't perform in Python
0:00 I mentioned earlier that threads in Python
0:02 don't really perform well.
0:04 They're fine if you're waiting
0:06 but if you're doing computational stuff
0:08 they really don't help.
0:09 Remember that example I showed you
0:11 how we had our Python program doing a tight while loop
0:14 that was just pounding the CPU
0:16 and it's only getting 8.3% CPU usage.
0:19 Well if we had added 12 threads to do that
0:22 how much of a benefit would it have gotten?
0:24 Zero, it would still only be using 8%
0:28 of the CPU, even though I have 12 cores.
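To see this effect for yourself, here's a minimal sketch (not from the course) that times a pure-Python, CPU-bound countdown once serially and once split across four threads. Under the GIL, the threaded version is typically no faster:

```python
import threading
import time

def count_down(n: int) -> None:
    # Pure-Python, CPU-bound loop: the running thread holds
    # the GIL almost continuously while doing this work.
    while n > 0:
        n -= 1

TOTAL = 5_000_000

# Serial: one thread does all the work.
start = time.perf_counter()
count_down(TOTAL)
serial = time.perf_counter() - start

# "Parallel": four threads split the same amount of work.
start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(TOTAL // 4,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"serial:   {serial:.2f}s")
print(f"threaded: {threaded:.2f}s  (no speedup expected under the GIL)")
```

Because only one thread can execute Python bytecode at a time, the four threads mostly take turns rather than running on separate cores.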
0:30 And the reason is this thing called the GIL
0:33 or the Global Interpreter Lock.
0:35 Now the GIL is often referred to as one of the reasons
0:39 that Python isn't super fast or scalable.
0:41 I think that's being a little bit harsh.
0:44 You'll see there's a lot of places
0:45 where Python can run concurrently
0:47 and can do interesting parallel work.
0:50 And it has a lot of things that have been added
0:52 like asyncio and the async and await keywords
0:56 which are super, super powerful.
0:58 The GIL is not the great terrible thing
1:00 that people have made it out to be
1:02 but it does mean that there's certain situations
1:05 where Python cannot run concurrently.
1:08 And that's a bummer.
1:09 So this is the reason that the Python threads
1:12 don't perform well. The Global Interpreter Lock means
1:14 only one thread or only one step of execution
1:19 in Python can ever run at the same time
1:21 regardless of whether that's in the same thread
1:24 or multiple threads, right?
1:26 The Python interpreter only processes instructions
1:29 one at a time, no matter where they come from.
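One way to see that the interpreter really does hand execution from thread to thread, one slice at a time, is the switch interval exposed in the `sys` module. This is just a small illustrative sketch; the 5 ms default is CPython's:

```python
import sys

# How often (in seconds) CPython asks the running thread
# to release the GIL so another thread can take a turn.
print(sys.getswitchinterval())  # default is 0.005 (5 ms)

# The interval can be tuned, though this rarely helps
# CPU-bound threads: they still run one at a time.
sys.setswitchinterval(0.001)
```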
1:31 You might think of this as a threading thing
1:33 and in some sense it's a thread safety thing
1:36 but the GIL is actually a memory management feature.
1:39 The reason it's there is that it allows reference counting
1:42 which is how Python primarily handles its memory
1:46 management, to be simpler and faster.
1:49 With the Global Interpreter Lock in place
1:51 we don't have to take many fine-grained locks
1:53 around allocation and memory management.
1:55 And it means single-threaded Python is faster
1:58 than it would otherwise be
1:59 although it obviously hurts parallel computation.
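If you're curious what that reference counting looks like, CPython exposes the counts through `sys.getrefcount`. A minimal sketch (exact counts can vary by interpreter version, since `getrefcount` also counts its own argument):

```python
import sys

x = object()
# getrefcount reports one extra reference: its own argument.
print(sys.getrefcount(x))

y = x  # bind another name to the same object
print(sys.getrefcount(x))  # the count goes up by one
```

Every one of those increments and decrements would need its own fine-grained lock if the GIL weren't serializing access to them.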
2:03 But the thinking is most Python code is serial
2:07 in its execution, so let's make that the best
2:10 and the GIL is what it is.
2:12 It's going to make our parallelism less good.
2:15 As you'll see throughout this course
2:17 Python has a lot of options to get around the GIL
2:20 to do true concurrent processing in Python
2:23 and we're going to talk about those
2:24 but the GIL itself can't be avoided entirely
2:26 and you should really understand
2:28 that it exists, what it is, and primarily
2:30 that it's a thread safety feature for memory management.
2:34 You can learn about it more if you want
2:35 at realpython.com/python-gil, G-I-L
2:40 and there's a great article there
2:41 it goes into all the depth and the history
2:43 and so on, so go check that out
2:44 if you're interested in digging into it.
2:47 Keep in mind, this GIL is omnipresent
2:49 and we always have to think about
2:50 how it affects our parallelism.