Async Techniques and Examples in Python Transcripts
Chapter: Why async?
Lecture: Async for computational speed
0:00
Let's begin our exploration of async by taking a really high-level view. We're going to look at the overall async landscape, some of the particulars
0:10
about working with async and concurrent programming in Python, and the two main reasons you might care about asynchronous programming.
0:17
In this first video we're going to focus on async for speed, or performance. The other main reason you might care
0:24
about asynchronous programming or concurrent code is scalability: doing more at once. Right now we're going to focus on doing things faster
0:33
for an individual series of computations. Later we're going to talk about scalability, say for web apps and things like that.
0:40
Let's look at some really interesting trends that have happened across CPUs over the last 10 to 15 years.
0:48
So here's a really great presentation by Jeffrey Funk over on SlideShare, and I put the URL at the bottom
0:53
so you can look through the whole thing; you can see there are 172 slides. But here I'm pulling out one graphic that
0:58
he highlights, because it's really, really interesting. See that very top line, the red line, that says
1:04
"transistors in the thousands"? That is Moore's Law. Moore's Law said the number of transistors in a CPU
1:12
will double every 18 months, and that is surprisingly still accurate; look at that, from 1975 to 2015
1:20
extrapolate a little bit, and it's still basically doubling, just as predicted. However, people have often, at least in the early days,
1:29
thought of Moore's Law more as a performance thing: as the transistors doubled, you can see here that
1:34
the green line, "clock speed," and the blue line, "single-threaded performance," very much followed along with Moore's Law.
1:41
So we've thought of Moore's Law as meaning computers get twice as fast every 18 months, and that was true, more or less,
1:49
for a while. But notice that around 2005 it starts to slow, and around 2008 it flattens off and
1:58
maybe even goes down for some of these CPUs. The reason is that we're making smaller and smaller
2:04
circuits on chips, down to the point where you basically can't make them any smaller and can't
2:09
pack them much closer, both for thermal reasons and for pure interference reasons. Notice that from around 2005 onward, CPUs are not getting
2:20
faster, not really at all. Think back quite a while: the CPU I have now is a really high-end one, and it's
2:25
a little bit faster, but nothing like what Moore's Law would have predicted. So what is the takeaway?
2:32
What is the important thing about this graphic? Why is Moore's Law still holding, and why are computers still getting faster, while
2:41
clock speed and single-threaded performance are not improving, and if anything may be slowing down a little?
2:48
Well, that brings us to the interesting black graph at the bottom. For so long this was one core, and then we started getting dual-core systems and
2:57
CPUs with more and more cores. So instead of making the individual CPU core faster and faster by adding more transistors,
3:04
what we're doing is just adding more cores. If we want to continue to follow Moore's Law, if we want to continue to take full advantage of the
3:12
processors being created these days, we have to write multi-threaded code. If we write single-threaded code, you can see it's either
3:20
flat and stagnant, or maybe even getting slower over time. We don't want our code to get slower; we want our code to
3:27
keep up and take full advantage of the CPU it's running on, and that requires us to write multi-threaded code.
3:33
It turns out Python has some extra challenges here, but in this course we will learn how to absolutely take full advantage of the multi-core systems that
3:41
you're probably running on right now.
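To make that speed motivation concrete, here is a minimal sketch (not from the course itself) comparing a CPU-bound computation run serially versus spread across cores with the standard library's multiprocessing module. The function name `crunch` and the workload sizes are made up for illustration:

```python
import multiprocessing
import time


def crunch(n: int) -> int:
    # A deliberately CPU-bound task: sum of squares below n.
    return sum(i * i for i in range(n))


def main() -> None:
    workloads = [5_000_000] * 8

    # Serial: one core does all the work, one task after another.
    t0 = time.perf_counter()
    serial = [crunch(n) for n in workloads]
    serial_time = time.perf_counter() - t0

    # Parallel: a pool of worker *processes* spreads the tasks
    # across the machine's cores.
    t0 = time.perf_counter()
    with multiprocessing.Pool() as pool:
        parallel = pool.map(crunch, workloads)
    parallel_time = time.perf_counter() - t0

    assert serial == parallel
    print(f"serial:   {serial_time:.2f}s")
    print(f"parallel: {parallel_time:.2f}s")


if __name__ == "__main__":
    main()
```

On a multi-core machine the parallel version typically finishes several times faster. Note that processes, not threads, are used here: for CPU-bound work in standard CPython, threads alone don't give this speedup because of the global interpreter lock, which is one of the "extra challenges" mentioned above and covered later in the course.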