MongoDB with Async Python Transcripts
Chapter: Foundations: async
Lecture: Async for Speed

0:00 There are two reasons you might use asynchronous programming in general.
0:06 One is while you're waiting on things, I'm waiting on a database query, I'm waiting on an API call. The other is to take advantage of modern hardware.
0:14 I have 10 cores, and I would like to use more than one of them. This first part, we're going to talk about that performance side, how do I take full
0:22 advantage of my hardware. Although that is not the focus of async with regard to Beanie, it's still worth talking about briefly so you know.
0:33 Alright, that's something else. Now let's talk about the performance side of
0:37 async and await. Check out this graph. Somewhat dated, but still absolutely a
0:42 true statement. So this is basically Moore's Law, the red line, showing not the
0:51 speed of your computer but actually the number of transistors, which is very
0:54 closely related to the speed. You can see that from the beginning until about 2007 or 2008,
1:03 the transistor count and the single-threaded performance, the blue line,
1:09 as well as the clock speed, all went up in lockstep: faster, faster,
1:14 faster. If your code wasn't fast enough, you'd wait one year, and then your code was fast enough.
1:19 It got much, much faster, because of course this graph is logarithmic. But something weird happens around 2008. We start to hit limits.
1:29 Too much heat, too small of devices. Now we're still getting smaller devices, but we're like right up against that limit coming
1:37 up on three nanometer chips. But here's the thing. Computers still got more and more transistors and they got more capable, but they did so
1:46 by becoming multi-core. The computer I'm recording on now is an Apple Silicon M2 Pro. I think it has 10 cores in my little Mac mini.
1:58 Amazing, amazing machine. But it has 10 cores, not one. That means if I write a single-threaded program, I get 1/10 of the power.
2:09 So in order to truly take advantage of this system, this hardware that I have, I need to use multiple threads and access those multiple cores.
2:21 Here's a simple program in Python. Look at this: Python 3.11, and we're just saying, while true, mess with some number here.
2:32 Take the number modulo 997. I guess we could add one to it or whatever as well, but basically, it's just busy all the time.
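The busy loop described here might look something like this. A minimal sketch: the exact code isn't shown in the transcript, and the function name `spin` is mine.

```python
def spin(iterations: int) -> int:
    """CPU-bound busy-work: take a number modulo 997 and add one, over and over."""
    n = 0
    for _ in range(iterations):
        n = (n % 997) + 1
    return n


if __name__ == "__main__":
    # In the video this is an endless `while True:` loop;
    # it's bounded here so the sketch terminates.
    spin(10_000_000)
```

Run it and watch your system monitor: the process sits at 100% of one core, yet only a small slice of the machine's total CPU capacity.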
2:41 However, if we pull up the process system information here, the server, even though it's going 100%, it's just using 7% of the system.
2:53 That is one of its 16 cores. Huh, that's disappointing. Again, if I want to take advantage of everything this hardware has to offer,
3:04 I can't do it with a single core; even 100% maxed out, it's still only, in this case, 7%, not very much at all.
3:13 So that's why we need multithreading and concurrency to run in true parallelism across these different CPU cores.
3:22 Traditionally, Python has not been awesome at that. We have the GIL, which means threads are still effectively serialized
3:30 while they're running Python code. I have a whole course on async. There are several ways to escape the GIL and go faster.
3:36 We could use Cython, we could import some C libraries or Rust libraries that do this
3:42 down at a lower level, or we can use multiple processes through multiprocessing. There are ways to take advantage of this.
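As a sketch of that multiprocessing route (my own example, not code from the course): each worker process gets its own interpreter and its own GIL, so the modulo-997 busy-work from earlier can genuinely run on multiple cores at once.

```python
import multiprocessing as mp


def spin(iterations: int) -> int:
    # Same CPU-bound busy-work as the earlier demo: modulo 997, plus one.
    n = 0
    for _ in range(iterations):
        n = (n % 997) + 1
    return n


def main() -> None:
    cores = mp.cpu_count()
    # One worker process per core; each process escapes the GIL independently,
    # so the system monitor shows all cores busy instead of just one.
    with mp.Pool(processes=cores) as pool:
        pool.map(spin, [2_000_000] * cores)
    print(f"Ran CPU-bound work across {cores} processes")


if __name__ == "__main__":
    main()
```

The trade-off is that processes don't share memory the way threads do, so data passed to `pool.map` has to be pickled across process boundaries.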
3:49 None of those are the topic of this foundational one. And so I'm just going to leave you there with some ideas to think about on the performance
3:55 side, and we'll move on to more server-side, client-server web APIs mixed in with MongoDB for the rest of this chapter.

