When working in Go, it’s fun to dive right into using concurrency because the language just makes it so easy! Very rarely have I needed to understand how the runtime stitches everything together under the covers. Still, there have been times when this information has been useful, and everything discussed in Chapter 2 is made possible by the runtime, so it’s worth taking a moment to peek at how the runtime works. It has the added benefit of being interesting!
Of all the things the Go runtime does for you, spawning and managing goroutines is probably the most beneficial to you and your software. Google, the company that birthed Go, has a history of putting computer science theories and white papers to work, so it’s not surprising that Go contains several ideas from academia. What is surprising is the amount of sophistication behind each goroutine. Go has done a wonderful job of wielding some powerful ideas that make your program more performant while abstracting away these details and presenting a very simple facade for developers to work with.
As we discussed in the sections “How This Helps You” and “Goroutines”, Go will handle multiplexing goroutines onto OS threads for you. The algorithm it uses to do this is known as a work-stealing strategy. What does that mean?
First, let’s look at a naive strategy for sharing work across many processors, something called fair scheduling. In an effort to ensure all processors ...