The fact that concurrency is different from parallelism is often overlooked or misunderstood. In conversation, developers often use the two terms interchangeably to mean “something that runs at the same time as something else.” Sometimes using the word “parallel” in this context is correct, but usually if the developers are discussing code, they really ought to be using the word “concurrent.”
The reason to differentiate goes well beyond pedantry. The difference between concurrency and parallelism turns out to be a very powerful abstraction when modeling your code, and Go takes full advantage of this. Let’s take a look at how the two concepts are different so that we can understand the power of this abstraction. We’ll start with a very simple statement:
Concurrency is a property of the code; parallelism is a property of the running program.
That’s kind of an interesting distinction. Don’t we usually think about these two things the same way? We write our code so that it will execute in parallel. Right?
Well, let’s think about that for a second. If I write my code with the intent that two chunks of the program will run in parallel, do I have any guarantee that will actually happen when the program is run? What happens if I run the code on a machine with only one core? Some of you may be thinking, “It will run in parallel,” but this isn’t true!
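We can make this concrete with a small sketch in Go. The function name `runConcurrently` and the use of `runtime.GOMAXPROCS(1)` are my own illustration, not something prescribed by the discussion above: by limiting the runtime to a single logical processor, the goroutines below remain concurrent, yet the hardware constraint means they cannot execute in parallel.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// runConcurrently launches one goroutine per name and reports which
// ones ran. With GOMAXPROCS set to 1, the goroutines are concurrent
// (the code is structured to run them independently), but they cannot
// execute in parallel: only one runs at any given instant.
func runConcurrently(names []string) []string {
	prev := runtime.GOMAXPROCS(1) // restrict to one logical processor
	defer runtime.GOMAXPROCS(prev)

	var (
		mu  sync.Mutex
		ran []string
		wg  sync.WaitGroup
	)
	for _, name := range names {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			mu.Lock()
			ran = append(ran, n)
			mu.Unlock()
		}(name)
	}
	wg.Wait()
	return ran
}

func main() {
	// Both goroutines complete, even though nothing ran in parallel.
	fmt.Println(runConcurrently([]string{"first", "second"}))
}
```

The concurrent structure of the code is unchanged whether the machine has one core or sixty-four; only the degree of parallelism the runtime can achieve differs.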