CHAPTER 6
From CPUs to GPUs
Our AI is forever limited by the memory and processing speed of our computers. As the hardware improves, our AI improves. Period. This is most famously discussed by AI legend Richard Sutton in his essay "The Bitter Lesson": "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin."
What he means is that as compute power grows, the intelligence of our AI grows with it. Think of a cup filled with water: the bigger the cup, the more water it can hold. In this analogy, the water is the "capacity" of the AI model. In the 1960s, we had a thimble. Today, we have containers the size of the Atlantic Ocean. This is largely due to Moore's law, named for Intel cofounder Gordon Moore, which states that "every two years, the number of transistors on microchips doubles, for half the cost" (see Figure 6.1). Over the last 120 years, Moore's law has held steady. Recently, many have declared this run over because we have hit the limits of physics. So in a strict sense, yes, Moore's law is over. However, we can restate it in keeping with Moore's intent: "Every two years, the processing throughput on a microchip doubles, for half the cost." That version is still delivering with GPUs, albeit at a slightly slower pace.
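To make the doubling concrete, here is a minimal sketch (our own illustration, not from the book) of how steady doubling compounds: the function name and the 20-year example are assumptions chosen for illustration.

```python
# Illustrative sketch of Moore's-law-style growth: capability doubles
# every `doubling_period` years. Names and numbers here are our own
# illustrative choices, not figures from the book.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# Over two decades of doubling every two years, throughput grows
# by a factor of 2**10 = 1024 -- roughly a thousandfold.
print(growth_factor(20))
```

This compounding is why Sutton's point bites: a method that merely rides the hardware curve inherits a thousandfold improvement per twenty years for free.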