CHAPTER 15
Large Model Monetization Strategies—The Bigger Picture

Larger opportunities are on the horizon, and businesses should be getting ready to seize them. In this chapter, I'll explain a new opportunity to leapfrog competitors and how it impacts the maturity models. Curated data sets have more value locked in them now that AI operating system models are generally available.

The previous chapter explored the digital paradigm and integrating functionality into existing applications. In this chapter, I'll explain the AI product paradigm opportunities and provide frameworks to discover and seize them. The first question is, should every business invest in developing large generative AI models? Spoiler alert. The answer is no, and here's why.

What Are the Costs?

You're probably wondering, “How much does it cost to train one of these AI operating system models?” As it turns out, quite a bit. Will costs eventually come down? They already are coming down. When Databricks open-sourced Dolly, its instruction-following alternative to GPT, it showed that a smaller, labeled data set could train a highly capable model. Training costs were orders of magnitude lower than those of GPT and Claude.
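To make that concrete, here is a minimal sketch of the Dolly-style approach: take a small open base model and fine-tune it on a curated, human-labeled instruction data set. The base model, data set name, and training settings below are illustrative assumptions for a sketch, not Databricks' exact recipe.

```python
# Sketch: instruction-tuning a small open model on a curated, labeled
# data set, in the spirit of Dolly. Model and data set choices here are
# assumptions for illustration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "EleutherAI/pythia-1.4b"          # assumed small base model
DATASET = "databricks/databricks-dolly-15k"    # ~15k human-labeled examples

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

def to_text(example):
    # Flatten instruction, optional context, and response into one string.
    prompt = example["instruction"]
    if example["context"]:
        prompt += "\n" + example["context"]
    return {"text": prompt + "\n" + example["response"]}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = load_dataset(DATASET, split="train").map(to_text)
dataset = dataset.map(tokenize, batched=True,
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dolly-style-model",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        logging_steps=50,
    ),
    train_dataset=dataset,
    # Causal LM collator copies input_ids into labels for next-token loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is the economics, not the code: the expensive pretraining is inherited from an open base model, and the business's contribution is the curated, labeled data set.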

Optimization is well underway for AI operating system platforms. Google's 540-billion-parameter PaLM cost around $27 million to train. Meta has a much smaller 65-billion-parameter model (it sounds strange to call 65 billion parameters “much smaller,” but it is). Training it cost about $4 million. Admittedly, this model isn't as capable as GPT-4 or PaLM.
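For a rough sense of where figures like these come from, here is a back-of-the-envelope estimate using the common approximation that training a transformer takes about 6 × parameters × tokens floating-point operations. The GPU throughput, utilization, and hourly price below are assumptions for illustration only; with those assumptions, a 65-billion-parameter run lands in the low millions of dollars, the same ballpark as the figure above.

```python
# Back-of-the-envelope training-cost estimate.
# Uses the widely cited ~6 * parameters * tokens FLOPs approximation.
# Throughput, utilization, and price per GPU-hour are assumptions.

def training_cost_usd(params, tokens, flops_per_gpu_per_s=3.12e14,
                      utilization=0.4, usd_per_gpu_hour=2.0):
    """Rough cost: total FLOPs / effective GPU throughput * hourly price."""
    total_flops = 6 * params * tokens
    gpu_seconds = total_flops / (flops_per_gpu_per_s * utilization)
    return gpu_seconds / 3600 * usd_per_gpu_hour

# A 65-billion-parameter model trained on roughly 1.4 trillion tokens.
print(f"${training_cost_usd(65e9, 1.4e12):,.0f}")
```

The estimate is sensitive to every assumption in it, which is exactly why published training-cost numbers vary so widely from one model to the next.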

However, ...
