Chapter 5. Operationalizing Generative AI Implementations
At this point, we have explored the evolution of generative AI and Azure OpenAI Service, the main approaches for cloud native generative AI app development, and AI architectures and building blocks for LLM-enabled applications with Azure.
In this chapter, we will explore the main considerations for moving from an initial implementation to production-level deployments. To that end, we will cover advanced prompt engineering techniques, operational practices, security, and responsible AI considerations. Together, these contribute to a proper enterprise-grade implementation of cloud native, generative AI–enabled applications.
The Art of Prompt Engineering
Prompt engineering is one of those disciplines that has taken existing AI skills frameworks by surprise. Before OpenAI’s ChatGPT, few could have imagined that the ability to interact with AI models using nothing more than natural written language would become one of the most valuable skills for companies trying to adopt, test, and deploy generative AI systems. If there is a modern equivalent of the famous “Data Scientist: The Sexiest Job of the 21st Century,” it is prompt engineering, with striking examples such as the prompt engineer role at Anthropic in the US, offering a base salary of $300K+.
It is also a rapidly evolving area. What started as a simple way to send instructions to models is becoming a sort of “art” that also allows you to contextualize, secure, and operationalize LLMs. It has a mix ...
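To ground the idea before going further, here is a minimal sketch of a prompt that both contextualizes and constrains a model through Azure OpenAI. It assumes the openai Python SDK (v1.x) with Azure endpoints; the deployment name, environment variables, and guardrail wording are illustrative assumptions, not prescriptions from this book.

    import os
    from openai import AzureOpenAI

    # Endpoint, key, and API version are read from the environment
    # (names here are illustrative).
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    # The system message contextualizes and constrains the model;
    # the user message carries the actual task.
    response = client.chat.completions.create(
        model="gpt-4o",  # your Azure deployment name (assumed)
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a support assistant for a retail bank. "
                    "Answer only questions about the bank's products. "
                    "If asked anything else, politely decline."
                ),
            },
            {"role": "user", "content": "What savings accounts do you offer?"},
        ],
        temperature=0.2,
    )
    print(response.choices[0].message.content)

Even in this small sketch, the system message is doing more than issuing an instruction: it sets context (a retail bank assistant) and a simple guardrail (decline off-topic requests), which hints at the broader role prompt engineering plays in securing and operationalizing LLM-enabled applications.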