Observability for Large Language Models

Book description

An initial release of a large language model (LLM) makes for a nice marketing moment, but the real value lies in the work you do afterward to make it a true "1.0"-level product experience. In this report, Phillip Carter, who spearheads AI initiatives at Honeycomb, introduces the observability tools and practices that will help you improve LLM and AI products after they've been released.

MLOps professionals, SREs, software engineers, developers, and architects will learn not only why OpenTelemetry matters, but also how to feed observability data back into development. This report is also ideal for CTOs and other senior-level practitioners in your organization.

You'll explore:

  • Why observability is essential to taking an AI product from 0.1 to 1.0
  • Strategies for implementing good observability for LLMs using standards such as OpenTelemetry (see the sketch after this list)
  • How observability isn't just a way to get better reliability in production, but also a way to feed back into core development workflows
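
The report's implementation details aren't reproduced here, but as a rough illustration of the OpenTelemetry strategy mentioned in the list above, the following Python sketch wraps an LLM call in a span and attaches the prompt and response as attributes. The call_llm helper, the span name, and the attribute keys are illustrative assumptions rather than an API defined in the report, and exporting the data anywhere requires configuring the OpenTelemetry SDK separately.

    from opentelemetry import trace

    # Acquire a tracer; with no SDK configured this is a no-op tracer,
    # so the snippet runs even before an exporter is set up.
    tracer = trace.get_tracer("llm.demo")

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real model call (e.g., an HTTP
        # request to a hosted LLM provider).
        return "example completion"

    def answer(prompt: str) -> str:
        # Wrap the model call in a span so the prompt and response become
        # queryable observability data alongside latency and errors.
        with tracer.start_as_current_span("llm.completion") as span:
            span.set_attribute("llm.prompt", prompt)
            response = call_llm(prompt)
            span.set_attribute("llm.response", response)
            return response

    print(answer("What is observability?"))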

About the author

Phillip Carter is a principal product manager at Honeycomb, leading the company's AI and OpenTelemetry initiatives. He is also a maintainer of the OpenTelemetry project, the de facto standard for observability instrumentation.

Product information

  • Title: Observability for Large Language Models
  • Author(s): Phillip Carter
  • Release date: September 2023
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781098159740