Chapter 5. Definitions of Privacy

So far in this book, you have become familiar with how the privacy parameter ϵ bounds the privacy loss of a data release. However, pure differential privacy (or ϵ-differential privacy) can be overly restrictive, resulting in a privacy analysis that adds more noise than is practically necessary. This chapter presents other definitions of differential privacy that are widely used in practice, as well as the mechanisms they enable.

You learned in Chapter 4 that a key criterion of a privacy measure is that it possesses immunity to postprocessing. This means that if a mechanism M(·) satisfies differential privacy, it is impossible to manipulate its data release in a way that increases the divergence between the probability distributions M(x) and M(x′), where x and x′ are adjacent data sets.
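A small sketch can make postprocessing immunity concrete. The example below uses the Laplace mechanism from earlier chapters on a hypothetical count query; the function names and parameter values are illustrative, not from the book. The key point is that the postprocessing step sees only the noisy release, never the underlying data, so it cannot weaken the guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a value with epsilon-DP by adding Laplace noise
    scaled to sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# A hypothetical count query with sensitivity 1.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)

# Postprocessing: rounding and clamping are functions of the noisy
# release alone, so they cannot increase the privacy loss.
released = max(0, round(noisy_count))
```

Any deterministic or randomized function applied to `noisy_count` inherits the same ϵ-DP guarantee, which is why releases are often rounded or clamped to plausible ranges without any additional privacy cost.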

This chapter introduces the privacy measures behind other variants of differential privacy. These variants improve utility or interpretability while preserving robustness against auxiliary information and support for composition.

This chapter discusses the following variants of differential privacy, which meet these criteria and are in popular use:

  • Approximate differential privacy: (ϵ, δ) or ϵ(δ)

  • Rényi differential privacy: ϵ̄(α)

  • Zero-concentrated differential privacy: ρ

  • Bounded range: η

  • Characteristic functions: φ(t)

  • f-differential privacy: β
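These definitions are related: a guarantee stated in one can often be converted into another. As one illustration, the standard conversion from the Rényi DP literature states that a mechanism satisfying (α, ϵ̄)-Rényi DP also satisfies (ϵ̄ + log(1/δ)/(α − 1), δ)-approximate DP for any δ in (0, 1). A minimal sketch of that conversion (the function name and example parameters are illustrative):

```python
import math

def rdp_to_approx_dp(alpha, rdp_epsilon, delta):
    """Convert an (alpha, rdp_epsilon)-RDP guarantee into an
    (epsilon, delta)-approximate-DP guarantee using the standard
    bound: epsilon = rdp_epsilon + log(1/delta) / (alpha - 1)."""
    return rdp_epsilon + math.log(1.0 / delta) / (alpha - 1.0)

# Example: (alpha=10, 0.5)-RDP at delta = 1e-6 gives epsilon ≈ 2.04.
eps = rdp_to_approx_dp(alpha=10.0, rdp_epsilon=0.5, delta=1e-6)
```

In practice, an accountant tracks the RDP guarantee over a range of α values and reports the smallest resulting ϵ for the target δ; later sections of this chapter discuss these definitions and their relationships in detail.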
