Chapter 6. Bad Descriptions
The limits of my language mean the limits of my world.
Ludwig Wittgenstein
When we develop a semantic model, we define aspects of it that contribute to human interpretability (element names, textual definitions, usage guidelines, and other documentation), as well as aspects that aim for machine interpretability (relations with other elements, logical axioms, inference rules, etc.). As creators of semantic models, we place a lot of emphasis on the machine-interpretability aspects, and rightly so, but we often underestimate the importance, and the difficulty, of creating semantic models that humans clearly understand. Conversely, as semantic model users we often underestimate the probability that we have actually misunderstood what a semantic model is really about, and we end up using it in incorrect ways. This is perhaps the biggest reason the semantic gap between data suppliers and consumers exists.
This chapter describes some common mistakes we make when we describe a semantic model’s elements via names, textual definitions, and other types of human-readable information, and provides tips and guidelines to improve the quality of these descriptions.
Giving Bad Names
My favorite quiz, both when I lecture on semantic modeling and when I interview job candidates, is the following: assume you want to model the customers of a company, and that these customers can be either physical persons or other companies. Which of the two semantic models in Figure 6-1 ...
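Figure 6-1 is not reproduced here, but the dilemma behind the quiz can be sketched in code. The following is a minimal, hypothetical illustration (all element names are invented for this sketch, not taken from the figure) of two common ways such a requirement gets modeled: treating Customer as a class that Person and Company specialize, versus treating being a customer as a role, expressed through a relation. Triples are written as plain subject-predicate-object tuples to keep the sketch dependency-free.

```python
# Hypothetical sketch of two alternative models for "customers that can be
# either physical persons or other companies". Element names are
# illustrative, not taken from Figure 6-1.

# Model A: Customer is a class, and Person and Company are declared
# as its subclasses.
model_a = {
    ("Person", "subClassOf", "Customer"),
    ("Company", "subClassOf", "Customer"),
}

# Model B: being a customer is a role, expressed as a relation linking a
# person or company to the company it buys from.
model_b = {
    ("John", "isA", "Person"),
    ("AcmeCorp", "isA", "Company"),
    ("John", "isCustomerOf", "AcmeCorp"),
}

# The two models make very different claims: Model A says every person and
# every company is, by definition, a customer; Model B says particular
# individuals stand in a customer relation to a particular company.
print(sorted(model_a))
print(sorted(model_b))
```

Which of the two is the better model is exactly the question the quiz poses; the point of the sketch is only that the same everyday word, "customer", can name either a class or a role, and the choice changes what the model asserts.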