Even though Prometheus accepts an empty file as a valid configuration file, the absolute minimum useful configuration needs a scrape_configs section. This is where we define the targets for metrics collection, as well as any post-scrape processing that is needed before actual ingestion.
In the configuration example we introduced previously, we defined two scrape jobs: prometheus and blackbox. In Prometheus terms, a scrape is the action of collecting metrics through an HTTP request from a targeted instance, parsing the response, and ingesting the collected samples into storage. The default HTTP endpoint used in the Prometheus ecosystem for metrics collection is aptly named /metrics.
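As a minimal sketch of such a scrape_configs section, the following fragment defines a single job that scrapes Prometheus's own /metrics endpoint; the target address `localhost:9090` (the default Prometheus listen address) is an assumption for illustration:

```yaml
scrape_configs:
  # A job groups a set of instances scraped with the same configuration.
  - job_name: 'prometheus'
    # static_configs lists targets directly, without service discovery.
    static_configs:
      - targets: ['localhost:9090']  # assumed default host:port for the Prometheus server itself
```

With no explicit `metrics_path`, the scrape defaults to /metrics, which is why self-scraping needs nothing more than the target address.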
A collection of such instances is called a job ...