Accelerating APIs with continuous delivery

Create business value and add new functionality through an automated build pipeline.

By Daniel Bryant
January 19, 2017

Much has been written about the web-based API economy, and there are clear benefits for an organization in exposing its services and offerings via an API. However, the value goes deeper than simply reaching new consumers (and markets) and enabling “mashups” of functionality. A good public-facing API communicates its intent and usage far more succinctly and effectively than any human-facing UI and accompanying user manual, and an API is typically easier to test, deliver, and operate at scale. But in order to ensure the continual delivery of value via an API, the process of build, validation, and quality assurance must be automated through a build pipeline.

Putting an API through the pipeline

The first step in any attempt to create business value or add new functionality is to ensure that a hypothesis has been specified, a supporting experiment designed, and metrics of success defined. Once this is complete, we like to work closely with the business stakeholders to specify the user journeys of the API using tooling like Serenity BDD and rest-assured, following an ‘outside-in’ approach (i.e., defining external functionality before working on the internal components).
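
For example, a first ‘outside-in’ journey test written with rest-assured and JUnit might look like the following sketch; the endpoint, payload, and expectations here are illustrative assumptions, not part of any particular project:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

import org.junit.Test;

public class UserRegistrationJourneyTest {

    @Test
    public void newUserCanRegisterViaTheApi() {
        given()
            .baseUri("https://api.example.com")        // hypothetical endpoint
            .contentType("application/json")
            .body("{\"email\": \"jane@example.com\"}")
        .when()
            .post("/users")
        .then()
            .statusCode(201)                           // user was created
            .body("id", notNullValue());               // server assigned an id
    }
}
```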

Next, we often define our API specification via a tool like RAML, conduct “documentation-driven” development using something like Swagger, or implement consumer-driven contract development using Pact or Spring Cloud Contract. Regardless of the approach, following good API design practices is a must.
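
As a sketch of the consumer-driven contract approach, a Pact JVM JUnit test on the consumer side might look like the following; the provider name, provider state, and payload are illustrative assumptions, and package names vary slightly between Pact JVM versions:

```java
import static io.restassured.RestAssured.given;

import au.com.dius.pact.consumer.Pact;
import au.com.dius.pact.consumer.PactProviderRuleMk2;
import au.com.dius.pact.consumer.PactVerification;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.model.RequestResponsePact;
import org.junit.Rule;
import org.junit.Test;

public class UserServiceConsumerPactTest {

    // Spins up a mock of the (hypothetical) "user-service" provider
    @Rule
    public PactProviderRuleMk2 provider = new PactProviderRuleMk2("user-service", this);

    @Pact(provider = "user-service", consumer = "web-frontend")
    public RequestResponsePact userExistsPact(PactDslWithProvider builder) {
        return builder
            .given("user 42 exists")
            .uponReceiving("a request for user 42")
                .path("/users/42")
                .method("GET")
            .willRespondWith()
                .status(200)
                .body("{\"id\": 42, \"email\": \"jane@example.com\"}")
            .toPact();
    }

    // Runs against the mock provider; the resulting contract file is then
    // published for the real provider to verify in its own pipeline
    @Test
    @PactVerification("user-service")
    public void consumerCanFetchAnExistingUser() {
        given().baseUri(provider.getUrl())
        .when().get("/users/42")
        .then().statusCode(200);
    }
}
```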

All of these approaches to specification and testing can (and should) be embedded within an automated CD pipeline, for example using Jenkins and its associated plugins. It is worth mentioning, too, that automation is not a substitute for communication with customers and between teams (particularly around API design); the pipeline simply automates the release process associated with the business value stream.
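
As an illustrative sketch (not a prescribed setup), a declarative Jenkinsfile might wire these stages together as follows; the Maven profiles and deploy script named here are hypothetical:

```groovy
pipeline {
    agent any
    stages {
        stage('Build and unit test') {
            steps { sh './mvnw clean verify' }
        }
        stage('Contract tests') {
            // e.g., Pact consumer/provider verification
            steps { sh './mvnw verify -Pcontract-tests' }
        }
        stage('Deploy to staging') {
            steps { sh './deploy.sh staging' } // hypothetical deploy script
        }
        stage('API acceptance tests') {
            // e.g., Serenity BDD and rest-assured journey tests
            steps { sh './mvnw verify -Pacceptance-tests' }
        }
    }
}
```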

With the first pass of the API specification complete, we begin implementation and iterate (continually pushing our code along the build pipeline) until both the producer and the consumer of the API are happy with the results.

Testing non-functional requirements via APIs

Once we have run small-scale experiments and our API has proven that it can add business value, we must ensure it is ready for production load. Typically, we deploy our API behind an API gateway like NGINX, as this allows us to centralize cross-cutting concerns such as load balancing, authentication, and auditing. We load test REST-like APIs via Gatling (often in combination with Flood IO), and virtualize any external dependencies using a service virtualization tool like Hoverfly. Service virtualization (sometimes called ‘API simulation’) removes the explicit dependency between the API under test and any coupled external services, and also allows us to deterministically simulate latency and inject failure conditions.
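
Gatling simulations were historically written in Scala, but recent Gatling releases also offer a Java DSL, sketched below; the endpoint, load profile, and pass/fail threshold are illustrative assumptions:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class UserApiLoadSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http
        .baseUrl("https://api.example.com")   // hypothetical gateway endpoint
        .acceptHeader("application/json");

    ScenarioBuilder fetchUser = scenario("Fetch user")
        .exec(http("get user")
            .get("/users/42")
            .check(status().is(200)));

    {
        // Inject 100 virtual users over 30 seconds, and fail the pipeline
        // stage if more than 1% of requests are unsuccessful
        setUp(fetchUser.injectOpen(rampUsers(100).during(30)))
            .protocols(httpProtocol)
            .assertions(global().successfulRequests().percent().gt(99.0));
    }
}
```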

We also recommend following the excellent OWASP REST security guidelines and running security and penetration testing tools against your implementation. Tools such as OWASP’s ZAP (in combination with bdd-security) and Gauntlt are API-driven, and they can be incorporated into a typical pipeline, either via plugins or via simple custom code that orchestrates the testing against the API.
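
For instance, ZAP ships with a baseline scan that can be dropped into a pipeline stage as a single command; the target URL below is a placeholder for a staging deployment:

```sh
# Passive baseline scan against a staging deployment of the API
docker run -t owasp/zap2docker-stable zap-baseline.py -t https://staging.api.example.com
```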

Exposing metrics via (surprise, surprise) an API

The final stage of API delivery is to ensure that the functionality deployed to production is working as expected and delivering value. Essential to this is the exposure of key metrics, both technical (transactions per second, latency) and business-focused (functionality usage, revenue generated), and these should be made available via an API. In the Java world this is made easy by libraries such as Spring Boot Actuator and Codahale’s Metrics, and other language platforms have similar alternatives. Once we have collected the metrics using something like Prometheus or Graphite, we can analyze the data (potentially using a business intelligence tool), determine our success, and decide on future hypotheses and functionality.
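
As a sketch of the approach with Codahale’s Metrics (now Dropwizard Metrics), a service might record one technical and one business metric as follows; the registry wiring and metric names are illustrative assumptions:

```java
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class CheckoutResource {

    private final MetricRegistry registry = new MetricRegistry();

    // Technical metric: request latency
    private final Timer checkoutLatency = registry.timer("api.checkout.latency");

    // Business metric: how often the functionality is actually used
    private final Meter checkoutsCompleted = registry.meter("business.checkouts.completed");

    public void handleCheckout() {
        try (Timer.Context timed = checkoutLatency.time()) {
            // ... process the checkout request ...
            checkoutsCompleted.mark(); // record one business event
        }
    }
}
```
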
Automate, automate, automate…

It is essential to build and validate an API via an automated Continuous Delivery (CD) pipeline in order to achieve reliable delivery of functionality and value. This article has attempted to highlight some of the tooling and approaches that make this possible. The following podcast provides even more details on how to implement these approaches when using Java and Docker containers to deliver API-driven software.

This post is a collaboration between NGINX and O’Reilly. See our statement of editorial independence.
