Service virtualization: Testing legacy, third-party, and cutting-edge systems

How this time-tested practice can empower microservices and containerized architectures, too.

By Bas Dijkstra
August 29, 2016
Illustration of interfaces (source: Geralt via Pixabay)

For organizations that want to rapidly and continuously deliver high-quality software in response to an increasingly demanding and competitive market, approaches such as Continuous Delivery, containerization, and microservices have become important parts of the software delivery life cycle. When attempting to deliver high-quality software at the required speed, however, test environment and dependency management can become a labor-intensive task for development teams and organizations. One solution to this challenge is the adoption of service virtualization (SV), which is proving useful not just for managing legacy systems but also for empowering new techniques and architectures.

What is service virtualization?

Service virtualization is a method to simulate the behavior of unavailable or difficult-to-access components in heterogeneous test environments.

This is done by creating virtual assets that emulate the behavior of these components and that development teams have full control over. The original components can then be swapped out for virtual assets in test environments, enabling teams to test their applications earlier (“shift left”) and more often (“continuous testing”), as well as execute test cases that are difficult to set up using the “real” components.
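To make this more concrete, here is a minimal sketch of a virtual asset built with the open source WireMock library. The profile service, its port, endpoint, and response payload are all hypothetical, chosen purely for illustration; any service virtualization tool follows the same basic pattern of canned request/response behavior.

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class ProfileServiceVirtualAsset {

    public static void main(String[] args) {
        // Start a virtual asset on a port the team controls (8089 is arbitrary).
        WireMockServer virtualProfileService = new WireMockServer(8089);
        virtualProfileService.start();

        // Emulate the behavior of the real, possibly unavailable, profile service:
        // a GET on /profiles/42 returns a canned JSON response.
        virtualProfileService.stubFor(get(urlEqualTo("/profiles/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": 42, \"name\": \"Test User\"}")));

        // The application under test is now pointed at http://localhost:8089
        // instead of at the real dependency.
    }
}
```

Because the asset is fully under the team’s control, its responses can be changed or extended without waiting for the real component to become available.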

Consider a hypothetical project in which the development team is responsible for a social networking platform built on a microservices architecture. New features are typically added in very short iterations (sometimes multiple times per day) in order to respond to improvement suggestions and bug reports from users. The team therefore adopts Continuous Delivery to respond to its users quickly while ensuring that quality does not suffer from these rapid changes. And because the organization does not want the burden of maintaining its own development, test, and production environments, it turns to cloud computing for flexibility and scalability, delivering its software components as Docker containers for automated deployment into every environment. Implementing service virtualization can enhance this team’s approach with additional benefits, which I’ll detail below.

SV for microservice architectures

Let’s look at how service virtualization plays a vital part in enabling the development team to deliver quality software at speed. Teams embracing Continuous Delivery need to be able to run automated and manual tests practically on demand. However, with distributed software, of which microservices applications are a prime example, having all dependencies required for test execution in place and correctly configured at all times is a daunting task. Required dependencies can be unavailable or access-restricted for a number of reasons: they may be under development themselves, lack suitable test data, or charge access fees (the last is typical of third-party dependencies). Replacing these troublesome dependencies with virtual assets that are under the development team’s full control enables unrestricted, repeatable, and continuous test execution, which in turn is key to a successful Continuous Delivery implementation.
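As an illustration of that last point, the sketch below extends the earlier WireMock-based virtual asset to simulate conditions that are hard or expensive to reproduce with a real, fee-charging third-party service: a slow response and an outage. The geo-lookup service, its endpoints, and the timings are assumptions made for the example.

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class ThirdPartyGeoServiceVirtualAsset {

    public static void main(String[] args) {
        WireMockServer virtualGeoService = new WireMockServer(9090);
        virtualGeoService.start();

        // Simulate a slow response from the (fee-charging) third-party geo service,
        // so timeout handling can be tested without incurring per-call costs.
        virtualGeoService.stubFor(get(urlEqualTo("/geo/lookup?user=42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withFixedDelay(3000) // respond after three seconds
                        .withBody("{\"city\": \"Amsterdam\"}")));

        // Simulate an outage, which is difficult to trigger on demand with the real service.
        virtualGeoService.stubFor(get(urlEqualTo("/geo/lookup?user=99"))
                .willReturn(aResponse().withStatus(503)));
    }
}
```

Because these stubs live in version control alongside the tests, every pipeline run exercises exactly the same failure scenarios.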

Introducing service virtualization into the software development life cycle has benefits specific to a microservices architecture. As the number of components that make up an application grows, so does the chance that at any given time several of them are under construction, lack suitable test data, or are otherwise access-restricted. This makes test environment and test data management a complex task and can result in unwanted or unacceptable delays in application testing and delivery. Service virtualization gives control over these test environments back to the teams that depend on them.

SV for containerized architectures

The use of Docker as a means of containerization for effortless application distribution and deployment can be extended to service virtualization as well. With containerized service virtualization, the test environment becomes an artifact in the software development life cycle, just like the application itself. As a result, test environments can be created and provisioned on demand prior to test execution, used for integration and end-to-end testing, and deprovisioned afterwards until the next deployment and test cycle. Packaging test environments as containers also makes it easy to share specific configurations among team members and across teams for debugging, defect simulation, and analysis, since sharing a container means everybody works with the exact same configuration.
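As a rough sketch of this pattern, the snippet below uses the open source Testcontainers library to provision the publicly available WireMock Docker image on demand for a test run and tear it down afterwards. The image name, version, and stub location are assumptions for illustration; any containerized service virtualization tool slots into the same flow.

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

public class ContainerizedVirtualTestEnvironment {

    public static void main(String[] args) {
        // Provision the virtual asset as a throwaway container right before the tests run.
        try (GenericContainer<?> virtualService =
                     new GenericContainer<>(DockerImageName.parse("wiremock/wiremock:2.35.0"))
                             .withExposedPorts(8080)
                             // Ship the stub definitions with the container, so every team
                             // member and CI run uses the exact same configuration.
                             .withCopyFileToContainer(
                                     MountableFile.forClasspathResource("stubs/"),
                                     "/home/wiremock/mappings/")) {

            virtualService.start();

            String baseUrl = "http://" + virtualService.getHost() + ":"
                    + virtualService.getMappedPort(8080);
            System.out.println("Virtual dependency available at " + baseUrl);

            // ... run integration or end-to-end tests against baseUrl here ...

        } // The container is stopped and removed when the try-with-resources block ends.
    }
}
```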

To learn more about how service virtualization can add value to any modern software development project, register for the free webcast, “Supercharging Mobile Performance with Service Virtualization,” hosted by Mirek Novotny, Senior Product Manager for Service Virtualization at HPE. Arm yourself with knowledge and leave with the opportunity to experiment with the free software, HPE Service Virtualization Community Edition.


This post is a collaboration between O’Reilly and HPE. See our statement of editorial independence.
