Software systems are at the heart of modern business. While they vary greatly, they share common characteristics: they rarely work in isolation; they are all complex; and thorough testing is often difficult or impossible. All of this is compounded by networking and the interconnection of multiple devices, systems, architectures, and applications.
The combinatorial complexity inherent in this level of interconnectivity cannot be overstated. Even the simplest metric currently used to measure ‘complexity’ within code, cyclomatic complexity, illustrates this. Such metrics are wholly inadequate because they do not capture the complexity that arises in operation, as each of these systems also maintains numerous independent states. Further, as the number of possible states within the application layer increases, so does the degree of interaction between them, and complexity grows exponentially.
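To make this concrete, a minimal sketch (not from the article, with illustrative numbers assumed) shows why per-component metrics understate the problem: the combined state space of interconnected systems is the product of each system's independent states, so adding a system multiplies, rather than adds to, the states that could need testing.

```python
from math import prod

def combined_states(states_per_system):
    """Total distinct global states across interconnected, stateful systems."""
    return prod(states_per_system)

# Three modest systems with 10, 20 and 50 internal states each:
print(combined_states([10, 20, 50]))      # 10000 global states

# Adding one more 10-state system multiplies the total by 10:
print(combined_states([10, 20, 50, 10]))  # 100000 global states
```

The cyclomatic complexity of each individual system might be low, yet the number of global states a tester would need to reason about still explodes multiplicatively.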
This interlinking of discrete systems at scale has enabled an explosion of innovation and accelerated the rate at which applications and features can be delivered. But the increase in the speed of delivery has not been without cost. This is most apparent in verification and validation, where huge increases in the time and resources dedicated to validating integration between systems still fail to identify failure mechanisms.
The Coverage Gap
As a new feature is added to an existing system, the potential for interaction with existing states rapidly increases; the impact is generally exponential as the size of the state space and the number of features grow. Unfortunately, test coverage can only be increased linearly, as this invariably relies upon either manual execution or manually written automated scripts. The underlying risk this gap poses (see Figure 1) is rarely acknowledged by those developing the applications.
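A rough sketch of this gap, under assumed numbers (the fixed tests-per-feature rate is hypothetical, and only pairwise interactions are counted, which understates the true combinatorial growth):

```python
from math import comb

def pairwise_interactions(n_features):
    """Distinct feature pairs that could interact: n choose 2."""
    return comb(n_features, 2)

TESTS_PER_FEATURE = 3  # assumed linear, manual test-writing effort

for n in (10, 20, 40):
    written = n * TESTS_PER_FEATURE          # grows linearly
    needed = pairwise_interactions(n)        # grows quadratically
    print(f"{n} features: {written} tests written, {needed} pairs to cover")
```

Even at this simplified pairwise level, the number of interactions to cover overtakes the linearly growing test suite well before 40 features; once triples and stateful sequences are considered, the divergence is far steeper.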
There have been new developments utilising Machine Learning to address the difficulty of achieving test coverage. However, many of these approaches currently focus either on the unit level or on simulating user interaction with a user interface.
At this stage we do not have an adequate answer for this complexity gap, nor a solution at the integration level. However, we are actively researching this area to see if we can contribute to solving the problem.
Responsible for the Technology strategy within IJYI, John is an evangelist for the adoption of DevOps, specifically for how best to apply the tools at hand to improve delivery. Having worked in software development for the past 20 years, he brings a wealth of experience, technical expertise and a passion both for technology and for seeing the next generation of developers flourish.
To further these goals he has recently undertaken PhD research at the University of Suffolk in the field of Computer Science and Informatics, specifically looking at the future of application development and testing within the enterprise.