When Complex Architecture Eats Up Your Testing Time
“Why are there so many defects in production if we’ve already tested the system?”
“We’re delayed again because of testing, and we really can’t push the go-live any further.”
“On paper we have a testing process, but in practice it just doesn’t work…”
Do these sound familiar?
In an average enterprise IT environment, architecture today is no longer about “a monolithic ERP and one or two interfaces”. At most large companies, the reality is microservices, API-first architectures, hybrid cloud (on-prem + AWS/Azure/GCP), dozens of integrations (ERP, CRM, billing, logistics, identity, mobile app, web portal) and 3–8 vendors working in parallel.
This doesn’t just make the system larger – it changes it qualitatively as well. Complexity grows in a network-like way: every new interface multiplies the number of potential failure points.
What’s the problem with this?
Fundamentally nothing – these modern architectures bring many advantages: fast business responsiveness, flexibility and vendor independence, scalability and more room for business experimentation. The problem starts when the new system is being introduced or upgraded, but the company’s testing practice – if such a thing exists at all – is still optimised for the “old world”. From this, the familiar phenomena follow almost automatically: integration defects in production, regression issues after every release, “if there’s time left” testing and chronic uncertainty around go-live decisions.
At TestIT we often see exactly these kinds of bottlenecks significantly impacting the success of IT projects. That’s why we’ll walk through in more detail why enterprise IT projects typically slip, fail and become more expensive due to hidden weaknesses in the testing process – and in which direction it makes sense to move if you want to change this, or at least avoid the most common traps.
The Reasons That Contribute to IT Project Delays
First Reason: Many Applications, Many Interfaces, Little Testing Time
In enterprise environments it’s a very typical problem that several dozen business applications communicate with each other, with potentially hundreds of interfaces (API, batch, message queue) in operation, while mixed teams (in-house, nearshore, offshore, vendors) develop in parallel. At the same time, release cycles are getting shorter: quarterly or half-yearly releases are often replaced by monthly or even bi-weekly ones.
According to recent editions of the World Quality Report (Capgemini–Sogeti–Micro Focus), 60–70% of organisations feel that system complexity is growing faster than QA capacity and competence. This shows clearly in project scheduling: scope increases, integrations multiply, development delays have to be “swallowed” somehow, but the go-live date remains fixed because there is no business room to move it.
And what draws the short straw in these situations? The time allocated for testing. The planned six weeks of testing become three, often still on paper “with the same scope”. Out of those three weeks, one week is lost to integration firefighting and environment problems, one week to UAT, and in the best case one week remains for regression. This is obviously insufficient in an architecture with 100+ interfaces. No wonder Gartner and Forrester analyses show that in enterprise environments 60–80% of defects are integration and regression-related: the issues are not primarily in the individual modules themselves, but in how they fit together.
Second Reason: Expecting Knowledge from the Process Instead of from People
Knowledge should live in the testing process, not only in the heads of a few key people – the process should be built so that it carries this knowledge. As long as that knowledge is not properly structured and documented, testing will be very difficult. ISTQB and ISO/IEC/IEEE 29119 both emphasise that testing works in the long run only if the process is repeatable, not if it is used as a form of firefighting.
In practice, however, the following picture is very common:
1. 3–5 key people truly understand the system and the business processes.
2. A significant part of the test cases and experience exists “in their heads” or in scattered Excel sheets and old Confluence pages.
3. Documentation of end-to-end processes is incomplete or outdated.
“Judit knows how to test this end-to-end, she’s been doing it for the last nine years” – in practice, Judit may well have been sufficient so far. But for testing a complex system she will no longer be enough on her own, and it would be very risky to entrust this task to her alone. Not to mention the hidden dangers of the attitude: “We don’t have time to document, the release has to go out first.”
What’s the problem with this? That in such a setup:
- you simply can’t say precisely what has been tested and what hasn’t,
- you can’t set meaningful priorities,
- you don’t know what can safely be left out if time runs short,
- and you can’t quickly scale QA capacity, because the knowledge isn’t transferable at a systematic, process level.
Third Reason: Lack of In-house Testing Competence and Capacity
Unfortunately, QA is still treated in many places as a kind of “last-minute check” rather than as strategic quality management, and the testing perspective often takes a back seat in strategic IT decisions. In other words:
- there is no unified, enterprise-level testing strategy and methodology, only small-scale, project- and vendor-level QA;
- most QA work is carried out on the vendor side, with no strong test management role on the client side;
- business-side UAT maturity is low: often they just glance at the UI and consider the job done.
And when something goes wrong, defects “bounce around” between the parties, and there is no one to hold QA together as a whole.
Fourth Reason: Continuous Release Pressure, Low Time-to-Market
IT is now a core business component in every industry. For banks it’s the mobile app and online banking experience, for telcos the self-service portal, for manufacturers the digital supply chain and partner portals, and in B2B the API ecosystem. Management is right to demand fast releases and fast ROI – but we’re often reluctant to price in the time required for quality.
Publications in IEEE Software and several industry studies consistently show that (see the worked example after this list):
- preventing a critical defect in an early testing phase costs “1 unit”,
- the same defect in the system or UAT phase costs “10–50 units”,
- if it’s discovered in production, it costs “100+ units” (downtime, SLA penalties, reputational damage, loss of customers).
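To make these multipliers tangible, below is a minimal sketch of the cost model in Python. The defect counts and the per-unit cost are hypothetical, illustrative figures, not benchmarks.

```python
# Illustrative defect cost model based on the 1x / 10-50x / 100x multipliers above.
# All numbers below (defect counts, unit cost) are hypothetical assumptions.

UNIT_COST = 500  # assumed cost of fixing one defect found in an early test phase (EUR)

# Hypothetical distribution of where 100 defects of a release are caught
defects_by_phase = {
    "early testing (unit/integration)": (70, 1),   # (count, cost multiplier)
    "system test / UAT": (25, 30),                 # mid-range of the 10-50x band
    "production": (5, 100),                        # 100+ units: downtime, SLA penalties
}

total = 0
for phase, (count, multiplier) in defects_by_phase.items():
    cost = count * multiplier * UNIT_COST
    total += cost
    print(f"{phase}: {count} defects -> {cost:,} EUR")

print(f"Total quality cost: {total:,} EUR")
# Even with only 5% of defects escaping to production, the late phases dominate
# the total: shifting a few defects earlier cuts the bill sharply.
```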
Despite this, in the final weeks you often hear: “Just this once let’s skip the full regression, it’s only a small change anyway.” And these “small” changes are exactly what lead to serious production defects that render entire processes unusable.
So Where Exactly Does the IT Project Process Actually Break Down?
From the above it’s clear what’s missing – let’s list these once more:
✘ No Testing Strategy and No Unified, Enterprise-level Testing Methodology
Most organisations have some kind of project-level test plan, UAT checklist or perhaps a regression Excel sheet. What’s missing is a unified, enterprise-level testing strategy that clearly defines:
- what “done” means for different system and release types,
- which test levels are mandatory (unit, integration, system, E2E, UAT, non-functional),
- based on which metrics (coverage, defect escape rate, defect density) a release is considered acceptable (see the calculation sketch after this list),
- how internal QA and vendor QA work together and who is responsible for what.
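As an illustration of metric-based release criteria, here is a minimal sketch in Python. The threshold values are assumptions made for the example, not recommendations from any standard.

```python
# Minimal release-gate sketch based on the metrics listed above.
# Thresholds are illustrative assumptions; a real strategy would define them per release type.

def defect_escape_rate(production_defects: int, total_defects: int) -> float:
    """Share of all defects that were only found in production."""
    return production_defects / total_defects if total_defects else 0.0

def defect_density(defects: int, size_kloc: float) -> float:
    """Defects per 1,000 lines of code (or per feature, if preferred)."""
    return defects / size_kloc

def release_acceptable(coverage: float, escape_rate: float, density: float) -> bool:
    # Hypothetical gate: >=80% coverage, <=5% escape rate, <=0.5 defects/KLOC
    return coverage >= 0.80 and escape_rate <= 0.05 and density <= 0.5

rate = defect_escape_rate(production_defects=4, total_defects=120)
print(release_acceptable(coverage=0.85, escape_rate=rate, density=0.3))  # True
```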
If there is no central strategy for this, every project creates its own rulebook, with differing quality levels and expectations. In two consecutive projects, the meaning of “we’ve tested it sufficiently” can be completely different, and management makes every go-live decision “in semi-darkness”.
✘ No End-to-end Designed Regression Test Set
Regression testing typically swings between two extremes: either “we’d like to retest everything” (which is impossible in time), or “we’ll only check the directly affected module” (which is an illusion in an integrated ecosystem).
The reality should be a prioritised, risk-based E2E regression set (see the prioritisation sketch after this list) that:
- covers the business processes most critical to operations, revenue and reputation (order-to-cash, billing, partner management, core banking flows, etc.),
- identifies cross-cutting functionalities (for example authorisation, pricing, invoicing) that span multiple systems,
- includes key non-functional tests (performance, security).
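One common way to build such a set is to assign each business process a simple risk score (business impact times failure likelihood) and spend the available regression time from the top down. The sketch below is a minimal Python illustration; the process names and scores are made up.

```python
# Risk-based prioritisation sketch: score = business impact x failure likelihood.
# Process names and scores are made-up examples, not a reference catalogue.

processes = [
    # (process, impact 1-5, likelihood 1-5)
    ("order-to-cash", 5, 4),
    ("billing run", 5, 3),
    ("partner onboarding", 3, 3),
    ("internal reporting", 2, 2),
]

# Sort by risk score, highest first: these get regression coverage first.
ranked = sorted(processes, key=lambda p: p[1] * p[2], reverse=True)

budget = 2  # hypothetical: time for only the top 2 E2E flows this release
for name, impact, likelihood in ranked:
    selected = "RUN" if budget > 0 else "defer"
    if selected == "RUN":
        budget -= 1
    print(f"{name}: risk={impact * likelihood} -> {selected}")
```

The point is not the arithmetic but that the “what do we skip?” decision becomes explicit and repeatable instead of a deadline-driven bargain.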
ISTQB materials and the World Quality Report both point in the same direction: most organisations do not have a formalised, maintained regression catalogue. As long as this is missing, every release is a kind of “Russian roulette”. Testing is half-improvised, and after every major incident the question “Why didn’t we catch this?” inevitably resurfaces.
✘ No Real Place for Testing Within the Project
Projects are time-boxed and scope rarely shrinks. If testing is not firmly embedded into the project management framework (through gates and quality criteria), test time becomes the soft-scope element – this is what gets cut first. Testing happens in an ad hoc way, if there is time left.
A typical run of events looks like this:
- the test environment is ready late, so integration testing is delayed,
- performance testing is skipped – “we don’t have room for it now” or “this usually doesn’t cause issues”,
- business-side UAT is shortened or turns into a production pilot.
In such cases, decisions about what to skip are not based on structured risk analysis, but on quick, political, deadline-driven bargaining. The outcome is unexpected defects in production, firefighting after the fact, and all of this capped by a loss of trust towards IT.

So What Is Needed? End-to-end Test Management and QA Governance
In an enterprise environment, test management is a profession in its own right.
It’s not about “getting a few testers”, but about planning, directing and measuring quality at a system level, within a well-defined framework.
Among other things, this means:
- an end-to-end test strategy is created
- QA governance is in place
- quality becomes measurable
| End-to-end test strategy | QA governance | Measurable quality |
| --- | --- | --- |
| starts from business goals and risks | a dedicated role (test manager / QA lead) owns the QA rules | for example, defect escape rate (how many defects reach production) |
| fits the architecture and the delivery model (waterfall, agile, SAFe, DevOps) | responsibilities between internal teams and vendors are clearly defined | regression coverage |
| defines the mandatory test levels, quality gates and reporting requirements | quality is tracked based on standardised metrics | defect rate of business-critical processes |
|  |  | SLA and OLA performance from a QA perspective |
✔ Test Automation Where It Brings Real Value
When planning test automation, the primary questions are what is worth automating, at what level (unit, API, UI, E2E), and for what purpose (integration, full regression, smoke, non-functional).
Test automation is particularly effective in the following cases:
- at API level in microservice- and integration-heavy environments, as it is less brittle than purely UI-based automation (a minimal example follows this list),
- automated regression for critical business processes, for example accounting or core banking base flows,
- automated test execution integrated into CI/CD, with quality gates.
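To make the API-level case concrete, here is a minimal pytest sketch. The base URL, endpoints and field names are hypothetical placeholders, not any real system’s API.

```python
# Minimal API-level regression test sketch (pytest + requests).
# BASE_URL, the /orders endpoints and all field names are hypothetical placeholders.
import requests

BASE_URL = "https://test-env.example.com/api/v1"

def test_create_order_is_priced_and_billable():
    # Create an order through the public API, the way an integrated system would.
    payload = {"customerId": "C-1001", "items": [{"sku": "SKU-42", "qty": 2}]}
    resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)
    assert resp.status_code == 201

    order = resp.json()
    # Cross-cutting checks: pricing and invoicing must be consistent across services.
    assert order["totalPrice"] > 0
    assert order["invoiceStatus"] == "PENDING"

def test_order_appears_in_billing_system():
    # E2E consistency: the billing service must see the same order.
    resp = requests.get(
        f"{BASE_URL}/billing/orders", params={"customerId": "C-1001"}, timeout=10
    )
    assert resp.status_code == 200
    assert any(o["customerId"] == "C-1001" for o in resp.json())
```

Tests like these assert on business outcomes across service boundaries, which is exactly where UI scripts tend to be slowest and most brittle.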
Most test automation efforts drown in the maintenance of UI scripts, and only a small subset of organisations manage to leverage automation at a strategic, E2E level. The way out is to let automation start from test management, aligned with business priorities, and based on realistic ROI calculations. It’s not the technology but the strategy that decides what pays off.
✔ AI-assisted Testing – an Emerging but Already Tangible Trend
AI/ML-based testing support can show up in many forms: predictive defect detection, automatic test case generation, test scripts that adapt to UI changes, log analysis and anomaly detection. According to Gartner and Forrester forecasts, a significant part of QA tools will gain such capabilities over the next 3–5 years.
This can help maintain and optimise regression sets faster, identify untested areas and reduce manual, repetitive QA work. But it is important to understand that this does not replace the foundations. If there is no testing strategy, no E2E regression catalogue, no clear QA governance, AI will only support “smart chaos”. Real value appears where a stable QA process already exists, and AI accelerates it instead of trying to substitute it.
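As one concrete illustration of the log-analysis use case, here is a minimal anomaly-detection sketch using scikit-learn’s IsolationForest. The features and values are made up; real AI-assisted QA tools are considerably more sophisticated.

```python
# Minimal log anomaly detection sketch with scikit-learn's IsolationForest.
# Feature values are made-up; a real pipeline would extract them from actual logs.
from sklearn.ensemble import IsolationForest

# Per-minute features derived from logs: [error_count, avg_latency_ms, timeouts]
baseline = [
    [2, 120, 0], [3, 130, 0], [1, 110, 0], [2, 125, 0],
    [3, 140, 1], [2, 115, 0], [1, 118, 0], [2, 122, 0],
]

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(baseline)

# A release-night window with a spike is flagged as -1 (anomaly) by the model.
new_windows = [[2, 121, 0], [45, 900, 12]]
print(model.predict(new_windows))  # e.g. [ 1 -1 ]
```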
If You’re Unsure About the Effectiveness of Your Own Testing Process, Walk Through This Checklist…
Those who have already lived through a few painful production incidents or heavily delayed releases may start wondering how well their current testing process really fits their own enterprise environment.
From the inside, it is often difficult to give an honest, objective answer. This is why it’s worth bringing in an external, independent QA perspective that doesn’t look at things from a developer or (only) a vendor angle, but from a corporate and business risk point of view, to assess:
- Is there a unified test strategy, test methodology and QA governance?
- Does a prioritised E2E regression set exist for the most critical processes?
- How effective is current test automation, and where would it make sense to extend it?
- How well does the QA process fit agile/DevOps/hybrid ways of working?
- How are integration and non-functional risks handled?
How Can TestIT Help?
At TestIT, with the above approach we provide Test Management and QA consulting services that:
- assess current QA maturity,
- recommend development directions based on industry standards (ISTQB, ISO/IEC/IEEE 29119) and trends (World Quality Report, Gartner, Forrester),
- help establish a testing methodology or fine-tune the end-to-end test management framework,
- and support the creation of a realistic, business-wise defensible test automation strategy.
All this with many years of solid enterprise experience behind us: we have already assessed the systems of numerous market-leading companies and successfully carried out software testing and test management tasks for their complex IT landscapes.
If you have questions, get in touch with us – even a short, focused QA audit can make many blind spots visible.
FAQ
1. For What Size of Organisation Is It Worth Establishing a Separate Test Management / QA Governance Function?
It is definitely justified where multiple critical business systems (ERP, CRM, billing, core banking, logistics etc.) run in parallel, several interdependent projects and releases are running each year, and multiple vendors are working at the same time. In such an environment it’s worth giving QA its own “legs”: a dedicated test manager, QA lead and a clear governance framework.
2. How Should We Start If Until Now We Have Mostly Done Manual, Ad Hoc Testing?
You don’t have to reform everything at once. Steps that work well in practice:
- a quick Testing Survey / QA assessment (review of processes, tools, roles),
- selecting one or two critical business processes and building an E2E regression catalogue for them,
- pilot automation for the most stable and most important tests,
- gradually extending the approach to the entire enterprise system portfolio based on the experience gained.
3. Do We Really Need an External QA Partner If We Have Our Own IT Team?
Not for everything, but in certain situations it can have a very good return. An external partner:
- brings an objective view of structural problems that are hard to see from the inside,
- provides specialised knowledge (test management, non-functional testing, automation strategy, AI-assisted testing),
- shortens the learning curve by transferring proven methodologies and best practices.
The goal is not for the external partner to take over QA, but to strengthen and elevate internal quality assurance.


