Introduction
Technology due diligence is partly a fact-finding exercise. You are building a picture of what the business has built, how it is run, and what it would cost to maintain and grow. But experienced reviewers will tell you that the most valuable part of the process is not the checklist. It is learning to read the signals.
Some red flags are obvious. Others are subtle. A few are invisible without asking the right people the right questions.
This article covers the patterns that consistently indicate elevated risk in technology assessments. Not all of them are deal-breakers on their own. But each one warrants a closer look, and clusters of them in the same business should sharpen your attention considerably.
1. The CTO Can't Articulate Their Own Technical Debt
Ask any technical leader to describe the three things they would fix if they had the budget and time. It is a simple question, and honest teams always have an answer.
When the response is defensive, vague, or suspiciously polished, that is a signal. It usually means one of three things: the debt has not been seriously thought about, the team has been coached to present well rather than honestly, or the person leading the technical function does not have a clear enough view of what they have built.
You want a CTO who can say "our payments integration is held together with string and we know it" rather than one who responds with architectural diagrams and a roadmap slide.
Technical debt exists in every mature codebase. What matters is whether the team understands it, has a plan for it, and has been managing it actively rather than deferring it indefinitely.
2. No Monitoring, No Alerting, No Observability
If a business cannot tell you how their system performs in production, they do not actually know whether it is working.
Basic observability means: centralised logs you can search, meaningful alerts that fire when things go wrong, and some form of application performance monitoring. None of this is exotic. It is table stakes for any software business that has customers.
The absence of observability is often a proxy for how operationally mature the team is. It suggests that things break and get fixed reactively rather than being caught early. It also makes post-acquisition integration significantly harder, because you are inheriting a black box.
Ask to see how they would diagnose a production incident right now. If the answer involves trawling through unstructured server logs or asking one specific person who holds all the knowledge, that tells you something.
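What "centralised, searchable logs" means in practice is simply that log events carry structure a tool can query, rather than free text a person has to grep. A minimal sketch using Python's standard library (the logger name and fields here are illustrative, not a prescription):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object, so a log
    aggregator can index and filter on individual fields."""

    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Attach any structured context passed via `extra=`.
        if hasattr(record, "context"):
            payload.update(record.context)
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A searchable event: later you can filter on order_id or latency_ms
# instead of trawling free-text server logs.
logger.info("charge completed",
            extra={"context": {"order_id": "A-1042", "latency_ms": 187}})
```

A team with even this level of structure can answer "show me every slow payment in the last hour" in seconds; a team without it cannot answer at all.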
3. Key Person Risk Is Severe and Unacknowledged
Every business has people who matter. That is normal. The red flag is when a disproportionate amount of operational knowledge lives in one or two individuals, and the business either has not noticed or does not consider it a problem.
Warning signs include:
- A single engineer who wrote most of the core systems and is the only person who can operate them
- A CTO who has not documented anything because they "don't need to, they know it all"
- Deployment or release processes that only one person runs
- One person managing all third-party relationships and credentials
The bus factor question is worth asking directly: if this person left tomorrow, what breaks, and how quickly could you recover?
Key person risk is not unusual in smaller businesses. What is concerning is when there is no awareness of it, no plan to address it, and no documentation that would reduce the dependency. That combination suggests the risk has been growing unmanaged for a long time.
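The bus factor can even be roughly quantified before the management meeting. One common heuristic, sketched below, treats commit authorship as a proxy for knowledge: the smallest number of people who together account for more than half of all commits. The proxy is crude (commits are not knowledge), so treat the output as a conversation starter, not a measurement.

```python
from collections import Counter


def bus_factor(commit_authors, threshold=0.5):
    """Smallest number of people who together account for more than
    `threshold` of all commits. Commit counts are only a proxy for
    knowledge, so use this to frame questions, not to score a team."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    covered = 0
    for i, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered / total > threshold:
            return i
    return len(counts)


# Author names per commit, e.g. from `git log --format=%an`.
authors = ["alice"] * 80 + ["bob"] * 15 + ["carol"] * 5
print(bus_factor(authors))  # → 1: one person owns most of the history
```

A result of 1 on the core repository is exactly the pattern described above: it does not prove a problem, but it tells you precisely where to direct the "what breaks if they leave" question.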
4. The Deployment Process Is Manual and Infrequent
How software gets from a developer's machine to production tells you a lot about how the team operates.
A mature team deploys frequently (at least weekly, often daily), with an automated pipeline, in a way that does not require heroics. They have staging environments. They have rollback plans. Deployments are largely unremarkable.
A team with serious operational problems tends to deploy infrequently, manually, and with a degree of anxiety that is visible when you ask about it. "We do a big release every couple of months" should prompt follow-up questions. So should "deployments usually take a day to coordinate."
Infrequent releases accumulate risk. Large batches of change going out together are harder to test, harder to roll back, and harder to attribute when something breaks. They also tend to indicate a weak engineering culture around collaboration and continuous improvement.
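One concrete marker of the mature end of this spectrum is an automated post-deploy gate: the release is watched briefly and rolled back mechanically if it degrades the system, which is what makes deployments unremarkable. A hedged sketch of the decision logic (the thresholds and parameter names are illustrative assumptions, not recommendations):

```python
def should_roll_back(baseline_error_rate, current_error_rate,
                     min_requests, current_requests, tolerance=2.0):
    """Post-deploy gate: roll back if the post-release error rate is
    more than `tolerance` times the pre-release baseline, once enough
    traffic has been seen to judge. Thresholds here are illustrative."""
    if current_requests < min_requests:
        return False  # not enough signal yet; keep watching
    if baseline_error_rate == 0:
        # Any sustained errors on a previously clean baseline.
        return current_error_rate > 0.01
    return current_error_rate > baseline_error_rate * tolerance


# Healthy deploy: error rate roughly unchanged.
print(should_roll_back(0.002, 0.003, min_requests=500, current_requests=2000))  # False
# Bad deploy: errors jumped 5x after release.
print(should_roll_back(0.002, 0.010, min_requests=500, current_requests=2000))  # True
```

A team that can describe something like this, in whatever form their tooling takes, deploys without anxiety. A team whose rollback plan is "restore last month's backup" does not.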
5. Security Has Been Treated as Someone Else's Problem
In smaller tech businesses, security is often deferred. There are always more urgent things to build. The result is usually a pattern of known issues that never quite made it to the top of the backlog.
Red flags to look for:
- No recent penetration test, or a pentest that was completed but findings were never prioritised
- Admin accounts and production access shared across the team without a clear access management process
- API keys and secrets embedded in code or stored in documents rather than a secrets manager
- No MFA on critical systems
- No documented response to previous security incidents
The presence of known vulnerabilities is not always fatal. What matters is whether the team knows about them, takes them seriously, and has a plan to address them. If the response to security questions is dismissive or defensive, that is a concern.
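The secrets-in-code problem above has a simple litmus test: can a credential be rotated without a code change and redeploy? A minimal sketch of the pattern you want to see, with injection from the environment as the floor (a dedicated secrets manager is the proper ceiling; the variable name is hypothetical):

```python
import os


def get_secret(name):
    """Fetch a secret from the process environment rather than from
    source code. In production this lookup would typically go to a
    dedicated secrets manager; an environment variable is the
    minimal step up from a string committed to the repository."""
    value = os.environ.get(name)
    if value is None:
        # Fail loudly: a missing secret should stop startup, not
        # silently fall back to a default baked into the code.
        raise RuntimeError(f"secret {name!r} is not configured")
    return value


# BAD:  api_key = "sk_live_..."                  # in git history forever
# GOOD: api_key = get_secret("PAYMENT_API_KEY")  # injected at deploy time
```

If the codebase fails this test, assume every credential in it has effectively been shared with everyone who has ever had repository access, including departed contractors.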
GDPR compliance is worth probing specifically in any business that processes personal data. The cost of a serious data protection failure post-acquisition falls squarely on the acquirer.
6. The Architecture Has Grown Without Design
There is a meaningful difference between an architecture that was designed and then evolved, and one that accumulated organically over several years without any deliberate thought.
The latter tends to present as:
- Multiple overlapping systems doing roughly the same thing, never rationalised
- Integration patterns that are inconsistent across the product (REST here, webhooks there, direct database access somewhere else)
- A data model that reflects the history of the product rather than its current shape
- No clear ownership of different parts of the system
This matters for two reasons. First, it increases the maintenance cost of the product. Changes in one area have unpredictable effects in others. Second, it makes integration after acquisition significantly harder, because you cannot simply connect systems to something that has no coherent structure.
Ask to be walked through the architecture at a system level. A team that understands what they have built can explain it clearly. A team that does not usually talks about individual features rather than how things fit together.
7. Contracts and Licences Are Not in Order
Technology risk is not limited to the code. It extends to the commercial arrangements that underpin it.
Check these areas carefully:
IP ownership. Has all work by contractors and agencies been properly assigned to the company? IP that sits with a former agency or freelancer is a material risk.
Open-source licence compliance. GPL and AGPL components create obligations that affect how the software can be distributed and sold. This is not always well understood by the businesses using them.
Software licences. Are all licences properly scoped? Enterprise licence agreements sometimes include per-user or per-CPU terms that the business has grown beyond without adjustment.
Vendor lock-in. What happens if the main cloud provider or infrastructure partner is no longer viable? What are the exit costs?
These issues rarely kill deals on their own, but they create remediation costs that should factor into valuation. An incomplete IP assignment, for example, may require legal work to resolve before the business can be sold on.
8. The Team Talks About Technical Problems in the Past Tense
When engineers describe challenges their business has overcome, that is often a good sign. When every technical risk is framed as something that is "all sorted now" without any substantive explanation of how it was sorted, that warrants scepticism.
Healthy engineering teams are usually candid about current imperfections. They know what is not working well. They have a list of things they would fix if they had more time. They are honest about the trade-offs they have made.
When a team presents as perfect, or when the management presentation aligns a little too neatly with the technical narrative, it is worth finding ways to probe beyond the prepared answers. Talking directly to engineers rather than only to the CTO often produces a more honest picture.
9. Cloud Spend Is Not Understood or Controlled
In a cloud-hosted product, infrastructure cost tends to grow with scale. That is expected. What is less acceptable is a team that does not understand their own bill or has no view on how costs will scale as the business grows.
Ask for a breakdown of current monthly cloud spend. Ask what it was twelve months ago. Ask what they expect it to be at 2x and 5x current scale.
Common problems include:
- Unoptimised database configurations running at many times the necessary cost
- Development and test environments running at full scale around the clock
- Reserved instance or committed use discounts never pursued despite consistent usage
- Untracked data transfer costs that inflate the bill unexpectedly
Cloud spend is frequently an area where meaningful cost reduction is achievable post-acquisition. But it is also an area where the projections built into a deal model can be materially wrong if the business has not managed it properly.
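The 2x and 5x questions above are worth pressing because naive linear projection overstates spend for some bills and understates it for others. A useful framing is to split the bill into a fixed component and a load-dependent one; the sketch below assumes a simple fixed-plus-linear model, which is a deliberate simplification, not a model of any provider's actual pricing:

```python
def projected_spend(fixed_monthly, variable_monthly, scale):
    """Project monthly cloud spend at a traffic multiple by splitting
    the bill into a fixed part (base environments, minimum instance
    counts) and a variable part that scales with load. Both the split
    and the linear scaling are simplifying assumptions."""
    return fixed_monthly + variable_monthly * scale


# Hypothetical bill: 8,000 fixed + 12,000 variable = 20,000/month today.
print(projected_spend(8_000, 12_000, 1))  # → 20000
print(projected_spend(8_000, 12_000, 2))  # → 32000 at 2x, not 40,000
print(projected_spend(8_000, 12_000, 5))  # → 68000 at 5x, not 100,000
```

A team that cannot tell you even roughly which parts of their bill are fixed and which are variable has no basis for the growth projections in the deal model.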
10. Nobody Can Find the Documentation
Documentation is easy to skip and hard to justify investing in. Most engineering teams end up with less than they should have.
The red flag is not the absence of documentation per se. It is the combination of weak documentation with other signs of low operational maturity.
If the system is complex and only one or two people understand it, and there is no documentation to reduce that dependency, the knowledge transfer risk is real. If runbooks do not exist for common operational tasks, every incident becomes a first-time problem. If new engineers take six months to become productive because there is nothing written down, that constrains your ability to scale the team.
Ask what a new engineer would read on day one. Ask where the architecture documentation lives. If the answer is "you would just ask the team," that is worth noting.
How to Respond to Red Flags
Finding red flags in a technology assessment does not necessarily mean walking away. It means understanding what the issues are, how much they cost to fix, and whether the deal economics still make sense once those costs are factored in.
A few principles:
Be specific about the remediation cost. A codebase with significant technical debt is not a reason to abandon a deal. It is a reason to quantify the work required, factor that into valuation, and confirm that the acquirer has the capability to execute it.
Think about the team, not just the technology. Technology problems can be fixed. Culture and process problems are harder. A team that has been building in the wrong direction for a long time is a more significant risk than a team with known but understood technical issues.
Consider what post-acquisition looks like. Some businesses are acquisitions where the technology is the asset. Others are acquisitions where the technology is an enabler. The tolerance for technical risk is different in each case.
When to Bring in External Review
Experienced deal teams develop good pattern recognition over time. But technology assessment requires both breadth (across architecture, security, operations, people, and commercial) and depth (the ability to go beyond the pitch and actually probe the codebase and team). Few internal teams carry both, which is where an independent specialist review earns its place.
An independent review also gives your investment committee a cleaner basis for decision-making. Where significant issues are found, a properly documented assessment provides the evidence needed to support valuation adjustments or post-closing commitments.
TechDD provides buy-side technology due diligence for PE and VC transactions. If you are running a process and want an independent view before close, get in touch.
FAQ
What are the biggest red flags in tech due diligence?
The most common red flags include a CTO who cannot articulate their technical debt, no monitoring or observability in production, severe key person risk with no mitigation plan, manual and infrequent deployment processes, and security treated as an afterthought. Clusters of these issues in the same business significantly elevate risk.
Should red flags in tech due diligence kill a deal?
Not necessarily. Finding red flags means understanding what the issues are, how much they cost to fix, and whether the deal economics still make sense once those costs are factored in. Most deals with significant issues still proceed, provided the issues are identified early enough to be managed through valuation or post-closing commitments.
How do you identify key person risk in tech due diligence?
Ask directly: if this person left tomorrow, what breaks and how quickly could you recover? Look for single engineers who wrote most of the core systems, deployment processes only one person runs, and the absence of documentation that would reduce dependency. The concern is not key person risk itself but the absence of awareness or mitigation.
What does poor operational maturity look like in tech due diligence?
Signs include the absence of centralised logging, no meaningful alerting, manual deployment processes, infrequent releases, and an inability to diagnose production incidents quickly. These are proxies for how the team operates day to day and are often predictive of broader engineering culture issues.
When should I bring in an independent technology due diligence review?
For complex or high-value targets, an independent tech DD engagement will surface issues not visible in management presentations or data room documents. It gives your investment committee a cleaner, more defensible view of technology risk and provides the evidence needed to support valuation adjustments or post-closing commitments.
