The 400-App Portfolio: What SaaS Rationalization Actually Looks Like

The Spreadsheet That Started It

The first artifact of the engagement was a spreadsheet with 412 rows. The CIO had asked his team for a list of every SaaS application the company paid for. It took them six weeks to produce, which was the first finding of the project, though nobody wrote it down at the time. The second finding was that the spreadsheet was wrong. Within a month of receiving it, we had identified another 67 applications that nobody in IT knew existed, most of them procured with personal credit cards and expensed through the usual channels.

The final count, once the dust settled, was 481 distinct SaaS vendors across a company of roughly 3,400 employees. That worked out to one SaaS subscription for every seven people. The annual spend across those vendors was $14.2M, which sounds like a lot until you realize that the company had no idea what more than a third of it was for.

This is the case study. Eighteen months, a publicly traded mid-market company in a regulated industry (anonymized for obvious reasons), a SaaS rationalization project that started as a cost-cutting exercise and ended as something considerably more ambitious and considerably more humbling. What follows is what actually happened, not what the slides said happened.

The Turn

SaaS rationalization gets pitched in boardrooms as a cost play. Twenty percent savings. Consolidated vendors. Improved compliance posture. These pitches are not wrong, exactly, but they miss what the work actually is. SaaS rationalization is organizational archaeology. You are not cutting applications. You are excavating the accumulated sediment of every unsolved problem the business has ever had, every executive's pet initiative, every team that lost faith in IT and went around them, every acquisition that was never integrated, every compliance requirement that got solved by buying a tool instead of changing a process.

The applications are just the evidence. The real project is figuring out what the business was actually doing, as opposed to what the org chart said it was doing, and then deciding what it should do instead. That is a much harder project, and it is the reason most SaaS rationalization efforts quietly fail. They try to answer the cost question without answering the underlying organizational questions, and the tools grow back like weeds within eighteen months.

The Portfolio, Categorized

The first real work after the inventory was categorization. Not by vendor or by cost, those were table stakes, but by business function and by what I came to call "origin story," which was the reason the application existed in the first place. The 481 applications sorted into roughly six origin stories, and the distribution told us most of what we needed to know about where to look first.

The legitimate backbone (62 applications, $8.1M annual spend). The tools the company actually needed and was actually using. ERP, CRM, HRIS, the collaboration suite, the observability stack, the major line-of-business systems. This was the easy category. Not because it did not have waste, it had plenty, but because everyone agreed these tools belonged in the portfolio. The conversation about them was about optimizing contracts and consolidating overlap, not about whether to keep them.

Shadow IT with legitimate purpose (143 applications, $2.4M). The largest category by count. A team needed something, IT was too slow or said no, so the team bought it themselves. Sometimes the tool was excellent and solved a real problem. Sometimes it duplicated something IT already had and nobody knew about. Sometimes it was a former vendor's tool that a new hire brought with them and refused to give up. This category required the most nuanced handling, because aggressive consolidation here was the fastest way to lose the trust of the business units, and the business units were the customers of the entire exercise.

Abandoned projects (89 applications, $1.1M). Tools bought for initiatives that had been quietly shelved, for directions abandoned after a leadership transition, or for pilots that had never graduated to production. Every one of these carried an annual renewal for something that had no active users. Finding them was mostly a matter of looking at login data and asking a few questions. Killing them was easy. Explaining to the CFO that the company had been spending over a million dollars a year on software literally nobody was using was less easy.

Compliance theater (71 applications, $1.3M). Tools purchased to satisfy an audit finding, a regulatory requirement, or a board-level concern, and then never fully implemented. The license was active. The integration had never been completed. The feature that had justified the purchase had never been turned on. Every one of these represented an unresolved compliance obligation hiding behind a paid invoice, which is arguably worse than having no tool at all, because the company had convinced itself the problem was solved.

Personal preference proliferation (84 applications, $680K). Individual contributors and small teams who had bought the specific productivity tool, note-taking app, project management platform, or AI assistant they personally preferred. Each one was cheap. The aggregate was not. This category was politically radioactive, because every single subscription had a human defender who would explain, with real conviction, why their tool was different and essential.

Ghosts (32 applications, $640K). Subscriptions that were auto-renewing to vendors nobody at the company remembered buying from. In two cases the original champion had left the company years earlier and the subscription had been quietly renewing itself ever since. In one memorable case, the vendor had been acquired twice and the company was paying an invoice to a holding entity for a product that no longer existed in any recognizable form.
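
For readers who want to check the arithmetic, the taxonomy above reduces to a small table. The sketch below is purely illustrative; the category names and figures are the ones reported above, and the only thing the code does is confirm they reconcile to the 481 applications and roughly $14.2M in annual spend.

```python
from dataclasses import dataclass

@dataclass
class Category:
    name: str             # the "origin story"
    app_count: int        # distinct SaaS vendors in the category
    annual_spend: float   # USD per year

# Counts and spend are the figures reported above.
portfolio = [
    Category("Legitimate backbone",                62, 8_100_000),
    Category("Shadow IT with legitimate purpose", 143, 2_400_000),
    Category("Abandoned projects",                 89, 1_100_000),
    Category("Compliance theater",                 71, 1_300_000),
    Category("Personal preference proliferation",  84,   680_000),
    Category("Ghosts",                             32,   640_000),
]

total_apps = sum(c.app_count for c in portfolio)      # 481
total_spend = sum(c.annual_spend for c in portfolio)  # ~$14.2M
print(f"{total_apps} applications, ${total_spend / 1e6:.1f}M annual spend")
```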

What Got Cut

The headline number, eighteen months in, was that the portfolio went from 481 applications to 217. The annual spend went from $14.2M to $9.8M. The cuts broke down roughly as follows.

The ghost category died fastest and quietest. A single afternoon of cancellation emails eliminated $640K of annual spend and required no organizational negotiation whatsoever. The only complication was that two of the ghosts turned out to be integrated into active workflows that nobody had documented, which we discovered when things broke the following week. This became a recurring lesson.

The abandoned projects came next. Most of them went without resistance. A handful had quiet champions who emerged only once cancellation was imminent, and in three cases those champions made good arguments for reviving the original initiative, which meant the tool stayed but the scope of what it was for got rewritten. More than half of the abandoned projects, though, were abandoned because the underlying business need had also been abandoned, and cancellation was the only honest acknowledgment of that.

Compliance theater was the hardest of the easy categories. Every one of these tools had been bought to solve a real problem, and cancelling the tool did not make the problem go away. For most of them we either completed the original implementation (turning the tool from theater into actual compliance), replaced the tool with a capability already present in the backbone stack, or formally accepted the risk and documented it. The third option was the one nobody wanted to admit was possible, and it ended up being the right answer about a third of the time.

The shadow IT category was where the real work happened, and where the savings were more modest than the inventory had suggested. Of the 143 shadow applications, we consolidated or eliminated 78. The remaining 65 were absorbed into the official portfolio, because the right answer was not to kill them but to admit that they were real tools doing real work and bring them under governance. This was politically difficult because it required IT to publicly acknowledge that the business units had been right to go around them, and it required the business units to accept IT governance over tools they had considered their own. Both sides had to lose a little face for the program to work.

Personal preference proliferation was the category we consciously decided to handle last, and in hindsight we should have handled it never. More on that below.

What Survived (and Why)

The 217 applications that survived were not a cost-optimized portfolio. They were a negotiated portfolio. Every surviving application had a named owner, a documented business purpose, a renewal date on the central calendar, and an assigned budget line. More than half of them had been modified in some way during the project: renegotiated contracts, reduced license counts, consolidated across business units, or migrated onto enterprise pricing from departmental tiers.
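
A minimal sketch of what that record looked like in practice, assuming a simple Python data model; the field names and the example values here are hypothetical, but the required elements are the four listed above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AppRecord:
    """The minimum governance record each surviving application carried."""
    vendor: str
    business_purpose: str   # documented, not assumed
    owner: str              # a named person, not a team alias
    renewal_date: date      # feeds the central renewal calendar
    budget_line: str        # the cost center that actually pays for it

# Hypothetical example; the vendor, owner, and cost center are placeholders.
example = AppRecord(
    vendor="ExampleScheduler",
    business_purpose="Field engineering dispatch and scheduling",
    owner="jane.doe",
    renewal_date=date(2026, 3, 31),
    budget_line="OPS-4410",
)
```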

The applications that survived fell into three types.

The first type was the tools that were clearly essential and merely needed to be better managed. The ERP, the identity provider, the CRM. Nobody argued for cutting these. The work here was contract and license optimization, which produced about $1.8M of the total savings through the unglamorous mechanism of actually reading the contracts and actually auditing the license utilization.

The second type was the tools that survived a real fight. A field engineering team used a specialized scheduling application that had no close equivalent in the backbone stack. A marketing team had a content production platform that was genuinely load-bearing for a revenue-generating program. A finance team had built a decade of institutional process around a niche reconciliation tool. In each case, the rationalization playbook said "consolidate or eliminate," and in each case the answer was "no, this is actually the right tool, we are keeping it." These decisions required real judgment and were the cases where having a rationalization framework and knowing when to override it mattered most.

The third type was the tools that survived through sheer political gravity. An executive had championed the tool. A customer-facing team depended on it in ways that made change risky during a critical quarter. A recent acquisition was mid-integration and changing tools would have derailed the integration. These survivors were not the right answer on the merits. They were the right answer on the timing, and the project plan quietly noted them for revisiting in the next cycle.

What Would Be Done Differently

This is the section every case study fakes. Most retrospectives produce a list of minor tactical regrets that do not threaten the validity of the main narrative. That is not what happened here. Real hindsight is more uncomfortable than that, and if the case study is to be useful, the uncomfortable parts are the only parts that matter.

The personal preference category should have been left alone. Of all the categorical decisions made during the project, this is the one that produced the worst ratio of political cost to financial return. The aggregate spend was $680K. The aggregate savings after the campaign was $290K. The aggregate morale cost, measured in anonymous engagement survey comments and the departure of two senior contributors who cited the Notion-to-Confluence migration as a factor, was impossible to quantify and almost certainly larger than $290K. Individual contributors mostly do not care about the total SaaS bill. They care about whether their tools work. Taking away tools that work to save money that does not show up in any individual's budget is a trade that looks good on a spreadsheet and bad on every other dimension. In the next engagement I ran, the personal preference category was explicitly descoped from day one, and the outcome was better.

The inventory should have taken three weeks, not twelve. We spent the first quarter of the project doing inventory work that we later realized we could have accomplished in a fraction of the time by simply pulling expense report data, credit card transactions, and SSO logs in parallel from day one rather than sequentially. The thoroughness of the initial inventory became a form of procrastination. Every week we spent finding one more obscure application was a week we were not having the hard conversations about the applications we had already found. The lesson is that 90% inventory in three weeks beats 100% inventory in twelve, because the marginal applications you find in weeks four through twelve are almost never the ones that matter.
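
A minimal sketch of what the faster, parallel approach looks like, assuming the three sources can be exported as CSVs; the file names, column names, and the normalization step are hypothetical, and a real pipeline would need fuzzier vendor matching than this.

```python
import pandas as pd

# File names and column names are hypothetical; each source is whatever
# export your finance and identity systems actually produce.
expenses = pd.read_csv("expense_reports.csv")    # vendor, amount, date
cards = pd.read_csv("card_transactions.csv")     # merchant, amount, date
sso = pd.read_csv("sso_app_logins.csv")          # app_name, user, last_login

def normalize(name: str) -> str:
    # Crude vendor-name normalization so the three sources can be joined.
    return name.lower().strip().rstrip(".").removesuffix(" inc").strip()

candidates = pd.concat([
    expenses["vendor"].map(normalize),
    cards["merchant"].map(normalize),
    sso["app_name"].map(normalize),
]).drop_duplicates()

print(f"{len(candidates)} candidate SaaS vendors across the three sources")
```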

Compliance theater should have been the first category handled, not the third. We left it for later because it seemed complex. That was backward. Compliance theater applications are the ones most likely to have a regulator or an auditor knock on the door during the project, and the ones most likely to force an embarrassing emergency purchase if things go wrong. Handling them first gets the worst risk off the table and produces the most durable value, because the alternative to a compliance theater tool is usually an actual compliance process, and building that process takes time that cannot be compressed at the end of a project.

The savings number was the wrong headline. The CIO presented the project to the board as "$4.4M in annual savings," which was the number on the dashboard. The number that actually mattered, and that the CIO could not present because we had never bothered to measure it, was the reduction in data-exposure surface area. The company had gone from 481 places where employee and customer data could possibly live to 217. The number of vendors with access to production data had dropped by 62%. The number of unmanaged OAuth grants had gone from approximately 1,900 to under 400. These were the numbers the board would have cared about most, and they were not in the deck because we had optimized the project reporting around cost from the beginning. If I were starting over, cost would be one of four headline metrics, alongside data-exposure reduction, compliance posture improvement, and vendor concentration risk. The savings would still have been $4.4M, but the story would have been a different and more durable story.
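
As a back-of-the-envelope illustration using only the figures already reported above, the non-cost headlines fall out of the same inventory data as the savings number; nothing below is new data, just the arithmetic.

```python
# All figures are the ones reported in the case study above.
apps_before, apps_after = 481, 217
spend_before, spend_after = 14.2e6, 9.8e6
oauth_before, oauth_after = 1_900, 400   # unmanaged OAuth grants (approximate)

def pct_reduction(before: float, after: float) -> float:
    return 100 * (before - after) / before

print(f"Annual savings:        ${(spend_before - spend_after) / 1e6:.1f}M")
print(f"Application reduction: {pct_reduction(apps_before, apps_after):.0f}%")
print(f"OAuth grant reduction: {pct_reduction(oauth_before, oauth_after):.0f}%")
```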

Nothing was done to prevent the problem from recurring. Eighteen months later, the portfolio had started growing again. Not at the rate it had grown before, because the procurement process we put in place did slow things down, but growing nonetheless. The deeper work of changing how the organization decided to buy software had not been done. We had cut the weeds. We had not changed the soil. A SaaS rationalization project that does not conclude with a permanent shift in how the business procures, reviews, and retires software is a project that will need to be run again in three years, and the second run is always harder than the first because the organization has learned not to trust the process.

The Underlying Lesson

SaaS rationalization projects fail in two characteristic ways. The first failure mode is that they cut too aggressively and break the business. Everybody worries about this one, and because everybody worries about it, it happens less often than you would think. The second failure mode, the one that actually destroys most of these projects, is that they succeed on paper and fail in reality. The applications get cancelled, the spend goes down, the dashboard turns green, and then eighteen months later the portfolio has quietly regenerated and the CFO is asking the same questions for the second time.

The way to avoid the second failure mode is to understand that the portfolio is a symptom, not the disease. The disease is the set of organizational conditions that produced the portfolio in the first place, and those conditions are specific to each company. Slow procurement. Low trust between IT and business units. Absent vendor management capability. Compliance obligations solved by purchase instead of process. Acquisitions that were integrated financially but never operationally. Each of these is a generator of SaaS sprawl, and each of them will keep generating until somebody addresses it directly.

The case study the company eventually published, the one with the $4.4M savings number in the headline, was not a lie. It was a simplification. The real story was that the company had spent eighteen months discovering, through the evidence of its software portfolio, things about itself that it had not previously been willing to look at, and that the savings were the byproduct of finally looking. The savings were renewable. The looking was not. That was the part worth paying for, and it was the part that did not show up in the deck.

Case in a Sentence

A portfolio of 481 SaaS applications consolidated to 217 over eighteen months produced $4.4M in annual savings and a 62% reduction in the number of vendors with access to production data, but the most durable lesson was that a SaaS portfolio is organizational archaeology, and the real work is not cutting applications but addressing the organizational conditions that produced them.