STMicroelectronics — FOCUS as a financial-systems migration
Treat FOCUS migration like a financial-systems migration: parallel run, reconcile, then cut over. Compare FOCUS output to legacy provider amortised data until the numbers reconcile. Separate ingestion from enrichment and allocation, so reruns after forecast or allocation changes don’t require re-extracting all provider data.
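The parallel-run reconciliation above can be sketched in a few lines. This is a minimal illustration, not STMicroelectronics' actual tooling: `BillingPeriodStart`, `ServiceCategory`, and `EffectiveCost` are real FOCUS columns, but the legacy column names (`period`, `service`, `amortised_cost`) and the 1% tolerance are assumptions for the example.

```python
from collections import defaultdict

TOLERANCE = 0.01  # assumed: 1% relative difference allowed during the parallel run


def totals_by_key(rows, key_fields, amount_field):
    """Sum an amount column grouped by the given key fields."""
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[field] for field in key_fields)
        totals[key] += row[amount_field]
    return totals


def reconcile(focus_rows, legacy_rows):
    """Return (key, focus_total, legacy_total) tuples that diverge beyond TOLERANCE.

    Both groupings reduce to (billing period, service), so the keys are comparable.
    """
    focus = totals_by_key(
        focus_rows, ("BillingPeriodStart", "ServiceCategory"), "EffectiveCost"
    )
    legacy = totals_by_key(legacy_rows, ("period", "service"), "amortised_cost")
    mismatches = []
    for key, focus_total in focus.items():
        legacy_total = legacy.get(key, 0.0)
        base = max(abs(legacy_total), 1e-9)  # avoid division by zero
        if abs(focus_total - legacy_total) / base > TOLERANCE:
            mismatches.append((key, focus_total, legacy_total))
    return mismatches
```

Running this per billing period until `reconcile` returns an empty list gives a concrete, auditable cut-over criterion.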
Takeaways to copy
- A small, well-connected, automated FinOps team can produce strong outcomes.
- Run new workloads in pay-as-you-go for at least 3 months before committing.
- External allocation keys beat tag-only allocation when re-orgs are frequent.
- Forecast accuracy can become a cultural mechanism — recognise teams with accurate forecasts.
Cultural framing: “FinOps is not cost killing.” The goal is balancing business need, service level, and optimal cost — not minimising spend at all costs.
GitLab — custom pipeline with metric-based allocation
GitLab needed a custom pipeline because of its environment: multiple cloud providers, marketplaces, AI features, runners, and a desire to remain cloud-agnostic.
Architecture: provider data → FOCUS converter → Snowflake → dbt transformations → Tableau. Allocation built from authoritative systems — Prometheus, Thanos, GitLab product data, Elasticsearch, internal warehouse. Customer-type dimension (free / paid / internal) added during enrichment. Unit economics published at general availability, not retroactively.
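The core of metric-based allocation is simple: split a shared cost pool in proportion to a usage metric pulled from an authoritative system. A minimal sketch, assuming (hypothetically) CI-runner minutes per customer type as the metric — GitLab's actual metrics and figures are not public in this summary:

```python
def allocate_shared_cost(total_cost, usage_by_consumer):
    """Split a shared cost pool in proportion to each consumer's measured usage.

    usage_by_consumer maps a consumer (e.g. customer type) to a usage metric
    taken from an operational system, so the allocation is auditable.
    """
    total_usage = sum(usage_by_consumer.values())
    if total_usage == 0:
        raise ValueError("no usage recorded; cannot allocate this pool")
    return {
        consumer: total_cost * usage / total_usage
        for consumer, usage in usage_by_consumer.items()
    }


# Illustrative only: split a 9,000-unit shared-runner bill by CI minutes
shares = allocate_shared_cost(
    9000.0, {"free": 120_000, "paid": 150_000, "internal": 30_000}
)
# → {"free": 3600.0, "paid": 4500.0, "internal": 900.0}
```

Because the usage numbers come from a queryable system (Prometheus, product data) rather than tags, the same split can be recomputed and defended after a re-org.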
Takeaways to copy
- FOCUS is a specification, not a tool. You still need exports, converters, pipelines, warehouses, and BI.
- Cost allocation should be auditable and based on operational or product systems — not informal assumptions.
- Shared platform costs often require usage-based allocation, not just account- or label-based.
Zoom — treat adoption as a formal initiative
Zoom uses four cloud providers plus data centres and colocations. Pre-FOCUS: pulling reports from each provider, gathering vendor reports, combining in spreadsheets, manually normalising — major time sink.
Approach: dedicated FOCUS initiative tracked in Jira with quarterly OKRs. Audit existing FinOps practices first — find manual reporting, weak allocation, poor granularity, workflow gaps. Plan to extend FOCUS-style formatting to data centre, colocation, SaaS, and business-value metrics.
Targets they set
- Product/service/team allocation: ~95%
- Minimum infrastructure utilisation: 75%
- Chargeback accuracy: 85%
- Anomalies visible in a single pane of glass: >90%
UnitedHealth Group — FOCUS for hybrid comparison
UHG uses FOCUS to compare data centre and cloud spend on an apples-to-apples basis. The strongest theme: network cost models differ radically.
- Data-centre networking is pipe/capacity-based and heavily shared.
- Cloud networking is usage/data-transfer-based with major egress, ingress, replication, and managed-service charges.
Network cost is hidden across many service categories
Don’t look only for obvious networking line items. Review:
- VPCs/VNets, load balancers, firewalls, API gateways
- Storage bandwidth and database replication
- SaaS / provider egress and ingress
- Cross-zone and cross-region movement
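The review above can be partly automated by scanning FOCUS rows for network-flavoured charge descriptions regardless of service category. A heuristic sketch: `ChargeDescription` and `ServiceCategory` are real FOCUS columns, but the keyword list is an assumption you would tune against your own billing data.

```python
# Assumed heuristic: substrings that suggest a network charge hiding
# under Storage, Databases, or another non-networking category.
NETWORK_HINTS = (
    "egress",
    "ingress",
    "data transfer",
    "replication",
    "load balanc",   # matches "load balancer" and "load balancing"
    "gateway",
    "cross-region",
    "inter-zone",
)


def flag_network_costs(rows):
    """Collect rows whose charge description suggests network spend,
    even when ServiceCategory is not Networking."""
    flagged = []
    for row in rows:
        description = row.get("ChargeDescription", "").lower()
        if any(hint in description for hint in NETWORK_HINTS):
            flagged.append(row)
    return flagged
```

Summing the flagged rows alongside the explicit Networking category gives a truer network total for the data-centre comparison.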
Takeaways to copy
- Migration estimates are usually wrong. Treat them as directional. Monitor actuals from day one.
- Communicate early when actuals diverge. Anomalies caught in the first week of the month leave time to fix or reforecast before finance escalation.
- Watch for the double bubble — the temporary overlap cost when paying for both on-prem and cloud during migration.
- Standardise on FOCUS but preserve source details for ad-hoc analysis.
European Parliament — brokered cloud, public-sector FinOps
The European Parliament does not contract directly with major hyperscalers. The European Commission acts as cloud broker, managing contracts and adding brokerage fees. The Parliament receives downstream chargeback and needed more granular data than the Commission’s dashboard provided.
Architecture: extract data from the Commission’s cost-control DB → Athena query service → convert to FOCUS → S3 → ingest into the Parliament’s cost-management tool via REST API → segment and report by persona, account, and cost centre.
Practical constraints they hit
- Initial monthly file dump was ~800 MB; the tool only accepted smaller files.
- Partitioning helped, but file-count limits created new constraints.
- Category-based partitioning created risk of uneven file sizes and incomplete ingestion.
- Batches from different sources had different timestamps; imports had to align to specific timestamps for point-in-time consistency.
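The partitioning and silent-partial-ingestion problems above suggest splitting by size rather than by category, and emitting a manifest the importer can check. A minimal sketch; the 50 MB limit and newline-delimited JSON format are assumptions, not the Parliament's actual constraints:

```python
import json

MAX_BYTES = 50 * 1024 * 1024  # assumed per-file limit of the ingesting tool


def partition_rows(rows, max_bytes=MAX_BYTES):
    """Split serialised rows into size-bounded batches plus a manifest.

    Size-based splitting avoids the uneven files that category-based
    partitioning produces; the manifest lets the importer verify that
    every file and every row arrived before marking the batch complete.
    """
    batches, current, current_size = [], [], 0
    for row in rows:
        line = json.dumps(row)
        size = len(line.encode("utf-8")) + 1  # +1 for the newline
        if current and current_size + size > max_bytes:
            batches.append(current)
            current, current_size = [], 0
        # An oversized single row still lands in its own batch.
        current.append(line)
        current_size += size
    if current:
        batches.append(current)
    manifest = {
        "files": len(batches),
        "rows": sum(len(batch) for batch in batches),
    }
    return batches, manifest
```

On the import side, refusing to finalise unless the received file and row counts match the manifest turns "silently ingested only part of the dataset" into a loud, recoverable failure.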
Anti-patterns they warn against
- Building a fragile connector between two proprietary internal formats when a shared standard is available.
- Relying on web dashboards and manual processing when API automation is possible.
- Designing the data flow without validating ingestion size and file-count constraints.
- Partitioning data in a way that can silently ingest only part of the dataset.
- Giving internal stakeholders only aggregated institutional data when they need account, project, and cost-centre views.
Cross-cutting themes from all five
- Standards protect against bespoke integration debt. A custom connector solves today’s problem but creates tomorrow’s maintenance burden.
- Tool constraints are architecture constraints. File size, file count, and supported formats can shape the entire ingestion design.
- Brokered cloud needs broker data. Native cloud connectors alone don’t show total cost.
- Coexistence is the norm during transition. Plan for old and new datasets running in parallel.
- Alignment grows in importance with complexity. Governance and coordination matter more as more contributors and stakeholders are involved.
You’re ready — take the practice test.
50 multiple-choice questions covering every lesson. 75% pass threshold, instant scoring with rationale, and we’ll email you a copy of your results.
Open the Practice Test →