What I Learned at FinOps X Day Bengaluru 2026

Nishant Thorat
Founder

I landed in Bengaluru on a red-eye, straight from exhibiting at the India AI Impact Summit in Delhi. Five days surrounded by AI conversations — $200 billion in AI investment commitments, 89 countries endorsing the AI Declaration, five lakh attendees buzzing about what's next. My head was still swirling with AI economy jargon when I walked into WeWork Salarpuria Symbiosis for FinOps X Day Bengaluru on February 21st.
And here's the thing — in Delhi, everyone was talking about building AI. In Bengaluru, the conversation shifted to something nobody wants to talk about: who's paying for all of it?
Three presentations that day hit me differently. Not the usual "here's our dashboard" demos or "we saved 20%" victory laps. These were honest, uncomfortable, and at times they reinforced one another's arguments. The kind of talks that make you rethink assumptions you didn't know you had.
The FOCUS Promise vs. The FOCUS Reality: Sapna Gupta, Cargill
Think of FOCUS (FinOps Open Cost and Usage Specification) as the Esperanto of cloud billing. One standardized format that lets you compare billing data from AWS alongside Databricks alongside your SaaS vendors, all in the same spreadsheet. Beautiful idea on paper. FOCUS 1.2 landed in May 2025, version 1.3 dropped in December. The FinOps Foundation has been championing it hard. And I wanted to believe in it.
But here's where Sapna Gupta of Cargill gave me a reality check. Her talk confirmed something I've been suspecting for a while now.
What Cargill Actually Had to Do
When Sapna talked about FOCUS adoption at Cargill, she wasn't talking about flipping a switch. She was talking about her team manually converting billing data from multiple vendors into FOCUS format. Not because they wanted to — because most vendors simply didn't provide FOCUS-compatible exports.
The major CSPs? Sure, they've announced support, but most of them aren't aligned to the most recent FOCUS version, and conformance packs have revealed major gaps in their implementations. If even the CSPs have that much ground to cover, imagine the long tail of SaaS vendors, data platforms, and specialized tools. They aren't even close.
Sapna pushed vendors for FOCUS-format billing data. The response? Crickets. Or worse — vague promises with no timelines.
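To make the "heavy lifting" concrete, here is a minimal sketch of the kind of per-vendor normalization a team like Cargill's ends up building by hand. The FOCUS column names (ProviderName, ServiceName, BilledCost, BillingCurrency, ChargePeriodStart, ChargePeriodEnd) come from the FOCUS specification; the vendor field names and values are entirely hypothetical.

```python
from datetime import datetime, timedelta, timezone

def to_focus(vendor_row: dict) -> dict:
    """Map one row of a hypothetical vendor billing export onto FOCUS columns.

    In practice you write (and maintain) one of these mappers per vendor,
    because most vendors don't ship FOCUS-compatible exports themselves.
    """
    # Hypothetical vendor export: daily granularity, date as "YYYY-MM-DD".
    start = datetime.strptime(vendor_row["usage_day"], "%Y-%m-%d").replace(
        tzinfo=timezone.utc
    )
    return {
        "ProviderName": "ExampleVendor",          # hypothetical vendor name
        "ServiceName": vendor_row["product"],
        "BilledCost": float(vendor_row["charge_usd"]),
        "BillingCurrency": "USD",
        "ChargePeriodStart": start.isoformat(),
        "ChargePeriodEnd": (start + timedelta(days=1)).isoformat(),
    }
```

Trivial for one vendor, one schema. Now multiply by every vendor in your stack, every schema change they ship, and every FOCUS version bump, and the maintenance burden Sapna described becomes obvious.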
The Chicken-and-Egg Problem Nobody Wants to Admit
If you ask me, FOCUS is caught in a classic chicken-and-egg trap.
Practitioners want vendors to support FOCUS so they can normalize their billing data. Vendors look at the adoption numbers and think, "Why should I dedicate engineering resources to support a format that only a handful of customers are actively asking for?" Meanwhile, practitioners can't demonstrate the ROI of FOCUS because not enough vendors support it yet.
Yes, there are vendors who have technically "adopted" FOCUS. Go look at the adoption page — impressive list. But here's what that page won't tell you — there's a massive conformance gap. Many vendors aren't keeping pace with new FOCUS versions. And why would they? Let's be honest about the incentives here. If you're a SaaS vendor with a hundred priorities on your roadmap and a handful of customers asking for FOCUS, dedicating engineering bandwidth to maintain compliance across version updates is a losing argument in every product prioritization meeting.
I don't think FOCUS will die. But I do think it's at serious risk of becoming one of those well-intentioned industry standards that everybody nods along to at conferences and nobody actually implements. A theoretical exercise that slowly fades into irrelevance while we all pretend it's still on track.
Unless something changes. Unless the FinOps Foundation and the practitioner community stop politely requesting and start demanding — real, sustained, "we're making purchasing decisions based on this" kind of pressure.
The ROI math for vendors just doesn't add up on its own. And hoping they'll do the right thing out of goodwill? That's not a strategy. That's wishful thinking.
What Needs to Happen
The FinOps Foundation launching a conformance certification program in 2026 is a step. But certification without consequences is just a badge on a website. What would actually move the needle? Procurement teams asking vendors, "Do you support FOCUS 1.3?" with the same seriousness they ask, "Are you SOC 2 compliant?" Make it a line item in RFPs. Make it a deal-breaker. That's the only language vendors understand.
Until that happens, teams like Sapna's at Cargill will keep doing the heavy lifting themselves — building and maintaining custom ETL pipelines to bridge the gap that vendors should be filling. And that, to me, is a failure of the ecosystem, not a success story of practitioner resilience.
Shifting FinOps Left: Trust Your People First, Then Build Frameworks — Swapnil Dubey, SLB
After Sapna's talk left me questioning an entire specification's future, Swapnil Dubey walked up and reminded me why I got into FinOps in the first place.
I had a conversation with Swapnil before his talk. And honestly, that 20-minute chat shaped my thinking more than most conference keynotes I've sat through this year.
The Real Talk: FinOps Is a Culture Problem, Not a Tools Problem
I've said this so many times it probably needs its own bumper sticker: FinOps is more about people and culture than tools and techniques. Swapnil's talk reinforced that belief in the best possible way.
His approach at SLB is built on two foundations that sound simple but are incredibly hard to implement: empower your teams, and believe that no one has bad intentions.
Think about that for a second. In most organizations, FinOps starts with a compliance mindset. "Engineers are overspending, let's put guardrails." "Teams aren't tagging resources, let's enforce policies." The underlying assumption? People are the problem.
Swapnil's "Design for Cost" framework does the exact opposite. It starts with the assumption that engineers want to do the right thing — they just need the right context and the right structure to make cost-intelligent decisions.
The Four-Pillar Framework
SLB's approach breaks down into four pillars. What I like is that each one deliberately puts responsibility where it belongs, not where it's convenient:
Cost Intelligent Teams — Architects and tech leads define the defaults: budgets, right SKU and right size guidelines, performance plans, trade-off analysis between cost vs. security vs. reliability, and maintenance windows. The emphasis here is upskilling, not policing. Most FinOps teams I've worked with skip this entirely — they jump to dashboards and alerts without ever investing in the people who make the actual spending decisions.
Spend Smart Design — This is where it gets interesting. Product owners and architects collaborate during the design phase — not after deployment, not during the quarterly cost review, but at the architecture stage. They do "educated guessing" of initial costs as part of architecture decisions. Product owners get cost visibility early, not as a surprise invoice three months later.
Spend Smart Execution — Engineering teams implement with clear technical guidelines. Three principles that Swapnil emphasized: don't assume, deliver with a performance plan, and stick to defaults when you're uncertain. The guidelines cover everything from infrastructure SKU identification to multi-profile deployments to SRE workflows.
Cost-Aware Maintenance — Periodic revisiting of decisions, SKU alignment, right-sizing, RI and CUD acquisition, and reacting to anomalies and KPI breaches. This is where most organizations *start* their FinOps journey — reactively. Swapnil's point is that if you've done the first three pillars right, maintenance becomes a tune-up, not an emergency.
The RACI Matrix Worth Studying
The RACI matrix Swapnil presented is worth printing out and sticking on your team's wall.

Notice where the central FinOps team sits: they're consulted, not responsible. Architects own Cost Intelligent Teams. Managers are accountable for Design and Execution. Engineering teams handle the actual execution and maintenance work. The FinOps team advises and enables — they don't become the bottleneck.
In too many organizations, the FinOps team becomes the cost police — the people who send the scary emails and review every spend request. That doesn't scale. What scales is embedding cost awareness into the engineering culture itself.
But Let's Not Romanticize This
I'd be lying if I said every organization can pull this off.
SLB's approach works because SLB earned the right to trust its teams. They have the organizational maturity, the leadership buy-in, and the engineering discipline to make decentralized cost ownership work. Not everyone does. I've walked into organizations where engineers couldn't care less about costs — not because they're bad people, but because nobody ever gave them a reason to care. The culture rewards shipping fast. Period. Cost is someone else's problem.
You can't copy-paste SLB's framework into that environment and expect it to work. Each organization has its own dynamics, its own politics, its own history. And that context — not the framework, not the tool — determines how FinOps actually plays out.
So no, I don't think Swapnil's approach is universally applicable. But I do think it shows what's possible when you bet on your people instead of betting against them. And most organizations haven't even tried.
30% Reduction. No New Tools.
The numbers: 30% cloud cost reduction from January 2024 to January 2026. Non-production cloud footprint held completely flat through 2025. Not by buying another FinOps platform. Not by hiring a bigger FinOps team. By trusting engineers to make better decisions and giving them the structure to do it.
If that doesn't make you question your current approach, I don't know what will.
Databricks Cost Control at AT&T Scale: Sandeep Bodla, AT&T
Five days in Delhi, every conversation pointed to the same thing: the AI economy runs on data. High-bandwidth memory, faster AI chips, vast storage — that's the infrastructure layer. But above that sits the layer nobody's budgeting for properly — data lakes, data pipelines, data crunching at a scale that would have seemed absurd five years ago.
Databricks, Snowflake, Microsoft Fabric — these Data Cloud Platforms are becoming the backbone of the AI economy. And I'm convinced they'll be one of the largest cost pillars most organizations haven't fully reckoned with yet. We're all so busy talking about GPU costs and model training that we're ignoring the data layer underneath.
Sandeep Bodla's talk landed right in this blind spot.
The Double Billing Problem Nobody Talks About
Data Cloud Platforms like Databricks add a double billing layer. Think of it like renting a commercial kitchen inside a hotel. You pay rent to the hotel (that's your CSP — Azure, in AT&T's case). And you pay for the kitchen equipment and staff (that's Databricks — DBUs, compute, storage). Two bills. Two pricing models. Two sets of levers to optimize.
This wrecks the entire TCO conversation. Your Databricks bill tells you one story. Your Azure bill tells another. And nobody in the organization is connecting the two. A spike in DBU consumption maps to specific VM types, storage tiers, and networking costs underneath — but good luck finding someone who tracks that correlation.
I'm framing this around Databricks because that's what Sandeep presented, but let me be clear — the same problem exists with Snowflake, with Fabric, with every data platform hosted on a CSP. And I think this double billing layer will be the defining FinOps challenge of the next three years. Not reserved instances. Not right-sizing. This.
AT&T's Scale (and Why It Matters)
AT&T's Databricks journey started in late 2019. They migrated from on-premises legacy Hadoop — 50,000+ jobs per day, 22 million CPU hours per month, 1,300+ nodes, roughly 13 petabytes of storage — to Azure Databricks in just 2.5 years.
Today, they ingest hundreds of terabytes of data per day. They manage over 70 petabytes. Thousands of users. And their platform team tracked 340+ Databricks platform enhancements in 2025 alone, not counting runtime changes.
At this scale, a 1% improvement isn't a rounding error — it's millions of dollars. So when Sandeep talks about optimization, he's not theorizing. He's talking about real money on real infrastructure that breaks if you get it wrong.
Shift Left to Save Right
Sandeep's framework follows a four-phase lifecycle: Architect, Build, Watch, Sustain. But the organizing philosophy — "Shift Left to Save Right" — is what makes it stick.
Sound familiar? Swapnil at SLB was saying the same thing — push cost decisions to the design phase, not the invoice review. Two completely different companies, two different scales, arriving at the same conclusion independently. The further left you push cost decisions, the less you spend on reactive optimization later. It's the same principle as catching bugs in code review instead of production. Except here, the "bugs" are $100,000 cluster configurations that nobody questioned.
Policy Guardrails That Actually Work
AT&T's six policy guardrails form the backbone of their Architect phase:

Cost Attribution — Every resource gets tagged. No exceptions. This sounds obvious, but at AT&T's scale, "untagged" isn't just an annoyance — it's millions of dollars that nobody can explain in a quarterly review. Showback and chargeback are mathematically impossible without it.
Defaults Override — Databricks ships with default configurations that are generous (read: expensive). And I suspect this is by design — generous defaults mean higher consumption, which means higher revenue for the platform. A simple example: Databricks lets you spin up All Purpose clusters for any workload by default. These clusters stay running and cost money even when idle. AT&T's override? Production jobs must use ephemeral clusters that spin down the moment the job completes. One config change, massive cost difference. Every organization running Databricks at scale should be doing this. Most aren't.
RI-Conscious Instance Types — Teams can only use VM types that align with their Reserved Instance plan. If you bought reservations for D-series VMs, you can't spin up E-series on a whim. Again, echoes of Swapnil's SKU alignment philosophy at SLB — different platform, same discipline.
Cluster Type Restrictions — No more cluster_type = "all". Interactive clusters are expensive, and if you don't explicitly restrict them, every data scientist will default to them because they're convenient. Convenience is the enemy of cost discipline.
Workload Type Enforcement — Jobs don't get attached to all-purpose clusters. All-purpose is for ad hoc exploration. Production jobs get ephemeral clusters. I've seen organizations where 80% of production workloads run on all-purpose clusters simply because that's how the first engineer set it up and nobody ever questioned it.
Spark Version Control — No running workloads on outdated or soon-to-be-deprecated Spark versions. This is the one people sleep on. Newer Spark versions aren't just about features — they ship with query optimizations and runtime improvements that can cut job execution time (and therefore cost) significantly. Running on an old Spark version is like paying for a car upgrade and then refusing to drive it.
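Databricks expresses guardrails like these as cluster policy JSON that the platform evaluates at cluster creation. As a simplified stand-in (not the actual policy engine, and not AT&T's real rules), here's what checking a cluster config against a few of the six guardrails might look like; the allowed values are examples I've invented.

```python
# Illustrative guardrails, loosely mirroring four of AT&T's six policies.
GUARDRAILS = {
    "required_tags": {"app_id", "cost_center"},  # Cost Attribution
    "allowed_cluster_types": {"job"},            # no all-purpose for prod jobs
    "allowed_node_families": ("Standard_D",),    # RI-conscious instance types
    "min_spark_version": (3, 4),                 # no deprecated Spark runtimes
}

def violations(cluster: dict, rules: dict = GUARDRAILS) -> list:
    """Return a list of guardrail violations for a proposed cluster config."""
    problems = []
    missing = rules["required_tags"] - set(cluster.get("tags", {}))
    if missing:
        problems.append(f"missing tags: {sorted(missing)}")
    if cluster.get("cluster_type") not in rules["allowed_cluster_types"]:
        problems.append("production jobs must use ephemeral job clusters")
    node = cluster.get("node_type_id", "")
    if not node.startswith(rules["allowed_node_families"]):
        problems.append(f"node type {node!r} is outside the RI plan")
    version = tuple(int(x) for x in cluster.get("spark_version", "0.0").split(".")[:2])
    if version < rules["min_spark_version"]:
        problems.append("Spark version below the supported floor")
    return problems
```

The point of encoding this as policy rather than review: the expensive default never gets created in the first place, which is "Shift Left to Save Right" in its most literal form.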
The "Save Just by Showing" Insight
My favourite part — their Databricks Spend Compass — a showback notification system that emails cost trends per application. Cost trend for the last 30 days. Monthly breakdown for the last 6 months. SKU-wise distribution. Links to the detailed dashboard.
Their principle? "Save just by showing." When application owners see their own costs, behavior changes. No policy needed. No enforcement. Just visibility.
It's the gym membership analogy in reverse. If the gym sent you a weekly email saying "You paid $150 this month and visited twice — that's $75 per visit," you'd either start going more or cancel the membership. Either way, you'd stop wasting money. Of course, no gym would ever send that email — why would they poke a hole in their own revenue? Same reason Databricks or any CSP won't build this for you. AT&T had to build it themselves. And that tells you everything about where the incentives really lie.
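A showback note like Spend Compass doesn't need heavy machinery. Here's a minimal sketch of the per-application summary, modeled loosely on what Sandeep described; the input shape, trend heuristic, and app names are my own invention, not AT&T's implementation.

```python
def showback_summary(app: str, daily_costs: list) -> str:
    """Summarize the last 30 days of spend for one application.

    daily_costs: one float per day, oldest first. The trend heuristic is
    deliberately crude: compare the first half of the window to the second.
    """
    window = daily_costs[-30:]
    total = sum(window)
    half = len(window) // 2
    trend = "rising" if sum(window[half:]) > sum(window[:half]) else "flat or falling"
    return f"{app}: ${total:,.2f} over the last {len(window)} days (trend: {trend})"
```

Pipe that into a weekly email per application owner and you have the core of "save just by showing": no policy, no enforcement, just each team seeing its own number.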
What This Means for the AI Economy
As AI workloads scale, data platforms will quietly become the single largest line item that most organizations aren't actively managing. And by the time they notice, they'll be 18 months and several million dollars too late.
The principles AT&T applied — policy guardrails, resource management, workload tuning, showback visibility — aren't Databricks-specific. They're a playbook for any Data Cloud Platform. The organizations that figure this out now will have a massive competitive advantage. The ones that don't? They'll be the ones writing panicked Slack messages when the quarterly Databricks bill comes in 3x over budget.
So Where Does This Leave Us?
I walked out of FinOps X Day with more questions than answers.
Sapna showed me that FOCUS — something I genuinely wanted to bet on — still doesn't have the vendor muscle behind it. Swapnil reminded me that none of this works without getting the people part right first. And Sandeep laid out what's coming next — a data platform cost wave that most of us are sleepwalking into.
We've gotten decent at the basics. Right-sizing VMs, buying RIs, shutting down idle stuff. That was the easy part. What's ahead is messier. Vendors who won't standardize. Engineering cultures that resist change. A double-billing layer on data platforms that nobody's finance team fully understands yet.
After five days of AI hype in Delhi followed by one day of FinOps reality in Bengaluru, I'd take the questions over the hype any day.
Nishant Thorat is the founder of CloudYali, a cloud cost management platform governing $250M+ in cloud spend. He attended FinOps X Day Bengaluru on February 21, 2026, after spending five days at the India AI Impact Summit in Delhi.