7 Comments
Saheed Abiola Lasisi

Brilliant analogy, Sri. I think the deeper issue is that most organizations do not change this radically unless survival demands it.

Covid was the clearest modern example. Scarcity of time, resources, and optionality compressed decision cycles, shortened governance lifecycles, reduced institutional drag, and forced people to focus on what actually mattered: getting something meaningful into the world fast. In those moments, the organization does not have the luxury of protecting every belt.

That is why AI transformation feels slower than its technical potential. Many firms are still trying to route an exponential capability through waterfall-era operating models, long approval chains, oversized handoffs, and too many people in the path to production. The motor is fast, but the pipeline is still optimized for caution, not learning.

My suspicion is that the real gains from AI will show up first in smaller, cohesive teams of capable people with a bias for action, high accountability, and the humility and growth mindset to keep learning fast. People who do not just know their narrow lane, but understand the domain and business end to end: how value is created, where friction sits, what the customer experiences, and what really matters in production. That kind of context changes AI from a task accelerator into a system redesign capability.

Fewer people in the pipeline. Fewer handoffs. Less choreography. More ownership.

For large orgs, one practical answer may be AI-native incubators: startup-like environments inside the org that reimagine an entire product or business lifecycle end to end, from idea, to build, to decisioning, to shipping, to customer usage and feedback. Not as innovation theatre, but as proof of what "beltless" execution actually looks like.

The challenge for incumbents is not whether AI works. It is whether they can create enough necessity, focus, and structural freedom to redesign around it before a crisis does it for them.

Rob, a bibliophile

Insightful post, Sri. Interesting analogy. I think most of the SDLC is already automated in many companies thanks to the DevOps wave of the last decade. The bottleneck in big companies seems to be the need for executives to sign off before launches, driven by the fear of something going wrong in production.

Anestisk73

Architecting the right ecosystem end to end is the magic dust that will drive the optimum outcomes from AI. What I see at the moment is just isolated flashes of brilliance in a federated (unbelted) ecosystem.

Arun Nair

Love this!

What I find very interesting is that as an industry, we do keep making the same mistake again and again.

I actually don’t think it’s a “mistake” per se. I think this happens by design in large organizations, which consistently resist any new thing (sometimes unconsciously, but most of the time consciously, I would argue, in financial organizations). There is always some business to protect, some regulation to hide behind, some investment that is “too much for now”, some change management that’s “not needed”, or some other novel reason. There isn’t enough motivation to pause the “jog” and prepare for a “run”.

Does it come down to spawning a first-principles-based transformation that creates a case for change, a burning platform: strategy, intent, and structure before any new tools or processes?

Are we willing to break things at scale and self-cannibalize while transforming to a vastly different future state?

Transformations of the past (Digital, DevOps, Agile at scale, the Product Operating Model, and so on) give us a good indication of the cost of not going all-in on the kind of change you are suggesting.

I’m curious how we collectively evolve to make this change for the new AI era and learn from past mistakes. Burn the backup boats.

Shweta

As you rightly pointed out, AI can handle most of the technical work—much of which teams have already been managing through DevOps and automations. However, when it comes to banking systems, the final responsibility for risk approval still rests with executives, and that’s where accountability ultimately lies.

The Tiny City

Every time I talk to Sri or read what he’s thinking about, it is like a crack opens in what I thought was a wall and becomes a door. These insights on AI work not needing to be modeled after human office work are profound in a way that will take me (us) years to unpack. Keep writing. Keep thinking. Keep tinkering. Deep respect and much love for you, Sri!!

Eswar Bala

Great analogy. Because of AI, execution has become flawless as long as the input is reliably correct. The last leg to production is where companies are struggling.