Every major leap in computing has not only redefined technology but also reordered the services industry around it. The microprocessor era made code the raw material of software products and services. The cloud era shifted the lever of value to workloads. Now, with the rise of large-scale AI models, we stand on the cusp of another transformation: the model era, in which context is the new raw material.
This shift is not cosmetic. It is as significant as the transition from mainframes to microprocessors or from data centers to the cloud. And it will reshape enterprise IT, products, services, and strategy for decades to come.
From Microprocessors to Models
The microprocessor revolution of the 1980s made computation personal and programmable. Code was the lever of value. If a bank wanted to automate loan approvals, every single exception case had to be hard-coded by programmers. If an insurer needed to adjudicate claims with dozens of policy variations, each variation had to be written into the system. The result was millions of lines of bespoke software, endlessly patched, migrated, and modernized.
Services firms flourished by providing the armies of coders required for this work. The economic engine was application development, system integration, and software maintenance.
Two decades later, the rise of the cloud reordered the playbook. Enterprises shifted from writing and maintaining local applications to migrating workloads to hyperscale platforms. The raw material was no longer code but workloads. Services firms built billion-dollar practices around migration, SaaS implementation, modernization, and managed operations. The pitch was speed, elasticity, and cost savings.
Now, with the rise of AI models, most visibly large language models (LLMs), the lever of value has shifted once more. Models are neither deterministic like microprocessors nor infrastructure platforms like the cloud. They are probabilistic and adaptive systems that generate output not from precise instructions, but from context.
Deterministic vs. Contextual Computing: From GIGO to CINO
To grasp why this matters, consider the difference between deterministic and contextual computing.
A microprocessor is deterministic: given the same input, it always produces the same output. That determinism is what made compilers, testing frameworks, and maintenance practices reliable. But it also made the systems built on them brittle. If a scenario was not specified in advance, the system simply failed.
Models, by contrast, are contextual computing engines. They can interpret broad instructions, adapt to ambiguous inputs, and generate useful outputs by drawing on their training and the real-time context they are given.
In deterministic computing, changing an outcome required rewriting code. In contextual computing, changing an outcome requires adjusting context: the data, rules, workflows, and tacit knowledge provided to the model.
This is not a marginal improvement; it is a paradigm shift. Where deterministic engines lived by “garbage in, garbage out (GIGO),” contextual systems thrive on “context in, intelligence out (CINO).”
For example, a bank’s loan system built on deterministic computing would reject any application that didn’t match its pre-coded rules, even if the applicant was otherwise creditworthy. A contextual model, by contrast, can take in the broader context—repayment history, income patterns, exception-handling policies, and even how loan officers have made similar decisions in the past—and generate a recommendation to the loan officer that this applicant deserves further consideration (intelligence).
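The contrast can be sketched in a few lines of code. This is purely illustrative: the rule thresholds, function names, and context fields below are assumptions invented for this example, not any real bank's system or any particular model API.

```python
# Hypothetical sketch: a hard-coded loan rule vs. a context-driven prompt.
# All names and thresholds here are illustrative assumptions.

def approve_deterministic(credit_score: int, income: int) -> bool:
    """Deterministic rule: any case not anticipated here simply fails."""
    return credit_score >= 700 and income >= 50_000

def build_loan_context(applicant: dict,
                       policies: list[str],
                       precedents: list[str]) -> str:
    """Contextual approach: outcomes change by changing the context
    supplied to a model, not by rewriting the rule above."""
    return "\n".join([
        f"Applicant repayment history: {applicant['repayment_history']}",
        f"Income pattern: {applicant['income_pattern']}",
        "Exception-handling policies:",
        *policies,
        "How loan officers decided similar cases:",
        *precedents,
        "Task: recommend whether this application deserves further review.",
    ])

applicant = {
    "repayment_history": "no missed payments in 5 years",
    "income_pattern": "irregular gig income, stable yearly total",
}

# The deterministic rule rejects an otherwise creditworthy applicant:
print(approve_deterministic(credit_score=680, income=48_000))  # False

# The contextual prompt carries the nuance a model could weigh:
print(build_loan_context(applicant,
                         ["Gig income may be annualized for qualification"],
                         ["Officer approved a similar profile after review"]))
```

The deterministic function can only be changed by rewriting it; the contextual version is changed by editing the policies and precedents passed in, which is exactly the GIGO-to-CINO shift described above.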
What Counts as Context?
For enterprises, context is not just data. It is the organizational DNA that captures how a company actually works. Context includes, but is not limited to:
Data: structured and unstructured information from enterprise systems, documents, APIs, and knowledge bases.
Rules: policies, regulations, and compliance requirements.
Processes: workflows, decision points, and escalation paths.
Personas: roles, responsibilities, and tones of interaction.
Collective wisdom: tacit knowledge about what has worked and what has not.
Execution patterns: how teams actually operate, the shortcuts, exceptions, and judgment calls that never appear in manuals but shape outcomes.
To make this concrete, consider the following examples.
For software engineers, context is the combination of code repositories, the tickets they resolve, the continuous integration pipelines they rely on, and even the Slack or Teams conversations they use to problem-solve bugs. It is not just the source code itself, but the collaborative fabric of engineering that determines how software is actually built and maintained.
For legal teams, context is embodied in the contracts they draft and redline, the regulatory clauses they reference, the precedent cases that shape their interpretations, and the negotiation trails across emails and documents. Legal outcomes depend less on any one document and more on how these pieces fit together in practice.
For customer support operations, context includes the standard workflows for handling service requests, the nuanced variations teams apply to different customer segments, the back-and-forth communications with customers, and the accumulated case history that reflects past resolutions. What makes support effective is not just the playbook but the lived experience of applying it in real time.
These examples illustrate a crucial point: context is not limited to structured data fields. It is the sum total of the knowledge, rules, interactions, and tacit practices that make each enterprise unique.
Consider again two insurers. Both write policies, but their claims processes, exception handling, and tacit rules differ. That uniqueness is their competitive advantage. Capturing it in the form of context is what makes AI systems useful at the enterprise level.
Without context, model outputs are generic and unreliable. With context, they become accurate, efficient, compliant, and cost-effective.
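One way to picture "engineering context" is to treat the categories above as a single bundle assembled before every model call. The sketch below is a minimal illustration under assumed names; there is no standard schema for enterprise context, and every field and method here is hypothetical.

```python
# Illustrative sketch: the six context categories from the article,
# expressed as one bundle flattened into model-ready input.
# Field names and structure are assumptions, not a real schema.
from dataclasses import dataclass, field

@dataclass
class EnterpriseContext:
    data: list[str] = field(default_factory=list)               # documents, records
    rules: list[str] = field(default_factory=list)              # policies, regulations
    processes: list[str] = field(default_factory=list)          # workflows, escalations
    personas: list[str] = field(default_factory=list)           # roles, tone
    collective_wisdom: list[str] = field(default_factory=list)  # tacit knowledge
    execution_patterns: list[str] = field(default_factory=list) # how teams really work

    def to_prompt(self, task: str) -> str:
        """Flatten the bundle into text a model can consume: 'context in'."""
        sections = [
            ("Data", self.data), ("Rules", self.rules),
            ("Processes", self.processes), ("Personas", self.personas),
            ("Collective wisdom", self.collective_wisdom),
            ("Execution patterns", self.execution_patterns),
        ]
        parts = [f"{name}:\n" + "\n".join(items)
                 for name, items in sections if items]
        return "\n\n".join(parts + [f"Task: {task}"])

ctx = EnterpriseContext(
    rules=["Claims over $10k require senior review"],
    execution_patterns=["Adjusters fast-track repeat customers with clean history"],
)
print(ctx.to_prompt("Adjudicate claim #1234"))
```

The point of the sketch is that the unwritten items, the tacit wisdom and execution patterns, sit alongside data and rules as first-class inputs; capturing them is what context engineering actually means in practice.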
Why Context Becomes the New Raw Material
It is important to emphasize that this shift is not just about today’s large language models. LLMs are simply the most visible expression of the trend. Tomorrow, we may see very different architectures: multimodal models that integrate vision and speech, agentic systems that reason continuously, domain-specific transformers, or entirely new paradigms that have yet to emerge.
But the constant is not the model type; it is the role the model plays in the computing stack. In every case, the model becomes the central compute engine, the core around which products, platforms, and services are organized.
And in every case, the lever of value is context. Just as code once shaped outcomes in the microprocessor era, context now shapes outcomes in the model era. Every client environment is unique, and context engineering becomes the means of tailoring AI to that uniqueness.
This is why IT services firms, for example, far from being disrupted by AI, face a once-in-a-generation opportunity. Their new mandate is not writing millions of lines of code or migrating workloads, but engineering and managing context. Context is the lever that makes models enterprise-grade.
The IT Services Analogy
The parallels with earlier eras are clear. In the 1980s and 1990s, few enterprises wrote all their own software; they relied on IT services firms. In the 2010s, few migrated all workloads themselves; they turned to partners. In the 2020s and beyond, few will engineer and manage context pipelines alone. They will need services partners who can combine domain expertise, data engineering, and AI fluency.
In each era, the raw material changed—from code, to workloads, to context—and the services industry reorganized around it. Context engineering is poised to become the organizing principle of IT services for the next decade.
Why Enterprises Should Care
For enterprises, this is not just an industry story. It is a leadership agenda. Contextual computing affects compliance, operations, HR, supply chain, and customer service. It redefines not only how technology is deployed, but how organizations structure teams, allocate spend, and measure outcomes.
The shift to context forces leaders to ask: What makes our enterprise unique? How do we capture and encode our tacit knowledge? How do we govern and maintain our context pipelines as carefully as we once managed codebases and workloads?
The Call to Action
Models, whether today’s LLMs or tomorrow’s yet-unseen architectures, are the new microprocessors. They are the computing engines of the next decade, but unlike microprocessors, they are contextual. The implication is profound: the lever of value has shifted from code to context.
For enterprises, mastering context is the key to deploying AI that is accurate, compliant, and cost-effective. For services firms, mastering context engineering is the path to the next wave of growth.
History is clear. Every computing revolution has created a new raw material and a new services playbook. This time, whoever owns the context layer will own the future of IT services.
(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of www.economictimes.com)