Happy 2026, everyone! I trust you all enjoyed a refreshing break and are entering this year with renewed vigor. The discussion surrounding the value of AI projects and agentic AI remains dynamic. I would like to share my perspective on this topic through two key dimensions:
- AI Agentic layers
- Kano Model for value
Using these dimensions, we can delve deeper into the complex landscape of how AI creates and, at times, destroys value. By exploring both the positive impacts and the negative repercussions, we can gain a better understanding of this dual nature of technology. This includes a careful examination of various anti-patterns for value destruction, which can inform best practices and help mitigate potential risks associated with AI deployment.
Quick Refresher on the Kano model

| Kano Category | What it means and why it matters |
| --- | --- |
| Must-have (Basic) | Expected capability; its absence causes failure, but its presence does not delight |
| Performance | Better execution = more value |
| Delighters | Unexpected differentiation creates step-function value |
| Indifferent | No material impact on outcomes |
| Reverse | Actively reduces value or trust |
7 Layers of Agentic AI

My definition of the 7 layers of Agentic AI is as follows:

| Agentic AI Layer | What it means |
| --- | --- |
| Experience and Orchestration | Integrates agents into human workflows, decision loops, and customer experiences. This is the business layer. It helps accelerate decision-making and decides when to override agents, e.g., an automated agent taking in returns from customers and deciding which returned merchandise deserves a refund and which does not. |
| Security and Compliance | This is the most important layer, in my opinion. It makes sure that agents do not run wild in your organization and that the right level of scope and agency is given to your generative AI agent. Includes policy engines, audit logs, identity and role-based access, and data residency requirements. |
| Evals and Observability | The basis of explainable AI. It creates confidence in the outputs that the agent will generate. Agents operate in a non-deterministic way, so your tests must reflect that non-deterministic reality to engender trust and capture the proper upper and lower bounds of such non-determinism. This includes telemetry, scenario-based evals, outcome-based metrics, feedback loops, etc. |
| Infrastructure | This layer makes agents reliable, scalable, observable, and cost-controlled. Without this layer, AI pilots never become platforms. |
| Agent Frameworks | Transforms AI into a goal-directed system that can plan, decide, and act. This includes memory, task decomposition, state management, and multi-agent coordination patterns, to name a few. |
| Data Operations | Key elements of your agentic experience (data quality, freshness, data pipeline scale, etc.) are all relevant here. This includes RAG, vector databases, etc. |
| Foundation Models | The operating system of the agentic experience that we are trying to develop. |
Mapping the 7 layers to Kano Value
Layer 1: Foundation Model
Primary Kano Category: Must-Have (trending toward Indifferent)
Foundation models are now a standard expectation; possessing the latest GPT model is no longer a distinguishing factor, but its absence draws immediate negative reactions from your users.
Hence,
- The presence of a foundation model does not create differentiation
- Its absence means immediate failure
- Overinvestment in this space yields diminishing returns
Anti-Patterns
The anti-pattern for this layer is treating the model as the strategy. This fails on several fronts:
- Picking a model first and only then looking for a problem to address
- This is analogous to selecting a car before establishing the destination and the nature of the terrain to be navigated.
- Treating foundation model benchmark scores as business value
- If you are driving on the rocks of Moab, Utah, a 500-horsepower vehicle is not helpful.
- Hard-wiring a single model into the system
- Hitching your business to a single model leaves you with no leverage.
- Ignoring latency and cost variability
- For the outcomes you want, know the cost variation you are willing to tolerate.
- Assuming newer is better
- Does the newer model of the vehicle support the terrain you want to drive on?
Smell test
“If we change the models tomorrow, does the product still work?”
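To make that smell test concrete, here is a minimal sketch in Python of a thin routing layer. The provider names and one-line backends are placeholders, not any real vendor SDK; the point is that product code depends only on `ModelRouter.complete()`, so swapping models becomes a configuration change rather than a rewrite.

```python
# Illustrative model-abstraction sketch; "vendor_a"/"vendor_b" are
# hypothetical stand-ins for real SDK calls behind a common signature.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModelConfig:
    provider: str             # e.g. "vendor_a" or "vendor_b" (hypothetical)
    max_cost_per_call: float  # budget guardrail, in dollars

class ModelRouter:
    def __init__(self, config: ModelConfig):
        self._backends: Dict[str, Callable[[str], str]] = {
            "vendor_a": lambda prompt: f"[vendor_a] {prompt}",
            "vendor_b": lambda prompt: f"[vendor_b] {prompt}",
        }
        self.config = config

    def complete(self, prompt: str) -> str:
        return self._backends[self.config.provider](prompt)

# Swapping the model touches only configuration, not product code:
router = ModelRouter(ModelConfig("vendor_a", max_cost_per_call=0.01))
answer_a = router.complete("classify this return request")
router.config = ModelConfig("vendor_b", max_cost_per_call=0.01)
answer_b = router.complete("classify this return request")
```

If your codebase can pass a test like this with the provider string flipped, the product survives a model change tomorrow.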
Layer 2: Data Operations
Primary Kano Category: Performance
Good data means relevant decisions, outcomes, and outputs. The critical elements here are:
- Accuracy
- Trust
- Decision Quality
Users can feel the data is bad, even if they do not know why.
The value in this space scales linearly with quality improvements and correlates strongly with business outcomes. Like any good system, it is invisible when everything is working well and painful when broken. Poor data becomes a Reverse feature (hallucinations, mistrust).
Anti-Patterns
- Dumping entire knowledge bases into embeddings
- This is a common thought process in most organizations adopting AI
- No freshness or versioning guarantees
- If something hallucinates, it is usually because of the data.
- Ignoring access control in retrieval
- This is common: agents get unfettered access to data, which is quite problematic for the business overall
- Treating RAG as a one-time setup
- This needs to be validated at regular intervals, as the business terrain may change
- No measurement of retrieval quality
- “Let us all trust AI blindly” is never a successful strategy
Smell test
“Can we explain why the agent used this data?”
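One way to make the access-control and freshness anti-patterns testable is to enforce both inside retrieval itself. A minimal Python sketch, with hypothetical `Document`/`retrieve` names and a substring match standing in for real vector similarity:

```python
# Illustrative sketch: retrieval that filters by the caller's access
# scope and a freshness cutoff before anything reaches the model context.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Document:
    text: str
    acl: set                 # roles allowed to see this document
    updated_at: datetime

def retrieve(query, corpus, user_roles, max_age):
    """Return only documents the caller may see and that are fresh enough to trust."""
    now = datetime.utcnow()
    return [
        d for d in corpus
        if d.acl & user_roles                # access control in retrieval
        and now - d.updated_at <= max_age    # freshness guarantee
        and query.lower() in d.text.lower()  # stand-in for vector similarity
    ]

now = datetime.utcnow()
corpus = [
    Document("Refund policy for electronics", {"support"}, now - timedelta(days=1)),
    Document("Refund policy draft (stale)", {"support"}, now - timedelta(days=400)),
    Document("Payroll refund rules", {"hr"}, now),
]
# A support user asking about refunds gets only the fresh, in-scope document.
hits = retrieve("refund", corpus, {"support"}, timedelta(days=90))
```

Because the filters live in the retrieval path, “why did the agent use this data” has an inspectable answer: it passed the scope and freshness checks.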
Layer 3: Agent Frameworks
Primary Kano Category: Performance / Delighter (conditional)
Agents that can plan, act, and coordinate unlock:
- Automation
- Decision delegation
- Speed at Scale
These gains are only realized with the right context windows and the right constraints; that is when the actual performance gains are achieved. Remember, agents are logical machines; they are neither credible nor emotional, which makes working with them challenging.
The mantra of starting simple and then focusing on scale really does help here.
Anti-Patterns
- Starting with multi-agent systems
- If you do not have the basics right, multi-agent systems will compound the problem exponentially
- No explicit goals or stopping conditions
- Agents being unbounded means more risk to the business, as the probability field is wider
- Optimizing for activity, not outcomes
- An agent correctly denied a $5 return; the activity was done right, but the customer, who had a positive lifetime value over the last five years, churned because of the bad experience
Smell Test
“Can we explain what the agent is trying to achieve in one sentence?”
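The goal and stopping-condition anti-patterns can be checked mechanically. Below is a hedged sketch (the `run_agent` helper and its numbers are illustrative, not any specific framework) of an agent loop bounded by a step cap, a cost budget, and an explicit goal predicate, i.e., the one-sentence goal expressed as code:

```python
def run_agent(goal_reached, step, max_steps=10, budget=1.0, cost_per_step=0.1):
    """Run `step` until the goal predicate passes or a bound is hit."""
    state, spent = {"steps": []}, 0.0
    for _ in range(max_steps):                # stopping condition 1: step cap
        if spent + cost_per_step > budget:    # stopping condition 2: budget
            return state, "budget_exhausted"
        state = step(state)
        spent += cost_per_step
        if goal_reached(state):               # the one-sentence goal, as code
            return state, "goal_reached"
    return state, "max_steps"

# Toy usage: "collect three items" is the entire goal, stated explicitly.
state, reason = run_agent(
    goal_reached=lambda s: len(s["steps"]) >= 3,
    step=lambda s: {"steps": s["steps"] + ["acted"]},
)
```

Whatever framework you use, if you cannot write the `goal_reached` predicate, you cannot state the goal in one sentence either.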
Layer 4: Deployment & Infrastructure
Primary Kano Category: Must-Have
No user ever says, “I love how scalable your agent infrastructure is.” But they will leave when the agent fails to scale. This layer is the bedrock of your entire agentic experience; it has zero visible upside but several downsides when ignored, much like cloud reliability in the early cloud days.
Anti-Patterns
- Running agents without isolation
- Agents can consume a lot of resources and become expensive very quickly. This is not just tokens, but also compute, storage, networking, and security; i.e., all of it.
- Not having any rate limits or quotas
- Goes back to the prior point; please keep your agents bounded. Not having cost attribution is another challenge, as spend cannot be amortized across your product portfolio.
- Scaling pilots directly to production
- This is when a small signal seems good enough for production, and then hell breaks loose. The cost of failure in production is high; please respect that and make sure to have all the appropriate checks and balances in place as you deploy these agents.
Smell Test
“What happens if this agent runs 100x more often tomorrow?”
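That question can be answered ahead of time by giving every agent an explicit allowance. A minimal sketch, where the `AgentQuota` class and its limits are illustrative placeholders for whatever quota service you actually run:

```python
class AgentQuota:
    """Per-agent call and cost budget, so a runaway agent fails fast
    instead of silently consuming shared compute, storage, and tokens."""

    def __init__(self, max_calls: int, max_cost: float):
        self.max_calls, self.max_cost = max_calls, max_cost
        self.calls, self.cost = 0, 0.0

    def charge(self, call_cost: float) -> bool:
        """Return True if the call is allowed; False once a quota is hit."""
        if self.calls + 1 > self.max_calls or self.cost + call_cost > self.max_cost:
            return False
        self.calls += 1
        self.cost += call_cost
        return True

# 100x more traffic tomorrow hits the quota, not the cloud bill.
quota = AgentQuota(max_calls=2, max_cost=1.0)
allowed = [quota.charge(0.4) for _ in range(3)]  # third call is refused
```

The same `charge()` records double as cost attribution: every agent's spend is counted, so it can be allocated across your product portfolio.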
Layer 5: Evaluations & Observability
Primary Kano Category: Performance / Delighter (for leaders)
Customers may not notice evals, but executives, regulators, and boards do. This layer enables faster iteration, risk-adjusted scaling, and organizational trust. The learning curve accelerates, increasing deployment velocity, and the side effect of all this is less fear-driven decision-making.
This area is important because once we move from the demo stage to the production stage, explainable AI demonstrates a lot of value.
Anti-Patterns
- Static test cases in dynamic environments
- Check out my blog on Dynamic Evaluations. Although it discusses the idea in the context of security, it holds true in several cases, such as predictive maintenance of robots on an assembly line.
- Measuring accuracy instead of outcomes
- This is a trap we all fall into, because we come from a deterministic mindset and need to move to a probabilistic one.
- No baseline comparisons
- Have some sort of baseline reference to understand the potential probability spread
- No production monitoring
- Monitoring production is the most important thing in AI; please do not ignore it
- Ignoring edge cases and long-tail failures
- AI is probabilistic, so the probability of hitting an edge case is much higher than in a deterministic system with a happy path. Please prepare for it.
Smell Test
“How do we know the agent is getting better or worse?”
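“Better or worse” is only answerable against a baseline. Here is a minimal scenario-eval sketch in Python; the toy scenarios, the judge, and the tolerance value are all placeholders you would replace with your own:

```python
def pass_rate(agent, scenarios, judge):
    """Fraction of scenarios where the judge accepts the agent's output."""
    passed = sum(1 for s in scenarios if judge(s, agent(s)))
    return passed / len(scenarios)

def regressed(baseline_rate, candidate_rate, tolerance=0.02):
    # The tolerance band absorbs normal non-deterministic run-to-run
    # variation, so only real drops get flagged as regressions.
    return candidate_rate < baseline_rate - tolerance

# Toy scenarios: the "agent" measures string length; the judge wants >= 2.
scenarios = ["a", "bb", "ccc"]
rate = pass_rate(lambda s: len(s), scenarios, lambda s, out: out >= 2)
```

Comparing pass rates across the same scenario set, rather than trusting one accuracy number, is what turns “is it getting better or worse?” into a measurable question.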
Layer 6: Security and Compliance
Primary Kano Category: Must-Have (Reverse if wrong)
This is another unsung-hero layer, and it is what makes news headlines when an agent compromises an organization. Agentic AI failures are public, hard to explain, and non-deterministic. Just like the data and infrastructure layers, there is no upside for security, but unlimited downside if you do not have it. If you are addressing the needs of regulated markets, this is an area you need to focus on… a lot.
Security is the price of admission for enterprise systems; if you are not ready to pay it… then I would highly recommend that you do not play in this space.
Anti-Patterns
- Relying on prompt instructions for safety
- The same prompts that you rely on for safety can be used to compromise your security posture
- No audit logs
- Just as you need to know which user did what, the need is even greater when a non-person entity has agency
- No agent identity
- Just like users, agents need an identity and user-context awareness. The latter ensures that agent identities honor the scope of the user who made the initial request
- Over-restricting agents to the point of uselessness
- You need to have an objective in mind and plan your security accordingly; otherwise, the system becomes useless and unable to support any decision-making
- Treating agents like deterministic APIs
- Yes, even though we have the Model Context Protocol, that does not mean you have a deterministic system. The host still has to interpret the data returned by the MCP server to deliver a probabilistic answer to the user who provided the initial context
Smell Test
“Can we prove what this agent did, and why?”
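Proving “what and why” starts with an agent identity that inherits the initiating user's scopes, plus an append-only audit trail. A minimal sketch; the scope strings and in-memory log are illustrative, not a specific policy engine:

```python
import json
import time

AUDIT_LOG: list = []  # stand-in for an append-only audit store

def agent_act(agent_id: str, user_scopes: set, action: str,
              required_scope: str) -> bool:
    """Allow the action only within the initiating user's scopes,
    and record the decision either way."""
    allowed = required_scope in user_scopes
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "allowed": allowed,
    }))
    return allowed

# The agent cannot exceed the scope of the user who made the request,
# and even the denial leaves an audit record.
denied = agent_act("refund-bot", {"orders:read"}, "issue_refund", "orders:refund")
```

Because denials are logged as faithfully as approvals, “can we prove what this agent did, and why?” has a yes answer either way.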
Layer 7: Agentic Experience and Orchestration
Primary Kano Category: Delighter
This layer captivates users, prompting remarks such as, “I can’t go back to my old way of working.” It transforms workflows, enhances customer experience, and accelerates decision-making. A strong adoption pull and non-linear ROI characterize this phase. Here, differentiation truly takes shape, as all the hard work invested in data, infrastructure, and security compliance pays off, making it increasingly difficult for competitors to replicate your success. Therefore, it is crucial to carefully manage the data you expose to other agentic systems; otherwise, your differentiation may be short-lived.
Anti-Patterns
- Assuming that chat is the sole interface for AI agents
- AI agents take many forms, including workflows and content aggregators. Chat is one of several manifestations; natural language input does not mean chat must be the primary interaction method.
- Removing human checkpoints too early in the process
- Reinforcement learning in the context of the business domain can happen with the help of humans. Just because an agentic system has ingested a lot of data does not mean it is business-domain savvy
- Ignoring change management
- When you are iterating fast, you need to make sure you have the appropriate fallback measures. Otherwise, it is like watching a train wreck
- Measuring usage instead of impact
- With web applications, usage meant that users were engaging with the system. With agents, especially in multi-agent environments, what matters is not usage but the impact of the agents on the business and the value they accelerate. This is where outcomes become even more imperative; they are also the building block for outcome-based pricing in the future
Smell Test
“Does this help people decide faster or just differently?”
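The usage-versus-impact distinction from the anti-patterns above can be made concrete with a toy ledger; all events and dollar values here are made up for illustration, including the $5 return from the earlier agent-frameworks example:

```python
# Each event records what the agent "saved" and what the decision cost.
events = [
    {"action": "refund_denied",   "saved": 5, "churned_ltv": 400},
    {"action": "refund_approved", "saved": 0, "churned_ltv": 0},
    {"action": "refund_approved", "saved": 0, "churned_ltv": 0},
]

usage = len(events)  # "the agent ran three times" says nothing about value
impact = sum(e["saved"] - e["churned_ltv"] for e in events)
# usage is 3, but impact is -395: high activity, negative business outcome
```

A usage dashboard would call this agent a success; an impact ledger shows it destroyed value, which is exactly the measurement outcome-based pricing will depend on.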
Bring it all together
| Layer | Kano Category | Value Signal | Risk if Ignored |
| --- | --- | --- | --- |
| 7. Experience and Orchestration | Delighter | Step-function ROI | No adoption |
| 6. Security & Compliance | Must-Have | Market access | Existential risk |
| 5. Evals and Observability | Performance/Delighter | Faster scaling | Loss of trust |
| 4. Infrastructure | Must-Have | Reliability | Cost & outages |
| 3. Agent Frameworks | Performance | Automation gains | Chaos |
| 2. Data Operations | Performance | Accuracy & trust | Hallucinations |
| 1. Foundation Models | Must-Have | Baseline capability | Irrelevance |
It is very easy to fall into the trap of focusing only on the delighters (Layer 7) while underfunding the must-haves (Layers 4–6). When you do that, your AI agentic pilots end up looking like this:
- Flashy demos
- Pilot Purgatory
- Security Vetoes
- Executive Distrust
The way Agentic AI moves from experimentation to ROI transformation is:
- Fund bottom layers for safety and speed
- Differentiate at the top
- Measure relentlessly in the middle.
