From Tool to Teammate: Designing for Shared Agency in the Agentic Era

December 09, 2025

Executive Summary

The Shift: The core of modern product strategy is the move from reactive GenAI tools (which require explicit prompts) to proactive Agentic AI systems (which plan, reason, and act autonomously). This transformation creates profound operational agility but introduces new design challenges around control and trust.

The Mandate: Success requires reimagining workflows with agents at the core, not just plugging them into old systems. Design must evolve toward shared agency, treating the AI as a collaborative teammate rather than a static assistant.

The Strategy: Product teams must develop a new UX language that uses Narrative Transparency—leveraging processing delays as opportunities to show the user what the agent is planning, thereby building calibrated trust and ensuring the human retains control and the ability to intervene.

For product leaders, Generative AI has already proven its power as a feature. However, the next leap in business value will not come from smarter tools; it will come from Agentic AI—systems designed to function as goal-driven, autonomous virtual collaborators.

This represents the most profound conceptual shift in product design since the mobile internet. It requires leaders to stop thinking about users interacting with software and start thinking about users collaborating alongside an intelligent teammate. This transformation is necessary to unlock operational agility, but it demands a complete overhaul of design principles centered on shared agency and a new kind of human control.


I. The Paradigm Shift: From Reactive Tools to Autonomous Agents

The Agentic AI model fundamentally changes the nature of software interaction by enhancing four core capabilities: autonomy, planning, memory, and integration.

1. The Generative AI Paradox

Generative AI (GenAI) is fundamentally a reactive tool: it produces output only in response to an explicit human command (a prompt, a question, a directive). While useful, this reactivity limits efficiency. Agentic AI resolves the paradox by combining autonomy, planning, memory, and integration to automate complex business processes.

  • The Agentic Core: An AI agent is built to take initiative, reason, plan out a sequence of steps, and execute those actions on the user's behalf. This capability allows the system to move beyond simple content generation to become a proactive orchestrator of operational tasks.

2. Reimagining Workflows for Collaboration

Unlocking the full potential of this shift requires more than just plugging new agents into existing structures. Leaders must mandate the reimagining of workflows from the ground up, placing the agent at the core of the process. The User Experience (UX) must evolve from managing a static assistant to engaging with an intelligent, conversational teammate.


II. The Design Mandate: Engineering Shared Agency

When a machine takes autonomous action, the user’s sense of control and agency can be easily lost, creating profound uncertainty and distrust. Design must therefore focus on Mixed-Initiative Systems, where both the human and the AI can initiate action and contribute to the goal outcome.

1. Designing the New UX Language

The interaction model for agentic systems must move away from the simple command-and-response model to one that is conversational and adaptive.

  • Narrative Transparency: In traditional software, latency is a weakness. In agentic systems, delays—or periods of internal planning—become narrative opportunities. Showing the user what the agent is planning, why it is pausing, or how it is using its memory to execute a goal transforms a frustrating wait into a trust-building moment that communicates the agent's internal state. This transparency is essential for building Calibrated Trust; a minimal sketch of the pattern appears after this list.

  • Explicit Control and Feedback: Users need to feel they retain ultimate agency. UX designers must provide human-friendly controls that allow users to easily tell the AI what to do, refine its actions, or step in and take over if needed. Providing clear feedback mechanisms for manual adjustments is essential for maintaining control and preventing a sense of helplessness.
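
The sketch below illustrates, under assumed names (PlanStep, AgentRun) and an invented insurance-flavored task, how these two principles might translate into an interaction loop: the agent narrates each planned step through a UI callback instead of showing a spinner, and pauses at explicit approval points where the user can refine or take over. It is a sketch of the pattern, not a reference implementation.

```python
# Minimal sketch of a shared-agency loop: the agent narrates its plan (narrative
# transparency) and pauses at explicit control points (explicit control and feedback).
# All names and the example task are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PlanStep:
    description: str              # shown to the user while the agent works
    requires_approval: bool = False


@dataclass
class AgentRun:
    goal: str
    steps: List[PlanStep]
    notify: Callable[[str], None]        # UI callback: surfaces narrative updates
    confirm: Callable[[PlanStep], bool]  # UI callback: explicit human control point

    def execute(self) -> None:
        self.notify(f"Planning how to achieve: {self.goal}")
        for i, step in enumerate(self.steps, start=1):
            # Instead of a silent delay, explain what is happening and why.
            self.notify(f"Step {i}/{len(self.steps)}: {step.description}")
            if step.requires_approval and not self.confirm(step):
                self.notify("Paused: waiting for your changes before continuing.")
                return
            # ...call the underlying tool or model here...
        self.notify("Done. Review the result or roll back any step.")


# Usage: the real UI would supply the callbacks; here we print to the console.
run = AgentRun(
    goal="Summarize open claims and draft follow-up emails",
    steps=[
        PlanStep("Fetch open claims from the last 30 days"),
        PlanStep("Draft follow-up emails for stalled claims", requires_approval=True),
    ],
    notify=print,
    confirm=lambda step: input("Apply this step? [y/n] ") == "y",
)
run.execute()
```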

2. Navigating the AI-First vs. HCAI Tension

This agentic shift introduces a strategic tension in system architecture that PMs must manage:

  • Human-Centered AI (HCAI). Core priority: human needs, experiences, and ethical considerations (Fairness, Transparency, Agency). Design implication: embed ethical governance and multidisciplinary teams (design, social sciences) early in the process.

  • AI-First Design. Core priority: optimizing systems (APIs) for consumption by autonomous AI agents. Design implication: explicit clarity, predictability, and highly structured API frameworks, since AI agents cannot interpret external documentation or human nuance.

The reconciliation point lies in Shared Agency Design. The highly efficient AI-First foundation enables rapid autonomy, while the HCAI principles must be robustly enforced in the UX layer through governance, auditability, and clear boundaries that guarantee human control.
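
To make the AI-First half of that reconciliation concrete, the sketch below shows a hypothetical machine-readable tool definition that an agent could consume without prose documentation. The endpoint, field names, and the requires_human_approval flag are assumptions for illustration; the point is the pattern: every parameter, return value, and side effect is declared explicitly, and the HCAI layer gains an explicit hook for human control.

```python
# Minimal sketch of an "AI-first" endpoint description (all names hypothetical):
# explicit types, constraints, and side effects, with no reliance on prose docs.
GET_CLAIM_STATUS = {
    "name": "get_claim_status",
    "description": "Return the current status of a single insurance claim.",
    "parameters": {
        "type": "object",
        "properties": {
            "claim_id": {"type": "string", "pattern": "^CLM-[0-9]{8}$"},
        },
        "required": ["claim_id"],
    },
    "returns": {"type": "string", "enum": ["open", "pending_review", "approved", "denied"]},
    "side_effects": "none",            # predictability: safe for an agent to call repeatedly
    "requires_human_approval": False,  # the hook where HCAI governance plugs in
}

# An agent, or a registry serving agents, can reason over this structure directly,
# for example to decide whether a call must be routed through a human checkpoint.
if GET_CLAIM_STATUS["requires_human_approval"]:
    print("Route this call through a human review step.")
else:
    print("Safe for autonomous execution within policy limits.")
```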


III. Conclusion: A Blueprint for Competitive Agility

The rise of agentic systems is not about replacing human work; it is about amplifying human capabilities through Superagency—the optimal deployment of AI to unlock new levels of creativity and productivity.

Leaders must embrace this new design mandate by:

  • Mandating new design patterns that prioritize transparency and shared control.

  • Investing in organizational maturity to create PMs who can speak the technical language of agentic frameworks.

  • Defining the new UX language that uses interaction to communicate intent and build calibrated trust, transforming delays into narrative opportunities for accountability.

The future competitive edge belongs to organizations that treat their AI agents not as smart tools, but as accountable, proactive teammates.


Sources

https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage

https://www.boltic.io/blog/agentic-ai-companies

https://www.weforum.org/stories/2025/08/rethinking-the-user-experience-in-the-age-of-multi-agent-ai/

https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/

https://cltc.berkeley.edu/publication/ux-design-considerations-for-human-ai-agent-interaction/

https://www.aubergine.co/insights/building-trust-in-ai-through-design

https://aws.amazon.com/what-is/prompt-engineering/

https://medium.com/@prajktyeole/designing-the-invisible-ux-challenges-and-opportunities-in-ai-powered-tools-b7a1ac023602

https://blog.workday.com/en-us/future-work-requires-seamless-human-ai-collaboration.html

https://thenewstack.io/its-time-to-build-apis-for-ai-not-just-for-developers/

https://www.youtube.com/watch?v=1FhgHHrhC5Q

https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

https://cacm.acm.org/blogcacm/essential-skills-for-next-gen-product-managers/

Product Manager 2.0: The Rise of the Strategically Adaptable Leader

November 24, 2025

Executive Summary

The Challenge: The complexity and rapid evolution of AI systems are rendering traditional, specialized Product Manager (PM) skillsets obsolete. The most urgent organizational gap is the deficit in AI-Native leadership capable of bridging the technical, ethical, and business worlds.  

The Threat: Without leaders who possess Strategic Adaptability, organizations will continue to suffer from siloed AI projects, slow adoption, and failure to proactively embed compliance, leading to costly rework and reputational damage.  

The Strategy: Leadership must mandate a rigorous upskilling program focused on three domains: 1) Strategic Adaptability (moving from deep specialization to versatile domain knowledge), 2) AI-Native Technical Fluency (understanding APIs, data infrastructure, and agentic frameworks), and 3) Full Lifecycle Compliance (integrating risk and ethics into every stage, adhering to standards like the EU AI Act).  

For product leaders, the rise of artificial intelligence marks not just an evolution of tools, but a complete redefinition of the role of the Product Manager (PM). The PM has always been the central nervous system of the product—aligning engineering, design, and business goals. Now, they must also align AI models, ethical frameworks, compliance mandates, and data infrastructure.

The bar for PM excellence has been irrevocably raised. In an AI-driven world, the successful leader is no longer defined by deep functional specialization, but by their capacity for Strategic Adaptability and their fluency in the new language of AI-native architecture. Ignoring this shift means failing to scale AI and being outmaneuvered by competitors who treat organizational capability as a strategic asset.


I. The Strategic Deficit: Specialization vs. Adaptability

The complexity of modern AI product development—involving data scientists, ML engineers, compliance experts, and cross-functional design teams—demands a new kind of leader.  

1. Moving Beyond the Specialist Model

The traditional PM model, optimized for deep knowledge in a single vertical, is ill-equipped to manage probabilistic AI systems. The most successful product leaders must become versatile strategists:

  • Strategic Adaptability is the New Edge: The best PMs move away from deep functional specialization and learn to juggle domains like business, data, design, AI, and engineering. This ability to understand trade-offs and identify the precise leverage points where AI can deliver maximum impact becomes a competitive advantage.

  • The New Priorities: In this amplified role, PMs must not only master the latest AI tools but also sharpen timeless leadership skills: strategic acumen, influential leadership to align cross-functional teams, deep product intuition, and sophisticated communication to bridge the technical and business worlds effectively.

2. The Necessity of AI-Native Technical Fluency

The core challenge for PMs is bridging the conceptual gap between user experience and the underlying AI model. The technical feasibility of any innovative design idea is directly linked to understanding the model’s data needs and architectural constraints.  

  • Speaking the Language of AI: PMs do not need to code, but they must achieve AI-Native Technical Fluency—a comprehensive understanding of APIs, data infrastructure, and AI architecture. They must know how models are trained and deployed and be able to evaluate the new generation of agentic frameworks, where multiple large language models collaborate autonomously.  

  • Managing Probabilistic Systems: Because AI is probabilistic, PMs must possess the acumen to manage failure states and ensure continuous learning. This requires designing core product workflows that include regular data collection mechanisms and user feedback loops to continuously improve the AI system's performance; a minimal sketch of such a loop follows this list.
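
One way to make that feedback loop tangible is to treat feedback capture as a first-class data structure in the product. The sketch below is an illustration under stated assumptions (hypothetical field names, a simple JSON-lines log) of the kind of mechanism a PM might specify so that every model output can be rated or corrected and later fed into evaluation or retraining.

```python
# Minimal sketch of a user feedback loop: each model output can be rated or corrected,
# and the event is appended to a log that the evaluation pipeline consumes later.
# All names and the storage format are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class FeedbackEvent:
    request_id: str
    model_version: str
    user_rating: int                 # e.g. 1 = thumbs down, 5 = thumbs up
    user_correction: Optional[str]   # the edited output, if the user fixed it
    timestamp: float


def record_feedback(event: FeedbackEvent, path: str = "feedback_log.jsonl") -> None:
    """Append one feedback event as a JSON line for later analysis."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")


record_feedback(FeedbackEvent(
    request_id="req-0042",
    model_version="summarizer-v3",
    user_rating=2,
    user_correction="The claim was filed on 12 March, not 21 March.",
    timestamp=time.time(),
))
```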


II. The Governance Mandate: Full Lifecycle Compliance

As AI systems are increasingly deployed in high-stakes environments, the failure to embed ethical and legal compliance proactively creates extreme business risk.

1. Integrating Risk into Every Stage

Ethical risk management can no longer be a regulatory afterthought managed by compliance teams in isolation. It must be a mandatory phase of the product lifecycle, managed by the PM.  

  • Full Lifecycle Compliance: PMs must integrate risk management, legal compliance, and safety governance (adhering to frameworks like the EU AI Act or NIST AI RMF) into every stage of the product lifecycle, from ideation through deployment. This proactive approach avoids costly rework and mitigates the severe reputational damage associated with non-compliance.

  • Ethical Fluency as a Safeguard: PMs must be fluent in Ethical AI Practices. This means implementing features that prioritize transparency, avoid discrimination based on biases in training data, and ensure ethical data collection to safeguard the company’s reputation and align with societal standards.  

2. New Roles for the Agentic Era

The rise of autonomous, Agentic AI—systems that take initiative, plan, and act on behalf of the user—is creating entirely new leadership roles focused on orchestration and maintenance.

  • The Agent Operations Manager: An emerging discipline focused on managing the day-to-day performance, incidents, and required upkeep of deployed AI agents. This role demands expertise in operational platforms that manage agents and processes for mitigating model performance decline or unexpected disruptions.  

  • The Responsible Use AI Architect: Roles such as this require specialized knowledge in creating responsible AI safeguards, ensuring familiarity with machine learning architectures, and experience leading cross-team engineering efforts to embed ethics from the ground up.


III. Conclusion: Transforming Leadership for the AI Era

The Product Manager is being amplified, not replaced, by AI. The bar for leadership excellence is now defined by the ability to manage complexity, navigate uncertainty, and align ethical principles with measurable business outcomes.

To achieve sustained success, organizations must commit to systematic upskilling that produces strategically adaptable leaders who possess AI-Native Technical Fluency and champion ethical governance. This new generation of PM is essential for driving organizational maturity, minimizing the friction of the J-Curve, and ensuring the full promise of the agentic era is realized.


Sources

https://cacm.acm.org/blogcacm/essential-skills-for-next-gen-product-managers/

https://www.egonzehnder.com/functions/technology-officers/insights/how-ai-is-redefining-the-product-managers-role

https://ginitalent.com/top-skills-in-ai-for-product-managers/

https://uxdesign.cc/ai-product-design-identifying-skills-gaps-and-how-to-close-them-5342b22ab54e

https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage

https://www.washingtonpost.com/business/2025/10/29/ai-new-jobs/

The New Fluency: Why AI Literacy is the Next Corporate Mandate

November 12, 2025

Executive Summary

The Challenge: The primary barrier to achieving full AI maturity is not technology, but human talent and leadership readiness. Executives estimate about 40% of their workforce will need reskilling in the next three years, yet only a fraction of companies are investing meaningfully.  

The Threat: Lack of AI literacy—the foundational knowledge of AI ethics, capabilities, and data—creates organizational friction, leading to siloed projects, internal resistance, and a workforce functionally unable to utilize complex agentic systems.  

The Strategy: Leadership must mandate widespread AI literacy through mass, tiered training to accelerate staff adaptation. This includes making Prompt Engineering a core operational competency for all roles and upskilling Product Managers in Strategic Adaptability and AI-Native Technical Fluency to bridge the technical, ethical, and business worlds.


For executive and product leaders, the economic promise of artificial intelligence is clear: massive productivity gains, streamlined operations, and new revenue streams. Yet, despite heavy capital investment, most organizations are struggling to convert successful AI pilots into scalable, enterprise-wide success stories. Only about 1 percent of leaders feel their companies are truly mature in AI deployment, where the technology is fully integrated into workflows and drives substantial business outcomes.

The reason for this widespread failure to scale is not primarily technological—it is organizational.

New research confirms that the introduction of industrial AI follows a predictable, painful pattern: the J-Curve trajectory. AI adoption leads to a measurable, temporary decline in performance before stronger growth is realized in output, revenue, and employment. Navigating this dip requires executive foresight, strategic patience, and treating organizational change as a critical investment.  


I. The Looming Talent Deficit

The shift to AI fundamentally changes job roles and competency requirements across the entire organization. The failure to strategically address this change is creating a massive talent deficit that directly limits scalability.

1. The Urgency of Reskilling

For organizations to compete, they must immediately improve the AI literacy of their entire employee base. Executives estimate that a staggering 40 percent of their existing workforce will need reskilling over the next three years—learning entire new skill sets to perform new jobs.  

However, the intention to train is not translating into action. While nearly 90 percent of business leaders believe their workforce needs improved AI skills, only 6 percent report having begun upskilling in "a meaningful way." This massive gap between intent and execution is the primary bottleneck preventing organizations from accelerating past the J-Curve valley.

2. Beyond Automation: The Human Edge

The purpose of training must be twofold: to hone the skills that machines cannot replicate, and to build the technical ability to collaborate with AI tools. In hybrid human-AI teams, the human worker remains essential for critical functions:

  • Ethical and Contextual Judgment: Only humans can consider the ethical implications, weigh up real-world context, and make decisions that align with social values.  

  • Critical Thinking and Data Literacy: Data literacy, critical thinking, and the ability to work alongside AI tools will be as valuable as traditional domain expertise. Continuous learning is non-negotiable for employees to stay relevant as roles evolve.  


II. Strategic Upskilling: The Mandate for Fluency

To bridge the deficit and minimize the organizational friction of the J-Curve, leadership must mandate comprehensive, tiered training that targets both specialized and general competencies.

1. Making Prompt Engineering a Core Competency

For the vast majority of employees who will interact with Generative AI daily, Prompt Engineering—the systematic process of guiding AI solutions to generate high-quality, relevant outputs—is rapidly becoming a mandatory operational competency.  

  • The Technical Necessity: Generative AI models are highly flexible, but they require detailed instructions to produce accurate and relevant responses. Prompt engineering involves using the right formats, phrases, and structures to ensure the AI's output is meaningful and usable, transforming the user from a passive receiver into an active guide; a minimal example of a structured prompt follows this list.

  • The Business Impact: By systematizing this skill, organizations ensure employees can effectively harness AI capabilities, leading to efficiency gains, such as customer care representatives using GenAI to answer questions in real-time.  
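
As an illustration of what "the right formats, phrases, and structures" can mean in practice, the sketch below shows a hypothetical reusable prompt template: the role, constraints, and output format are fixed once so that results stay consistent and auditable across employees. The customer-care wording is an assumption for illustration, not a recommended production prompt.

```python
# Minimal sketch of a structured, reusable prompt (wording is illustrative only):
# fixing the role, constraints, and output format is what turns ad hoc prompting
# into a repeatable operational competency.
PROMPT_TEMPLATE = """\
Role: You are a customer-care assistant for an insurance provider.
Task: Answer the customer's question using only the policy excerpt below.
Constraints:
- If the excerpt does not contain the answer, say "I don't know" and suggest escalation.
- Keep the answer under 120 words and avoid legal jargon.
Output format: a short answer, then a one-line citation of the relevant clause.

Policy excerpt:
{policy_excerpt}

Customer question:
{question}
"""

prompt = PROMPT_TEMPLATE.format(
    policy_excerpt="Water damage from burst pipes is covered; flood damage is excluded.",
    question="Does my policy cover a burst pipe in the basement?",
)
# `prompt` is then sent to whichever model or GenAI service the team has deployed.
print(prompt)
```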


2. The New Leadership Skillset for Product Managers

The traditional Product Manager (PM) role is being amplified, requiring a new, rigorous skillset that goes beyond general management and focuses on mastering the technology and its ethical implications.  

  • Strategic Adaptability: The most successful PMs must move away from deep functional specialization to embrace Strategic Adaptability. They must be versatile, learning quickly to juggle business, data, design, and AI domains to identify leverage points where AI can deliver maximum impact. This ability to constantly reassess priorities and align them with business objectives is a competitive advantage.  

  • AI-Native Technical Fluency: PMs do not need to code, but they must achieve AI-Native Technical Fluency—a comprehensive understanding of APIs, data infrastructure, and how models are trained and deployed within emerging agentic frameworks. This knowledge allows them to "speak the language" of AI systems and effectively align cross-functional teams, including engineers, data scientists, and compliance experts.  

  • Ethical Fluency: PMs must be fluent in Ethical AI Practices, ensuring systems respect laws and moral principles. This involves prioritizing transparency, considering privacy regulations like GDPR, and implementing features that explain how AI decisions are made to maintain accountability and safeguard the company's reputation.  


III. Conclusion: Transforming Talent into a Competitive Asset

Exemplary companies treat mass upskilling not as a training cost, but as a strategic mechanism to accelerate past the organizational drag of the J-Curve.

  • Leading by Example: Companies like Accenture have made massive commitments to training, positioning their ability to "train and retool at scale" as a core competitive advantage. Accenture has trained over 550,000 employees in the fundamentals of Generative AI, while IBM provides comprehensive training pathways covering machine learning, deep learning, NLP, and mandatory AI ethics.  

  • The Agentic Future: The full potential of Agentic AI—systems built to take initiative and act autonomously—requires managers who can effectively orchestrate these agents. This transition requires leadership to "put the 'M' back in manager" by shifting focus from functional disciplinary skills to applying knowledge across domains.  

By implementing continuous learning and demanding strategic fluency from their talent, leaders can close the skill gap, minimize internal friction, and ensure their workforce is equipped to achieve the "superagency" required for sustained success in the AI era.  


Sources

https://www.ibm.com/think/insights/ai-upskilling

https://www.eitdeeptechtalent.eu/news-and-events/news-archive/the-future-of-human-ai-collaboration/

https://www.library.hbs.edu/working-knowledge/solving-three-common-ai-challenges-companies-face

https://cacm.acm.org/blogcacm/essential-skills-for-next-gen-product-managers/

https://aws.amazon.com/what-is/prompt-engineering/

https://www.graduateschool.edu/courses/ai-prompt-engineering-for-the-federal-workforce

https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

https://mitsloan.mit.edu/ideas-made-to-matter/productivity-paradox-ai-adoption-manufacturing-firms

https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage

https://www.egonzehnder.com/functions/technology-officers/insights/how-ai-is-redefining-the-product-managers-role

https://ginitalent.com/top-skills-in-ai-for-product-managers/

https://www.crn.com/news/ai/2025/accenture-s-3b-ai-bet-is-paying-off-inside-a-massive-transformation-fueled-by-advanced-ai

https://newsroom.accenture.com/news/2024/accenture-launches-accenture-learnvantage-to-help-clients-and-their-people-gain-essential-skills-and-achieve-greater-business-value-in-the-ai-economy

https://skillsbuild.org/college-students/course-catalog

https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-organization-blog/rethink-management-and-talent-for-agentic-ai
