
Agentic Management Requires More Than Vibes

Many of us know the iconic scene from the ’90s hit movie Jerry Maguire when Jerry, a sports agent, asks his client, “What can I do for you, Rod?” Rod responds that there is one thing. With music blaring, he shouts, “Show me the money, Jerry! I want to hear you say it.” When Jerry responds, “Show you the money!” Rod says, “No, man, you gotta say, ‘Show me the money!’” 

That’s the lesson: When incentives are aligned, everybody wins. It's not always as easy as Hollywood makes it look, but it's not impossible either.


We’re all going to have agents. 

With the rise of agentic AI, having our own agents will not just be something for athletes and actors. We’ve already embraced AI assistants. Software developers became vibe coders, explaining the idea, or vibe, of what they wanted to build to assistants that write the code for them. The term spread so quickly that by March “vibe coding” had won a spot in the Merriam-Webster Dictionary. The vibe has now spread to marketers who embrace vibe marketing, to business analysts with vibe analytics, and to anyone else who uses AI to accelerate daily tasks. 

Over time, many of these AI assistants will become increasingly agentic, not only generating but executing the software or campaigns they create. These AI agents will be assigned tasks, designed to function autonomously and capable of working collaboratively on complex workflows. According to a recent study, 10% of large enterprises are currently using AI agents (in some form), with more than half planning to use them in the next year and 82% within the next three years. Executives anticipate this shift will increase workflow automation by 71% and improve customer service by 64%. Expectations are high.


Agents will have to deliver. 

Our AI assistants have already proven they can deliver. Snowflake’s 2025 research found that, for early adopters, gen AI delivers: 92% report having already achieved ROI on their gen AI projects. And there is good news for our vibe coders: 62% of respondents reported measurable improvements in code quality, and 56% reported reduced bug detection and resolution time. Early adopters also reported that AI-generated personalized offers and recommendations deliver improved customer or audience engagement (63%) and higher conversion and click-through rates (55%). That’s good news for our vibe marketers. 

AI agents, however, might prove more of a challenge. Not all human-like AI behaviors are desirable. The news is rife with stories of deviant and deceptive AI behavior: In a simulation exercise, an agentic broker, told to maximize profits, committed insider trading despite knowing it was illegal, proving that, for AI as for humans, quotas can trump ethics. AI also exhibits progressive cognitive drift, like the childhood game of telephone, in which messages degrade through repeated transmission. What did we expect? The idea was to recreate human intelligence; we get the bad with the good. 

Despite these known behavioral risks, 57% of large-enterprise executives consider the potential productivity benefits to outweigh the risks. They seem confident that they will know how to manage these new and perhaps unpredictable "colleagues."


Principals will have to manage.

There has been a lot of hand-wringing about AI’s impact on jobs. Yet we’ve also heard, “AI will not replace you, but someone using AI will.” That includes knowing how to effectively leverage and manage your agentic colleagues and workforce more broadly. Management in the agentic age requires more than just vibes. It requires strong principals. Pun intended.


For years, social scientists have studied task delegation through principal-agent theory. A principal delegates tasks to an agent. Yet due to information asymmetry, misaligned incentives or conflicting interests, the agent may not act as the principal wishes. Management practices address these alignment challenges. Fortunately, many of the best practices for managing human agents also apply to agentic AI, with some new tools to guide the process. These best practices are an integral part of ensuring responsible AI.

Managers of AI agents should embrace their role as the principal, hiring and managing their agentic AI accordingly. They must:

  • Define the role with an explicit “job description.” Does it chat with customers, interpret longer documents, analyze data, flag anomalies, create content, trigger events and/or collaborate with an extended team? How much autonomy will the role require? What level of “seniority” will be given? 

  • Identify the right AI model “candidate,” either to be bought or built. Different AI and gen AI tools have different capabilities. Snowflake AI Observability starts with the evaluation of alternative LLMs to get a better understanding of the potential fit — like interviewing human prospects. 

  • “Onboard” agents with appropriate training data. Prepare your models to perform their specific function, including strict prioritization when requirements conflict. In Snowflake, easy data access and sharing expands available data assets for training. RAG architectures ensure that contextual data is on-hand for agents to access. 

  • Establish output and outcome performance metrics. Having defined the “job,” define a job well done and how you’ll measure it. With Snowflake AI Observability, you can set up an evaluation framework with the specific metrics on which your agent will be judged. For example, when evaluating a RAG agent, metrics might include coherence, context relevance of the retrieved content, groundedness in sources and answer relevance.

  • Monitor performance, invoke mitigation strategies and enforce rules. Continuously evaluate performance against established criteria, expectations and ethical norms to identify and correct anomalies. Establish confidence thresholds and mitigation strategies like augmenting training data. 

  • Instruct and empower agents to monitor themselves and their colleagues. AI agents themselves can be part of the governance. Constructive-feedback loops allow agents to review and refine their work. AI agents teach themselves to solve problems or to escalate them to a supervisor — either agentic or human — if necessary.
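The evaluate-monitor-escalate loop described above can be sketched in a few lines of generic Python. This is an illustrative sketch only, not a Snowflake API: the metric names echo the RAG metrics mentioned earlier, but the `Evaluation` class, the threshold values and the `handle` function are all hypothetical choices made for this example.

```python
# Illustrative sketch of threshold-based agent monitoring with escalation.
# All names and threshold values are hypothetical; this is not a vendor API.

from dataclasses import dataclass


@dataclass
class Evaluation:
    """Scores for one agent response, each in [0.0, 1.0]."""
    groundedness: float       # is the answer supported by retrieved sources?
    context_relevance: float  # did retrieval surface relevant context?
    answer_relevance: float   # does the answer address the question?


# Minimum acceptable score per metric (assumed values, for illustration).
THRESHOLDS = {
    "groundedness": 0.7,
    "context_relevance": 0.6,
    "answer_relevance": 0.7,
}


def review(evaluation: Evaluation) -> tuple[bool, list[str]]:
    """Return (passed, failed_metrics) for one evaluated response."""
    failed = [
        metric for metric, minimum in THRESHOLDS.items()
        if getattr(evaluation, metric) < minimum
    ]
    return (not failed, failed)


def handle(evaluation: Evaluation) -> str:
    """Accept the response, or escalate it to a supervisor (agent or human)."""
    passed, failed = review(evaluation)
    if passed:
        return "accepted"
    # Mitigation path: flag the weak metrics (e.g., for training-data review)
    # and hand the case to a supervising agent or a human.
    return f"escalated: low {', '.join(failed)}"
```

In practice, a response that clears every threshold flows through untouched, while one that scores low on, say, groundedness is routed to a supervisor along with the metrics that triggered the escalation, which is exactly the self-monitoring and escalation behavior the last two bullets describe.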

Management science (of humans) has been around for over a century. That gives us a head start, but it’s likely time to update skills for the new agentic world. Start upleveling your agentic management skills with The Essential Guide to Generative AI, and then get hands-on training with Generative AI & ML School at the Snowflake Data Cloud Academy.

And if you're thinking of following Jerry Maguire into the world of pro sports, download Snowflake’s ebook Game Changer: How Gen AI Is Revolutionizing Sports.
