title: "Kate Kellogg" type: person tags: [#human-ai-interaction, #autonomy, #safety] created: 2025-01-30 updated: 2025-01-30 status: stub

Kate Kellogg

MIT Sloan professor studying how organizations implement agentic AI in knowledge work, with emphasis on governance, accountability, and the practical challenges of real-world deployment.

Overview

Kate Kellogg is the David J. McGrath Jr. Professor of Management and Innovation at the MIT Sloan School of Management. Her research focuses on helping knowledge workers and organizations develop and implement predictive and generative AI products to improve decision-making, collaboration, and learning. Within the agentic AI space, Kellogg is particularly focused on the organizational, governance, and accountability dimensions of deploying autonomous AI systems — areas she argues are often underestimated relative to the technical challenges.

Kellogg's work is grounded in empirical field research, including a study that deployed an AI agent in a clinical oncology setting and found that the dominant cost of real-world agentic AI lies not in model tuning but in unglamorous data engineering and stakeholder alignment work.

Contributions to Agentic AI

  • Co-authored a 2025 paper on deploying an AI agent to detect adverse events in cancer patients from clinical notes, revealing that approximately 80% of project effort was consumed by data engineering, stakeholder alignment, governance, and workflow integration — not prompt engineering or model fine-tuning.
  • Research highlighting the importance of robust permission-based systems as AI agents gain access to diverse enterprise datasets and systems, framing cybersecurity as an underappreciated agentic risk (see the permission-gating sketch below).
  • Advocacy for governance boards at the organizational level and delegation of specific safety monitoring responsibilities to designated individuals.
  • Emphasis on shared, robust metrics as both an implementation prerequisite and a persistent challenge: without them, organizations cannot prove value or detect whether systems are introducing new risks.
  • Warning against treating monitoring as a one-time project cost rather than a permanent operational expense (illustrated by the drift-check sketch below).
  • Articulation of accountability gaps: organizations must clearly delineate responsibility when autonomous agents err or cause harm, especially in minimal-supervision workflows.
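
The permission-based systems Kellogg highlights can be made concrete with a minimal sketch. Everything below is a hypothetical illustration rather than a cited system: the ToolPolicy class, the scope strings, and the tool registry are invented. The point is simply that an agent's tool call is checked against an explicit allow-list of data scopes, and refused by default, before anything executes.

```python
# Minimal sketch of permission-gated tool access for an AI agent.
# All names here (ToolPolicy, scope strings, tools) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ToolPolicy:
    """Allow-list of data scopes granted to one agent, plus an audit trail."""
    allowed_scopes: set[str]
    audit_log: list[str] = field(default_factory=list)

    def permits(self, tool_name: str, required: set[str]) -> bool:
        missing = required - self.allowed_scopes
        verdict = "ALLOWED" if not missing else f"DENIED (missing {sorted(missing)})"
        self.audit_log.append(f"{tool_name}: {verdict}")
        return not missing


# Each tool declares up front which enterprise data scopes it touches.
TOOL_SCOPES = {
    "read_clinical_notes": {"ehr:notes:read"},
    "update_patient_record": {"ehr:notes:read", "ehr:records:write"},
}


def run_tool(policy: ToolPolicy, tool_name: str) -> str:
    # Deny by default: the call proceeds only if every scope is granted.
    if not policy.permits(tool_name, TOOL_SCOPES[tool_name]):
        return f"refused: {tool_name}"
    return f"executed: {tool_name}"  # real tool dispatch would happen here


if __name__ == "__main__":
    # Agent granted read-only access: the write-capable tool is refused.
    policy = ToolPolicy(allowed_scopes={"ehr:notes:read"})
    print(run_tool(policy, "read_clinical_notes"))    # executed
    print(run_tool(policy, "update_patient_record"))  # refused
    print("\n".join(policy.audit_log))
```

Deny-by-default plus an audit trail is the design choice worth noting: a refused call is logged the same way an allowed one is, which is what later accountability review depends on.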

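Kellogg's warning that monitoring is a permanent operational expense can be sketched the same way. The metric names, baseline values, and tolerance below are invented for illustration; none are figures from her study. What matters is the shape of the job: a recurring comparison of live agent metrics against an agreed baseline, not a one-time validation pass.

```python
# Sketch of a recurring agent-monitoring check. Baselines, metric names,
# and the tolerance are illustrative assumptions, not study figures.
BASELINE = {"precision": 0.85, "recall": 0.80, "escalation_rate": 0.10}
TOLERANCE = 0.05  # maximum tolerated absolute drift from baseline


def drift_alerts(live: dict[str, float]) -> list[str]:
    """Return an alert line for every metric that drifted past tolerance."""
    return [
        f"ALERT {name}: baseline={base:.2f}, live={live[name]:.2f}"
        for name, base in BASELINE.items()
        if abs(live[name] - base) > TOLERANCE
    ]


# Meant to run on a schedule (e.g. daily) for the life of the deployment,
# which is the recurring cost Kellogg argues must be budgeted for.
print(drift_alerts({"precision": 0.78, "recall": 0.81, "escalation_rate": 0.12}))
# -> ['ALERT precision: baseline=0.85, live=0.78']
```

An agreed baseline is also where the shared-metrics point bites: without one, there is nothing to compare live behavior against, so neither value nor new risk can be detected.
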
Affiliation

  • Institution: MIT Sloan School of Management
  • Title: David J. McGrath Jr. Professor of Management and Innovation

Key Works

Source Material

  1. "Agentic AI, explained," MIT Sloan Ideas Made to Matter. Primary source for Kellogg's research findings, governance recommendations, and implementation guidance.

Related Pages

See also: Agentic AI, AI Safety and Alignment, Human-AI Collaboration, Sinan Aral, John Horton

Open Questions

  • What specific governance board structures have proven effective for agentic AI oversight in enterprise settings?
  • How do accountability frameworks for agentic AI errors differ across sectors (healthcare vs. finance vs. retail)?
  • What does a robust, gaming-resistant metric suite for agentic AI value look like in practice?

Page type: person | Status: stub