Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce
⏱️ Reading Time
45 min
📅 Publication Date
June 6, 2025
👥 Authors
Yijia Shao, Humishka Zope, Yucheng Jiang, Jiaxin Pei, David Nguyen, Erik Brynjolfsson, Diyi Yang
🏷️ Tags
LLMs, Tool, Open-Source
Abstract
The rapid rise of compound AI systems (a.k.a. AI agents) is reshaping the labor market, raising concerns about job displacement, diminished human agency, and overreliance on automation. Yet, we lack a systematic understanding of this evolving landscape. In this paper, we address the gap by introducing a novel auditing framework to assess which occupational tasks workers want AI agents to automate or augment, and how those desires align with current technological capabilities. Our framework features an audio-enhanced mini-interview to capture nuanced worker desires and introduces the Human Agency Scale (HAS) as a shared language to quantify the preferred level of human involvement. Using this framework, we construct the WORKBank database, building on the U.S. Department of Labor's O*NET database, to capture preferences from 1,500 domain workers and capability assessments from AI experts across 844 tasks spanning 104 occupations. Jointly considering desire and technological capability divides the tasks in WORKBank into four zones: the Automation "Green Light" Zone, the Automation "Red Light" Zone, the R&D Opportunity Zone, and the Low Priority Zone. This classification highlights critical mismatches and opportunities for AI agent development. Moving beyond a simple automate-or-not dichotomy, our results reveal diverse HAS profiles across occupations, reflecting heterogeneous expectations for human involvement. Moreover, our study offers early signals of how AI agent integration may reshape core human competencies, shifting from information-focused skills to interpersonal ones. These findings underscore the importance of aligning AI agent development with human desires and preparing workers for evolving workplace dynamics.
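To make the zone assignment concrete, below is a minimal illustrative sketch in Python. It assumes each task carries an aggregated worker automation-desire rating and an AI-expert capability rating, both on 1-5 scales split at a midpoint threshold of 3; the field names, thresholds, and exact criteria are simplifying assumptions for illustration, not the paper's precise methodology.

```python
from dataclasses import dataclass

# Illustrative cutoffs on an assumed 1-5 rating scale; the paper's exact
# aggregation and thresholds may differ.
DESIRE_THRESHOLD = 3.0      # worker desire for automation
CAPABILITY_THRESHOLD = 3.0  # expert-assessed AI agent capability


@dataclass
class TaskRatings:
    """Hypothetical per-task ratings aggregated from workers and AI experts."""
    task: str
    worker_desire: float      # how much workers want this task automated (1-5)
    expert_capability: float  # how capable current AI agents are at it (1-5)


def assign_zone(r: TaskRatings) -> str:
    """Place a task into one of the four WORKBank zones by jointly
    considering worker desire and technological capability."""
    high_desire = r.worker_desire >= DESIRE_THRESHOLD
    high_capability = r.expert_capability >= CAPABILITY_THRESHOLD

    if high_desire and high_capability:
        return 'Automation "Green Light" Zone'   # wanted and already feasible
    if high_capability:
        return 'Automation "Red Light" Zone'     # feasible but not wanted
    if high_desire:
        return "R&D Opportunity Zone"            # wanted but not yet feasible
    return "Low Priority Zone"                   # neither wanted nor feasible


# Example with made-up ratings for a single task:
print(assign_zone(TaskRatings("Schedule appointments with clients", 4.6, 4.2)))
# -> Automation "Green Light" Zone
```

Under this reading, tasks with high desire and high capability land in the "Green Light" zone, while the mismatched quadrants surface either deployment risks ("Red Light") or research opportunities (R&D Opportunity).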
Why It Matters
A first-of-its-kind, large-scale audit of worker desires and AI agent capabilities across occupational tasks in the U.S. workforce. It moves beyond a simple automate-or-not dichotomy, introducing the Human Agency Scale (HAS) to quantify the preferred level of human involvement. The research offers actionable insights for prioritizing AI agent development that aligns with human needs, revealing critical mismatches between current investments and the areas with the greatest potential for productivity and societal gains.
Key Findings
✓ A novel auditing framework and the WORKBank database, built on worker preferences and AI expert capability assessments across 844 tasks and 104 occupations.
✓ Identification of four task zones (Automation "Green Light", Automation "Red Light", R&D Opportunity, Low Priority) to guide AI agent development priorities.
✓ Revelation of a disconnect between worker desires for automation and current LLM usage patterns.
✓ Insights into how AI agent integration may shift core human skills from information processing to interpersonal competence.
