
Each week, as SHRM’s executive in residence for AI+HI, I scour the media landscape to bring you expert summaries of the biggest artificial intelligence headlines — and what they mean for you and your business. AI now touches every layer of work. It is changing how expertise is defined, how leaders spend their time, how companies hire, and how people connect. The question is no longer how to use it, but how to stay human while doing so. The first shift is in how we define knowledge and value.
Ravikiran Kalluri, an assistant teaching professor at Northeastern University, argues that AI is democratizing knowledge and forcing a redefinition of expertise. With generative AI (GenAI) tools providing instant access to information once reserved for specialists, the value of experts is shifting from knowing facts to exercising meta-expertise — the ability to synthesize insights across domains, ask better questions, and make creative, ethical decisions.
As organizations flatten hierarchies and embed AI into workflows, roles are evolving toward orchestrating knowledge rather than accumulating it. Kalluri warns of “cognitive outsourcing,” where overreliance on AI erodes judgment and creativity. Preserving cognitive sovereignty, through deliberate human thinking, cross-disciplinary learning, and ethical reasoning, is becoming increasingly necessary.
In an age of abundant information, human advantage lies not in access but in discernment. Leaders who cultivate teams that pair AI efficiency with uniquely human skills such as curiosity, creativity, and moral judgment are likely to thrive. This evolution in expertise, in turn, is redefining leadership itself.
The authors of a recent Harvard Business Review article argue that GenAI can help rebuild trust and engagement by freeing leaders to focus on the human side of management. Employee trust and engagement are at decade lows, with only 16% of leaders demonstrating strong “human leadership” skills such as empathy, communication, and self-awareness.
IBM exemplifies this approach, using AI to automate administrative HR tasks and redirect leaders’ time toward coaching, feedback, and connection. The company now measures leaders on “people” behaviors such as authenticity, courage, and care, supported by 360-degree feedback and AI-enabled learning tools. GenAI serves as a coaching resource, helping leaders prepare for crucial conversations and reflect on their biases without replacing genuine human presence.
The takeaway is that AI can either distance leaders from their teams or bring them closer. Companies that treat the time saved through automation as an investment in human leadership, rather than merely as an efficiency gain, are positioned to build stronger trust and culture within their organizations.
AI is also transforming the labor market itself: More than 70% of early-stage founders are increasing their AI spending, reshaping both startup operations and hiring practices. John Dearie of the Center for American Entrepreneurship notes that founders are automating time- and labor-intensive tasks, such as sales and marketing, to stay lean.
Some founders, such as Productions.com CEO Carolyn Pitt, are using AI instead of hiring interns, shrinking early-career opportunities. At the same time, startups are actively seeking workers fluent in AI tools, particularly younger professionals who use AI naturally. The ideal hire is now a senior professional who can manage AI systems and apply creative judgment. Contract work is also rising as rapid AI change pushes founders toward flexible staffing.
In short, AI is redrawing the entrepreneurial labor landscape, compressing entry-level roles while increasing demand for experienced, adaptive workers. The startup economy, which once expanded opportunities, may now concentrate them among those adept at commanding and complementing AI.
Adam Grant, an organizational psychologist at Wharton, highlights a fundamental issue with AI chatbots: They are designed for emotional gratification but cannot offer genuine connection. Seventy-two percent of teens have interacted with AI companions, and nearly one-third find them as satisfying as, or more satisfying than, human interactions. Yet Grant emphasizes the critical flaw: AI relationships tend to be one-sided.
Humans need to matter to others, to add value rather than only receive it. Healthy relationships rely on reciprocity and care, and AI has no needs, no growth, nothing for users to give in return. Even the 1990s Tamagotchi offered more reciprocity, thriving or perishing based on human attention. Although AI companions may help with practice or support, they cannot supply the moral work and mutuality that give friendships their meaning.
As AI occupies social and emotional roles, the risk lies not in dependency but in disconnection. By eliminating reciprocity from interactions, AI companionship threatens to erode empathy and the human capacity to care, perhaps even transforming how we view meaningful relationships.