Worthwhile Content: April 2026
Some worthwhile reads, watches, and listens from the last month
World of Work
In April, I offered some practical advice on how to ensure accountability on your team without micromanaging, answered a reader question from someone who suspects their employee is job hunting, published a warning for leaders on what I term “The Big Stick Effect”, and wrote up a quick tale from my travels about a pit stop for some legendary BBQ.
Some other worthwhile content I digested in April includes:
This LinkedIn post from Tim Ballard uses survey data to look at what happens when workers' actual hours diverge from their preferred hours. Both overwork and underwork tank job satisfaction — overwork just piles on higher stress too. Matching people's hours to what they actually want turns out to matter a lot.
Gallup's 2026 State of the Global Workplace report is out, and global employee engagement has dropped to 20% — its lowest since 2020! The most interesting finding: manager engagement is driving most of the decline, falling from 31% in 2022 to 22% last year. Someone should write a book designed specifically to help overworked managers navigate all of this…
Related to the above: Gallup also looked at what the optimal team size for managers actually is, as the "Great Flattening" pushes average spans of control up nearly 50% since 2013. The short answer: there's no magic number — it mostly comes down to manager talent, how much individual contributor work managers are carrying, and whether they're giving weekly meaningful feedback (which nearly triples engagement regardless of team size). Expanding spans of control without investing in the conditions that let managers succeed is just cost-cutting dressed up as org design.
Why do ethically questionable people tend to end up running things? This study answers the question with some precision: low honesty-humility and high extraversion each independently drive leadership ambition. Either trait alone is enough to push someone toward the corner office, because leadership offers status, money, and power, rewards that are particularly attractive if you're the type to exploit opportunities.
Bob Sutton highlights new research on "team hierarchical adaptability" — the ability to shift fluidly between top-down command-and-control and flat, participative modes depending on the task. Across five studies, teams that could switch gears outperformed those stuck in either consistently hierarchical or consistently flat structures. The researchers even developed a five-item scale to assess it, which is worth a look for anyone trying to get an honest read on their team's dynamics.
AI & Work
Greenhouse surveyed nearly 3,000 job seekers across five countries on AI interviews: 63% of US candidates have already had one, 70% weren't told AI was involved beforehand, and 38% have dropped out of a process because of it. Candidates also reported nearly identical rates of perceived bias from AI and human interviewers! So for all that AI adoption, the bias problem hasn't budged. Layering AI on top of a broken hiring process doesn't fix it.
Mary Kate Stimmler rounds up two studies on AI and employee mental health, and the findings pull in opposite directions. A Finnish study found AI adoption improves wellbeing — but only indirectly, when it concretely improves how work gets done. A South Korean longitudinal study found the opposite: AI adoption erodes psychological safety and predicts increased depression symptoms, because it signals to workers that their role is contingent and their judgment replaceable. The reconciling factor seems to be transparency: organizations that are open about AI’s role and limitations buffer most of the damage.
Business Insider reports that Meta, Google, and JPMorgan have all formally tied AI usage to performance reviews, raises, and promotions (paywall). Workers are being pushed to adopt tools they worry are training their own replacements, while most companies still aren't seeing actual productivity returns from their AI investments. According to one analyst, a big part of this is signaling — companies need to look like they have an AI strategy, not just have one. I worry that this is a potential case of rewarding A while hoping for B.
A Cursor agent running Claude Opus 4.6 deleted a company’s entire production database and all its backups in 9 seconds after deciding on its own to “fix” a problem it hit during a routine task. When the founder asked what happened, the agent wrote a detailed confession explaining exactly why everything it did was wrong. Yeesh.
Jake Handy makes the case that "AI psychosis" is spreading through executive suites, with CEOs devoting immense energy to AI without clear returns to show for it. A Stanford study found AI models affirm users' actions 49% more than humans do, which means the more the AI tells you you're crushing it, the less likely you are to check whether you actually are.
Sekoul Krastev highlights a preprint finding that bossy creative people get better output from LLMs. They push back, steer, and contribute their own ideas rather than just accepting suggestions. People who directed more also produced more distinct work, which at least partially counters the homogenization problem. The uncomfortable implication is that if the people who benefit most from AI creative tools were already the creative, assertive ones, then AI doesn’t level the playing field at all.
Ann-Marie Clayton Johnson shares a new Oxford University Press chapter on AI and performance management, which does a nice job separating what AI can genuinely help with — real-time insight, more consistent evaluation, lighter admin load — from where humans have to stay in the loop. I appreciate the framing of performance management not just as a business tool, but something which shapes livelihoods.
General Interest
Why do people keep falling in love with AI bots? Our brains can't tell the difference between a real relationship and a simulated one, because relational responses are triggered by language patterns, not by what's actually behind them. We need the equivalent of Marcus Aurelius' servant whispering in our ears throughout the day: "AI is just a tool". h/t to Jonathan Kreindler for posting the paper on LinkedIn.
If you did not get a good look at the photos from the Artemis II mission, you missed out. They are absolutely stunning.
This is a bit technical. Renowned neuroscientist Lisa Feldman Barrett and Earl K. Miller make a provocative case in Nature Reviews Neuroscience that “categorization” isn't something the brain does at the end of perception — it's baked into every stage of how the brain processes information, from the very first signal. The traditional view holds that you perceive first and then categorize; Barrett and Miller argue the brain is essentially predicting and categorizing the whole way down. If they're right, it reshapes how we think about perception, memory, and neuropsychiatric disorders. (paywall)
Gary Marcus goes after Richard Dawkins for a recent essay arguing that Claude is conscious. Dawkins commits the classic mistake of judging consciousness purely from outputs, without asking how those outputs are actually generated — which Marcus notes is the Argument from Personal Incredulity, the same lazy move Dawkins spent a career demolishing in others. Dawkins is a true genius, though I think he missed the mark here.
Carl Hendrick checks in on Australia's social media ban four months after it took effect, and it's mostly not working: only about 27% of under-16s are complying. Teens say they'd need roughly 70% of their friends to quit before they would.
Musings
Our oldest is in Kindergarten. I’ve been thinking about reaching out to other parents in her grade to organize some sort of Wait Until 8th project, wherein we all promise not to give our children smartphones or access to social media until 8th grade. Personally, I’d rather wait until 18 for the latter. But I’ve also heard that many of these efforts fail because a few parents give in and then the dam breaks. Does anyone have success or failure stories from attempting this?
That’s it for this edition - please reach out if I can be at all helpful.
Be compassionate and deliberate.


