How Not to Be a Team in the AI Age
By Avik Mukherjee | Apr 16, 2026
Recently, a teammate and I independently worked on the same problem overnight, each with good intentions, each trying to move things forward. When we shared our updates in the group chat the next morning, neither of us knew what the other had been doing, and by then we had both already formed our own positions.
The conversation that followed was tense in that low-grade way that's hard to name. Not a fight. Just two people talking past each other, each trying to say something they couldn't quite reach. At some point someone just said "leave it" — and that was that.
What struck me afterward was not the overlap itself — that kind of thing happens. What struck me was how normal it had become. Nobody conspired to exclude anyone. There was no miscommunication in the traditional sense. People were just working, heads down, assisted by their tools, producing things. And the discussion that should have happened at the start happened at the end, when positions were already set.
That pattern is what this post is about.
What Changed (And What We Didn't Notice)
The shift happened gradually, the way most things do. AI tools got better. Everyone adopted them. Work started happening faster. On the surface, this looked like progress.
But underneath, something was rerouting.
Before, when someone hit a hard problem, they'd ask a colleague. That conversation had side effects. You'd learn how they think. They'd learn what you were building. Someone in earshot might connect a dot you'd both missed. The problem got solved, but more importantly, context got shared.
Now, the first instinct is to ask the AI. Which is reasonable — it's fast, it doesn't judge, it's always available. But the conversation never happens. The colleague never learns. The context stays local.
Over weeks, this compounds into something uncomfortable: a group of individuals who happen to share a codebase, rather than a team that shares understanding.
The "Paste and Expect" Problem#
Here's a specific pattern I kept seeing.
Someone would use an AI tool to generate something — a design doc, a migration plan, a component, a set of tasks. Then they'd share it with the team, often with very little preamble, as if the output spoke for itself.
And technically, it usually did. The output was coherent. It looked considered. It had all the right sections.
But the person sharing it often hadn't gone through the process of forming their own opinion about it. They hadn't wrestled with the tradeoffs. They'd just... curated.
And so when questions came up — Why this approach and not that one? What happens in the edge case? Did we think about X? — the answers were thin. Because the thinking hadn't really happened. The AI had done the thinking, and the human had forwarded the memo.
This is subtle and easy to miss. The output looks like ownership. But there's no mental model behind it.
The Invisible Membership Problem
What made that kind of situation hard wasn't that anyone was wrong. The intentions were right. The work was real. The problem was structural — both people had been operating in their own loops, producing things rather than discussing things, and by the time they compared notes, the window for easy alignment had already closed.
I started feeling uncertain about whether I was actually part of the team or just adjacent to it. Not because anyone excluded me. But because the usual signals of membership had gone quiet.
Being on a team used to feel like something. You knew the ongoing conversations. You knew who was stuck on what. You had a rough sense of what everyone was thinking, not because you read a doc, but because you talked. Membership was something you felt through continuous low-level contact with the people around you.
AI-heavy async workflows can quietly hollow that out. Decisions get made in tools. Context lives in prompts that nobody shares. The group chat is full of outputs, not process. You can follow all the right channels and still feel like you're watching from outside.
I don't think anyone on my team intended this. I don't think I'm alone in experiencing it.
What I Think Is Actually Happening
We optimized for output speed, and in doing so, we removed most of the friction that used to serve a social and epistemic function.
The friction of explaining your problem to a colleague was annoying. It was also how you refined your thinking, how you transferred context, how you built shared mental models. We didn't remove the friction because it was useless — we removed it because AI made it optional. Those are different things.
Teams are not just coordination mechanisms. They're also how individuals stay calibrated to each other. When that calibration breaks down, you don't notice it in the sprint metrics. You notice it when a decision gets made that three people would have pushed back on if they'd known about it, or when someone spends two weeks building something another person already tried and abandoned.
The irony is that AI is supposed to make teams more capable. And in raw throughput terms, it often does. But capability without coherence is just parallel work that happens to share a repo.
Two Things Worth Trying
I'm not going to prescribe a whole framework here. But two things have made a noticeable difference for me.
Talk about the thinking, not just the output. When sharing AI-assisted work, add a sentence or two about where you agreed with it, where you didn't, and what you're still unsure about. It sounds small, but it signals that a human made a judgment call — and it gives your teammates something real to respond to.
Deliberately leave room for conversation that isn't about closing a ticket. Not a standup. Not a retro. Just occasional unstructured time where people can say what they're actually thinking about. The insights that fall out of those conversations are usually the ones that don't fit into any async format.
The Longer Question
I'm still sitting with this one: what does it mean to be a good teammate when AI can do most of what used to constitute "being helpful"?
I think the answer involves being more present, more opinionated, and more curious about how other people are thinking — not just what they're producing. That's harder than sharing a well-formatted AI output. It's also, I think, what actually makes a team.
There are conversations where you feel something clearly but can't find the words fast enough, and by the time you try, the moment has passed. So you just say "leave it" — not because you don't care, but because you've run out of ways to reach what you actually meant. I've been in a few of those lately. Writing this is my attempt at a slower version of the same conversation.
TL;DR: AI tools are making individuals faster but teams slower. When everyone works in their own assisted loop — generating, curating, shipping — context stops flowing between people. The conversations that used to transfer understanding get skipped. Decisions happen in isolation. Membership starts to feel uncertain. The fix isn't to use AI less. It's to stay deliberate about the parts of teamwork that AI can't replace: sharing your thinking, not just your output, and leaving room for the kind of unstructured conversation where alignment actually happens.
If this resonated or you've felt something similar, I'd genuinely like to hear about it. Find me on GitHub, X, Peerlist, or LinkedIn.
Feedback welcome. Call out mistakes. I'd rather be corrected than stay wrong.