The new challenge for IT leadership: teaching people to think with the machine, not to let it think for them
Recently, I was talking with a client’s VP of Operations when the topic naturally turned to artificial intelligence. I mentioned that, within my technology area, I had made available to the team not just one, but three different AI tools for day-to-day use. He was visibly surprised. The reaction caught my attention: as if it were something bold, almost experimental.
But in practice, there’s nothing experimental about this anymore. Artificial intelligence is already deeply integrated into our workflow. What’s still new—and perhaps the real challenge—is the cultural shift that comes with this integration.
Three tools, three purposes—and one goal: learning alongside AI
When we started testing AI tools, the initial idea was simple: figure out what truly works for our kind of work. Over time, it became clear that each tool has its own personality, and the value lies in understanding how they complement one another.
Today we use, for example:
• GitHub Copilot, which provides direct support for CI/CD actions and commands, helps speed up pipelines, and integrates with our development IDE, which helps a lot when we still need to put our hands directly on the code;
• Anthropic’s Claude, the preferred companion for code development, with excellent results on more structured tasks;
• Gemini, which supports the team in functional solution design, data analysis, and report generation;
• and Memex, a commercial solution that brings some of these tools together in a single console running locally on the machine, friendlier and more organized, and one that enables “vibe coding,” a more fluid and collaborative way to develop.
Forcing a single tool would be like asking everyone to write with the same pen. What matters is that the text—or in our case, the outcome—comes out better.
What interests me most: who doesn’t use it
Periodically, I pull reports on AI-tool usage. But contrary to what many imagine, I don’t do it to reward those who use them the most. I do it to understand who still doesn’t use them—and why.
That is, perhaps, the most revealing metric of a team’s digital culture.
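To make that concrete: the check itself can be trivial. Here is a minimal sketch in Python, assuming the vendors’ usage logs can be exported as CSV; the file names, column names, and script are hypothetical, not any real vendor’s API:

# find_nonusers.py (hypothetical sketch): cross-reference the team
# roster against exported AI-tool usage logs to list who has used nothing.
import csv

# Assumed inputs: roster.csv has a "name" column; usage.csv has one
# row per usage event, also keyed by "name".
with open("roster.csv", newline="") as f:
    team = {row["name"] for row in csv.DictReader(f)}

with open("usage.csv", newline="") as f:
    active = {row["name"] for row in csv.DictReader(f)}

non_users = sorted(team - active)
print(f"{len(non_users)} of {len(team)} show no AI-tool usage:")
for name in non_users:
    print(" -", name)

The script is the easy part; the conversation each name on that list deserves is the real work.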
When someone doesn’t use AI today, it isn’t due to lack of access. It’s due to insecurity, fear, skepticism, or simply not knowing. And each of those reasons says a lot about where we still need to evolve.
When I talk to those who avoid using it, I hear recurring patterns:
• “I’m afraid of becoming dispensable and/or irrelevant and being replaced.”
• “This is going to make lots of mistakes and I’ll waste time fixing them.”
• “I don’t know how to write the right prompt.”
But behind it all lies a cultural issue. Working with AI requires a different kind of reasoning—more literal, more declarative, more explicit. You can’t rely on “this should already be implied” anymore.
The machine only understands what you’re able to explain clearly.
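A made-up example shows the contrast (the task here is hypothetical, but the pattern comes up daily):
• Implicit: “Fix the file import, it’s too slow.”
• Explicit: “The import reads a 500 MB CSV line by line and validates each row against the database; batch the reads, cache the validation lookups, and keep the current error codes and log format.”
The first prompt invites guesswork; the second gives the model something it can actually act on.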
And at that point, AI ends up exposing not only the technical level, but the quality of the user’s thinking. Personally, I don’t believe that uniquely human trait will be replaced anytime soon…
When the impossible becomes an example
In the first half of 2025, we faced a challenge that seemed out of reach: building a new solution for processing inbound files from scratch. The team told me it was practically unfeasible, that it would require a large team and months of work. The most optimistic estimates pointed to 1,200 hours of development. And, to be pragmatic, the team was already at 100% allocation with other demands.
To prove otherwise, I decided to run an experiment. I took the challenge on myself, solo, and used Memex as the foundation for development. In the end, I delivered the complete functional concept in approximately 240 hours of work, roughly a fifth of the most optimistic estimate.
I didn’t do this to compete with the team, but to show what’s possible when you learn to work with AI, not just use it.
That real example broke a huge barrier. Resistance began to fade. And, interestingly, those who became most engaged afterward weren’t the beginners, but the more experienced professionals—the ones who had been more skeptical.
Sometimes what a team needs isn’t another training session.
It’s an example that breaks the impossible.
The paradox of resistance
One point that intrigues me—and, talking with peers, I see it in almost every organization—is that those most resistant to AI tend to be the most talented. They’re people with deep technical knowledge, years of experience, sharp analytical ability.
But that’s precisely why they resist. Because they know what goes wrong. Because they take pride in what they’ve mastered. And because, deep down, they realize AI is not messing with their tasks, but with their cognitive comfort zone.
The truth is that AI won’t replace those who are good. It will replace those who stopped learning.
And that’s the provocation I most often repeat to my team. What sets the relevant professional apart from the obsolete one today isn’t accumulated knowledge—it’s the willingness to relearn and adapt.
The results that really matter
The productivity gains are evident: faster deliveries, higher-quality reports, less rework. But the deeper impact goes beyond that.
Today, users from other areas of the company—without a technical background—are starting to create their own reports, dashboards, and models. Without depending on the IT team. That frees the technology team to focus on product innovation and strategic challenges.
But there’s an even more powerful side effect: the feeling of autonomy. Seeing people who used to say “this isn’t for me” now realizing “I can do this” is transformative.
The democratization of creating is AI’s true legacy.
From carbon to silicon
I often compare AI tools to junior programmers fresh out of school: full of enthusiasm, solid theoretical knowledge, but still inexperienced. They make mistakes. They hallucinate. They repeat old errors. They insist, often with a stubbornness that makes you want to scream, on paths that aren’t always the best.
In other words: nothing new on that front.
The difference is that now we’re dealing with apprentices of silicon, not carbon.
Leadership’s role isn’t to replace these “artificial juniors,” but to teach humans to supervise them intelligently. AI will make mistakes too. But it fails fast, and that’s great, as long as there’s someone who can learn from the mistake even faster and help correct it.
In practice, what changed wasn’t the type of error, but the type and profile of the apprentice.
We’re still developing talent—only now they don’t necessarily learn from their own mistakes, but from other people’s errors.
Lead through curiosity, not control
Today’s real challenge for technology leadership is cultural. It’s not just about adopting new tools, but about cultivating curious minds, willing to explore and reinvent themselves alongside AI.
Measuring who uses a tool well is important. But, at least for me, what proved even more revealing was asking:
“Who still doesn’t believe, why don’t they—and what can I do to change that?”
Modern IT leadership isn’t about controlling technology adoption.
It’s about creating an environment where thinking with the machine is natural, collaborative, and—above all—human.
And, in the end, that’s what AI teaches us most:
that technology doesn’t replace people—it simply reveals who is still willing to evolve.