Once it’s well integrated into our lives, any new technology can be seen as an exchange: we lose something and get something else in return.

When we started using the calculator app, we lost the ability to do basic operations by hand, but we got accurate results quickly. After some time, even basic addition without it feels like a struggle.

I don’t remember the last time I used a paper map, other than in school when I was a kid learning about them. By the time I got my driver’s license, smartphones were already everywhere, so Google Maps was my navigator. I lost the ability to navigate with a map (or maybe never developed it in the first place), but I got instant directions to any destination in return.

This pattern of losing something and getting something else in return, usually in the form of convenience, appears in every technological advancement. Even adopting cars meant we walk less.

In the last few months, I’ve asked myself, “What are we losing with AI?”

AI can be seen as a form of “thinking” that can solve problems for you. Its ability to connect past context, create a plan, and produce something feels very human; too human, I’d say. The AI technologies we have in 2025 can handle entire tasks like writing, programming, or planning a marketing campaign.

This ability to take in information, connect it in our heads with our past knowledge, and produce something with our own intelligence is what makes us intelligent beings; it’s what has allowed us to get where we are. If we lose this capacity, what’s left? Prompt “engineers”?

One of the most widespread uses of AI is having it write something for you: an email, a document, or even an academic paper. A recent article on Nature.com defends writing as a form of thinking and argues that we shouldn’t stop doing it if we want to keep making progress in some scientific fields.

AI can indeed mean we lose the ability to think through complex problems, to understand the many areas of knowledge it abstracts away from us, and to have a conversation with someone simply because we want to learn from them: too many of the things that make us human. We also become dependent on this new tool for basic tasks that would otherwise be great exercises for keeping our brains sharp.

There must be a balance: a way to use this technology, like any other, that gives us the best of both worlds, where it helps us but we still appreciate and develop basic thinking and planning skills.

Maybe it can help us gather ideas, but we develop them ourselves. Or, in programming, it can help us understand obscure parts of a codebase, while we keep in our minds what a good architecture or design should look like.

I’m still exploring where that balance is, and I don’t know the exact formula, but I suspect we lose too much if we delegate all thinking and knowledge-based planning to software just because we can.

ChatGPT, Claude, and the rest should be more like a companion than the main driver. Otherwise, we won’t know how to drive anymore.