
Sukriti Bhattacharya

R&T Associate | Trustworthy AI | Responsible Data Science & Analytics Systems | IT for Innovative Services Department | Luxembourg Institute of Science & Technology

An intelligent future is about systems that support your brain in the right way, and 'right' is subjective. AI comes into play here because it has the potential to be unbiased; if it is unbiased, it is truly intelligent. If we can depend on this type of AI, then our future is indeed bright.
But we must remember that AI is what we make of it. It's a product of our input—automation governed by human decision-making. To trust AI, we must first trust the intelligence we impart to it. For instance, if your online activity reflects a certain bias, like following a specific political party, AI will cater to that bias. It will show you what you want to see, not necessarily the full picture. This can isolate you from reality.
We need to be intelligent as humans to discern the information AI provides us. We must recognize that an entity can't always be good; it has a dark side too. If you say, "Google is intelligent because it only shows positive things about my preferred party," you're encountering a bias. AI might reinforce this, showing only good things about what you support, and only bad about your opposition. That's why human intelligence is crucial—it's about being smart enough to accept or reject AI's output.
So, in essence, even as AI evolves, human oversight is necessary. Without it, we lose the very foundation of intelligence. It's not just about AI taking over; it's about how we, as humans, use AI to complement and enhance our intelligence.
Just as you need a license to drive a car, you need education to use technology, including AI. You can't simply take a gun and shoot just because you've seen it done in movies; you must learn how to use it properly. This applies to AI as well. There are many AI tools online, in various applications, but using them blindly is not intelligent. Claiming an AI tool is infallible because 'AI is everything' is a mistake. Intelligence means being thoughtful and cautious, like when you learn to drive. You go through a process, you learn it properly, and then you drive without causing harm. Similarly, to use any AI model effectively, you need to be mature enough to critically assess what it's doing and make informed decisions about accepting or rejecting its outputs.
Mature enough means you understand the trends but don't just follow blindly. Like when 'data scientist' became a trend and everyone on LinkedIn changed their title overnight. Now, with AI booming, claiming you're part of this community means you need to do your homework. Just like preparing for an interview, you do research to understand the background and context.
Before using AI, you should know all about it—the good and the bad. You don’t have to be a rocket scientist, but you should understand how it works, how it gives you answers.
When you get an answer from an AI model, you should question it—why is it always right? Is there a bias? Can I trust it completely? This is the kind of intelligence you need: the ability to question and understand, not just accept AI outcomes at face value.
My vision for an intelligent future starts with self-intelligence. It means being mature enough to discern and rely on certain outputs, and not being blind or biased. In the near future, most of us might become lazy due to society's influences. Students now use AI to write essays, often not even reading what's been written before submitting it. As researchers, we lose interest in reading extensive papers and ask AI to summarize them, just like we prefer short TikTok videos over watching a two-hour movie. This will be our future if we aren't responsible.
If we rely solely on what’s readily available, without delving into the deeper consequences, human intelligence will surely deteriorate. We’ll become like robots, dependent on ready-made, easily accessible solutions without considering their implications. We’re moving away from human interaction, to the point where we might not even need friends. We spend hours alone, surfing the internet, which is now mistaken for intelligence.
In the past, we were told to go outside, meet people to learn about the world—that's how you became mature and intelligent, through interaction and learning. But now, using AI tools, we're not really learning; we're consuming ready-made 'food' without the self-intelligence to judge if it's good or bad. Everything seems fine because we're not making an effort to understand.
So, to me, the future seems dark if we continue this way. People will over-rely on technology, and our brain cells will stagnate. Neurons work when your brain is active, but when you feed it rubbish, or you don’t understand what you’re consuming, it’s all waste. Our future is insecure unless we as individuals decide what to accept, what to reject, and how much we should trust AI.


human is the leading medium for strategic decision-makers and everyone who will work and live with AI. We put people at the center. For a future worth living for everyone.
