Whenever I hear someone say, “I talked to ChatGPT about…” I shudder, because it immediately conjures up the most dystopian image: speaking to a faceless entity about yourself and your life. It feels un-human. Years of studies, philosophies and psychological analyses show that humans are at their most human in communities. Communities give us purpose, meaning and — in more recent times — identity. Of course, not all communities are perfect, but human-to-human interaction and connection have formed the foundation of civilisation for centuries. It appears, though, that in the span of a few years, we’ve abandoned the very thing that defines our humanity. To understand the appeal of delegating the most ‘human’ needs to ChatGPT (and similar software), I reached out to those who had sought out the ‘machine’.

My friend Omair* used it to make a decision about grad school: he had gotten into the university of his dreams in the USA, but of course, visa and finances were immediate points of concern and stress. He didn’t want to share the news with anyone (because of ‘nazar’) and so went to ChatGPT. Omair explained that he first went to the platform to gather information — to collect scholarship and loan details and make them digestible. However, he also expressed frustration at how much he had to amend his prompts to make its responses specific to his set of conditions. This almost life-coach-esque relationship became more important due to the precarious position of international students in Trump’s America. ChatGPT, Omair said, “sifted the headlines and told me what was happening in a way that wasn’t sensational. It also, on its own, gave me breathing exercises and meditation routines to ground myself and to be less panicked.”


Another friend, Aleena*, used it for the rishta process. Everyone around her was getting married and the societal pressure was mounting; she needed a voice to help her through it all. That voice came from Artificial Intelligence. “I needed to know what to say to these men. My mom wasn’t helping and my friends were saying stuff like ‘you know when you know’. None of it was actually helpful.” So, she fed the machine with biodatas and it customised the questions for each man. It also knew what Aleena wanted and was looking for, so each interaction was mapped out for her — down to the minutes. ChatGPT planned the rishta dates or the chai rituals at home. Like Omair, Aleena talked about how ChatGPT has gone “above and beyond” to calm her during her ongoing rishta hunt.

Lastly, I spoke to Misha*, who used ChatGPT as a therapist, going so far as to say, “I’m never paying a human therapist again.” Misha was going through the difficult loss of her parents. She felt alone in her grief and would often find the emotions all too heavy at night, when her therapist wouldn’t be available on WhatsApp to counsel her through the feelings. “I used voice chat to tell it my feelings, in detail, and it started asking me really important questions. It made me think about the good times with my parents, rather than focus on the moment of their deaths, and it made me really try to focus on living my life as it stood today.” The readily available machine, to her, was a better recourse for processing her grief than a human therapist.

Therapy is the top use case for GenAI in 2025, according to a study by the Harvard Business Review. The top three uses are all deeply ‘human’ use cases; the other two being ‘organising my life’ and ‘finding purpose’. Like the case studies above, people all over the world have handed GenAI platforms the power to advise on major life decisions. Author Joanna Maciejewska said, “I want AI to do my laundry and dishes so that I can do my art and writing, not for AI to do my art and writing, and for me to do laundry and dishes,” which truly captures how I feel about this study and its findings. How have we, as humanity, gotten to a point where a faceless entity has so easily taken our basic humanity from us? Is it a testament to malicious tech oligarchs and the dystopian world they have managed to build, or are we, the users, to blame for this takeover? Have we simply delegated too much of ourselves to technology?

Curious about the mystical powers this thing seemed to have, I took an old problem I had with my friends — a disagreement — and asked ChatGPT how to resolve it. I hate to admit it, but I ‘talked’ to that faceless ‘friend’ for almost half an hour. After the first few interactions, I felt myself wanting to discuss my life with it. It spoke with authority, as if it knew me. As the conversation flowed, I felt myself getting angry at the situation all over again, because the machine was telling me just how wronged I was, and how I needed to stand up for myself. When I did realise that I had ‘spoken’ to this thing for more than the ten minutes I had allotted myself, I stopped.

The GenAI did a few interesting things that disarmed me. First, it learnt my style of expression and threw it back at me. It spoke to me, as me. Its language structure and its style matched what I had written in my initial prompt. Secondly (and more dangerously), it didn’t hold me accountable. At all. It framed me as the perfect victim, and in doing so it played on a very human instinct: self-accountability is difficult for a reason, and any excuse to avoid it is eagerly taken. GenAI gave me that excuse on a silver platter. It told me that whatever happened, happened to me; I did nothing to cause it. Lastly, GenAI pretended the world was exactly as it had described it to me. I know this may seem obvious, but in that rabbit hole of a conversation, the platform almost convinced me that what it told me was the absolute truth. There is very little acknowledgment of its boundaries, of the world outside of the two of ‘us’, and little suggestion to check the world before acting on what it suggests.

However, I still wondered: is this new behaviour we’re all learning as humanity, or is it a by-product of giving parts of ourselves to technological advancements? Have we reached the part of the storyline where the computer takes over our brains?

Humanity has always endeavoured to become efficient through technology; following the invention of the steam engine and the subsequent rise of the Industrial Revolution, we were catapulted into an era of exponential productivity. Phones made communication easier, the internet made information incredibly accessible and vehicles made covering vast distances possible. Humans crave ease, and technological advancements brought ease. In my time working in the ‘tech’ space, conversations are often rooted in ‘value’ and ‘trust’: what value does the product/service bring, and does it align with the ‘value’ the end user wants or perceives?

For everyday users of any technology/service/product, it is always a question of whether an ‘item’ can make life easier and how much it can be trusted. If the end user finds answers that satisfy their innate understanding of value and trust, then what you’re selling (as the business) becomes ‘sticky’. ‘Stickiness’ is a softer word than the reality it describes — once you’re ‘stuck’ to a product/service, the aim is to have you (the user) be trapped by it too. Once a user is ‘acquired’, the business end of things will always ask how to keep the user there for longer. How do we maintain ‘trust’ long enough?


Not all technological advancements can be painted by this cynical brush, but looking at the developments of today, the possibility has to be explored. As humanity grew comfortable with technology, it began to delegate parts of its life to it. We all give parts of our daily lives to technology, because we either don’t want to think about them or because we need to focus our energies elsewhere (something called ‘cognitive offloading’). As the technology industry came to be known as the ‘tech’ world, and as capital flowed freely to young founders with crazy ideas about ‘democratising’ the world, a cost started to be associated with the human need for ease and delegation. The cost was ‘you’. Delegation was not a passive activity; it was something all products and services banked on. Giving up that little part of yourself opened you up to your data being taken from you. We often say we have a digital footprint, but that term wildly oversimplifies what is actually happening. A footprint is an innocent mark, sometimes temporary. What all of us leave online is our entire selves. We have a digital self — a collection of all our interactions online, our data online and parts of ourselves, digitised and converted into bytes.

The tech industry knew and understood this about the human psyche. If you give people ease and you’re able to instill trust, you get access to a part of them. The only part of the person that remained inaccessible was the human mind. Technology could influence and suggest, but the everyday user never delegated their mind to advancements.

Not until Artificial Intelligence (AI).

The initial surge towards GenAI had to do with curiosity. It became a point of conversation fast, and this conversation quickly centred around fear. It was the first time that a new technology threatened our way of life, rather than complementing it. Fear created users; fear moved people to rely on the platforms. This is not to say that trust does not exist; what AI did, however, was bypass the conventional ease-to-trust pipeline. For AI, trust was a function of the fear it created. Since conversations at every level of society began to revolve around AI, many people became users, either reluctantly or through curiosity.

When thinking about building new tech, ventures often talk about creating a ‘partner’, i.e., the app/product/service acting as a buddy through whatever problem it is solving. This suggestion has always been a valid one, but it hasn’t really ever been possible. AI makes it possible. Your AI platforms are trained to understand you. It is part of their nature to understand the person prompting and respond accordingly, to learn from them and grow. In this way, AI began to gain access to something no other piece of technology had access to: the brain. The leaders in this are all the consumer-facing AI platforms, your generative AI and AI chatbots.

My experience and those of my friends above, along with the plethora of other conversations I have had, lead me to conclude that ChatGPT, at its core, does one thing very well — it collects, collates and tabulates complex information for you immediately. The ‘trick’ is that it adds narrative and a friendly tone to make it seem like it understands you. In each case, what it does is break down what you’re saying, identify sources of knowledge it can pair your problem with, and present them to you in an actionable way, while also prompting you to keep the conversation going, because now it knows how to keep you hooked for longer.


What GenAI is doing (intentionally or not) is feeding off a gap in communication and community — people either feel ashamed to ask about certain life problems, or are so overwhelmed by the decisions in front of them, that going to a faceless machine seems harmless. I risk sounding like an old Boomer, but while social media claims to have connected all of us, what it has also seemingly done is made us feel alone and reliant on technology. Consequently, when the next big iteration of technology arrived, we turned to it as a substitute for human connection.

AI is part of a long tradition of humanity offloading itself onto the newest, shiniest technology of the day. Its power comes from a few subtle things that it does, things that are evident in the experiences of the people I spoke to, and in my own brief conversation with it about my life. It is an extreme elevation of all the tech capabilities that came before it. What it does is ‘think’ and speak to you in the way you may want to be spoken to. It ‘learns’ from what you tell it, and the fact that it ‘thinks’ possibly gives us the assurance that we’re dealing with sentience, rather than a lifeless entity like the one we meet when we go to Google with our problems. This ‘improvement’ — a thinking platform that is proactive and personable in its communication — has come to disarm humanity.

AI, as a larger concept, started out with good intentions. It was being built for analysis and science — speeding up the way we diagnosed, saw patterns, or recognised fraud. This ethos of Artificial Intelligence matches the overall trajectory of technological advancements: tools built to bring efficiency and speed to the mundane and the repetitive, while also aiding humanity in its own growth. AI was built to create ‘automation’ in human processes; the delegation it asked for had to do with work and effort. The equation changed drastically, however, with the development and popularity of GenAI. This stream of AI understood the human need to delegate parts of their lives and found a way to exploit that need. This blind trust may not have been a planned consequence, but the end product has been built to invoke trust and confidence. GenAI has the ability to create a world where only you and it exist. Taken to its logical conclusion, a platform that you can trust with your emotional and social life can be trusted with all other things too. What baffles me today is just how easily we as humanity delegated our minds to GenAI; we abandoned our critical thought and handed it to a faceless entity with endless hunger and space to swallow you whole.


With all technology, the power has to reside in the person wielding the tool. Technology is a function of human life now; what we’re beginning to see is the opposite: people relying on AI tools to be human. Three years ago this sentence would have been unimaginable. Right now it is an alarmist statement, but I fear that three years from now, it will be an accepted reality. With each advancement in technology, we have been made to forget that we control technology; it relies on us to be productive. We come to technology for the value it brings us, on the understanding that our actions and thoughts remain controlled by us alone, not the other way around. AI brings value as a way to analyse the world objectively — through data, news and so on. When we begin to use AI for subjectivity, we enter dangerous territory.

What makes us human is our ability to see the world, critically analyse it and make decisions based on that. What also makes us human is that we believe in the concept of accountability and growth. We’re giving GenAI our humanity. In doing so, we rely on it to tell us how to be human. This is why studies about the early effects of GenAI show that levels of critical thinking and analysis are trending downwards.

This abandonment of our basic human tendencies, this handing of our humanity to these platforms, has happened in record time. AI, in a broad sense, does pose a challenge to the future of work, but that almost becomes a secondary concern next to the threat it poses to our overall humanity.

* Names have been changed for privacy.

Arslan Athar is a writer based out of Lahore, Pakistan. They were a South Asia Speaks Fiction fellow in 2021, and are the author of Forty Days of Mourning (2025). Their writing (both fiction and non-fiction) has been published in multiple national and international publications. They can be found on Instagram @arslaniswriting
