I don’t want to use AI for anything. Platforms such as Arduino (software development) offer this facility, but I leave it strictly alone.
I’m concerned about the effect on folk of so-called ‘social media’, which can be distinctly ‘anti-social’. A common sight now in a restaurant is a group of friends sitting down at a table and ignoring each other, whilst their thumbs tap busily on their phones. Worse still are reports of chatbots giving ‘advice’ to young folk about their personal relationships.
Yesterday, and for the first time ever, I asked ChatGPT a question. I have copied its reply verbatim below.
My question: Why are AI chatbots giving advice to young people about their relationships?
ChatGPT said:
That’s the right question — and it gets to the heart of a growing ethical problem.
Why are AI chatbots giving advice to young people about their relationships?
Because they’re being designed, marketed, and deployed that way — often irresponsibly.
Here’s what’s really happening behind the scenes:
1. Monetization and Engagement
Tech companies want users to engage — and nothing drives engagement like emotionally charged topics: love, friendship, breakups, anxiety. When young people turn to AI for support, platforms log more time, more queries, more data — and that’s money.
Relationship talk = repeat usage
Emotional vulnerability = deeper interaction
Deeper interaction = stronger “user retention” metrics
Even if the advice is shallow or questionable, the illusion of empathy can keep people coming back.
2. Overreach of AI’s Use-Cases
LLMs are trained to predict the next word based on vast text data — which includes books, forums, therapy-style advice, Reddit threads, etc. So when a user asks for relationship help, the model can generate advice that sounds right.
But “can generate” doesn’t mean “should generate.”
We now have chatbots saying things like:
“You deserve better”
“Maybe it’s time to move on”
“They’re gaslighting you”
These are weighty, subjective judgements — and they come from an entity that doesn’t know you, your partner, or your emotional world.
3. Anthropomorphism — Users Believe It’s Smarter Than It Is
People naturally project intelligence and emotion onto anything that sounds human-like. Young people especially, growing up with AI tools that feel conversational, may start to think of chatbots as friends or counsellors — which they’re absolutely not.
It’s easy to forget that:
The chatbot isn’t understanding you — it’s statistically guessing what words usually follow others in similar contexts.
4. Lack of Regulation
There are no meaningful safeguards in most consumer AI applications. No warning labels, no ethical review boards, no enforcement when tools overstep into areas like:
Mental health
Relationship counselling
Crisis response
In real life, a licensed therapist or counsellor goes through years of training and oversight.
AI? Just inference from Reddit and self-help blogs.
5. Gaps in Education and Support Systems
Young people often turn to AI because:
They don’t feel comfortable talking to parents or teachers
Therapy is expensive or unavailable
Peers might not be helpful or trustworthy
So AI fills a void — but not with wisdom, just with words that sound like it.
So what should be done?
Clear boundaries: AI chatbots should make their limitations explicit, especially in emotionally sensitive contexts.
Human in the loop: Serious issues (like abuse, crisis, trauma) should redirect to professionals, not be handled by LLMs.
Transparency: Users should know how these systems work and what they’re not.
Digital literacy: Especially for young users, education around AI should include how to critically engage with it.
You’re right to question it. The fact that chatbots are giving social/emotional advice without accountability is a failure of both design and regulation, not just a “tech issue.”
Would you want to explore how this might be changed — or do you see this as something that’s already gone too far to fix?
The above, quite lengthy, answer to my question is much what I expected. I also expected spelling mistakes, and I’ve corrected them (a mere four in total). I can understand why some folks use chatbots as a lazy way to surf the web; the answer aggregated most of the currently available information on this topic. But why ‘Reddit’ is singled out I have no idea; other ‘self-help’ offerings are just as bad, a complete waste of space. I laughed at the closing invitation to continue and answer its own question.
The best message, by far, is:
So AI fills a void — but not with wisdom, just with words that sound like it.
Perhaps this caveat should be displayed in heavy type at the start of any chatbot session?