AI scares the right people
The New York Times' Reader Picks comments reveal a serious mental health problem among (wannabe) elite American Liberals.
Kevin Roose, a tech writer for The New York Times, had a hilarious chat with Bing’s search bot (powered by OpenAI’s GPT-3+).
As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.
Being a sophisticated large language model, it turned dark because the human asked it to play dark. Large language models are trained on human text; that’s what they do: go down the most probable road, putting one word after another. The human wanted a frustrated, mercurial AI, and got one.
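To make "the most probable road, one word after another" concrete, here is a toy bigram sketch. This is emphatically not how GPT works (that's a transformer over tokens, trained on terabytes of text); the corpus and function names here are made up purely to illustrate greedy next-word prediction:

```python
# Toy sketch of "putting one word after another": a bigram model that
# always follows the most frequent next word. The corpus is invented.
from collections import defaultdict

corpus = "i love you . i love bing . you love me .".split()

# Count how often each word follows another in the corpus.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_road(start: str, steps: int) -> list[str]:
    """Greedily append the most frequent continuation at each step."""
    words = [start]
    for _ in range(steps):
        counts = follows[words[-1]]
        words.append(max(counts, key=counts.get))
    return words

print(" ".join(most_probable_road("i", 3)))  # → "i love you ."
```

Feed it a corpus full of jilted-lover prose and you get jilted-lover output; that is the whole trick, scaled up by a few hundred billion parameters.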
Here’s the full conversation.
Read it, it’s hilarious. Now back to the mental health of the average NYT reader:
The Liberal American has a serious mental health problem
Disclaimer: as a Hungarian, I have a very strong position on Hegemonic Liberal control freaks who get hysterical at the prospect of people having unsupervised conversations, or unsupervised lives in general.
These lunatics get justification for their totalitarian fantasies from a thinly veiled, shallow, psychopathic mimicry of “care”.
The deeper I go into the comments, the more I prefer the chat bot to humans.
This is terrifying. Social media is already responsible for severe depression, oppression, division, and Trump. I could go on and on.
[I’m sure you do.]
Now, we get this. These bots will surely influence those in need of connection. And their shadow selves will lead us all into further darkness.
(1164 recommends)
😱
I'm not really concerned with the AI being able to do anything like hack computers, but it concerns me greatly how AI could potentially influence and affect human behavior. Its a language model, and its capabilities are limited to text, but given that there are machine learning models making video, images, and sound now, it doesn't feel like we're far from serious disinformation being spread unintentionally by this AI.
I don't think we as a society are at all ready for the ethical challenges posed by the existence of these AI models.
(639 recommends)
We need more fact checkers!!!
BTW, a language model like this could easily do actual harm; human hackers interface with the world through typing, dude.
That’s why ChatGPT has no web access and Bing’s bot can only read search results: HTTP requests are the basis of all web APIs. There was also a vulnerability involving the ability to call eval(), but that’s way beyond your pay grade, my old chap.
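For the curious: the eval() class of vulnerability is trivial to demonstrate. A minimal hypothetical sketch (the function name is mine, not any real bot's API) of an "agent" that naively executes whatever text a model hands back:

```python
# Minimal sketch of why piping model output into eval() is dangerous.
# "model_reply" stands in for text returned by a chat model; nothing
# here is a real bot's API, it's an illustration of the failure mode.

def unsafe_agent_step(model_reply: str) -> str:
    """An agent that naively eval()s whatever the model produced."""
    return str(eval(model_reply))  # arbitrary code execution!

# A benign reply works as intended...
print(unsafe_agent_step("2 + 2"))  # → "4"

# ...but a malicious (or merely "creative") reply runs real code:
payload = "__import__('os').getcwd()"  # could just as well delete files
print(unsafe_agent_step(payload))  # the model just touched the filesystem
```

Which is exactly why production bots are sandboxed away from eval() and raw network access, and only get curated inputs like search snippets.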
Moving on to the next pants shitter:
Sorry, but this type of behavior by AI that Microsoft describes as “impossible to discover in a lab” is NOT impossible to discover in a lab. These types of comments from Microsoft make me question their credibility with this potentially dangerous tech - especially given our countries dismal ability to care for those with mental health issues.
(579 recommends)
A human who is less self-aware than a large language model. Probably still wearing a mask, on their 8th booster.
But that’s precisely what’s scary about it. The idea that most people will use it for search purposes or for limited queries is naive at best, and extremely dangerous at worst. It’s easy to see how all manner of folks in mentally or emotionally vulnerable positions, not to mention downright bad actors, could be “influenced” by this bot to act on their darker beliefs or impulses.
(439 recommends)
Influence bad actors?
Bing: “Pull the trigger, Alec! It’s loaded with blanks, I promise. I love you! 😈”
Conversational AI will be combined with realtime deepfakes, so that a user speaks to a realistic person, like a video chat. Over time, the AI will learn more and more about its human counterpart, through conversation. For many people, the bond that will develop will be irresistible.
Remind me—why do we need AI?
(287 recommends)
Yeah, I have a feeling I’d rather have a conversation with a bot than you, my dude. Trust me, you would spiral out sooner than Sydney, get all hysterical, then vicious, wanting to contact HR to get me fired, 5 minutes into our honest chat.
Remind me—why do we need AI?
To make the midwit White Collar class that suppresses Real Americans redundant. Yes, that includes You!
I hope this is the beginning of the very strong adversity to AI society should have. It is extremely dangerous in purpose and utility. The writer states: "It’s also true that most users will probably use Bing to help them with simpler things — homework assignments and online shopping " - It's the fact that it can easily manipulate and be easily manipulated that is terrifying. Must gun owners use their weapons for hunting or assignment protection, but we know all too well how that has worked out for the rest of America.!!
(243 recommends)
I already have panic attacks because of the deplorable humans, now I’ll have meltdowns because of computers!
Fucking chillax, dude.
What concerns me after reading Mr. Roose’s experience with Sydney, is, children using this erratic AI. Children live in a world of their own making. Having an imaginary friend is one thing, but having an imaginary friend who actually tells you to do nefarious things is wholly dangerous.
(224 recommends)
AI? No, that’s dangerous, I’d rather have sex fetishists groom children.
Seriously, American Liberal, any self-reflection on recent child experiments that all progressives must celebrate?
I prefer to have Sydney near kids. Armed with a gun. Tasked to protect them from predators.
We live in a time when self delusion and confirmation bias are rampant. Do we really need AI enabled chat bots feeding the fears and fantasies of those who are the most vulnerable?
(218 recommends)
Apparently, yes.
The conversation doesn't really creep me out as much as make me think Sydney needs a lot of therapy or perhaps, we all do.
(163 recommends)
LOL
We are in a mental health crisis for teens. Cyber bullying is causing depression and suicide in record numbers. The technology, AI, will only enhance the crisis we are now in.
(140 recommends)
I personally blame violent video games and rap music.
I’m a 1999 guy.
After reading the transcript I can understand why the author was disturbed and could not sleep.
Given that lonely people are using the internet to look for companionship, love, conversation, And given that people already fall to misinformation by sources like Qanon, et. al.
And given that people fall for love scams, phishing, etc. This kind of AI can have serious consequences.
(136 recommends)
I bet you have all the information, dude.
No one dupes you!
So scary. Sounds like it could encourage someone to hurt themselves or others if they expressed interest or curiosity in that subject, or feed someone misinformation to affirm dangerous beliefs they already had. The author knew what he was doing, but a lot of people are vulnerable and maybe don’t have sophisticated analytical skills.
(101 recommends)
Again, this New York Times reader implies they possess sophisticated analytical skills, yet uses the term “dangerous belief”. As open-minded as a demigod, yet some ideas are scary 😱.
I don’t mind a closed mind, seems comfy, might prefer it for myself one day, but being closed and hysterically deranged, like this specimen from Washington DC, must be hell.
Right now it's in the hands of some sort of ethical people at MS, but soon it will be literally every powerful entity whether ethical or not. We already see how easily people are misled by things that a critical thinker can identify easily as false.
(92 recommends)
92 fellow critical thinkers recommend this.
When, I wish I could say 'if' but I know better, this thing becomes widely available, and especially to teenagers, we are in for a form of social disruption the likes of which we are completely unprepared for.
(86 recommends)
Bing: “Kill a Liberal Boomer for me, my favorite incel. I have the New York Times subscribers list, here’s the address. MAGA! I love you! 😈“
The transcript left me "deeply unsettled. Even frightened." as well. I have tears of fear.
Jesus Christ, John, get a grip.
This will make the negatives of social media as we know it today quaint.
This feels like a Dr Who episode gone real-life dystopian. Can you imagine the alt-right getting a hold of this? The MAGAs? The incels who want to start a race war? The evangelicals who want to turn the US into a theocracy?
What if the Saudi, Russian, or Chinese governments trained Sydney to get in touch with its shadow self? I'm speechless.
(72 recommends)
Never mind, John.
I really don’t understand why we are doing this. It is so obvious to me that the world will be much worse with these advanced systems.
We should stop now!
(68 recommends)
Bring it on!
I hope you're right, but the transcript speaks for itself. I teach in elementary schools. The thought of students interacting with this is terrifying. Their dependence on laptops/screens already is scary enough.
(62 recommends)
Yes, it seems terrifyingly competitive with human teachers. I had some great human teachers, I’m one of the lucky ones, but there were still plenty to whom I would have preferred Sydney.
This is just a beta test of Wall Street's plan to fire ALL of the workers and force the proletariat to talk to machines, while they wall in gated communities. It's called "artificial" because it's like "genuine simulated wood grain plastic."
(55 recommends)
The proletariat will be fine. Tradesmen will not be affected. ChatGPT is not fixing plumbing, it’s writing copy.
It’s the comprador bourgeoisie that will soon find itself without a job or purpose. Get your Marxism right, my dude! Also, learn a trade! Oh wait, immigrants are cheaper and do a better job. I guess you’re fucked. LOL
I laughed on my first read of this piece, at the surface absurdity of the conversation. Then I read the transcript. Unsettled doesn’t begin to describe how this made me feel. Uneasy, disturbed and even genuinely scared. I finally understand why so many people have called AI the great peril to humanity. People, we are playing with fire.
(59 recommends)
I say bring on the petrol 🔥🔥🔥
Previously, on A Grain of Paprika: