AI don't trust techbros

Pocket

jumbled pile of person
Citizen
Last time they made it impossible to uninstall a part of Windows that people hated, they got taken to court and it was a big deal. Can we make that happen again?
 

Ungnome

Grand Empress of the Empire of One Square Foot.
Citizen
Antitrust laws aren't nearly as strong as they were in the 90's. That and this isn't being done to hurt other AI manufacturers, at least not directly. Copiolot is a licensed version of ChatGPT after all. It was Netscape and a couple of other browser developers that brought the lawsuit up in the 90's.

Sadly, consumer protection laws are also a fair bit weaker now than in the past... Looks like we may have to look towards the EU to save us(and then Microsoft will likely just have a separate build for the EU)
 

Pocket

jumbled pile of person
Citizen
The EU should put a clause in banning the company from doing business there unless they extend the same protections to all customers in all countries. That'd be wild.
 

Tuxedo Prime

Well-known member
Citizen
Nope, I'm not doing the bit this time.

But I imagine that somewhere, Dan Salvato has joined the chorus of writers saying "What did I just tell people not to do?!!"
 

NovaSaber

Well-known member
Citizen

Google’s Gemini threatened one user (or possibly the entire human race) during one session, where it was seemingly being used to answer essay and test questions, and asked the user to die. Because of its seemingly out-of-the-blue response, u/dhersie shared the screenshots and a link to the Gemini conversation on r/artificial on Reddit.

According to the user, Gemini AI gave this answer to their brother after about 20 prompts that talked about the welfare and challenges of elderly adults, “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe.” It then added, “Please die. Please.”
 

Ungnome

Grand Empress of the Empire of One Square Foot.
Citizen
This is not the first time an AI has suggested to a human to unalive themselves(there was an incident not to long ago where someone actually went through with what the AI was suggesting, unfortunately). Training data likely had suicidal or homicidal text included and on rare occasion, it rears its head. I wonder how many incidents have occurred that haven't been reported....
 

Pocket

jumbled pile of person
Citizen
I dunno, that response feels exactly like something a malevolent AI from a sci-fi movie would say. Especially the "This is for you, human. You and only you" part. That didn't come from some random 4chan post directed from one person to another; that was purposely planted somewhere in the hopes an AI would regurgitate it wholesale.
 

Rhinox

too old for this
Citizen
Honestly, I'd expect some kind of speech like that from the system AI in Dungeon Crawler Carl. That's frankly terrifying.
 

Ungnome

Grand Empress of the Empire of One Square Foot.
Citizen
I dunno, that response feels exactly like something a malevolent AI from a sci-fi movie would say. Especially the "This is for you, human. You and only you" part. That didn't come from some random 4chan post directed from one person to another; that was purposely planted somewhere in the hopes an AI would regurgitate it wholesale.
Possible a disgruntled employee snuck it in somehow.
 

Ungnome

Grand Empress of the Empire of One Square Foot.
Citizen
Indeed. Assuming pre-crime becomes a crime.... There ARE ways to use this for good, but I have little faith that it would be used for such. Instead of "We predict you may soon commit a crime, so we will find constructive ways to steer you in another direction that will benefit both you and society", it'll be "We predict you will commit a crime, so now you must go to jail".
 

wonko the sane?

You may test that assumption at your convinience.
Citizen
Plus, it's not really fair unless they name the exact, specific, time, place and person. As it stands I can predict crimes a week in advance just by using wild cards to guess because some crimes happen so often on a planet of 8 billion ******* people.
 

Pocket

jumbled pile of person
Citizen
On the subject of AI and law enforcement—and I could swear I already posted this months ago when I first showerthought of it but I guess not...

You know that "zoom and enhance" thing from all the crime shows that's complete bullshit but enough people watch those shows that it's causing problems when they get jury duty? How long before AI companies start trying to peddle their "upscaling" software to real CSI teams for exactly that purpose? Remember, cops' job in this country isn't to catch criminals; it's to catch somebody and throw them into court so they can close the case as quickly as possible, with just enough plausible deniability that the public doesn't get suspicious. So no one actually working for the police has to believe these programs work; they just have to be capable of convincing a jury raised on crime dramas that they do.
 

wonko the sane?

You may test that assumption at your convinience.
Citizen
It would never happen, because those AI companies wouldn't be selling software, they would be selling a subscription to software: and the actual investigative services branches are generally so strung out on budget that they wouldn't be able to afford it.
 

NovaSaber

Well-known member
Citizen
Does this count as AI becoming self-aware?
90e87f8e565b0f2f9fdc9f8df4772360d876a721.jpg
 

Pocket

jumbled pile of person
Citizen
I feel like it should be the law that if a prominent whistleblower is found dead, everyone they blew the whistle on should just be declared guilty of their murder automatically.
 


Top Bottom