AI don't trust techbros

abates

unfortunate shark issues
Citizen
It's all very simple.
sisters.jpg
 

NovaSaber

Well-known member
Citizen
How the hell is X's context bot, which was trained on Twitter data of all things, consistently better at fact checking than Google is at fact finding?
Because Google's training data is probably the entire (search-engine-accessible part of the) internet, or at least a portion of it that includes The Onion, conspiracy sites, and Reddit shitposters named "fucksmith".


GoogleAI.png
 

Paladin

Well-known member
Citizen
Someone needs to bite the bullet & use an AI generator to write a Harry Potter story about how awesome trans people are.
Then Rowling & her lawyers will burn the technology from the face of the Earth out of spite.
 

Pocket

jumbled pile of person
Citizen
Because Google's training data is probably the entire (search-engine-accessible part of the) internet, or at least a portion of it that includes The Onion, conspiracy sites, and Reddit shitposters named "fucksmith".
I think you underestimate how much of Twitter is shitposting, conspiracy theories, and people who think they're the Onion. (I mean, hell, the sheer volume of disinformation on there is the whole reason the fact-check bot exists.)
 

Ungnome

Grand Empress of the Empire of One Square Foot.
Citizen
It's a shame the AI part has outpaced the robotics part, for sure. Automation COULD bring us a Star Trek utopia, but there's no monetary profit in that, so...
 

wonko the sane?

You may test that assumption at your convenience.
Citizen
The first thing a non-Skynet AI workforce does is completely crash the economy. Robots don't need money, luxury goods, or food, but they'll have all the jobs. The rest of us need food, clothes, housing and such, and will have no money. The rich don't spend their own money, so once people's savings dry up, the economy craps out because there won't be any money left to spend on the crap the robots are making.
 

Pocket

jumbled pile of person
Citizen
The bill would require companies that develop AI tools to allow users to attach content provenance information to their content within two years. Content provenance information refers to machine-readable information that documents the origin of digital content, such as photos and news articles. According to the bill, works with content provenance information could not be used to train AI models or generate AI content.
I wonder what that entails. Something you'd have to put in the EXIF data? Would people basically have to delete all their content and reupload it with this stuff added to be protected?
 

Ungnome

Grand Empress of the Empire of One Square Foot.
Citizen
In theory, a simple script to edit the files en masse would work, assuming the format in question even has the option to embed said data. Of course, you would also have to ban AI from using legacy data that has been floating around since before the regulation was put in place. In practice, I don't really see a good way to enforce it without FORCING AI companies to open up their black boxes for public scrutiny, and somehow I don't think Congress will actually implement THAT part.
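For what it's worth, the en-masse script really is the easy part. Here's a minimal sketch in pure Python (stdlib only) that attaches provenance info to every image in a folder. Since actually embedding metadata is format-specific (JPEG has EXIF, PNG has text chunks, GIF has basically nothing), this version just writes a machine-readable JSON "sidecar" file next to each image; the filenames, field names, and the `.provenance.json` suffix are all made up for illustration, not any standard.

```python
import json
from pathlib import Path

def tag_with_provenance(directory, origin, suffix=".provenance.json"):
    """Write a machine-readable provenance sidecar next to each image.

    Sidecar files are a fallback for formats that can't embed metadata;
    true embedding would need a format-aware tool (e.g. an EXIF library).
    All field names here are illustrative, not part of any real standard.
    """
    tagged = []
    # sorted() snapshots the directory listing before we start adding files
    for path in sorted(Path(directory).iterdir()):
        if path.suffix.lower() in {".jpg", ".jpeg", ".png", ".gif"}:
            sidecar = path.with_name(path.name + suffix)
            sidecar.write_text(json.dumps({
                "file": path.name,
                "origin": origin,
                "ai_training_allowed": False,  # no-AI as the default
            }, indent=2))
            tagged.append(sidecar)
    return tagged
```

The enforcement problem is exactly as stated, though: nothing makes a scraper read the sidecar, which is why the black-box-audit part would be the teeth of any such bill.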
 

Pocket

jumbled pile of person
Citizen
The trouble is if they've been uploaded to literally anything other than a personal website, making such changes en masse would be either prohibitively difficult (DeviantArt) or outright impossible (X) depending on whether the site has a no-questions-asked image replacement option. We might see some of them start offering a built-in tool to apply the tag to selected pieces or one's entire gallery, but ironically the most anti-AI art site I know of is FurAffinity, whose owner would probably take 80 years to figure out how to code such a thing.

A better way to handle this would be to simply treat no-AI-allowed as the default, similar to how copyright law treats no-unauthorized-use as the default, and make artists incorporate a special tag into pieces they do want to let AI data farms scrape.
 

