AI don't trust techbros

Dekafox

Fabulously Foxy Dragon
Citizen
Judges are increasingly frustrated with lawyers using AI tools that generate false citations and case references. Two separate cases highlight the issue: in each, attorneys were sanctioned for submitting AI-generated information without verifying its accuracy. While acknowledging the potential benefits of AI in the legal field, judges emphasize that lawyers must exercise caution and conduct thorough research to ensure the accuracy of their filings.
 

Dekafox

Fabulously Foxy Dragon
Citizen
Not really a news thing, but:

Spot on. Agreeing with every part of your post.

As a university teacher (at a leading UK research institution), I am seeing this effect already: a dramatic shift in attitudes over the past two years. Many students seem to have lost the appetite to discover the underlying truth of their subject, or to experiment with the tools of their subject. They seem to trust that whenever knowledge is needed, they will be able to query the Big Knowledge Machine and it will produce whatever they need to know. Not everyone, of course, and not in every aspect, but the trend is there and seems overwhelming. I hope for a reverse movement, but currently that isn't happening.
 

Pocket

jumbled pile of person
Citizen
It's really a damned-if-we-do, damned-if-we-don't situation we're staring down the barrel of. If the RIAA gets a judge to rule that making a song that sounds clearly similar to a copyrighted one now counts as infringement, suddenly countless composers and distributors are open to massive lawsuits, because making legally-distinct versions of famous songs to avoid paying for a license is an age-old tradition. And if the judge decides they don't have a leg to stand on, there goes real human composers' way of life forever.
 

Dekafox

Fabulously Foxy Dragon
Citizen
A story in 2 parts:

[attached image: 1743433219232.png]
 

