
Summary: Researchers surprised that with AI, toxicity is harder to fake than intelligence

The next time you encounter an unusually polite reply on social media, you might want to look twice. It could be an AI model trying (and failing) to blend in with the crowd.

On Wednesday, researchers from the University of Zurich, University of Amsterdam, Duke University, and New York University released a study revealing that AI models remain easily distinguishable from humans in social media conversations, with overly friendly emotional tone serving as the most persistent giveaway. The research, which tested nine open-weight models across Twitter/X, Bluesky, and Reddit, found that classifiers developed by the researchers detected AI-generated replies with 70 to 80 percent accuracy.

The study introduces what the authors call a “computational Turing test” to assess how closely AI models approximate human language. Instead of relying on subjective human judgment about whether text sounds authentic, the framework uses automated classifiers and linguistic analysis to identify specific features that distinguish machine-generated from human-authored content.
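The core idea, using an automated classifier rather than human judges to score how machine-like a reply reads, can be illustrated with a minimal sketch. The toy replies, labels, and TF-IDF features below are illustrative assumptions, not the study's actual data or feature set.

```python
# Minimal sketch of a classifier-based "computational Turing test":
# train a binary text classifier to separate human-written from AI-generated
# replies. All data and feature choices here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled corpus: 1 = AI-generated reply, 0 = human-written reply.
replies = [
    "That's such a wonderful perspective, thank you so much for sharing!",
    "lol no way that actually happened",
    "I completely understand your frustration, and I truly appreciate it.",
    "this take is terrible and you know it",
    "What a thoughtful question! I'm so glad you asked.",
    "idk man seems fake to me",
]
labels = [1, 0, 1, 0, 1, 0]

# Word n-grams serve as crude linguistic features; a fuller pipeline would add
# richer signals (tone/sentiment scores, length, punctuation patterns, etc.).
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(replies, labels)

# Score an unseen reply: estimated probability that it was machine-generated.
new_reply = "Thank you so much for this insightful comment, it made my day!"
prob_ai = classifier.predict_proba([new_reply])[0][1]
print(f"P(AI-generated) = {prob_ai:.2f}")
```

The study's classifiers and linguistic analysis are more elaborate than this toy pipeline, but the principle is the same: surface features such as an overly friendly emotional tone carry enough signal to flag machine-generated replies most of the time.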

Source Information

Publisher: Ars Technica
