DU Community Help
I believe DU needs crystal clear TOS regarding the posting of AI-generated material
Go read this article at Digby's blog, and then please share your opinions.
https://digbysblog.net/2025/05/26/the-end-is-near-2/
(Google has just launched a new AI tool that is terrifyingly realistic)
Clear TOS on the posting of AI generated material, especially videos, is critical to maintaining the integrity of DU and to prevent it from devolving into just another clickbait disseminator of misinformation and disinformation.
IMO, AI-generated material should be clearly labeled as such; if AI material is posted without transparent disclosure, it should be hideable. Material of ambiguous origin should be alertable, and subject to moderator approval before posting. (I know that is onerous, but I can't see any other way to prevent a deluge of fake AI garbage from flooding the site, even if it takes a week to review and approve videos of ambiguous origin. Maybe make videos of ambiguous origin viewable only by star members?)
My apologies if I have overstepped any boundaries; my intention was to bring this topic to the attention of the mods and the greater DU community for further discussion.

hlthe2b
(109,926 posts)
My immediate response was "Trepidation."
I started to voice some qualified spew about its "being a start and not a finish" to inquiries or research or... But I just can't get there. I've tested it against some well-known published medical research findings and found that both accurate and bizarrely misunderstood (or just plain wrong) results come back.
So, knowing what harm the above can promote (speaking to YOU, RFK Jr., and your ilk, as well as the laziest reporters), my response is still "TREPIDATION."
cbabe
(5,036 posts)

EarlG
(22,943 posts)
Let's say we have a rule that says "No unlabeled AI-generated content." You get called to a jury where someone has been alerted for potentially breaking the rule. Now you have to judge whether the post was written with the help of AI or not.
How do you do that?
gab13by13
(28,266 posts)

EarlG
(22,943 posts)
I have written at length about how it is not possible to crowdsource fact-checking using DU Juries, and determining whether something is AI or not is an extension of that problem.
There may be a conversation to be had about posting AI slop videos -- particularly if they are deepfakes which are fooling people into believing they are true. If someone posts a video *that can actually be determined to be an AI fake*, then we should have a way of shutting it down before it spreads. That could be done by locking the thread.
But even then, the current practice of "everyone in the thread simply tells the OP that the video is fake" is quite effective. Not only does it provide a much faster result, it's a public action, so it reminds everyone who reads the thread to be careful when posting.
Still, there's no way to know if something that someone has *written* contains AI. But, I'm still not too concerned about that just yet. Here's an excerpt from a previous post I wrote about this:
So for most people, it's a choice. Do you choose to believe that other DUers you're interacting with here are writing their own posts because they have something to say and they want to participate in the community, or do you choose to believe that other DUers aren't really people, and that you should be on high alert at all times because you might be talking to an AI?
I choose to believe the former. I think people participate here because they like using their brains and enjoy the process of engaging in written communication.
Bear in mind that this is still an anonymous discussion forum. Anyone who posts here has always been able to pretend that they're something they're not, or have somebody else write their posts for them -- if that's what they want to do. But even though that has been possible for the last 25 years, nobody worried about it. It's only now that AI makes it easy that people are worrying about it. But for the vast majority of DUers, what would be the motivation for using AI to write their posts? Why come to a discussion forum and NOT communicate using your own thoughts and words? What's the point?
Fiendish Thingy
(19,396 posts)Written content can be challenged by demanding the author provide evidence to support the claims made.
Video content is the evidence, unless/until someone can convincingly disprove it.
In the meantime, as the saying goes, a lie travels around the world before the truth can get its pants on.
Fiendish Thingy
(19,396 posts)
InAbLuEsTaTe
(19,396 posts)

Fiendish Thingy
(19,396 posts)
The video in question wouldn't be posted until mods cleared it.
I know that would be onerous, but I'd rather see a post be held in limbo for days, even a week, than spread misinformation. (There was an unverified video posted yesterday claiming Moscow was under attack by Ukrainian drones, and that Putin and other elites had been evacuated.)
Other suggestions:
Make questionable/unverifiable video viewable only by star members.
Make questionable/unverifiable videos unviewable directly on DU, only a link to the video would show.
I'm sure other larger sites are developing their own protocols around AI; maybe there is some good guidance out there that won't require labor-intensive solutions.
Respectfully, EarlG, one thing I'm sure of: AI cannot be ignored without serious negative consequences, and DU cannot proceed in this new AI world in business-as-usual mode.
forgotmylogin
(7,854 posts)
It gives you a percentage chance of whether the phrasing is AI- or human-generated. It's not completely reliable, but if we paste something in and it's 90% sure, that's enough to know AI did most of the work.
You get a feel for it. AI responses are commonly structured like a book report or documentation with a thesis and conclusion that usually a human doesn't bother with. It also will sound like "business-speak" and often weirdly globalizes like "We can all work together to help each other better understand the specific uses of customized fishing lures."
https://surferseo.com/blog/detect-ai-content/
https://selzy.com/en/blog/how-to-detect-ai-writing/
The small forum I moderate basically made the decision that an AI cannot agree to the terms of service or the code of conduct as a human can, nor is there any way to moderate it as we would a human user, so an account that posts AI prose as its own content, masquerading "as a human," is not allowed because there's no "person" whose behavior we can moderate and correct. Thus the blanket rule: if AI content is posted, it needs to be quoted and formally cited as a source, or it may be summarily removed.
highplainsdem
(56,199 posts)
ZDNet has an article about some that their testing found worked better: https://www.zdnet.com/article/i-tested-10-ai-content-detectors-and-these-5-correctly-identified-ai-text-every-time/
There's still a caveat:
Even though my hand-crafted content has mostly been rated human-written this round, one detector (GPTZero) declared itself too uncertain to judge, and another (Copyleaks) declared it AI-written. The results are wildly inconsistent across systems.
RockRaven
(17,305 posts)
have illustrated, many users are not able to identify what is AI-generated and what is not, and that is even in the subset of cases where the AI-ness is obvious to some. In other cases, the AI origin of the content is unidentifiable even to the savvy.
So I am not sure how one could write a rule which is workable enforcement-wise. How does one ask jurors if a post violates a rule when most jurors are unable to tell better than chance guessing?
IMO, there should be a social norm of not posting AI generated stuff, including self-deletion when other users inform an OP that they have (presumably unknowingly) posted AI stuff.
Uncle Joe
(61,791 posts)
Thanks for the thread, Fiendish Thingy.
highplainsdem
(56,199 posts)

Javaman
(63,890 posts)
But AI is the devil's spawn.
It will destroy our society
I pledge never to use that artificial bullshit.
I'm an artist and a writer. AI is a cavity upon society, soon to rot the entire mouth.
In 10 years, we will be a mere shadow of who we once were, because, as a society, we will be dumbed down to the point of idiots as a whole, all due to AI.
chowder66
(10,582 posts)

Silver Gaia
(5,089 posts)
I don't have time for a full reply right now, but I have a lot of thoughts to share, as well as experience in dealing with AI-generated content in higher ed. I will be back.
patphil
(7,863 posts)
I can see a time when AI has its own social media, its own movie stars, its own TV personalities.
Its own candidates for office? Probably not, but a real candidate might use it to generate commercials that show themselves saying and doing things they never did, and their opponents saying and doing things they never did.
Truth will get very subjective.
Recorded evidence may be inadmissible in a court of law unless there is definitive evidence proving it is genuine. And even that evidence would have to be proven, and so on.
What do we do as life becomes more and more an illusion? When does the illusion become the reality?
Fiendish Thingy
(19,396 posts)
See the examples at the Digby link I posted.
Lithos
(26,538 posts)
I work in Data Engineering and am adjacent to AI in many ways, including Gen AI used in business and marketing. I have a very recent Master's in Interactive Intelligence (AI). AI can be used in very overt ways (think the deep fakes that Trump likes to post on occasion), or in subtle ways you would never know about unless you were in the pipeline.
What most people do not realize is that everyone thinks of AI as Artificial Intelligence when the vast majority of use is Augmented Intelligence - a way to augment what a person could do.
When I use Grammarly, Office 365 Copilot, or Adobe Firefly, I am using AI. Similarly, when I use Kagi or Perplexity to search, I am using AI. Many media companies use AI to automatically crop, convert, resize, and enhance their videos. Almost all Captioning on YouTube is done directly by AI.
Given that breadth of use, the reason a rule is unenforceable is that the deep fakes and outright hallucinations I care about come not from individuals, but from governments and from mainstream media companies. And no matter the rules you have here, it is still "news" because of the source. What is needed are critical thinking skills and sufficient light to expose these things. That's what happened in pre-AI days (e.g., Colin Powell's speech to the UN concerning Iraq) and what needs to happen now.
L-
Fiendish Thingy
(19,396 posts)
Like the brand new tool Google just launched.
Are you saying it's impossible to enforce a rule against posting AI-generated videos?
I mean, the most extreme example would be to block all videos from unverifiable sources/origins.
Lithos
(26,538 posts)
Mostly because AI is so prevalent and used in ways that most people do not realize. Yes, I'm sure you're referring to Veo3, but also to things such as the video this CNN analysis is editorializing: https://edition.cnn.com/2025/02/26/world/video/trump-ai-gaza-video-breakdown-digvid
I know it is very weird to watch a giraffe on a motorcycle in Manhattan, but as entertaining as Veo3 is, its real use will not be in original creations, but in subtly modifying existing videos. The real danger is in creating what people want to believe, or what you want them to believe. There have already been cases where people's voices and even their "Zoom" presence have been impersonated for nefarious means (think wire fraud). I would focus more on ensuring the provenance is good and assume there are algorithms wanting to influence your opinion. This is true for all things which are digital (photos, voice, text), not just videos.
So, it's impossible to enforce because even major sources are getting caught up in these things. I would suggest that what you might want is a rule asking people who post to do so from an official source (e.g., a YouTube channel from a major entity) and not just a random TikTok or χ-itter post.
L-
scipan
(2,809 posts)
I don't know how well those AI detectors work, but I've seen them rate something as a "94% chance" or such. Not talking about hiding posts, just having information about it in the thread if someone alerts on it.
My Bluesky account automatically scans each poster's profile and lets me know if there's AI in the profile pic. Twitter has bots like threadreaderapp.
Fight fire with fire so to speak.
I imagine it would be a big software change. And it might slow things down to have something that scans every word written on DU. Just a thought.
Amaryllis
(10,382 posts)

BumRushDaShow
(153,290 posts)

Amaryllis
(10,382 posts)

BumRushDaShow
(153,290 posts)
Fiendish Thingy
(19,396 posts)
It's what you agreed to follow when you created an account at DU.