e.g. the ability to distinguish whether images or videos are AI-generated to some reliable degree.
Being able to detect, say, >50% of entirely AI-generated photos and videos would resolve this market YES. If it's unclear, it resolves to a poll. Should be pretty obvious though, I hope.
Update 2025-12-02 (PST) (AI summary of creator comment): The creator considers current AI detectors to meaningfully exist for current GenAI, as there are tools that mostly work and AI content is often detectable to the eye at >50% accuracy. This suggests the bar for "meaningful existence" in 2028 is similar - tools that mostly work with >50% detection rate.
Update 2025-12-02 (PST) (AI summary of creator comment): The market is fundamentally about: can you trust what you see online in 2028? Will people be able to authenticate images/videos that need verification?
Proposed test for resolution: Looking at the first 20 images/videos in a social media feed (Twitter/Facebook/etc.), can we determine if they're real or AI-generated? If we can discern in either direction, resolves YES.
"Meaningfully exist" clarification: If only one platform (e.g., Facebook) has functioning detection but other major platforms (Twitter/YouTube/etc.) don't use these tools and it's impossible to tell on those platforms, then detection may not "meaningfully exist" (since people are still being fooled daily).
Creator notes the criteria is intentionally vibes-based but is open to stricter criteria if proposed.
Update 2025-12-02 (PST) (AI summary of creator comment): Assessment window changed to December 2028 (previously end of 2028)
Scope confirmation: Market is specifically about photo and video detection only, not text detection. Creator notes that detection could occur externally (e.g., through third-party forced disclosures) and traders should not assume manual checking is required.
Update 2025-12-02 (PST) (AI summary of creator comment): Text detection is out of scope: The creator confirms that text-based AI detection is not relevant to this market. The market focuses specifically on photo and video detection only, as text signatures/watermarks can be easily replicated by AI and text data lacks meaningful metadata for authentication purposes.
Just to be clear then @Gen -- this market is only about photo and video, not about text? If so maybe the title should reflect that and be a bit more specific.
And separately, request to update the market title to say "end of 2028" since the answer to the question may change over the year but it's resolving at the end of the year
@hrothgar I changed it to “in December 2028” as that will be the testing / assessment window. Good note, thanks
Also yes, because of how text data is processed I think it is unlikely to ever be reliably detectable. Photos, videos, and audio are fundamentally different. I have updated the title — I think it’s a bit worse to explicitly focus on those because my original vibe was that detection could occur externally (e.g. by some third party forcing disclosures) and I don’t want people to be confused and think they need to be able to manually check. I might change my mind and remove it from the title again later, idk, it doesn’t change the criteria/goal
I’m really just figuring it as I go but I have a good internal view of what I’m resolving based on and I hope I have articulated that elsewhere in the comments
I think this will be tricky to resolve, especially in terms of what "50% of AI generated photos and videos" means. Like is this 50% of what comes out of frontier models? What if all the frontier models are watermarked and yet social media is full of AI generated content that's indistinguishable from real video (because hardly any of them are from frontier models)? Like there's this massive system in place to watermark and detect AI generated video but despite this the internet is flooded with AI generated content.
@Dulaman So AI detection would exist in a meaningful sense, it's just that it doesn't stop the tsunami of undetectable AI content
@Dulaman Well, I think for the most part this market is supposed to embody the question: can you trust what you see online in 2028? For example, will I watch the news, or see something online, and not know if it's authentic?
Like imagine something controversial like Jan6 happens in 2028 - will all of the footage be questioned? Will it all be authenticated? Will AI edits be watermarked?
If there was a thing, an image or video, that we really needed to authenticate as an AI image/video, will we be able to do it?
Maybe a realistic test would be: looking at the first 20 images/videos that appear in a feed [twitter/fb/whatever], will we be able to determine if they're all real/AI? If we can discern in either direction then this resolves YES. The "meaningfully exist" part is added because it is obviously very vibesy. If facebook has functioning detection and marks all AI / non-AI authenticated things, but nobody else uses it, then yes it exists, but if twitter/youtube/etc. all don't use these tools and it's impossible to tell, then it's probably not that meaningful that it exists (people are still being fooled daily)
I don't have a great criteria for it, I'm ok leaving it up to vibes, but also if people want to ideate a stricter criteria I wouldn't be opposed to updating it
@Gen In general I'm in favor of more strict/testable resolution criteria but I'm not sure what that looks like for this market (yet)
"Will AI detection exist?" and "Can I trust what I see online?" do however seem like rather different questions to me (e.g. maybe really good detection methods will exist but for some reason aren't very accessible or used by, e.g., social media or news outlets)
@Dulaman well, you don’t necessarily need to be able to trust your eyes to detect AI, but trust the news etc and that these places which historically have been capable of authenticating or confirming things are still able to do so with AI in the mix
I will try and figure out some better criteria. I’m also open to making a new market with different criteria that are less vibesy (imo that’ll be worse, but we’ll see)
@Gen I think we're going to start to see things like hardware signatures and blockchain ledgers for these around 2028, specifically to mitigate AI disinformation in high-value media networks.
@Dulaman in that sense you'll also get "AI detection" if people can see that it's lacking a signature etc. Similar to the watermarks. I think we'll move towards a world where AI detection will mean this sort of thing as opposed to meaning statistical models applied to the data
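A rough sketch of that "detection by missing signature" idea (purely illustrative — the key, function names, and use of an HMAC as a stand-in for a hardware signing key are all assumptions; real provenance schemes like C2PA use public-key signatures attached at capture time):

```python
import hmac
import hashlib

# Hypothetical secret baked into a trusted camera's hardware.
DEVICE_KEY = b"secret-key-baked-into-camera"

def sign_capture(image_bytes: bytes) -> bytes:
    """What a trusted device would attach to a photo at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def looks_authentic(image_bytes: bytes, signature) -> bool:
    """'AI detection' by absence: no valid signature means unverified."""
    if signature is None:
        return False  # e.g. AI-generated content carries no capture signature
    return hmac.compare_digest(signature, sign_capture(image_bytes))

photo = b"\x89PNG...raw sensor data"
sig = sign_capture(photo)

print(looks_authentic(photo, sig))            # signed capture verifies
print(looks_authentic(photo, None))           # unsigned content is flagged
print(looks_authentic(photo + b"edit", sig))  # tampering breaks the signature
```

The detector never models the pixels statistically; it only checks whether a valid provenance signature is present, which is the shift the comment describes.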
@Dulaman yeah, I agree, this is what I meant elsewhere when I was saying it doesn’t make sense to even consider text here. You would never really apply a signature to text in a way that isn’t replicable by AI, because that data can be very easily reproduced and is generally processed initially with little or no metadata
@Gen you can sign a hash and put that on the blockchain though. That way you prove that you were the one who created that text at that timepoint, without revealing what the text is
@Dulaman Sure, I can prove to you tomorrow that I wrote something today, but as soon as you see what I write you can trivially reproduce it, rehash, and redistribute it, and nobody would ever know if I wrote it myself, generated it with AI, stole it from you, or if you generated it with AI (before hashing it).
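The sign-a-hash scheme and its limitation can both be shown in a few lines; a minimal commit-reveal sketch (names and example text are hypothetical):

```python
import hashlib

def commit(text: str) -> str:
    """Publish this digest (e.g. on a blockchain) without revealing the text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# The author commits today...
original = "My hand-written article."
digest = commit(original)

# ...and reveals the text tomorrow. Anyone can verify the commitment:
print(commit(original) == digest)  # True

# But once the text is public, anyone can re-commit the identical bytes
# and get the same digest. The hash proves possession at commit time,
# not authorship, and says nothing about whether the text was
# AI-generated before it was hashed.
copycat_digest = commit(original)
print(copycat_digest == digest)  # True
```

This is exactly the objection in the comment: the commitment timestamps possession of the text, but it cannot distinguish the original author from anyone who rehashes the revealed text.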