
Things are about to get weird. We can't control this at any level:

At the level of image/video synthesis: Some leading companies have suggested they put watermarks in the content they create. Nice thought, but open source will always be an option, and people will always be able to build un-watermarked tools.

At the level of law: You could attempt to pass a law banning image/video generation entirely, or those without watermarks, but same issue as before: you can't stop someone from building this tech in their garage with open-source software.

At the level of social media platforms: If you know how GANs work, you already know this isn't possible. Half of image generation AI is an AI image detector itself. The detectors will always be just about as good as the generators; that's how the generators are able to improve themselves. It is, I will not mince words, IMPOSSIBLE to build an AI detector that works longterm. Because as soon as you have a great AI content classifier, it's used to make a better generator that outsmarts the classifier.
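That feedback loop can be sketched in a few lines of plain JavaScript. This is not a real GAN, just a toy: the detector is a fixed, invented scoring function, "real" samples cluster near the value 10, and the generator simply hill-climbs against the detector's score, so any improvement to the detector is immediately reusable as a training signal for the generator.

```javascript
// Toy detector: confidence in [0, 1] that a sample is "real".
// (Invented for illustration; real samples cluster near 10.)
const detector = (x) => Math.exp(-Math.abs(x - 10));

// The generator perturbs its current guess and keeps any candidate
// the detector scores as more "real" -- i.e. it trains against the
// detector, which is the core of the adversarial loop.
function improveGenerator(guess, steps) {
  for (let i = 0; i < steps; i++) {
    const candidate = guess + (Math.random() - 0.5);
    if (detector(candidate) > detector(guess)) guess = candidate;
  }
  return guess;
}

const before = 0; // obviously fake: the detector scores it near 0
const after = improveGenerator(before, 2000);
console.log(detector(before), detector(after));
```

After a couple of thousand steps the generator's output scores close to 1 on this detector, even though the generator knows nothing about "real" samples except what the detector leaks to it. A stronger detector just gives the generator a sharper gradient to climb.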

So... smash the looms..?



My favorite idea that nobody is talking about is how news organizations are about to get a second life. As soon as it becomes actually impossible to distinguish AI content from human content, news organizations will have the opportunity to provide that layer of analysis in a way that potentially can't be (easily) automated. Ironically, they're against it, but IDK, maybe they should be excited about it. Would love someone to poke holes in this.


I have the same suspicion, though I wonder if they won't immediately try to "cut open the golden goose" and decide that misusing that trust for short-term gain is favorable (for the person making the decision, if not the organization).


Just stop taking any video you see at face value? People managed without videos before video cameras were available, and the written word was never reliable to start with. Maybe the future won’t be that different?


Except that time "before video cameras" didn't coincide with a time in which everyone had a magic device in their pocket that allowed anyone to send a firehose of propaganda our way.

If yellow newspapers were able to push us to war despite our knowing that "the written word was never reliable to start with", what will be the impact of the combination of this technology and the internet used against a population that has been conditioned over generations to trust video?


If “fake news” is anything to go by, the population will quickly be de-conditioned from trusting video.


Absolutely not. You can just go to Twitter or Reddit, like https://www.reddit.com/r/pics/, to see an image with a (e.g. political) caption that purports something to be true, and thousands of people will take it on board as truth. Nobody asks for a source, and those who do are admonished for apparently disagreeing with the political claim.

You can go on Youtube to see charlatans peddle all sorts of convenient truths with no evidence.

You don't even need AI. The bug is in the human wetware.


So this is basically a regression to a 19th-century level in terms of being able to trust and understand reporting on the world beyond our own front door. People managed before photographic and video evidence was a thing; you could use eyewitness reports from trusted friends and news on the official telegraphs, to the extent that those were trustworthy. But it's certainly still a big step backward from the 20th century, that brief window of time where it was much easier to record physical evidence of an event than to fake it.


Photographic evidence has been subject to manipulation before computers were even a thing, more so after Photoshop became widely available. There has always been forensics for that, which will continue to evolve.

I think the issue with trust is rooted elsewhere - in social relations, politics, and not in AI generated content.


It has, but it used to take a lot more skill to manipulate a photo than to take a photo, and convincing video manipulation was even harder. I'm also skeptical that forensics will be able to keep up, because of the basic principle of adversarial training: any technique forensics can use can be applied back into improving the pipeline that generates the image, defeating the forensic tool. That certainly wasn't the case in the 20th century.


What remaining institutions still command any trust?


... Most of them?

Do you read the news at all? If you can't trust any of them, then why even bother?


Such as?


I'm confused. Do you not trust any mainstream media? Where do you get your news? World and local? Eyewitness accounts only?


Good advice for internet citizens (too bad the uptake will be too slow), but it doesn't address how courts and the law should function.


I think pretty soon we will get to the point where there’s some sort of significant boundary at all levels between online and real life because the only way to be sure you’re seeing something real is to be interacting with it in real life. The internet will not be something you visit on a web browser to get information but will become a place you go where you will simply have to acknowledge that nothing is real. Obviously that’s a concern now but I wonder if we’ll get to a point where it’s taken for granted at large that whatever you see on the internet just isn’t real. And I wonder what implications that will have.


> IMPOSSIBLE to build an AI detector that works longterm

    function detectAI() {
      return Math.random() < Math.pow(0.5, (new Date()).getFullYear() - 2023) ? "Not AI" : "AI";
    }
This should increase in accuracy over time.


It turns out that "return 'AI'" is a better strategy when the probability is above 50%: https://www.lesswrong.com/posts/msJA6B9ZjiiZxT6EZ/lawful-unc...


Good point. Here's a patch:

    Math.random = () => 1;


The challenge is to determine what is real, not what is fake.

I think cryptographic signing and the classic web of trust approaches are going to prove the most valuable tools in doing so, even if they're definitely not a panacea.


This comes up a lot. Because synthesis is so generally feasible, and very powerful editing tools already exist for things like movies, I'm guessing that it will simply become the norm to assume that any image, sound, movie, or whatever may be fake. I expect there won't be a way to verify something was synthesised or "real-synthesized" (since images and videos are ultimately synthesized themselves, just from reality instead of other synthesized content). Even with signing and web of trust we can only verify who is publishing something, but not the method of synthesis.


Trusted entities could vouch for the veracity (or other aspects) of things, especially if they are close to the source.

We already implicitly do this: if a news outlet we trust publishes a photo and does not state that they are unsure of its veracity we assume that it is an authentic photo. Using cryptographic signing that news outlet could explicitly state that they have determined the photo to be real. They could add any type of signed statement to any bit of information, really. Even signing something as being fake could be done, with the resulting signed information being shareable (although one would imagine that any unsigned information would be extremely suspect anyway).

The web of trust approach is to have a distributed system of trust that allows for less institutional parties to be able to earn trust and provide 'trusted' information, but there are also plenty of downsides to it. A similar distributed system that determines trustworthiness in a more robust way would be preferable, but I am not aware of one.


It can be verified if the resulting video contains signed metadata with all the intermediate steps needed to produce the video from the original recording (which is itself digitally signed by the camera).

The downside is that the large original video assets would need to be published for such verification to work.


You won't be able, as some average person, to trust that what gets to Twitter, Instagram, or whatever image and video hosting platform gets popular in the future is real, but 1) I'm not sure you can today anyway, 2) plenty of people don't consume anything from these platforms and get by fine, and 3) what are you even relying on this information for?

Are you concerned about predicting the direction or "real" state of your national economy? Videos aren't going to give you that. Largely, you can't know. Heavily curated statistical reports compiled and published by national agencies can only give you a clear view in retrospect. Are you concerned that a hurricane might be heading your way and you need to leave? Don't listen to videos on social media. Listen to your local weather authority. Are you concerned about whether X candidate for some national office really said a thing? Why? Are any of these people's characters or policy positions really so unclear that the reality or unreality of two seconds' worth of words coming out of their mouths is going to sway your overall opinion one way or another?

Things you should actually care about:

- How are your family and friends doing? Ask them directly. If you can't trust the information you get back, you didn't trust them to begin with.

- How should you live your life? Stick with the classics here, man. Some combination of Aristotle, Ben Graham, and the basic AHA guidelines on diet and exercise will get you 95% of the way there.

- How do you fix or clean or operate some equipment or item X that you own? Get that information from the manufacturer.

Things you shouldn't care about:

- Is the IDF or Hamas committing more atrocities?

- Does Kamala Harris really support sex changes for convicted felons serving prison sentences funded by public money?

- Can Koalas actually surf?

Accept at some point that you can't know everything at all times and that's fine. You can know the things that matter. Get information from sources you actually trust, as in individual people or specific organizations you know and trust, not anonymous creators of text on Reddit. If you happen to be a national strategic decision maker that actually needs to know current world events, you're in luck. You have spy agencies and militaries that fully control the entire chain of custody from data collection to report compilation. If they're using AI to show you lies, you've got bigger problems anyway.


The web of trust doesn't seem to scale! All of the online social platforms trend towards centralization for identity verification.

In my (historically unpopular) opinion we have two options that sit outside of, but still allow for, this anonymous free-for-all:

A private company like Facebook uses a privileged system of identification and authentication based on login/password/2FA and on state-issued identification,

OR, what I feel is better, a public institution that uses a common system based on PKI and state-issued identification, e.g., the DMV issuing DoD Common Access Cards.

Trusting districts and nation-states could sign each other's issuing authorities.
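The cross-signing part can be sketched with plain key pairs (standing in for full X.509 certificates): one authority attests to a citizen's key, a peer authority cross-signs the first authority's key, and a verifier who trusts only the peer can still validate the citizen. All names here are invented, and real PKI adds expiry, revocation, and constrained certificate fields on top of this.

```javascript
const crypto = require("node:crypto");

const keys = () => crypto.generateKeyPairSync("ed25519");
const sign = (msg, priv) => crypto.sign(null, Buffer.from(msg), priv);
const verify = (msg, pub, sig) => crypto.verify(null, Buffer.from(msg), pub, sig);

const stateA = keys();  // e.g. a DMV-style issuing authority
const stateB = keys();  // a peer authority that trusts stateA
const citizen = keys();

// stateA attests to the citizen's public key; stateB cross-signs
// stateA's public key.
const citizenPem = citizen.publicKey.export({ type: "spki", format: "pem" });
const stateAPem = stateA.publicKey.export({ type: "spki", format: "pem" });
const credential = sign(citizenPem, stateA.privateKey);
const crossSig = sign(stateAPem, stateB.privateKey);

// A verifier who only trusts stateB walks the chain:
const chainOk =
  verify(stateAPem, stateB.publicKey, crossSig) &&   // stateB vouches for stateA
  verify(citizenPem, stateA.publicKey, credential);  // stateA vouches for citizen
console.log(chainOk); // true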

The benefits are multifaceted! It helps authenticate the source of deep fakes. It helps fight astroturfing, foreign or otherwise. It helps to remove private companies fueled by advertising revenue from being in a privileged position of identification, etc, etc.

I totally understand any downvotes but I would prefer if you instead engaged me in this conversation if you disagree.

I'd love to have this picked apart instead of just feeling bummed out.


I agree the cat is out of the bag, but GANs do not work like that. One of the common failure modes in training a GAN is that the discriminator gets too powerful too quickly and the generator then can no longer learn.

Hard to say anything is impossible based on one data point, but discrimination, AFAIK, is generally seen as the easier of the two problems, given you only need to produce a binary output as opposed to a continuous one.


> It is, I will not mince words, IMPOSSIBLE to build an AI detector that works longterm

Like pretty much any tool for detecting or protecting against malicious content, it's forever a cat-and-mouse game. There will always be new viruses, jailbreaks, banned content, 0-days, etc. AI detection is no different.


> Nice thought, but open source will always be an option, and people will always be able to build un-watermarked tools.

That's why you make it punishable by potential prison time to create or disseminate a non-watermarked video generated in this way.


A possible option is for cameras to digitally sign the original video as it is being recorded.


Oi mate, you 'ave a license for producing cryptographic signatures to embed on that footage?



