

As AI art gets better and better at photorealism, it gets worse at *art* and better at *deception*. But of course, as it gets worse, it gets worse at *art* and better at *making garbage*.

There is an obvious solution to this dilemma.
GANs (generative adversarial networks) have always been about deception. It’s a pair of models, one is learning to lie and the other is learning to detect those lies. I guess there’s a philosophical argument that something that cannot be proven to be a lie tends to resemble the truth, and that the better you can discern lies, the better you can perceive truth. But I don’t know if that’s true.
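The liar/lie-detector pair can be sketched in a few lines. This is a toy, not any real GAN architecture: the "data" is a 1-D Gaussian, the discriminator is a logistic classifier, and the generator just shifts and scales noise. All numbers (the N(4, 1) target, the learning rate) are made up for illustration; the point is only the adversarial structure, where one gradient step makes the detector better at catching the current lies.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Discriminator ("lie detector"): D(x) = sigmoid(w*x + b), outputs P(x is real)
w, b = 0.0, 0.0
# Generator ("liar"): g(z) = mu + sigma*z, initialized far from the real data
mu, sigma = 0.0, 1.0

def d_loss(x_real, x_fake, w, b):
    # Discriminator wants D(real) -> 1 and D(fake) -> 0
    return -np.mean(np.log(sigmoid(w * x_real + b))
                    + np.log(1 - sigmoid(w * x_fake + b)))

x_real = rng.normal(4.0, 1.0, size=64)  # "truth": samples from N(4, 1)
z = rng.normal(size=64)
x_fake = mu + sigma * z                 # the generator's current "lies"

before = d_loss(x_real, x_fake, w, b)

# One gradient-descent step on the discriminator's parameters
p_real = sigmoid(w * x_real + b)
p_fake = sigmoid(w * x_fake + b)
grad_w = np.mean(-(1 - p_real) * x_real + p_fake * x_fake)
grad_b = np.mean(-(1 - p_real) + p_fake)
lr = 0.05
w -= lr * grad_w
b -= lr * grad_b

after = d_loss(x_real, x_fake, w, b)
print(after < before)  # the detector got better at flagging this batch of fakes
```

In a full GAN the generator then takes its own gradient step to fool the updated discriminator, and the two alternate; that arms race is the "deception" the thread is talking about.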
@mike I don't even think that GANs are the issue. GANs can be tuned for anything. It's just really hard to make a series of neural nodes that contain like "this makes me feel like a hug" or "this is cozy"

We can make something look like a cozy piece, but it's going to be a little fucky. So as you make it more realistic, it's going to lose some of that cozy, but look more like a real image.
@mike
Latent spaces don’t necessarily seem to have to work that way. I lean towards thinking generative AIs will be able to maintain many dimensions that don’t affect or compensate for each other. “Realism” is ultimately just another parameter or artistic style; Dalí’s surrealism is built on realism.
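The "dimensions that don't affect each other" idea can be shown with a toy latent space. This is purely illustrative: the "realism" and "coziness" attributes are hypothetical labels, and real generative models' latent directions are learned and rarely this clean. But if two attribute directions are orthogonal, traversing one leaves the other untouched.

```python
import numpy as np

# Toy 2-D latent space with one orthogonal direction per attribute.
# The attribute names are made up for this example.
realism_dir = np.array([1.0, 0.0])
cozy_dir = np.array([0.0, 1.0])

def attributes(z):
    # Read each attribute off the latent code by projection
    return {"realism": z @ realism_dir, "coziness": z @ cozy_dir}

z = np.array([0.2, 0.8])
z_more_real = z + 1.0 * realism_dir  # move along the realism axis only

a0, a1 = attributes(z), attributes(z_more_real)
print(a1["realism"] - a0["realism"])    # 1.0 -- realism changed
print(a1["coziness"] - a0["coziness"])  # 0.0 -- coziness unaffected
```

Under this (idealized) picture, cranking up realism wouldn't have to cost any cozy, which is the claim being made against the trade-off in the earlier post.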
@mike No, I agree
I was thinking about this image I saw of someone next to a TV with a person in the TV coming out to hug them.

If an AI made it, it would look bad and dumb

If a human made it, it means something weird and personal
@mike
Kind of like how sometimes who the artist is matters. The context of a work’s creation changes its meaning. But a human still made potentially thousands of creative choices in the creation of an AI-generated image, in much the same way a photographer or director does. An AI didn’t decide that, out of its quasi-infinite potential outputs, a certain one should be circulated on social media with a certain presentation.

…yet