Beauty & the Bias

Imagine the clouds dripping.
Dig a hole in your garden to
put them in.
Yoko Ono wrote those lines in 1963 as part of her Instruction Pieces, works that transformed a fragment of text into an artwork through the reader's imagination.
Visual GenAI fascinates me for much the same reason. The words, the cultural references, the parameters you set: together they shape what appears on the screen. At its core, it's a conceptual practice. A set of written instructions. And those exact same instructions can yield endless variations.
Concept isn't tied to craft anymore. That idea meets plenty of resistance today, but in the art world it isn't new at all. One of my favorite works comes to mind: A Line Made by Walking (1967) by Richard Long, a line worn into a field by the simple act of walking back and forth. The idea can stand on its own.
Now the artwork becomes the idea described by a line of text.
❖
The output is not a product of our imagination alone. Because the system is built on training data, it inevitably inherits that data's flaws, its statistical patterns and its stereotypes. Left to its own devices (say, given a lazy prompt), GenAI tends to default to clichés.
An ‘engineer’ is systematically pictured as a white man, a ‘nurse’ as a woman.
We saw it again this summer, when Guess launched a GenAI campaign that drew heavy criticism. The images showed blonde caricatures of femininity. But in truth, Guess has been serving that aesthetic for decades, arguably since the supermodel ads of the 90s. There was never anything relatable about the beauty of a Claudia Schiffer or Anna Nicole Smith. AI did not invent that look. It only amplified the brand's image.
Anyone who works seriously with these visual GenAI tools knows the answer doesn't lie in a single prompt. It's a process of testing and iterating, even adding what researchers call ‘debiasing cues’ to the prompt, cues for gender balance or cultural variety, until the image aligns with your vision. Research also shows that users tend to find the more stereotypical outputs more aligned with their expectations. The complexity lies in the fact that bias is inherent not only in the AI models but also in human perception. That's exactly what happened with the Guess campaign, where the most stereotypical images were the ones chosen for their high engagement.
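To make that iteration concrete, here is a minimal sketch of the process in Python. Everything in it is illustrative: `generate_image` is a hypothetical stand-in for whatever text-to-image tool you work with, and the cues are examples, not a fixed recipe.

```python
# A sketch of prompt iteration with debiasing cues.
# `generate_image` is a hypothetical placeholder for any text-to-image
# API; swap in the tool you actually use.

BASE_PROMPT = "portrait of an engineer in a workshop"

# Example 'debiasing cues': short phrases appended to nudge the model
# away from its statistical defaults.
DEBIASING_CUES = [
    "diverse genders",
    "varied ages and body types",
    "a range of cultural backgrounds",
]

def build_prompts(base: str, cues: list[str]) -> list[str]:
    """Return the bare prompt plus versions with progressively more cues."""
    prompts = [base]
    for i in range(1, len(cues) + 1):
        prompts.append(f"{base}, {', '.join(cues[:i])}")
    return prompts

for prompt in build_prompts(BASE_PROMPT, DEBIASING_CUES):
    # image = generate_image(prompt)  # hypothetical API call
    print(prompt)  # inspect each variation; iterate until it matches your vision
```

The point of the sketch is the loop, not the specific cues: each pass is a test against your own vision, repeated until the output stops defaulting to the cliché.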
As in real life, representation takes effort, a conscious counterweight to the pull of the mainstream.
GenAI asks us to /imagine in plain text and see what emerges. I'd like to believe that what emerges depends less on the data than on the stories we choose to tell.