So we’ve finally hit peak absurdity in the AI age: charities are now fabricating images of suffering people rather than, you know, photographing actual suffering people. Because nothing says “we care about your dignity” quite like replacing you with a computer-generated stereotype.
According to this delightful piece of reporting, aid organizations are flooding their social media campaigns with AI-generated poverty porn – synthetic images of hollow-eyed children, cracked earth, and all the visual clichés that make donors reach for their wallets. Adobe’s selling licenses to these fake misery shots for about sixty quid a pop. That’s right, you can now purchase “Caucasian white volunteer provides medical consultation to young black children in African village” like you’re buying clipart for a PowerPoint presentation.
The justification? It’s cheaper and you don’t have to deal with pesky things like consent.
Let me pour that logic out for a second and watch it puddle on the bar. These organizations, whose entire mission supposedly revolves around helping real human beings, have decided that real human beings are too expensive and complicated to photograph. So instead, they’re commissioning artificial humans – perfect victims who never talk back, never demand dignity, never question how their image gets used. It’s the ultimate colonial fantasy: suffering people who exist purely for Western consumption and require zero actual engagement.
One researcher, Arsenii Alenichev, has been cataloging this garbage. He’s found over a hundred AI-generated images used in campaigns against hunger and sexual violence. Pictures of kids huddled in muddy water. African girls in wedding dresses with tears painted down their cheeks. All the greatest hits of the exploitation genre, now available without the inconvenience of exploiting actual people.
The CEO of Freepik, one of the stock photo sites peddling this stuff, offered up a defense that would make a whiskey distiller proud: basically, “Hey, we’re just giving the people what they want.” He said trying to fix the bias problem is “like trying to dry the ocean.” Which is a hell of a metaphor from someone running a business that literally creates artificial oceans of bias.
But here’s where it gets properly twisted. These NGOs spent years – YEARS – having anguished conversations about ethical imagery. About dignified storytelling. About not reducing human suffering to clickbait. They formed committees. They issued guidelines. They patted themselves on the back for being so thoughtful and progressive.
And then AI showed up with a shortcut, and all that ethical soul-searching went straight into the dumpster behind the building.
Plan International used AI to generate images of a girl with a black eye for a campaign against child marriage. Their defense? They were protecting the “privacy and dignity” of real girls. You read that right – they protected real girls’ dignity by creating fake girls to represent suffering. It’s like protecting someone from a mugging by inventing an imaginary person to get mugged instead. The logic is so circular it could power a merry-go-round.
The UN went even further, posting a video with AI-generated “re-enactments” of sexual violence. Computer-generated testimony from a woman describing being raped and left to die. When called out, they pulled it down and mumbled something about “improper use of AI” and “information integrity.” Translation: we got caught doing something monumentally stupid.
Now, I’m just a burned-out writer who spent too many years in soul-crushing jobs before landing here, but even I can spot the fundamental insanity. These organizations exist because real people are really suffering. Their entire pitch to donors is “look at these real people who really need help.” But when it comes time to show those real people, suddenly reality is too messy, too expensive, too legally complicated.
So they generate fakes instead.
The beautiful irony is that the consent justification gives the whole game away. The argument goes: we can’t get consent from vulnerable people, so we’ll use AI instead. But here’s the thing – if you can’t ethically photograph someone’s suffering, maybe you shouldn’t be using images of suffering at all. Maybe the problem isn’t consent logistics. Maybe the problem is that your entire fundraising model depends on turning human misery into visual spectacle.
AI didn’t create that problem. AI just made it easier to ignore.
And it gets worse. These synthetic poverty images are now feeding back into the training data for future AI models. Which means the stereotypes get amplified. The biases get baked in deeper. We’re creating a feedback loop of artificial suffering that has less and less connection to actual human experience. It’s poverty porn that doesn’t even need poor people anymore.
Kate Kardol, an NGO communications consultant, said the images “frighten” her. They should. Because what we’re watching is the logical endpoint of a system that values the appearance of caring more than actual caring. A system where the image of helping matters more than the help itself.
Think about what this means for a second. An aid worker in some office in London or New York can now conjure up perfect victims with a few keystrokes. No need to travel. No need to build relationships. No need to listen to what people actually need or want. Just generate some suffering, slap it on Instagram, and watch the donations roll in.
It’s the ultimate expression of contempt dressed up as compassion.
Alenichev points out that these AI images replicate “the visual grammar of poverty” – all the tired tropes and stereotypes. But that’s not a bug, it’s the whole point. These organizations don’t want authentic representations of poverty. They want poverty that looks like what donors expect poverty to look like. They want suffering that’s been focus-grouped and optimized for engagement metrics.
Real poverty is complicated. Real people are messy. Real suffering doesn’t always photograph well or fit into a tweet. But AI poverty? AI poverty is perfect. It’s clean. It’s exactly stereotypical enough to trigger the right emotional response without challenging anyone’s assumptions.
And the truly grim part? It works. Those AI-generated images get shared. They get clicks. They probably drive donations. Because donors don’t want complexity either. They want to feel good about helping, and that’s easier when the people being helped are two-dimensional constructs rather than fully realized human beings.
So here we are. Organizations dedicated to fighting poverty are now in the business of manufacturing it. Synthetically creating the very thing they claim to oppose. It’s so perfectly backwards that I almost have to admire it.
Almost.
The whole mess reminds me that we keep asking the wrong question about AI. We keep asking “can we do this?” when we should be asking “what does it say about us that we want to?”
And what it says isn’t pretty.
Cheers from the cheap seats,
Henry
Source: AI-generated ‘poverty porn’ fake images being used by aid agencies