The Ethics of AI Imagery: When Generated Becomes Indistinguishable from Real
January 22, 2026
AI can create any image you can imagine. Perfect lighting. Impossible locations. Models that don't exist. And that power comes with a responsibility most brands aren't thinking about yet.


Here's a brief we got six months ago.
The client needed campaign photography. Aspirational lifestyle imagery. Diverse cast. Multiple locations. Beautiful lighting. Standard stuff.
Then they asked: "Can we just generate it? It'd be faster and cheaper."
Technically? Yes. We could prompt Midjourney or DALL-E and create photorealistic images in hours instead of organising a shoot over weeks.
Ethically? That's where it gets complicated.
Because AI imagery isn't just a production shortcut. It's a fundamental question about what brands owe their audiences: truth.
The Technology Is Already Here
Let's establish what's possible right now, not in some distant future.
AI image generation has crossed the uncanny valley. You can create photorealistic images of people who don't exist in places that were never photographed doing things that never happened.
And unless you're looking carefully, pixel-peeping, examining shadows and reflections, you can't tell it's generated.
Here's what that means practically:
Product photography without products. You can generate images of your product in any setting before it's manufactured.
Model photography without models. You can create diverse, beautiful people representing your brand who were never born, never signed a release, never got paid.
Location photography without locations. That stunning sunset in Santorini? Generated. The warehouse in Shoreditch? AI. The desert in Dubai? Pixels.
Advertising campaigns without production budgets. No photographer. No crew. No travel. No permits. Just prompts and processing time.
This isn't theoretical. Brands are already doing this.
Where Brands Are Using AI Imagery Right Now
Let's talk about what's already happening in the market.
Levi's announced they'd use AI-generated models to increase diversity in their campaigns. They faced immediate backlash. Critics said: if you want diverse representation, hire diverse models. Don't generate them.
Cosmetics brands are using AI to show products on different skin tones without shooting every variation. Faster. Cheaper. But is the colour accuracy reliable? Are customers seeing what they'll actually get?
Real estate companies are using AI to stage empty properties. Furniture that doesn't exist. Styling that was never there. It looks better. It sells faster. But is it misleading?
Fashion brands are generating entire lookbooks. The clothes are real. The models, lighting, and locations aren't. Does that matter if the product photography is accurate?
Tech companies are using AI for lifestyle imagery showing their products in use. Perfect scenarios that never happened. Demographically diverse users who don't exist.
Some of this feels harmless. Some of it feels deceptive. And the line between them isn't always clear.
The Authenticity Problem
Here's why this matters for brands.
For decades, advertising has been aspirational but rooted in reality. Yes, the models were unusually beautiful. Yes, the lighting was perfect. Yes, the scenarios were idealised.
But they were real.
Real people were hired. Real locations were scouted. Real photographers captured real moments, even if those moments were art-directed within an inch of their lives.
There was a chain of authenticity. You could trace the image back to a moment in time when a camera shutter opened and light hit a sensor.
AI breaks that chain.
The person in your campaign imagery doesn't exist. They have no story. They never wore your product. They never experienced what you're claiming they experienced.
And yes, that's always been true of stock photography to some degree. But stock photography featured real people who really posed for that photo, even if they weren't actually using your product.
AI removes even that. It's simulation all the way down.
And once customers realise this, trust erodes.
The Deepfake Dilemma
Let's talk about the darker edge of this technology.
Deepfakes (AI-generated video of real people saying or doing things they never said or did) started as a novelty. Then they became a weapon.
Political misinformation. Celebrity exploitation. Fraud. Harassment.
The technology is the same technology brands are using for commercial imagery. The ethics are just more obviously fraught.
Here's the concern:
If brands normalise AI-generated imagery, they're normalising the infrastructure that enables deepfakes. They're training audiences to accept that what they see might not be real.
And once that becomes the baseline assumption, how do brands build trust? How do they prove authenticity when authenticity is no longer assumed?
This isn't hypothetical. It's happening now.
Deepfake scams are proliferating. AI-generated faces are used to create fake LinkedIn profiles, fake dating profiles, fake customer testimonials.
The technology is democratised. Anyone with a laptop can generate convincing fake imagery.
And brands have a choice. Contribute to the erosion of visual truth, or hold the line.
The DARB Stance: Transparency Over Convenience
Here's where we've landed after extensive internal debate.
We use AI imagery. But we disclose it.
If the image is generated, we tell you. We tell your customers. We don't pass it off as photography.
This doesn't mean avoiding AI entirely. It means using it responsibly and honestly.
Here's how we draw the lines:
Line One: Never Generate People for Testimonials or Case Studies
If you're showing a real customer, a real employee, or a real person using your product as social proof, that person must be real.
No AI-generated faces attached to made-up quotes. No simulated diversity. No fake testimonials.
This is non-negotiable.
Why? Because testimonials and case studies are trust signals. They're saying "a real person had this experience." If that person isn't real, you're lying.
Even if the sentiment is representative of real feedback, manufacturing the person destroys credibility when discovered. And it will be discovered.
Line Two: Product Photography Must Be Accurate
If you're selling a physical product, the product imagery must accurately represent what the customer receives.
You can generate the background. The setting. The lifestyle context around the product. But the product itself must be photographed or rendered accurately.
Why? Because this affects purchasing decisions. If the colour, texture, or proportions are wrong because an AI hallucinated details, that's misleading. That's not creative licence; it's false advertising.
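What does that split look like in a production pipeline? Here's a minimal sketch in Python using Pillow: a real, photographed product cut-out composited over a generated backdrop. The filenames and canvas size are placeholders for illustration; the point is that the product pixels come from a camera and only the setting is generated.

```python
from PIL import Image

# A generated backdrop and a *photographed* product cut-out
# (a PNG with transparency). Filenames are placeholders.
backdrop = Image.open("generated_backdrop.png").convert("RGBA")
product = Image.open("photographed_product.png").convert("RGBA")

# Fit the backdrop to the campaign canvas. Never distort the
# product itself: its colour and proportions must stay true.
canvas = backdrop.resize((2048, 2048))

# Composite the untouched product photograph over the generated scene.
x = (canvas.width - product.width) // 2
y = (canvas.height - product.height) // 2
canvas.alpha_composite(product, dest=(x, y))

canvas.save("campaign_composite.png")
```

The generated layer sets the mood. The photographed layer carries the claim.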
Line Three: Generated Humans Need Context Cues
If you're using AI-generated people for aspirational lifestyle imagery, you need to signal that somehow.
This doesn't mean putting "AI-generated" in huge text over the image. But it means designing campaigns where the generated nature is part of the creative, not hidden.
Illustrations. Stylised treatments. Surreal compositions. Make it clear this is creative vision, not documentary photography.
Why? Because audiences deserve to know when they're looking at real people versus simulations. Especially as AI improves and the line becomes impossible to see.
Line Four: Disclosure in Credits and Metadata
If AI generated any part of your campaign imagery, it should be credited.
Just like "Photography by [Name]" or "Illustration by [Name]," you should credit "AI imagery generated using [Tool]."
This serves multiple purposes. It's transparent. It protects you legally. And it normalises disclosure, making it standard practice rather than exceptional.
Why? Because transparency builds trust. And as regulations catch up to technology, disclosure requirements are coming. Better to be ahead of them.
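There's already a machine-readable way to do this. The IPTC Digital Source Type vocabulary includes a term for media created by a generative model, and tools like exiftool can write it into an image's XMP metadata. Here's a minimal sketch, assuming exiftool is installed, with placeholder filenames:

```python
import subprocess

# IPTC's controlled-vocabulary term for fully AI-generated media.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def stamp_ai_disclosure(image_path: str, tool_name: str) -> None:
    """Write an AI-generation disclosure into the image's XMP metadata."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={TRAINED_ALGORITHMIC_MEDIA}",
            f"-XMP-dc:Description=AI imagery generated using {tool_name}",
            "-overwrite_original",
            image_path,
        ],
        check=True,
    )

stamp_ai_disclosure("campaign_hero.jpg", "Midjourney")
```

Metadata can be stripped downstream, so treat this as a complement to visible credits, not a replacement for them.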
When AI Imagery Actually Makes Sense
Let's be clear. We're not anti-AI. We're pro-transparency.
There are contexts where AI-generated imagery is the right choice, ethically and practically.
Conceptual work. Pitching ideas before production. Internal presentations. Mood boards. Here, AI speeds up ideation without deceiving anyone.
Impossible scenarios. Fantasy. Science fiction. Surreal brand worlds. When realism isn't the goal, AI is a tool like illustration or CGI.
Placeholder imagery. During development, before the real shoot happens. As long as it's clearly temporary.
Stylised campaigns. Where the aesthetic is obviously generated. Think highly stylised, artistic, or abstract. The generated nature is part of the creative vision.
Background elements. Textures. Environments. Non-human elements. These carry less ethical weight than generating people.
The question isn't "should we use AI?" It's "are we being honest about what we're using AI for?"
The Legal Landscape (Which Is Still Forming)
Let's talk about the regulatory side, because it's evolving fast.
Copyright is murky. If AI generates an image based on training data that included copyrighted works, who owns the output? The prompter? The AI company? No one? Courts are still figuring this out.
Likeness rights are complicated. If you generate a person who happens to look like a real person, can that real person sue? What if it was unintentional?
Advertising standards are tightening. The ASA in the UK and the FTC in the US are starting to look at AI-generated content. Expect disclosure requirements soon.
Consumer protection laws apply. If AI imagery misleads customers about what they're buying, that's false advertising regardless of how the image was created.
Model release equivalents don't exist yet. You can't get a release from a person who doesn't exist. So what's the legal framework if that generated person resembles someone real?
The smart move? Be more conservative than the law requires. Because regulations lag technology. What's legal today might be litigated tomorrow. Transparency protects you.
How This Plays Out in Practice
Let's look at brands navigating this well and badly.
Coca-Cola's AI-generated Christmas campaign was widely criticised. They used AI to create nostalgic holiday imagery. People felt it lacked soul, and that Christmas, of all things, should involve real human creativity.
The backlash wasn't about legality. It was about appropriateness. Some moments feel wrong for AI, even if the technology allows it.
Heinz used AI brilliantly. They prompted AI with "ketchup" and every result looked like a Heinz bottle. The campaign celebrated their brand recognition. The AI generation was the point, not hidden. Transparent. Clever. Effective.
Toys "R" Us created an AI-generated brand origin story film. Stylised. Obviously AI. Positioned as innovation. They were upfront about the process. Reactions were mixed, but at least they were honest.
The difference? Transparency and appropriateness. When AI is the story, it works. When AI is pretending to be something it's not, it fails.
What We Recommend to Clients
Here's our framework: five questions to ask before generating anything. (We sketch it as a simple checklist in code after the questions.)
Ask: Does this image represent reality or imagination?
If reality (product shots, testimonials, case studies), use real photography. If imagination (concepts, fantasies, impossible scenarios), AI is fair game with disclosure.
Ask: Would customers feel deceived if they knew this was AI?
If yes, don't use AI. Or redesign the use so it's clearly creative, not documentary.
Ask: Are we replacing human workers unnecessarily?
Photographers, models, stylists: these are people with livelihoods. If you're only using AI to save money, consider the human cost. Sometimes efficiency isn't worth the ethical compromise.
Ask: Can we disclose this without undermining the campaign?
If disclosure would make the campaign feel less authentic, that's a sign the campaign is inherently deceptive. Redesign it.
Ask: What happens if this is discovered later?
Because it will be. AI detection tools are improving. Audiences are getting better at spotting generated content. If discovery would damage trust, don't hide it.
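Here's that checklist as the promised sketch, in Python. The field names and the pass/fail framing are our illustration, not a standard; the value is forcing every asset through the same five questions before sign-off.

```python
from dataclasses import dataclass

@dataclass
class ImageReview:
    """Answers to the five framework questions for one asset."""
    represents_reality: bool  # product shot, testimonial, case study?
    customers_would_feel_deceived: bool
    replaces_workers_for_savings_only: bool
    disclosure_undermines_campaign: bool
    discovery_would_damage_trust: bool

def ai_use_is_defensible(review: ImageReview) -> bool:
    # Imagery that represents reality should be real photography.
    if review.represents_reality:
        return False
    # A "yes" to any remaining question is a red flag.
    return not (
        review.customers_would_feel_deceived
        or review.replaces_workers_for_savings_only
        or review.disclosure_undermines_campaign
        or review.discovery_would_damage_trust
    )

# An obviously stylised fantasy scene, disclosed from the start: fair game.
print(ai_use_is_defensible(ImageReview(False, False, False, False, False)))  # True
```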
The Future: A World of Visual Scepticism
Here's where this is heading.
As AI imagery becomes ubiquitous, audiences will assume everything is generated unless proven otherwise.
Photography will need authentication. Blockchain verification. Camera metadata. Provenance chains. "This was really photographed" will become a selling point.
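Standards like C2PA Content Credentials do this properly, with cryptographic signing at the point of capture. As a toy illustration of the underlying idea (not the real standard), a provenance record is just a signed claim binding an image's bytes to a statement about its origin:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-real-key"  # placeholder; real systems use PKI

def provenance_record(image_path: str, source: str) -> dict:
    """Sign a claim that binds an image file's bytes to its stated origin."""
    with open(image_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    claim = {"file_sha256": digest, "source": source}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

# A record for a real photograph versus a disclosed AI render.
print(provenance_record("shoot_0142.jpg", "digitalCapture"))
print(provenance_record("campaign_hero.jpg", "trainedAlgorithmicMedia"))
```

Any later edit changes the hash, which breaks the signature. That's the chain of authenticity, rebuilt in software.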
Brands that commit to real photography will differentiate. "Shot on location with real models" will be a trust signal, not just a production note.
Disclosure will become regulated. Just like "paid partnership" or "sponsored content," expect "AI-generated" labels to become mandatory in advertising.
The brands that win will be the ones who get ahead of this. Who establish transparency now, before they're forced to. Who build trust by being honest about their process.
The DARB Edge
We use AI as a tool, not a replacement for truth.
We help clients understand when AI is appropriate and when it's not. We build disclosure into campaigns from the start. And we protect brand authenticity by never pretending generated is real.
Whether you're launching in London, Dubai, or globally, we make sure your imagery builds trust, not scepticism.
Because in a world where anything can be faked, authenticity becomes your most valuable asset. And you can't fake authenticity with AI.
Need imagery that's beautiful, effective, and honest? Let's talk about using AI responsibly. Get in touch with DARB.