Mar 28 2025
H&M Will Use Digital Twins
The fashion retailer H&M has announced that they will start using AI-generated digital twins of models in some of their advertising. This has sparked another round of discussion about the use of AI to replace artists of various kinds.
Regarding the H&M announcement specifically, they said they will use digital twins only of models who have already modeled for them, and only with their explicit permission; the models will retain full ownership of their image and brand, and will be compensated when their twins are used. On social media platforms, where such labeling is often required, the AI-generated imagery will carry a watermark indicating that it was made with AI.
It seems clear that H&M is dipping their toe into this pool, doing everything they can to address any possible criticism. They will get explicit permission, compensate models, and watermark their ads. But of course, this has not shielded them from criticism. According to the BBC:
American influencer Morgan Riddle called H&M’s move “shameful” in a post on her Instagram stories.
“RIP to all the other jobs on shoot sets that this will take away,” she posted.
This is an interesting topic for discussion, so here’s my two cents. I am generally not persuaded by arguments about losing existing jobs. I know this can come off as callous, as it’s not my job on the line, but there is a bigger issue here. Technological advancement generally leads to “creative destruction” in the marketplace. Obsolete jobs are lost, and new jobs are created. We should not hold back progress in order to preserve obsolete jobs.
Machines have been displacing human laborers for decades, and all along the way we have heard warnings about losing jobs. And yet, each step of the way more jobs were created than lost, productivity increased, and everybody benefited. With AI we are just seeing this phenomenon spread to new industries. Should models and photographers be protected when line workers and laborers were not?
But I do get that the pace of creative destruction appears to be accelerating. It’s disruptive, in good and bad ways. I think it’s a legitimate role of government to try to mitigate the downsides of disruption in the marketplace. We saw what happens when industries are hollowed out by market forces (such as globalization). This can create a great deal of societal ill, and we all ultimately pay the price for it. So it makes sense to try to manage the transition. This can mean providing support for worker retraining, protecting workers from unfair exploitation, protecting the right to collective bargaining, and strategically investing in new industries to replace the old ones. If one factory is shutting down, tax incentives can be used to lure in a replacement.
Regardless of the details – the point is to thoughtfully manage the creative destruction of the marketplace, not to inhibit innovation or slow down progress. Of course, industry titans will endlessly echo that sentiment. But they appear to be interested mostly in protecting their unfettered ability to make more billions. They want to “move fast and break things”, whether that’s the environment, human lives, social networks, or democracy. We need some balance so that the economy works for everyone. History consistently shows that if you don’t do this, the ultimate outcome is always even more disruptive.
Another angle here is whether these generative AI models were unfairly trained on the intellectual property of others. This mostly applies to artists – train an AI on the work of an artist and then displace that artist with AI versions of their own work. In reality it’s more complicated than that, but it is a legitimate concern. You could, in theory, train a model only on work that is in the public domain, or give artists the option to opt out of having their work used in training, and bar commercial use of the output of models trained any other way. We are currently wrestling with this issue. But I think ultimately this issue will become obsolete.
Eventually we will have high-quality AI production applications that have been scrubbed of any ethically compromised content but are still able to displace the work of many content creators – models, photographers, writers, artists, vocal talent, newscasters, actors, etc. We also won’t have to use digital twins, just images of virtual people who never existed in real life. The production of sound, images, and video will be completely disconnected (if desired) from the physical world. What then?
This is going to happen, whether we want it to or not. The AI genie is out of the bottle. I don’t think we can predict exactly what will happen. There are too many moving parts, and people will react in unpredictable ways. But it will be increasingly disruptive. Partly we will need to wait and see how it plays out. But we cannot just sit on the sidelines and wait for it to happen. Along the way we need to consider whether there is a role for thoughtful regulation to limit the breaking of things. My real concern is that we don’t have a sufficiently functional and expert political class to adequately deal with this.