Aug 21 2025
A New Way to Watch Foreign Language Films
Watch this movie trailer before reading on. What did you think? This is an independent Swedish film, originally filmed in Swedish, and then dubbed into English. However, it wasn’t just dubbed in the conventional way – AI was used to convert the facial movements of the actors to the dubbed language.
I often like independent films, and now there are also a lot of foreign-language films on streaming services like Netflix. But this creates a dilemma – do we watch in the original language with English subtitles, or do we watch a version dubbed into English? Both options, in my opinion, are suboptimal, and my wife and I often disagree about which version we should watch.
The dubbed version often involves bad voice acting that can destroy the vibe of the film. Squid Game is the best recent example of this – the English dubbed version was just bad, and so we watched with subtitles. In the original language you get a much better feel for the acting and the emotion of the actors. Also, when the dialogue is translated for dubbing it can be tweaked to match the mouth movements as closely as possible, and this is another trade-off. If you don’t do this, in order to preserve the most faithful translation, the out-of-sync speech can be jarring. If you do, then the dialogue can be significantly different from the original. If you have ever watched a foreign-language film with both dubbing and subtitles turned on at the same time, you will notice that the two translations are often starkly different.
Subtitles preserve the original acting and translate the language more faithfully, but then you have to read them. In my opinion, this significantly detracts from the experience of watching the film. I want to look at the acting, the editing, the cinematography – the full visual experience of the film. I can’t do this if I am trying to keep up with reading the dialogue.
One solution to this dilemma is to just do the dubbing really well. Germany, for example, embraces dubbing as a cinematic art form unto itself. German dubbing companies take great pains to match voice actors to the original actors, and to use only high-quality voice acting. They also take great care to match the dubbed vocals to the movements of the actors’ mouths as much as possible. The result, from what I read (I don’t speak German), is as good as dubbing can get. Still, they have to make some translation compromises.
But what if we could have the best of both worlds? This would still require good voice acting (which is not a given), but then you could use whatever translation works best and not worry about lip-sync. Then use AI to alter the video to match the dubbing, so it is seamless, as if the film were shot in the dubbed language. That is the idea behind the company Flawless, which uses an AI app, DeepEditor, to alter the video to match the vocals. You can see the result in the trailer.
I would say – the voice acting seems fine, and the sync obviously works. But the facial movements were not quite there. Some shots were, in my opinion, in the uncanny valley. This is exactly what I feared when I read the article before looking at the video – making speaking mouth movements look natural is perhaps the last great challenge for AI. If you look at a lot of AI-generated video you may notice that you rarely see realistic characters speaking. You can see cartoonish characters speaking, or realistic characters not speaking. But – especially when showing off how good their video quality is – never realistic characters speaking. This is because no one can do it yet.
The reason is partly neurological – we just have highly fine-tuned visual processing for human speech movements. This may be because we lip read to help us decipher spoken language (whether you realize it or not). If it’s off in the slightest we notice, and it feels uncanny. I do wonder how long it will take to crack this challenge. It’s perhaps orders of magnitude more difficult than other aspects of simulating human movement.
We also tend to notice if the physics is off, even slightly, as we have a highly tuned sense of this as well. This is likely because our brains determine whether something is an agent partly by whether it is moving ballistically (only under the influence of inertia and gravity) or under its own power, so our brains are good at telling the difference. AI video generators, however, have basically solved this challenge.
Realistic mouth movement during speech is the last real great challenge for CG video. AI has gotten us closer than other approaches, but even that is not close enough. At this point, I think watching an entire movie using this tech would be distracting. I will likely try it just for the experience, but I doubt I will choose this option – not yet.
An easier application of AI to dubbing movies is using AI to make the translation and create the vocals, trying to preserve the original actor’s voice and emotional content. AI apps have already achieved high levels of output here. But there is great pushback against using the tech in this way because it displaces voice actors. I can understand this at the high end of quality, like in Germany. But many dubbed movies have terrible voice acting, so I have no qualms about displacing that with higher-quality AI-generated dubbing.
All the caveats about the downsides of AI technology aside, it would be nice to have the ultimate expression of this technology – the ability to seamlessly translate and dub films from any language into any language (video and voice), while preserving as much of the original-language film as possible. Then my wife and I can stop arguing about which version to watch, dubbed or subtitled.