When subtitles and voice-over make a great combo localization option


It makes sense to think of voice-over dubbing and subtitling as completely distinct localization options. After all, they generally cover the same content (dialogue, narration and sometimes on-screen text), and most projects just choose between them based on content and budget. However, there are cases in which using both voiceover and subtitles in the same project can be required, can produce higher-quality products – and can even reduce costs.

This post will list the three situations in which subs and VO make a great combo localization option, and what you need to know to make these kinds of projects work.

[Average read time: 4 minutes]

The three subs-VO combo situations

Let’s jump right in – here are the three situations when a subs-VO combo makes a great translation solution.

1. For content with on-screen speakers and off-screen narration.

The different kinds of video dubbing fall into two camps – services that deal with on-screen speakers (UN-style, dialogue replacement, and lip-sync dubbing), and the one that specifically doesn’t address on-screen speakers (off-screen narration). Dealing with on-screen speakers adds a level of difficulty to any video translation production, which drives up the cost. In contrast, off-screen voice-over narration – though it still requires highly skilled, professional voice talents – is less labor-intensive and more cost-effective.

For this reason, productions that have both off-screen narration and on-screen speakers can benefit from a subs-VO combo – specifically, re-recording the off-screen narration while subbing any on-screen dialogue or presentation. Subs are more cost-effective than VO, so this alone reduces project costs. Moreover, productions with on-screen speakers generally require multiple talents – even documentaries using UN-style voiceover – a cost that subtitles eliminate entirely.

[Image: subtitles and voice-over dubbing combinations for audio & video translation localization]

While this option is very cost-effective, for some projects it may actually be the best option quality-wise. Here are just a couple of instances:

  • Documentaries: Because authenticity is crucial to docs, subbing on-screen speakers can in fact be a better option than re-voicing them, particularly if the documentary is aimed at an art-film audience. While UN-style voiceover retains the original speaker’s voice low in the background, subs allow the audience to hear it fully, registering the speaker’s emotion and overall tone completely. Because the off-screen narration is generally recorded later by a professional talent, it can usually be replaced without a loss of authenticity. This approach wouldn’t work as well for documentaries with narrator-subjects – for example, the work of Michael Moore or Agnès Varda – or for news pieces, in which the reporters usually act as narrator-subjects.
  • Corporate presentations with short on-screen speaker clips: Think of a one-hour corporate presentation on ethics for a large company, which is kicked off by a 30-second message from the CEO. In this case, it doesn’t really make sense to dub the CEO’s message – it’s short, it’ll require an additional voice over talent, the CEO is a well-known person (whose voice the employees probably recognize), and subtitling is more cost-effective.

In the following clip you can see an example of this – we recorded the off-screen narration for this Sound Blaster video produced by Backyard Studios. However, the video features pro gamer Mike Ross speaking right in the middle – because he’s a well-known figure, it makes sense to subtitle his line, as you can see in the following Russian voice translation clip.

You can read more about this project in our previous blog post.

2. For forced subtitles

Forced subtitles, also known as forced narratives, are subs present in the original production that make it comprehensible to its intended native-language audience. Usually, this means dialogue in an English-language movie or TV show that’s in a different language. A great example is Star Trek, in which multiple characters speak different alien languages, all subtitled into English. A Finnish localization of a Trek episode wouldn’t lip-sync dub the Klingon speakers (though that might be hilarious) – it would subtitle them.

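In finished deliverables, forced narratives are typically carried as a separate subtitle track with a "forced" flag set in the container's metadata – that flag is how downstream players know to display the track even when subtitles are switched off. As a quick, minimal sketch – assuming ffprobe (part of FFmpeg) is installed, and using "deliverable.mkv" purely as a hypothetical file name – here's one way to list a file's subtitle streams and see which are flagged as forced:

```python
# Minimal sketch: list the subtitle streams in a deliverable and note which
# are flagged as forced. Assumes ffprobe (FFmpeg) is on the PATH; the file
# name "deliverable.mkv" is purely illustrative.
import json
import subprocess

def list_subtitle_streams(path):
    """Return ffprobe metadata for every subtitle stream in the container."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "s",
         "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("streams", [])

for stream in list_subtitle_streams("deliverable.mkv"):
    language = stream.get("tags", {}).get("language", "und")
    forced = stream.get("disposition", {}).get("forced", 0)
    print(f"Stream #{stream['index']}: {language}{' [forced]' if forced else ''}")
```
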
For more on forced narratives, check out our previous blog post, What are forced subtitles in video translation?

3. For audience accessibility

The United States requires captioning for all broadcast (and now, online) productions – and many countries around the world either are following suit or already have similar requirements. Therefore, more productions now require foreign-language voice over as well as foreign-language subs for the deaf and hearing-impaired (also called SDH).

What you need to know for successful subs-VO combo projects

Three tips will help you keep your projects on track and on budget:

  1. Remember that subs and VO have very different timelines – but you’ll need them delivered at the same time on the project. This means that subs-VO projects require more time than subs-only or VO-only projects, which in turn means closer supervision and more detailed scheduling. This is especially true because it’s crucial to…
  2. …use the same translator, if at all possible, for both the VO and subs parts. Not only does this require detailed scheduling, but it also requires a translator who’s familiar with both deliverables. This is crucial to ensuring that your project stays consistent in terms of tone, style and vocabulary.
  3. For accessibility deliverables, remember that captions for the hearing-impaired should match the source dialogue closely. While ideally the SDH would be translated separately from the dubbing to maximize the quality of each delivery (especially since the latter can require heavy editing for lip-sync), the reality is that SDH is often used by viewers in conjunction with the same-language audio track. In these cases, discrepancies between the actual audio and the SDH are quite jarring. Therefore, make sure that your workflow aligns these two deliverables, and implement a QA process for this – a simple sketch of such a check follows this list.

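To make that last tip concrete, here's a minimal sketch of what such a QA check could look like. It assumes the SDH deliverable is a plain .srt file, that the approved dialogue script is a plain-text file with one line per cue in the same order, and that the file names are hypothetical – in practice your workflow and formats will differ.

```python
# Minimal sketch: flag SDH cues whose text drifts too far from the approved
# dialogue script. File names and the one-line-per-cue script format are
# illustrative assumptions, not a prescribed workflow.
import difflib
import re

def load_srt_cues(path):
    """Return the dialogue text of each cue in a .srt file, tags stripped."""
    cues, block = [], []
    with open(path, encoding="utf-8") as srt:
        for line in srt:
            line = line.strip()
            if line:
                block.append(line)
            elif block:
                # Drop the cue-number and time-code lines, keep the dialogue.
                cues.append(re.sub(r"<[^>]+>", "", " ".join(block[2:])))
                block = []
    if block:
        cues.append(re.sub(r"<[^>]+>", "", " ".join(block[2:])))
    return cues

def flag_discrepancies(sdh_path, script_path, threshold=0.85):
    """Print cues where the SDH text falls below the similarity threshold."""
    cues = load_srt_cues(sdh_path)
    with open(script_path, encoding="utf-8") as script_file:
        script = [line.strip() for line in script_file if line.strip()]
    for number, (cue, script_line) in enumerate(zip(cues, script), start=1):
        ratio = difflib.SequenceMatcher(None, cue.lower(), script_line.lower()).ratio()
        if ratio < threshold:
            print(f"Cue {number}: similarity {ratio:.2f}")
            print(f"  SDH:    {cue}")
            print(f"  Script: {script_line}")

flag_discrepancies("episode01_sdh.srt", "episode01_dialogue_script.txt")
```

The point isn’t the specific tool – it’s that a lightweight, automated pass like this surfaces the cues worth a second look before the two deliverables ship together.
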
In short, subs-VO combo localization projects are very cost-effective, and provide the optimal results for some kinds of content. Sometimes they may be required by audience requests, local law, or corporate compliance and accessibility requirements. At the same time, they definitely require more planning ahead, better scheduling, more highly-skilled linguists, and a different (or augmented) quality assurance workflow. If you have one of these projects, make sure to allow more time for production, and especially for the QA process. And as always, engage your partner studio as early in the process as possible, even during post-production of the original English-language content – this is by far the best way to plan ahead for localization, minimize costs, and keep your audio and video translation on time, on scope, and on budget.