Depending on the speed of speech, there are exceptions to this general recommendation (see Live subtitling and Short and long sentences below). Three lines may be used if you are confident that no important picture information will be obscured. When deciding between one long line or two short ones, consider line breaks, the number of words, the pace of speech and the image. The ideal line break is at a piece of punctuation such as a full stop, comma or dash.
If the break has to be elsewhere in the sentence, avoid splitting closely integrated parts of speech, as in: We are aiming to get a better television service. Line endings that break up a closely integrated phrase should be avoided where possible, and line breaks within a word are especially disruptive to the reading process and should be avoided. Ideal formatting is therefore a compromise between linguistic and geometric considerations, with priority given to the linguistic ones.
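As an illustration only, the compromise between linguistic and geometric considerations can be sketched as a naive scorer (the weights and the 37-character row width are assumptions, not values specified by these guidelines):

```python
def best_break(text, max_line=37):
    """Choose a line-break position for a two-line subtitle.

    Prefers breaks after punctuation (full stop, comma, dash), then at
    ordinary spaces, and penalises geometrically unbalanced lines.
    The scoring weights are illustrative assumptions.
    """
    best_pos, best_score = None, float("-inf")
    for i, ch in enumerate(text):
        if ch != " ":
            continue  # only break between words
        first, second = text[:i], text[i + 1:]
        if len(first) > max_line or len(second) > max_line:
            continue  # neither line may exceed the row width
        score = 0.0
        if first and first[-1] in ".,-;:":
            score += 10.0  # linguistic: break after punctuation
        # geometric: mildly penalise unbalanced lines
        score -= abs(len(first) - len(second)) / len(text)
        if score > best_score:
            best_pos, best_score = i, score
    return best_pos
```

Because punctuation outweighs balance, `best_break("Yes, we did. But not today.")` breaks after "did." rather than at the geometrically central space.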
In such cases, line breaks should be inserted at linguistically coherent points, taking eye movement into careful consideration. For example, the subtitle "We all hope you are feeling much better." may be set left justified or centre justified. Problems occur with justification when a short sentence or phrase is followed by a longer one: there is a risk that the bottom line of the subtitle is read first, which could result in only half of the subtitle being read.
Allowances would therefore have to be made by breaking the line at a linguistically non-coherent point. Left, centre and right justification can be specified using the relevant tts: styling attribute. Also take into account the number of words, line breaks and so on; however, you should also consider the image and the action on screen.
For example, consecutive subtitles may better reflect the pace of speech. Allow a single long sentence to extend over more than one subtitle rather than compressing it into one. Sentences should be segmented at natural linguistic breaks so that each subtitle forms an integrated linguistic unit.
Thus, segmentation at clause boundaries is to be preferred:

When I jumped on the bus
I saw the man who had taken the basket from the old lady.

Segmentation at major phrase boundaries can also be accepted, as follows:

On two minor occasions
small numbers of people were seen crossing the border.

There is considerable evidence from the psycholinguistic literature that normal reading is organised into word groups corresponding to syntactic clauses and phrases, and that linguistically coherent segmentation of text can significantly improve readability. Random segmentation must certainly be avoided:

On two minor occasions immediately
following the war, small numbers of
people, etc.

In the examples given above, no markers are used to indicate that segmentation is taking place. It is also acceptable to use sequences of dots (three at the end of a to-be-continued subtitle, and two at the beginning of a continuation) to mark the fact that segmentation is taking place, especially in legacy subtitle files.
Because line breaks require considering all of the above, they are better inserted manually; implementers should avoid automatic line breaking. However, it is not always possible to produce good line breaks as well as well-edited text and good timing. Where these constraints are mutually exclusive, well-edited text and timing are more important than line breaks. Viewers tend to prefer verbatim subtitles, so the presentation rate may be adjusted to match the pace of the programme.
Most subtitle authoring tools calculate the WPM and can be configured to give a warning when the word rate exceeds a certain WPM threshold. You can also calculate the WPM manually (see box). The duration value can be calculated from the begin and end attributes. In the example fragment below, the first subtitle has a word rate of 2 words per second, i.e. 120 WPM.
The second subtitle is cumulative. However, timings are ultimately an editorial decision that depends on other considerations, such as the speed of speech, text editing and shot synchronisation. When assessing the amount of time that a subtitle needs to remain on the screen, think about much more than the number of words on the screen; counting words alone would be an unacceptably crude approach. There are circumstances which could mean giving less reading time; however, always consider the alternative of merging with another subtitle.
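The manual calculation can be sketched as follows, assuming begin and end attributes in hh:mm:ss.fff form as used in EBU-TT-D documents:

```python
def to_seconds(timecode):
    """Parse an hh:mm:ss.fff media timecode into seconds."""
    hours, minutes, seconds = timecode.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

def words_per_minute(text, begin, end):
    """Word rate of one subtitle, from its begin/end attributes."""
    duration = to_seconds(end) - to_seconds(begin)
    return len(text.split()) / duration * 60

# 10 words shown for 5 seconds: 2 words per second, i.e. 120 WPM
rate = words_per_minute(
    "one two three four five six seven eight nine ten",
    begin="00:00:10.000", end="00:00:15.000")
```

An authoring tool would compare `rate` against its configured WPM threshold and raise a warning when it is exceeded.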
If an item is already particularly concise, it may be impossible to edit it into subtitles at standard timings without losing a crucial element of the original. For instance, a detailed explanation of an economic or scientific story may prove almost impossible to edit without depriving the viewer of vital information.
In these situations a subtitler should be prepared to vary the timing to convey the full meaning of the original. Leave a clear gap between consecutive subtitles; anything shorter produces a very jerky effect, but try not to squeeze gaps in if the time can be used for text. Subtitle appearance should coincide with speech onset, and subtitle disappearance should coincide roughly with the end of the corresponding speech segment, since subtitles remaining too long on the screen are likely to be re-read by the viewer.
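A simple check for awkwardly short gaps might look like this (a sketch: the one-second minimum is an assumed illustrative value, not a figure from these guidelines, and back-to-back subtitles are deliberately left alone, since time that could carry text should not be given up to a gap):

```python
def short_gaps(subtitles, min_gap=1.0):
    """Return indices of gaps that are positive but shorter than min_gap.

    subtitles: list of (begin, end) pairs in seconds, in display order.
    Back-to-back subtitles (zero gap) are not flagged; only a small,
    jerky-looking gap is. min_gap is an assumed illustrative value.
    """
    flagged = []
    for i in range(len(subtitles) - 1):
        gap = subtitles[i + 1][0] - subtitles[i][1]
        if 0 < gap < min_gap:
            flagged.append(i)
    return flagged
```

For example, a 0.2 s gap between the first two subtitles is flagged, while a back-to-back pair or a 1.5 s gap is not.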
When two or more people are speaking, it is particularly important to keep in sync. Subtitles for new speakers must, as far as possible, come up as the new speaker starts to speak. Whether this is possible will depend on the action on screen and rate of speech.
The same rules of synchronisation should apply with off-camera speakers and even with off-screen narrators, since viewers with a certain amount of residual hearing make use of auditory cues to direct their attention to the subtitle area. Ideally, when the speaker is in shot, your subtitles should not anticipate speech by more than 1.5 seconds. However, if the speaker is very easy to lip-read, slipping out of sync even by a second may spoil any dramatic effect and make the subtitles harder to follow.
The subtitle should not be on the screen after the speaker has disappeared. Note that some decoders might override the end timing of a subtitle so that it stays on screen until the next one appears. This is a non-compliant behaviour that the subtitle author and broadcaster have no control over.
Decoders need to match the begin and end timing specified in documents as closely as possible to maintain the careful synchronisation we expect from subtitle authors. In particular, see Annex E of EBU-TT-D regarding quantisation of timing, for example when the video can only be presented at a low frame rate, such as in poor network conditions. If a speaker speaks very slowly, then the subtitles will have to be slow too, even if this means breaking the timing conventions.
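As a rough sketch of what such quantisation involves (Annex E of EBU-TT-D defines the actual behaviour; the snap-to-nearest-frame policy here is a simplification):

```python
from fractions import Fraction

def snap_to_frame(seconds, frame_rate):
    """Quantise an authored time to the nearest presentable frame.

    A simplified illustration of timing quantisation, not the EBU-TT-D
    Annex E algorithm itself. Fractions avoid floating-point drift for
    non-integer rates such as 30000/1001.
    """
    rate = Fraction(frame_rate)
    frames = round(Fraction(seconds).limit_denominator(1_000_000) * rate)
    return float(frames / rate)

# A begin time of 1.03 s cannot be hit exactly at 25 fps;
# it snaps to frame 26, i.e. 1.04 s.
```

The same idea applies when a player falls back to a lower presentation rate: authored times are mapped to the nearest time the decoder can actually display.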
If a speaker speaks very fast, you have to edit as much as is necessary in order to meet the timing requirements (see Timing). Sometimes, however, in order to meet other requirements, subtitles will slip out of sync. In this case, subtitles should never appear more than 2 seconds after the words were spoken; avoid this by editing the previous subtitles. It is permissible to slip out of sync when you have a sequence of subtitles for a single speaker, providing the subtitles are back in sync by the end of the sequence. If the speech belongs to an out-of-shot speaker or is voice-over commentary, then it is not so essential for the subtitles to keep in sync.
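The two-second limit can be checked mechanically if speech-onset times are available (a sketch; aligning each subtitle to its speech segment is assumed to have been done already):

```python
def late_subtitles(pairs, max_lag=2.0):
    """Return indices of subtitles appearing too long after their speech.

    pairs: list of (speech_onset, subtitle_begin) times in seconds,
    one pair per subtitle. max_lag follows the two-second rule above.
    """
    return [i for i, (onset, begin) in enumerate(pairs)
            if begin - onset > max_lag]
```

A flagged index points at a subtitle to pull earlier, usually by editing the preceding subtitles to free up time.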
For example, if there is a loud bang at the end of, say, a two-second shot, do not anticipate it by starting the label at the beginning of the shot. Wait until the bang actually happens, even if this means a fast timing. Subtitles should generally be kept within shot boundaries; many subtitles therefore start on the first frame of the shot and end on the last frame.
The duration of the overhang will depend on the content. To do this, you may need to split a sentence at an appropriate point, or delay the start of a new sentence to coincide with the shot change.
Authoring tools may use automated shot detection to avoid this scenario. Bear in mind, however, that it will not always be appropriate to merge the speech from two shots: for example, if someone sneezes on a very short shot, it is more effective to leave the "Atchoo!" as a subtitle of its own on that shot. If possible, the subtitler should wait for the scene change before displaying the subtitle.
If this is not possible, the subtitle should be clearly labelled to explain the technique.

'And what have we here?'

The BBC's preferred techniques are colour and single quotes, but other techniques exist in legacy subtitle files and in subtitles repurposed from non-UK sources.
Re-use of existing files with legacy techniques is acceptable, but unless specifically requested, new content should not use legacy techniques.
The available techniques include:

- Colour. This is the preferred method that should be used in most cases.
- Single quotes. Used to indicate an out-of-vision speaker, such as someone speaking via telephone, or to distinguish between in- and out-of-vision voices when both are spoken by the same character or by the narrator and therefore use the same colour.
- Positioning. Used to indicate the direction of out-of-vision sounds when the origin of the sound is not apparent. Can be used to resolve ambiguity as to who is speaking.
This is a legacy technique for identifying in-vision speakers, but it is still used for indicating off-screen speech. It is also used with vertical positioning to avoid obscuring important information. As a legacy technique, it must only be used with colour, and only when unavoidable.
Colour is the preferred method for identifying speakers. Where the speech for two or more speakers of different colours is combined in one subtitle, their speech runs on:

Did you see Jane?
I thought she went home.

However, if two or more WHITE text speakers are interacting, you have to start a new line for each new speaker, preceded by a dash.
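The two layouts can be sketched as follows (a simplified model: colour rendering itself is omitted, and the run-on rule is reduced to one turn per line):

```python
def layout_exchange(turns):
    """Lay out a short two-or-more speaker exchange in one subtitle.

    turns: list of (colour, text) pairs in speaking order.
    Speakers with distinct colours simply run on; speakers sharing
    white text each start a new line preceded by a dash.
    """
    distinct_colours = len({colour for colour, _ in turns}) > 1
    if distinct_colours:
        lines = [text for _, text in turns]  # colour identifies each voice
    else:
        lines = ["- " + text for _, text in turns]  # dash marks each new speaker
    return "\n".join(lines)
```

With two white-text speakers, the example exchange above would come out as "- Did you see Jane?" followed by "- I thought she went home." on a new line.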