Imagine going to the cinema to watch a movie that has had its sound removed, with no captions to fall back on. You might guess some of what the characters are saying, but it would be difficult to follow what is happening and to immerse yourself in the experience.
This is a problem that persons who are deaf or hard of hearing may face, not just at the cinema but also in classrooms and other settings, when videos lack captions or subtitles.
“Captions provide accessibility, and that’s very important because communication is a basic human right. By providing captions, we allow people to be included in a community, for example during learning activities or in discussion forums,” said Li Kunqi, founder of CaptionCube, a social enterprise in Singapore that provides video captioning, subtitling, transcription and translation services to the media and education sectors.
She added: “Captioning is a universal solution that benefits everyone, be it the elderly with hearing loss, young children who are learning how to spell, people in noisy environments who cannot hear their videos, or simply people on the train who want to watch their videos without turning the sound on and disturbing others.”
Over the past year, the firm has been conducting workshops to teach people how to create high-quality captions and subtitles. During a workshop held recently at Enabling Village, Kunqi said: “We empower people to work from home as freelancers for us, including people with disabilities who may find it hard to move around and go to an office to work.”
The science of good captions
Participants at the workshop learned that good captions do not just match what is being said on screen; they are also easy to read and understand. For example, a single line of text should have no more than 42 characters, make sense on its own and not leave the reader hanging, unless a dramatic pause is intended.
This means that sentences too long to fit on a single line should be broken up at appropriate places, such as punctuation marks, conjunctions, prepositions and natural pauses in speech. A golden rule is to never break a line between an article (“a”, “the”) or a possessive pronoun (“his”, “hers”, “your”) and the noun it modifies.
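For readers curious about how such rules might be automated, here is a minimal sketch of a line-breaking heuristic in Python. The 42-character limit and the article/possessive-pronoun rule come from the workshop; the word lists, the 70-percent threshold and the function name are illustrative assumptions, not CaptionCube’s actual tooling.

```python
# Illustrative sketch of a caption line-breaking heuristic.
# The 42-character limit and the "keep articles and possessive pronouns
# with their noun" rule follow the workshop guidelines; the word lists
# and thresholds below are assumptions made for this example.

MAX_CHARS = 42
# Prefer to start a new line before these conjunctions/prepositions.
BREAK_BEFORE = {"and", "but", "or", "so", "because", "with", "to", "of", "in", "on", "at"}
# Never end a line on one of these; keep it with the noun that follows.
KEEP_WITH_NEXT = {"a", "an", "the", "his", "her", "hers", "your", "their", "my", "our"}

def break_caption(text: str) -> list[str]:
    lines: list[list[str]] = [[]]
    for word in text.split():
        line = lines[-1]
        if line and len(" ".join(line + [word])) > MAX_CHARS:
            # Line is full: start a new one, pulling back any trailing
            # article or possessive pronoun so it stays with its noun.
            carried: list[str] = []
            while line and line[-1].lower() in KEEP_WITH_NEXT:
                carried.insert(0, line.pop())
            lines.append(carried + [word])
        elif line and word.lower() in BREAK_BEFORE and len(" ".join(line)) > MAX_CHARS * 0.7:
            # Already near the limit: break early, before a conjunction
            # or preposition, rather than mid-phrase later.
            lines.append([word])
        else:
            line.append(word)
    return [" ".join(words) for words in lines if words]

# Example (hypothetical sentence): each resulting line stays under
# 42 characters and avoids ending on an article or possessive pronoun.
for caption_line in break_caption(
    "CaptionCube embraces inclusiveness and provides captions for everyone who needs them."
):
    print(caption_line)
```

Real captioning tools apply far more context (speaker changes, reading speed, dramatic pauses) than a greedy heuristic like this can, which is why the workshop stresses human judgment over automation.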
Take the following two images:
Persons reading the caption in the first image will be left wondering what it is that CaptionCube embraces. Furthermore, readers who see only the next line of text, which would start with “inclusiveness”, would be confused by the lack of context. The caption in the second image, on the other hand, breaks the sentence up into lines that are each easily understood.
Keeping readers in mind
Good captions should also appear on screen about 100 to 200 milliseconds before the speech starts, and remain on screen for about 300 to 500 milliseconds after the speech ends. This is to give readers enough time to take in the words.
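As a rough illustration of that timing guideline, the sketch below pads a caption cue’s start and end times by amounts within the quoted ranges. The 150 ms and 400 ms values, the seconds-based cue format and the function name are assumptions for this example, not taken from any particular captioning tool.

```python
# Illustrative sketch: pad cue times so a caption appears slightly before
# speech starts and lingers slightly after it ends, per the workshop's
# guideline (100-200 ms lead-in, 300-500 ms hold). The specific values
# and cue format here are assumptions.

LEAD_IN = 0.150  # seconds before speech starts (within the 100-200 ms range)
HOLD = 0.400     # seconds after speech ends (within the 300-500 ms range)

def pad_cue(start: float, end: float, next_start: float | None = None) -> tuple[float, float]:
    """Shift a cue's start earlier and its end later, without going below
    zero or running into the next cue."""
    new_start = max(0.0, start - LEAD_IN)
    new_end = end + HOLD
    if next_start is not None:
        # Leave room for the next cue's own lead-in.
        new_end = min(new_end, next_start - LEAD_IN)
    return new_start, new_end

# Example: speech runs from 12.0 s to 14.2 s.
print(pad_cue(12.0, 14.2))  # roughly (11.85, 14.6)
```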
During the workshop, CaptionCube demonstrated the use of typical captioning and subtitling software. Its interface is divided into three sections: one for entering text, another for viewing the video the text will appear in, and a third displaying the audio waveform so that users can time when captions and subtitles appear and disappear.
The elements of typical captioning and subtitling software.
Poonam Tomar, one of the workshop’s participants, said that it opened her eyes to the nuances of good captions and subtitles. “I’ve seen movies where the information in the captions is incomplete or subpar, and this workshop has been very enlightening in terms of how the content should be delivered properly,” she said.
She added: “I think captioning is really important because people who cannot hear should have full descriptions of what they are watching. Such workshops can not only help to supply that, but also raise awareness of the need for captioning.”
To find out more about CaptionCube’s workshops, visit the CaptionCube website.