Author Archive

How Does Stephen Hawking Talk?

Stephen Hawking The Theory of Everything

The Official Film Post

With the release of the Hollywood movie The Theory of Everything, which looks back on the early life of Professor Stephen Hawking, the famed theoretical physicist known for his impressive body of work as well as for living with a degenerative motor neuron disease, interest in his story has surged. The film reached the UK on January 1st and has been popular so far.

With this renewed interest in Professor Hawking, his work and his life, we thought we’d take a closer look at his communication aids and how he is actually able to speak. With no verbal skills remaining, Hawking has relied on computer-based speech synthesis for many years, something we touched on previously in our post on Klatt’s Last Tapes, a BBC Radio documentary that featured Hawking’s daughter and discussed his role in the development of speech synthesis. Here we’re taking it a step further and asking in more depth: exactly how does Stephen Hawking talk? But first, a brief look at Motor Neuron Disease and how it affects those living with it.

What is Motor Neuron Disease?

Trabasaxon Liam with Stephen Hawking

Motor Neuron Disease (MND) is a progressive disease that affects the nerves in the brain and spinal cord. Professor Hawking received his diagnosis aged 21, and many individuals live with the disease across the UK and around the world, including our Trabasaxon pal Liam Dwyer (pictured with Hawking). It’s a disease of which there is surprisingly little awareness.

MND affects different people in different ways. It affects the way individuals walk, talk, eat, drink and breathe, but it’s very rare for all the effects to come on at once or in any particular order, and not all individuals with MND get all symptoms. There is no cure for MND, so symptoms are managed on an individual basis. This video shows Liam discussing MND in depth and how he works to raise awareness, using his own speech synthesiser:

Stephen Hawking and MND

As we said, Stephen Hawking received his diagnosis of MND aged 21, and soon required crutches to walk, then a wheelchair. He first began using his computer speech synthesiser in the 1980s, and although there have been many developments since its first installation, the system remains very similar.

Hawking also uses a wheelchair and requires nursing support due to MND. While he originally didn’t want to focus on his disability, he began working in the disability sector in the 1990s, providing a role model and an example of what can be achieved, however severe a disability you have. He is committed to the protection of disabled people’s rights and got his family involved in the viral Ice Bucket Challenge in 2014, supporting MND awareness.

How does Stephen Hawking Talk?

Stephen Hawking Speech Synthesis

Professor Hawking giving a speech at NASA

Now, back to the main issue: since opting for speech synthesis, how has Stephen Hawking managed to speak? Hawking cannot speak or physically type on a keyboard, so his input to the speech synthesiser is based entirely on facial movements. It’s a revolutionary system which was developed with Hawking in mind but can be adapted for other users with similar needs.

Hawking communicates via a computer system which is mounted to his wheelchair and powered by the same batteries that keep his power chair going. The speech synthesis works through a specific programme called EZ Keys, which gives Hawking an on-screen software keyboard. A cursor moves automatically across the keyboard in rows or columns, and Hawking is able to select a character by moving his cheek to stop the cursor on the character he needs.

The technology is extremely advanced yet seems so simple. Hawking’s cheek movements are detected by an infrared switch mounted on his glasses, and that single switch is the only connection he has with the computer. EZ Keys has also been developed with word-prediction capabilities, meaning Hawking often needs to type only one or two letters before getting the word he needs, speeding up the speech process and making it less laborious than it could be.

To save time Hawking also has a bank of stored sentences and phrases for regular use, helping conversation flow and allowing him to give speeches based on pre-prepared sentences and statements. Hawking has tried other switch-access methods for his speech synthesis, including brain-controlled interfaces, but the cheek movements are the most consistent and effective for his needs.
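The single-switch scanning and word prediction described above can be sketched in a few lines of Python. This is purely an illustrative toy, not the actual EZ Keys software: the letter layout, the two-press switch encoding and the tiny prediction table are all invented for the example.

```python
# Toy sketch of single-switch row/column scanning with simple word
# prediction. Hypothetical layout and data, for illustration only.

ROWS = [
    list("ABCDEF"),
    list("GHIJKL"),
    list("MNOPQR"),
    list("STUVWX"),
    list("YZ .,?"),
]

def scan_select(switch_presses):
    """Select one character from two switch presses: the first press
    stops the automatically moving cursor on a row, the second stops
    it on a column within that row."""
    row_index, col_index = switch_presses
    return ROWS[row_index][col_index]

# A tiny, invented prediction table: after a letter or two, the user
# can pick a whole word instead of spelling it out.
PREDICTIONS = {
    "th": ["the", "this", "theory"],
    "un": ["universe", "under"],
}

def predict(prefix):
    """Return candidate completions for a typed prefix."""
    return PREDICTIONS.get(prefix.lower(), [])

if __name__ == "__main__":
    # A cheek twitch (switch press) stopping the cursor on row 3,
    # then on column 0, selects the letter "S".
    print(scan_select((3, 0)))
    print(predict("th"))
```

The point of the sketch is the input economy: with only one switch, every character costs two well-timed presses, which is why word prediction and stored phrases matter so much for conversational speed.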

This video gives a concise and straightforward explanation of how Stephen Hawking talks, in his own words:

Stephen Hawking’s choice of speech synthesis is completely unique to his needs. Some of the more common speech synthesisers on the market, used regularly by people with a range of different disabilities, include the Lightwriter, the Eyegaze Edge and the many software and technology options from CereProc, a company that specialises in text-to-speech and more innovative forms of speech synthesis.

The Theory of Everything

The Theory of Everything received much critical acclaim and has been a real success, with both lead actors receiving praise for the sensitivity and authenticity of their portrayals. The film is an adaptation of the memoir Travelling to Infinity: My Life with Stephen by Jane Wilde Hawking, Hawking’s ex-wife, to whom he is still close. It portrays the period in which Hawking received and tried to manage his diagnosis, as well as his ground-breaking work in his field.

We asked Liam Dwyer (follow Liam on twitter) to review the film for us:

I thought the acting and the way Stephen Hawking was played was amazing. Eddie Redmayne played the part so well I thought it was Stephen Hawking. Felicity Jones played Jane great and she showed just what a wife/carer has to go through looking after a person. It was good to hear her side of the story too.

Here is the official film trailer to get a taste for it:

Stephen Hawking’s life and work is remarkable and this new film is a testament to that. It’s also fantastic to see how the developments in speech synthesis that he trials and tests are advancing the science for people in general, providing many more people with the opportunity to speak.

You may be interested in a revealing BBC interview with Jane Hawking about her life as the carer and wife of Stephen, and her book Travelling to Infinity, on which the film is based.

Comments on this post from Assistive Technology Professionals:

Simon Churchill

He ‘talks’ using a voice output communication aid. For more information about them see speech generating devices. The film The Theory of Everything fudges the software he uses, as he uses a scanning technique which generates words at a far slower rate than was seen in the film, and the scanning software was not shown, presumably because the general public might not understand its use. A brilliant film, but it does deviate from the truth somewhat in this regard.

Hector Minto

Agreed Simon Churchill. Not sure I would describe it as advanced.

Denis Anson

It should also be observed that the system that Stephen Hawking uses is highly personalized, and may not be useful to anyone else on the planet. Hawking is arguably the most brilliant mind living, and, when he could still vocalize to some extent, would compose entire technical papers and books in his head, then dictate them to the one or two assistants who could understand him. His system 1) has his highly idiosyncratic vocabulary in it, and 2) uses abbreviations that he has learned, that most of us probably couldn’t make sense of.

A number of years ago, he spoke at University of Washington, while I was on faculty there. I was not able to attend the talk, but the reports that I heard were that, for his presentation, which was prepared in advance, he communicated at normal speeds. But for the question and answer session, the audience had to wait for him to compose his answers. Because he was Hawking, they would wait, but it was very slow.

David Selover

All of your comments are accurate. This is a constant debate in the AT world. The AAC device that he uses is specific to him, as all AAC devices should be, because every person’s voice is unique to them. There are a plethora of devices out there; one size does not fit all.

Support Skoog 2.0


The all new Skoog 2.0


It’s no secret that we are huge fans of the Skoog. Their original instrument is an absolute marvel that helps people with learning or physical disabilities play music. So when we heard Skoog 2.0 was being launched and an IndieGoGo crowdfunding campaign had been started in its name, we were immediately interested in finding out more. We are huge supporters of the talented team at Skoog and want to see their newest venture, launching Skoog 2.0 with full mainstream appeal, succeed.

The Instrument for All

Skoog follows generations of equipment aimed at synthesising sound to make it more accessible. The brilliant thing about the Skoog, and even more so the Skoog 2.0, is that it’s designed with inclusivity in mind. It follows design-for-all principles, and in their own words it’s ‘a new musical instrument that you don’t need to learn how to use’. Simple, fun and usable in many different ways, the Skoog 2.0 is tactile, soft to the touch and absolutely fantastic when it comes to making music, whether you’re 5 or 65.

Skoog 2.0 Design For All

Skoog 2.0 is a huge improvement on the original Skoog, which rose to fame thanks to its successful use in many classrooms around the world. Skoog 2.0 is designed with the average buyer in mind, so it could finally become a regular sight in our homes, classrooms and communities. The new Skoog 2.0 has been enhanced in many ways: it is now wireless, iOS and Android compatible, and an exceptionally expressive instrument which helps you to create music from the very first touch.

Anyone can play Skoog 2.0; it’s truly universal. All the technical barriers usually faced when trying to play a musical instrument are removed, so the player can focus on the sound, feel like a real musician and enjoy their very own tune within mere minutes. This campaign video gives a little more information about how Skoog 2.0 works:

Making Music with Technology

As that video suggests, Skoog and Skoog 2.0 help you to make music using technology: the sounds are produced simply by your movements and the way you touch and press the instrument. This video shows in more depth exactly how you play the Skoog, and below it there is an example of a Skoog musician playing a popular song:

As you can see Skoog is fun, user-friendly and allows true enjoyment of music for people of all ages and experiences.

Supporting Skoog 2.0

Despite the fact that the original Skoog was hugely popular, the team still need more funding to launch their second product on the mass market, and we want to give them a boost. Their IndieGoGo campaign is well on its way to achieving its impressive £75,000 target, but we’ve only got until January 8th 2015 to help, and we really want to see the guys getting the results they deserve. There are perks for making a donation, all of which sound pretty worthwhile to us and are well explained in the graphic below:



The original Skoog and our Trabasack + Media Mount

Music for Everyone

Music can be enjoyed by people from all backgrounds and of all experiences. It’s a key feature in sensory education and as the photo to the left shows, the original Skoog is perfectly partnered with our Trabasack Media Mount to create a one-person musical station. Universal products need more support and we believe that Skoog 2.0 should be made available to as many people as possible. Go and contribute if you can or share this news with your networks to support a great cause!

Click to visit the skoog crowdfunding page now!

World Autism Awareness Day 2014

World Autism Awareness Day

World Autism Awareness Day takes place on the 2nd of April: a day for raising awareness of autism as well as celebrating the achievements and strengths of people living with autism.

Celebrating World Autism Awareness Day

World Autism Awareness Day gives everybody the chance to shout about autism: those living with it, the people involved in their lives and everyone who cares about raising awareness of the condition. It is celebrated across many countries, and it’s an opportunity for people to share their stories and come together to celebrate the people who succeed and exceed expectations every single day.

In the UK it’s believed more than 1 in every 100 people has autism, equating to around 700,000 people in the UK alone. It is a condition which affects different people in different ways: whilst some people living with autism can live independently, find employment and enjoy a busy social calendar, others are non-verbal and require 24-hour care and support.

At Communication Aids many of our articles and posts support people living with autism or their parents and carers. Many of the verbal communication issues we discuss are highly relevant to people involved in the care of someone with autism and we hope our efforts have been helpful in some way or another.

This video from 2013 shows exactly how much World Autism Awareness Day means to many different people:

Technology, Communication and Autism

Proloquo2go AAC App for Speech

All of us use technology on a daily basis: from checking our emails to watching TV, a screen is never far away. We already know that technology can be hugely beneficial for people living with communication difficulties, and this extends to people with autism.

Many people living with autism have difficulties with communication, both in terms of verbal speech and comprehension of others. Tablets running iOS or Android offer a range of AAC apps for speech, which we have discussed at length previously, and these can ease the anxiety and frustration of many people with an ASD, as communication finally becomes a possibility.

With a lack of traditional communication methods can often come difficulties learning through traditional teaching and classroom methods. Listening for long periods can be hard, as many people living with autism struggle with concentration and organisation. A learning app gives the user the ability to learn at their own pace, with the option of repeating segments they may well have missed if delivered as the spoken word. Game-based learning is also proven to be extremely valuable.

Trabasack for your Tablet

The Trabasack is a lightweight lap tray and secure travel bag, ideal for carrying a tablet safely and securely. Using the Trabasack Media Mount your tablet can be in the upright position, or simply lay it flat on the large, sturdy tray which attaches comfortably around the waist. The beanbag cushion underneath provides comfort and support when using your Trabasack, while the D-Ring attachments make the bag easy to carry around by shoulder strap or over the handles of a wheelchair. To enhance your tablet experience, buy your Trabasack now.

Charlotte White’s Musical Fight

After the popularity of our recent post, Klatt’s Last Tapes, we have made the second in a series of videos profiling fascinating assistive technology stories:

Charlotte White’s Musical Fight is a BBC Radio 4 documentary that provides an intimate and in-depth look into the life of a young woman called Charlotte White, who, after an accident in her early teens, was left almost entirely paralysed.

The documentary looks back on Charlotte’s experiences post-accident; how she felt patronised by the immediate rehabilitation therapies she was offered, how she still desired to make music and express her creativity and the struggle to find her place as a teenager in mainstream society.

Video: Charlotte White’s Musical Fight

(for a video transcript: scroll to the bottom or use youtube captions)

In spite of her setback, Charlotte showed determination to continue advancing the musical skills she had shown such promise with as a young child. With the help of assistive technology and the Drake Music Project, Charlotte was provided with a very modern method of letting her creative side shine. Charlotte is now a professional classical musician and composer.

Drake Music

Drake Music is a charitable organisation that gives those with disabilities the opportunity to create music using assistive and adaptive technology, helping to provide a creative outlet to many who would otherwise struggle to use ordinary instruments or learn music via typical methods.

Founded in 1988 by Adele Drake, Drake Music is a nationwide initiative with regional bases in London, Manchester and Bristol. Their ever-growing team of technicians, teachers and advocates continues to work in partnership with numerous schools, universities and local authorities to provide musical opportunities, both creative and educational, to disabled people across the country.

Charlotte speaks of how her introduction to Drake Music was tentative at first, based upon her previous experiences with music therapy. However, it didn’t take Charlotte long to realise that Drake Music was far more innovative and beneficial than the traditional therapies she had already dismissed, and with patience, understanding and ground-breaking assistive technology, she soon found a way to create music again.

Image of Charlotte White smiling, wearing a red cardigan and patterned dress

Charlotte speaks candidly and openly about her post-accident experiences, and how Drake Music changed her outlook.

“When I became disabled, I was introduced to music therapy. Music therapy is literally someone sitting in front of you banging a drum or playing a guitar, and you’re meant to tell them all your worries about life or you’re meant to be really happy because someone’s banging a drum in your face.

[I found that to be] patronising and very boring and completely pointless. And I expected Drake to be like that, but it wasn’t at all. Drake Music gave you the opportunity to play independently, rather than just sitting there listening like a lemon.”


Through Charlotte White’s Musical Fight, we are introduced to a strong-willed, determined young woman, brimming with creativity and promise, who with the help of the Drake Music Project, defies all opposition in continuing to sate her creative needs through the use of assistive technology, and the support of staff at Drake Music.

Enable Us

Charlotte has set up her own website at Enable Us:

Enable Us has been set up as a result of difficulties that my family, friends and I have come across over the years. The overall aim of the site and the project is to empower individuals with impairments, preventing society from disabling people and preventing them from fulfilling their potential.

We also have heard there is a project that Charlotte is working on using music and a certain revolutionary instrument…but we cannot say more at this stage.  We are very excited about it! Watch this space!

Charlotte and Trabasack

We were very pleased to hear that Charlotte has recently become a big fan of the Trabasack and our new Media Mount accessory, describing it:

“I love my trabasack,  the velcro thing is great, especially for drinks. I’ve been using it for cooking, work and all sorts!”

Please comment below the transcript and share if you have enjoyed the video.

Video Transcript

00:01 S?: Now on Radio Four, we’ve the touching story of a disabled student and her struggle to play music. Josie D’Arby presents, “Charlotte White’s Musical Fight.”


00:22 Josie D’Arby: In 2008, a video clip appeared on the internet of a teenage girl performing the prelude to Bach’s Cello Suite. Nothing remarkable about this, you may think. Until you learn that the musician, Charlotte White, was playing every crotchet and quaver using only the slightest movements of her head and thumbs.


00:51 JD: This performance proved to be a defining moment in Charlotte’s rehabilitation, but it also raised questions about how musical talent and achievement are assessed. Questions that have yet to be answered.


01:17 JD: Well, I’m just arriving at the home of Charlotte, which is in a small village in Buckinghamshire, where I’m going to meet her and her mother, and just find out how much music has actually changed their lives.

[background music]

01:43 JD: Charlotte, when did you first start playing music?

01:46 Charlotte White: When I was about six years old, I had regular piano lessons like all my friends did at school.

01:52 JD: Were you having examinations?

01:55 CW: I never did exams. My mom wanted us to play for fun rather than to play to achieve something.

02:01 JD: In those early days, did you enjoy doing the piano? Were you loving it?

02:06 CW: Not particularly. It was more something I did because we were all expected to do it. I didn’t start enjoying music until later on in life.

02:13 JD: So can I ask you just to go back to your accident really, would you be able to tell us what happened?

02:18 CW: When I was 11 years old, I used to ride a lot. I competed on a pony. And for a period of a year, I constantly fell off my pony for no apparent reason. The last time, I was in the stable yard holding my rabbit and guinea pig. And I fell over backwards and hit my head, and everything went downhill from there.

02:39 JD: And what was the diagnosis back then? Was it something that they expected you to recover from or what did they tell you could have happened?

02:46 CW: I don’t have a full diagnosis. I got diagnoses which cover some of my problems, but not all of my problems. They’re constantly finding new things out, even now, 11 years on.

02:58 S?: And not surprisingly, this had huge consequences on Charlotte’s quality of life.

03:06 CW: For a long period of time, my life had been about exercise, physiotherapy, occupational therapy, speech therapy and that was it. That was drummed into me day in, day out, day in. And all I was expected to do was achieve and get physically stronger, which wasn’t happening a lot of the time. So that was quite depressing that I was doing all this work and not getting much out of it. And that was the only life I knew. A lot of my friends had moved on by then. They were having fun at school, enjoying life, where I was just having physio, physio, physio. I would only see physios. I’d only see speech therapists. I’d only see people who were meant to make my life better, and improvement, but it never seemed to happen.

03:46 S?: After the accident, Charlotte gradually lost all movement in her body. She spent five years in and out of hospital, and eventually went into a period of rehabilitation, regaining movement in her head and then gradually her fingers. At 16, Charlotte began attending St. Rose’s School in Stroud. It was there that she was introduced to the Drake Music Project, an organization that uses technology to help people with disabilities participate in music.

04:14 CW: Doug came up, and I had an option of a cooking class or going to meet Doug and see what Drake Music was about.

04:20 JD: Did you think back to your piano days at six, and think “I have a feel for music.” Did you know that you had a feel?

04:27 CW: When I became disabled, I was introduced to music therapy. Music therapy is literally someone sitting in front of you banging a drum or playing a guitar, and you’re meant to tell them all your worries about life or you’re meant to be really happy because someone’s banging a drum in your face.

04:43 JD: And what… You found that patronizing or what?

04:46 CW: Incredibly patronizing and very boring and completely pointless. And I expected Drake to be like that, but it wasn’t at all. Drake Music gave you the opportunity to play independently, rather than just sitting there listening like a lemon.


05:02 JD: And did that affect your attitude towards it? Tell me about your very first lessons.

05:07 CW: At the time, I had a huge sensitivity to light. Therefore, I wore dark glasses. And spent a lot of time in sort of a half lit room playing music and Doug getting me to interact with him to begin with, and then learning the basics and chords and beats. We listened to a lot of Robbie Williams.

05:28 JD: Was that educational? Or…


05:30 CW: It became educational. [laughter] Very surprisingly.

05:37 Doug Bott: We were working one-to-one, in the dark, very quietly because at the time, she was very sensitive to light. So the only light in the room was the glare off my laptop screen. And the music we were playing was so quiet, that actually the whirr of the fan on the laptop was almost louder than the music at points.

05:57 S?: Doug Bott was the first person to work with Charlotte to create music.

06:01 DB: Sitting on the table we have what we call a ‘magic arm’, a piece of equipment which can fix any piece of technology in just about any position around a person’s body. Attached by Velcro to this arm is a fairly unspectacular-looking black rectangular box, which is a magnetic motion sensor. So, it emits a small magnetic field and you can assign pretty much anything that you want to that magnetic field. In Charlotte’s case, we assigned about seven or eight notes to it and she was able to make very small head movements in order to play those musical notes. Then she had one switch, a very small switch, on each thumb. One of the switches did a very simple task, which was to turn the sound that she was playing on and off, so that if she wanted to move her head without playing music, she could.

07:01 DB: The other switch, controlled with her other thumb, changed the configuration of notes available to her on the motion sensor that she was playing with her head. Liken this to playing a guitar: it’s as if the right hand that a guitarist would normally use to finger-pick the notes, to pick out the individual notes, was her head moving in and out of the motion sensor to pick the notes. And the guitarist’s left hand, which changes the chord shapes on the fretboard of the guitar, its role was taken by the switch that Charlotte was using to change the configuration of notes available to be played by her head.

07:42 JD: What was your first impression of Charlotte?

07:45 DB: My first impressions, somebody who was interested in classical music which not many of the young people I was working with at the time were. Somebody who is interested very much in working on her own in her own way. So yeah, the early sessions were very much about finding out what she was interested in and also how physically and practically she was going to create music, perform it, learn about it, compose it.

08:20 JD: At what point did you think she has got something special?

08:28 DB: I think it was just before, a few weeks before the first time she actually performed in public. I’ve been very careful not to put too much pressure on her to move forward and to achieve. I was very happy for her to go at her own pace. But she knew there was a concert coming up in school and she announced that she wanted to be a part of that, that she wanted to perform in it. Given the rate at which we had been working in the previous months, I was a bit nervous because I didn’t really think that she would be able to get the piece together in time to be able to perform it, but she did. She really knuckled down and applied herself and practised an awful lot outside of our sessions, which was quite a thing because the equipment that she was using at the time, I wasn’t able to leave it in school. So, when she was practising by herself, she was doing it entirely in her own head and making the movements from memory without the equipment. So, yeah that’s when I realized she has something special because the music it was in her head.


09:47 CW: That was very scary. I was outside waiting to go on. Like, “No, no, no, no. I’m not gonna do this.” And Doug was like, “Yes, yes, you are.” Like, “No I’m not.” He was like, “Just calm down and relax. If you don’t wanna do it, you don’t have to.” I was like, “You are not meant to say that.” [laughter] And eventually I got on the stage and Doug came on with me because I wanted him there, and I performed in front of everyone and I got really shaky and nervous as I had never performed in front of people before then. And it went reasonably well, I think, and the piece came out maybe a bit too fast, but it went well enough. Everyone seemed to enjoy it and quite a few people were surprised, I think.

10:29 JD: Did you have family and friends in the audience?

10:31 CW: My aunt was there and my mum.

10:35 S?: And for Charlotte’s mum, Tessa, seeing her daughter’s transformation was nothing short of remarkable.

10:41 Speaker 4: It was fantastic and she is really very good. She had been through such a rotten time and it just gave her something that she could achieve, and it was just wonderful as a mother to see her doing so. That’s why I am gonna cry.


11:00 S4: [11:01] ____. [laughter] It gave her something which she could achieve and be successful at. And as a parent, it was just wonderful to see that the determination she had actually was successful and she was good at it. It was very good.


11:24 JD: Has the music changed Charlotte’s life?

11:29 S4: I think it was the achievement of being able to play. Performing in front of people was, I think, incredibly nerve-wracking for Charlotte, so the fact that she managed to do it gave her a little confidence, which I think then helped in other spheres of her life, academically and probably socially as well. And I do think it’s helped her realize that she can achieve anything she wants to if she puts her mind to it.

11:56 JD: Relative to your memories of playing the piano, playing music in this way, does it feel similar if that makes any sense?

12:06 CW: I think it was very different. I practised a lot. I don’t really remember practising much when I played the piano. I enjoyed it. I wanted to achieve at it because it made people see me as a person rather than a disabled person who they made presumptions about.

12:21 DA: First I heard about Charlotte when Jonathan Westrup from Drake posted a video clip of Charlotte playing on the teaching music website.

12:29 S?: David Ashworth is a freelance educational consultant who specializes in music and technology.

12:34 David Ashworth: The performance was significant because… Well there were two things. One was it showed someone who obviously had severe disabilities, but who was actually able to overcome those to play a standard piece of repertoire and I’d never seen that before.

12:48 JD: How did it compare in relation to say a traditional cellist?

12:53 DA: Well, that’s interesting. If you were to listen to just the audio, you would find Charlotte’s performance is wanting. The quality of the sound, the phrasing, the timing that you get with a professional musician playing a real cello, all the expressive qualities, is in a league of its own. Then you hear what Charlotte’s doing and it’s nowhere near the same level. However, when you watch the video clip and see what she’s doing, it then becomes very powerful. It makes you realize that actually music is about more than listening. It’s about the whole contextual thing, if you like. And not just me, but other commentators who’ve been on the website, seen the clip and left comments, have found it’s a deeply moving experience hearing someone play a piece of Bach in that way.

13:36 JD: There is an argument that Charlotte’s performance is akin to being given a keyboard with only the right notes on it. How would you react to that?

13:43 DA: That’s an interesting one. In fact, there are conventional instruments which, if you like, only have the right notes, but in fact it’s a bigger thing than that. I think right notes are only part of the picture. We tend to get obsessed with people playing the right notes. The pitch of a note becomes all-important, but there’s far more to music than the actual pitches of the notes that you play. And what was so interesting about Charlotte’s performance was that you could see, you could witness, the mental and the physical engagement, and also the musical engagement as well and, well, the spiritual engagement, if you like, and that was the powerful thing to me. So to reduce music to a conversation about how you access the right pitches is only part of the picture. You look at that clip of Charlotte and the most powerful bit for me is right at the end when she stops playing: there is a moment’s pause, and then she breaks into a big broad grin. And you know, she knows she’s made something musically significant, that she’s achieved something musically significant there.


15:01 DB: The principle behind the way that we use assistive music technology is almost the opposite to a conventional musical instrument. So with a conventional musical instrument, the instrument itself is fixed and the musician has to master that instrument and has to almost subordinate themselves to the demands of that instrument. Whereas what assistive music technology does is to take a person and their particular interests, their physical needs, and create a musical instrument, a way of playing music which is absolutely right for that person. Not just physically and musically, but also in terms of ensuring that there’s an appropriate challenge.

15:45 JD: Where does the technology end and the skill of the musician begin?

15:51 DB: That’s quite a difficult question to answer. It completely depends upon the individual musician, but I could probably answer that in terms of conventional musical instruments. If you take a piano for example there are all kinds of elements of a piano, which are already assistive. The keys are ordered on the keyboard from low to high. They’re tuned according to a convention, equal temperament. They’re tuned to concert pitch. I dare say that if you went into a music exam having prepared all your piano pieces and the examiner was to tell you, “Oh by the way, today in order to test you a little bit further we’ve rearranged all of the notes on the piano keyboard and retuned it, but if you’re a good pianist then you should be able to handle that.” That gives maybe some kind of an impression. All musical instruments are assistive in some way because they are set up in a certain way. The difference with assistive music technology is that it varies from person-to-person.
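Doug Bott’s point that a piano is already “assistive” (keys ordered from low to high, tuned to equal temperament at concert pitch) can be made concrete with the standard tuning arithmetic. This is a generic illustration, not anything from the programme; the MIDI note numbering is simply a common convention:

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12).
# Concert pitch fixes A4 (MIDI note 69) at 440 Hz; every other key follows.

A4_MIDI = 69
A4_HZ = 440.0

def note_frequency(midi_note: int) -> float:
    """Frequency in Hz of a MIDI note number under 12-tone equal temperament."""
    return A4_HZ * 2 ** ((midi_note - A4_MIDI) / 12)

# A piano's 88 keys run from A0 (MIDI 21) to C8 (MIDI 108),
# ordered left to right from low to high.
print(round(note_frequency(60), 2))  # middle C, about 261.63 Hz
print(round(note_frequency(21), 2))  # A0, 27.5 Hz
```

Every key’s pitch follows from one reference (A4 at 440 Hz) and one ratio (the twelfth root of two): exactly the kind of fixed convention Bott describes a player having to subordinate themselves to.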

16:50 Jonathan Westrup: It’s set up so the sound starts working about there, so that distance. You can change the distance at which it starts actually triggering. You can make it trigger from here onwards, so you can do something quite big or you can do something very small. So as I’m pulling away from the device, [music] and as I move my hand further away, [music] it plays up the scale.


17:13 S?: Jonathan Westrup from Drake Music demonstrated some of the technology they use at St. Rose’s School in Stroud.

17:21 JW: The actual device itself looks like a small red torch and it emits an invisible beam, and when you break the beam with any part of your body or whatever, it will trigger sound and you can set up what that sound is. At the moment we’ve got a cello here which we could just play a little bit. I’m just moving my hand now in front of it, [music] so you can hear now that’s the scale. [music] Say the student’s got a very wide motion. For example, if they can swing their left arm, you know, that’s a big movement they’ve got, then it could still pick up the sound, rather than the small fine motor movements which other students might want to use in different equipment, but that’s quite good for big movements. It does take as much time to master as any other instrument really. Because then, like you’re finding, you need to kind of find… [music] Try to find a little riff there. [music] I’m not a master, by any means.
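The behaviour Westrup demonstrates (distance from the beam selecting successive notes of a scale, with an adjustable trigger range) can be sketched as a simple mapping from a distance reading to a note. This is a hypothetical illustration of the idea, not the actual device’s software; the note names, range values and function name are all assumptions:

```python
# Hypothetical sketch of a distance-to-note mapping like the beam device
# demonstrated above: break the beam within a configurable range, and your
# distance from the sensor picks a note of a scale.

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B", "C'"]  # one octave, low to high

def note_for_distance(distance_cm, min_cm=5.0, max_cm=60.0, scale=C_MAJOR):
    """Return the scale note for a hand at distance_cm, or None if the
    beam is not broken within the active range."""
    if not (min_cm <= distance_cm <= max_cm):
        return None  # outside the configured trigger range
    # Divide the active range into equal zones, one per scale note.
    fraction = (distance_cm - min_cm) / (max_cm - min_cm)
    index = min(int(fraction * len(scale)), len(scale) - 1)
    return scale[index]

# Moving the hand further from the sensor walks up the scale.
print([note_for_distance(d) for d in (10, 25, 40, 55)])
```

Widening or narrowing `min_cm` and `max_cm` is what would let the same scale be played with “something quite big or something very small”, matching the gesture a particular player can make.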

18:19 S?: Aileen [18:20] ____ runs music classes for disabled students in the Norwegian city of Tromso. Their Arctic winters are long and dark. And in January, the city celebrates the end of the polar nights with a large cultural festival. Having seen Charlotte perform, Aileen invited her to compose music for the festival.

18:38 Speaker 7: It’s the darkest period in Tromso when we have no sun. It’s also a way of bringing life to the city, having a big music festival with musicians coming from all over the world. All kinds of music are performed there, from big symphony orchestras to small jazz ensembles, and rock bands in the evenings. So it’s a very diverse music festival.

19:03 JD: And can you describe how her compositions were performed?

19:07 S7: Before the performance, it was quite a long project with months of her composing and sending files to Norway, speaking on the phone about what we wanted with the music and how it should fit with the dancers. Charlotte was also very clear on… She wanted acoustic instruments. So we had musicians from the symphony orchestra of Tromso do a recording of her music. [music] The performance at the Northern Lights Music Festival was outdoors in minus 10. [music] This was in the town square of Tromso and it was packed with people, and the stage was made up of ice and snow sculptures. And they had proper lighting and dancers dancing to the music. So it was quite magical to hear the music in that setting.


20:24 CW: I really wanted to pursue grades, I wanted to pursue music at college, but unfortunately establishments who grade musicians wouldn’t recognize it. Examining boards wouldn’t recognize it, and therefore, I couldn’t progress.

20:39 JD: Do you understand why they won’t recognize it? Do you think that’s fair?

20:42 CW: They’re very traditional in the way they recognize any examination. And therefore, the way that Drake Music and its students play music is very different. And they either need to set up an examination which can be qualified at the same level, specifically for music technology, or agree to accept it. We’re meant to be in an equal society; therefore everyone should be equally graded.

21:07 S?: Charlotte’s achievements were recognized when she received a Bronze Arts Award from Trinity College, London. In a statement, Trinity College go on to say, “Although there is no specific campaign to encourage the use of assistive technology, we have taken great interest in Charlotte’s achievement and profiled her story both on our website and in other print materials and press articles. We hope that this has actively encouraged others working with assistive technology to see how Arts Award could work for them.” The music examining boards are consistent in their approach, insofar as they don’t accredit music performed electronically, but as Doug Bott explains, it’s early days.

21:47 DB: If Charlotte had come to us in 20 years’ time, then I would fully expect that she would have been able to have had her achievements accredited, either through the formal school music curriculum or through instrumental exams, whether that’s through the Associated Board of the Royal Schools of Music or anyone else. At the moment, it’s very new territory for everybody I think. There are young disabled people who have their achievements accredited in various ways. But one issue, which I think people tend to shy away from talking about and which I’m quite happy to talk about, is that there’s a very big issue around the nature of people’s different disabilities. So differently disabled people access music in different ways, and some of those means of access, whether it’s through Braille music or whether it’s through British Sign Language, are perhaps more able to slot in to the existing accreditation frameworks. Other forms of access, for example assistive music technology, which is particularly useful for people who face physical barriers to music, haven’t really been tried and tested yet.

23:07 DB: We’re talking, a fair bit at the moment, to the Associated Board, and they’re quite open about the fact that currently they don’t accredit any kind of music produced electronically, let alone the kind of assistive technology that our students are using, but they’re very keen to engage with these kinds of developments. And what we’re currently in the very early stages of discussing with them, and also with colleagues at Bath Spa University, are ways that you can accredit the quality of a musical performance in such a way that it’s not necessarily linked to the particular instrument that a person is playing. But what we’re arguing for is something which, to play devil’s advocate, takes it even further and says, “Okay, but what if you were to turn up to a piano exam to play the piano repertoire and you would say actually I’m not going to play on the piano today, I’m gonna play on a flute.” How would you examine that? Because that really is what we are dealing with. We’re dealing with people who are playing instruments which are unique to them and maybe they’re not even playing repertoire. Maybe they’re playing music which they themselves have created.

24:18 S?: And for music consultant, David Ashworth, Charlotte’s performance could be just the beginning.

24:23 DA: I’ve been working in special schools where I’ve seen young people making music using assistive technology, and it’s always tended to be making music in its own terms and its own style, if you like. A lot of improvisation. And a lot of fairly cutting-edge, avant-garde sorts of sounds, if you like. What makes Charlotte different is she was actually playing crotchets and quavers. She was playing the dots, if you like. She was playing a mainstream piece of music which we normally associate as being accessed by, if you like, a mainstream musician. And that was what was different. She actually had the audacity, if you like, to step into their world, and that was what made it so significant I think. Where Charlotte has been important, she’s been a catalyst, if you like, to get this debate really going, and I’m sure she will see it in that way and feel rightly proud of that achievement.


25:25 S?: Charlotte White chose to pursue her academic studies and gained a place at university studying social policy and criminology. Advancements in the availability and price of software, though, mean she may soon return to music. And for Doug Bott, that moment can’t come soon enough.

25:41 DB: As a composer, she was very instinctive. She’s extraordinary in terms of the fact that she has a really innate musical ability. I think that any music teacher or music educator who would come across her, whether she was a disabled person or not, would find her to be an outstanding student in terms of the way that she engages with learning, practising, and performing musical instruments. And in terms of the way that she engages with composition and the fact that it really comes from inside her rather than from her understanding of the rules of music.

26:27 CW: Music inspired me in the belief that I could achieve anything and gave me a new belief in myself, which had pretty much gone for the most part, and that belief became sort of lit in every part of my life. It became lit in things like my physiotherapy and my occupational therapy, and my speech therapy. I became more enthusiastic and had much more of a drive to achieve, which I had slightly lost before then, and I did start achieving in all those areas much more than I had done. And wanting to break the barriers and do the same things as everyone else, rather than being bracketed as a disabled person who wouldn’t achieve.

27:12 CW: I’ve got my ambition back, of what I want to achieve in the future and complete in the long run. I started to enjoy life as well and have fun, and started experiencing things that the average teenager does.

27:29 S?: Charlotte White’s musical flight was presented by Josie D’Arby and produced in Bristol by Toby Field. All the music in the program was either composed or performed by Charlotte.



Klatt’s Last Tapes: A History of Speech Synthesisers

Speech Synthesisers in Use

Stephen Hawking and his Speech Synthesiser

Speech synthesisers, and the technology involved in giving a voice to those who cannot speak, have an interesting and enthralling history. It’s an area of technology and science that has fascinated scientists and therapists from many fields but is rarely discussed in the mainstream. World-renowned physicist and cosmologist Stephen Hawking has made the presence of this technology more widely known.

Klatt’s Last Tapes was a one-off exclusive on BBC Radio 4 which looked into the work of Dennis Klatt, the American pioneer of text-to-speech machines. Klatt’s work is explored by Lucy Hawking, the daughter of Stephen, who during the programme goes on a journey back through the history of speech machines. It really shows the ingenuity and creativity of the inventors and the quirky history of the predecessors of the machines that help her father communicate.



In the Beginning

Speech synthesisers have been produced and developed for over 200 years, beginning mechanically with Wolfgang von Kempelen’s speaking machine, which he built in 1769. Lucy Hawking visits Saarland University to see and try out a working replica of this primitive machine, an early speech device consisting of a wooden box with a mouthpiece and a bellows, and learns more about von Kempelen’s dedication to finding a mechanical solution for people who were unable to speak.

Replica of the von Kempelen Speaking Machine

Von Kempelen found the main problem with his machine and its developments was the lack of a tongue, and this particular element of the speech system was beyond his abilities to recreate mechanically.

Mechanics to Electronics

Experts believe there was no smooth transition between mechanical and electrical speech synthesisers. The first known electrical system was the Voder, developed in the 1930s by Homer Dudley and displayed for all to see at the 1939 World’s Fair in New York. It operated much like an organ, and it was remarked that it would take people at least a year to get to grips with the controls required to master its use.

Problems in Speech Synthesis

Through speaking to experts in the field, Lucy Hawking explores some of the main problems that have been battled against since the first speech synthesisers were developed. Initially it was possible to create plausible male voices, but creating a female voice proved, and still proves, difficult. Simulating women’s voices is harder due to their different characteristics, and synthesised female voices sound much more artificial than male ones. Articulation for the female voice is different, and this is something even the most advanced computer systems have struggled with. It’s clear, as Hawking remarks in the show, that having to use a synthesised male voice would mean a huge loss of identity for women.
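The difficulty is that raising only the fundamental frequency (F0, the pitch) leaves the vocal-tract resonances, the formants, at male values. Here is a rough numerical sketch, using textbook average formant values for the vowel “ah” purely as illustration; the helper names and the 1.15 vocal-tract ratio are assumptions:

```python
# Why raising pitch alone sounds artificial: a female voice differs not
# just in F0 (the rate of vocal-fold vibration) but in formants (the
# vocal-tract resonances that shape each vowel). Values below are rough
# textbook averages for the vowel "ah", used purely for illustration.

male_ah = {"F0": 120.0, "F1": 730.0, "F2": 1090.0}    # Hz, typical male
female_ah = {"F0": 210.0, "F1": 850.0, "F2": 1220.0}  # Hz, typical female

def naive_pitch_shift(voice, new_f0):
    """The 'just tune up the F0' approach: formants are left untouched,
    so the result keeps male vocal-tract resonances."""
    shifted = dict(voice)
    shifted["F0"] = new_f0
    return shifted

def with_formant_scaling(voice, new_f0, tract_ratio):
    """Also scale the formants, approximating a shorter vocal tract."""
    return {"F0": new_f0,
            "F1": voice["F1"] * tract_ratio,
            "F2": voice["F2"] * tract_ratio}

naive = naive_pitch_shift(male_ah, female_ah["F0"])
better = with_formant_scaling(male_ah, female_ah["F0"], tract_ratio=1.15)
# naive leaves F1 at the male 730 Hz; better moves it toward the
# female average of roughly 850 Hz.
```

Real systems must do far more than scale three numbers, which is why articulation cannot simply be interpolated; the sketch only shows why adjusting F0 alone is not enough.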

Similarly, adult speech synthesisers have proved problematic for children. Speaking with an adult synthesised voice makes socialisation harder for children, whose peers may find it harder to relate to them. The long-term aim is to create personalised speech synthesis machines which grow with their user.

Dennis Klatt – The Father of Computerised Speech Synthesis

Dennis Klatt was the man who made a difference to speech synthesis. He was the pioneer of text-to-speech machines from a technological perspective and created an interface which, for the first time, allowed non-expert users to produce speech. Before Klatt’s work, non-verbal individuals needed specialist support to be able to speak at all.

Lucy Hawking discusses Klatt’s work with his daughter, Dr Laura Fine, during the show. Klatt invented DECtalk, the original system which could take text and turn it into speech. Klatt also produced a definitive history of speech devices, which includes a collection of recordings from the devices developed throughout the 20th century. It’s a hugely valuable resource for development as well as for posterity.

Klatt was dedicated to the production of a system for speech synthesis that was natural and intelligible. As Dr Fine explains he combined engineering and speech production research with people’s perceptions to create the end product. Perception data and the way people interpret speech is key to how successful a speech synthesiser is for regular conversation and socialisation.

Klatt created a range of different voices, entertainingly labelled the DECtalk Gang, which gave DECtalk users a choice. Choices included Beautiful Betty, Kit the Kid and Perfect Paul. Stephen Hawking’s voice is very similar to Perfect Paul.

Eye Gaze Speech Synthesisers

The show tells us that over 1 million people in America are unable to speak for a range of reasons. Lucy Hawking then goes on to talk to Michael Cubis, who lost his voice after a stroke. He controls his speech synthesiser through gaze control, which is increasingly where text-to-speech technology is heading.

Eye gaze technology uses movement of the eyes to generate text, and speaking to Mick Donegan, a specialist in the field, Hawking discusses how the technology works and how it has developed. The technology itself has been around for about 30 years, but the systems have developed a lot in the 21st century. New speech synthesisers are now sophisticated enough to be used by individuals who live with involuntary movement, perhaps muscle spasms or shakes. People living with conditions such as cerebral palsy and multiple sclerosis are now able to access gaze-controlled text-to-speech machines as well as games and leisure pursuits.
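A common way gaze systems turn eye movement into text is dwell selection: a key is “pressed” once the gaze has rested on it for a set time, so brief involuntary movements do not trigger it. This is a hypothetical sketch of the idea; the function, timings and sample data are invented for illustration:

```python
def dwell_select(samples, dwell_s=0.8):
    """samples: (time_in_seconds, key_under_gaze) fixations, in time order.
    A key is selected once the gaze has rested on it continuously for
    dwell_s seconds; shorter glances (or spasms) select nothing."""
    typed = []
    current_key, start, selected = None, 0.0, False
    for t, key in samples:
        if key != current_key:
            current_key, start, selected = key, t, False
        elif not selected and key is not None and t - start >= dwell_s:
            typed.append(key)
            selected = True  # ignore further samples until the gaze moves away
    return typed

gaze = [(0.0, "H"), (0.3, "H"), (0.9, "H"),   # rests on H for 0.9 s: selected
        (1.0, "I"), (1.2, "I"),               # glance so far too short
        (1.5, "I"), (2.4, "I")]               # now rested long enough: selected
print(dwell_select(gaze))  # -> ['H', 'I']
```

Lengthening `dwell_s` trades typing speed for robustness to involuntary movement, which is one reason such systems are set up individually for each user.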

Initially machines were developed without punctuation or even capital letters, but Donegan tells Hawking that this was met with disappointment by Michael Cubis, who was insistent that proper speech, with the proper markers, is key to his identity and to expressing himself as a fully literate, intelligent person.

The Future

Mick Donegan continues to discuss the future of speech synthesisers and recent research is even looking into how they can provide speech to people living with Locked-In syndrome.

The ideal way of giving someone their speech back is through implants, an area which obviously needs more research, but Donegan asserts that caps which can boost signals are the current best option.

Speech Synthesisers and Identity

Hawking looks at how a speech synthesiser gives or takes away someone’s identity by chatting to Irish director Simon Fitzmaurice. Living with motor neuron disease, Fitzmaurice lost his voice but was provided with a new one through his speech synthesiser – a new American voice.

For Fitzmaurice’s family, the American voice of the synthesiser has become synonymous with him, with his children unnerved by changes made to it by other computer systems and programmes. Despite this, Fitzmaurice has been participating in research alongside CereProc, a leading synthetic speech company, to build him a new voice.

CereProc have used recordings of Fitzmaurice’s voice, and even data from his father’s voice, to produce a speech synthesiser which mimics how he used to sound. This is fascinating technology, and the show suggests that if you live with a disease where you may lose your voice, there is now scope to make recordings in advance to try and save that part of your identity in the long run.

We thought we’d end this piece with a bit of friendly advice from Michael Cubis. When asked how you should talk to someone who uses a speech machine, he replied:

“I would ask people not to ask long questions and to be patient, because it can take a long time to answer. Also, please bear in mind that it can be very tiring for those using speech output devices.”


Please share and comment

If you enjoyed this video, please embed it on your sites or share it. We would also love to hear your comments below the video transcription.


Klatt’s Last Tapes Radio Show Transcript:

00:01 Speaker 1: We’ve comedy in half an hour when Richie Webb and Nick Walker star as the Hobby Bobbies. Before that, here on BBC Radio 4, Lucy Hawking traces the development of speech synthesis in Klatt’s Last Tapes.

00:16 Speaker 2: You are listening to the voice of a machine.

00:20 Speaker 3: Mama, mama.

00:24 Speaker 4: A, B, C, D, E, F, G…

00:29 Speaker 5: Once upon a time, there lived a king and queen who had no children.

00:34 Speaker 6: Do I sound like a boy or a girl?

00:37 Speaker 7: How are you? I love you.

00:40 S2: I do not understands what the words mean when I read them.

00:45 Speaker 8: Ha-ha-ha.

00:47 Speaker 9: I can serve as an authority figure.

00:50 Speaker 10: What did you say before that?

00:53 Speaker 11: Can you understand me even though I am whispering?

00:56 Speaker 12: To be or not to be, that is the question.

01:01 Lucy Hawking: My name is Lucy Hawking and I have been regularly chatting to a user of speech technology, my father Stephen, for the past 28 years. I write adventure stories for primary aged children about astronomy, astrophysics and cosmology. When I go to schools, I always talk about my father’s use of speech technology and I tell the kids that even though my father may sound robotic, when I play them a clip of him talking, I ask them to remember that actually it’s a real man talking to them. And it’s a man who’s using a computer to give himself back the voice that his illness has taken away from him.

01:42 Speaker 14: Development of speech synthesizers. One, The Voder of Homer Dudley, 1939.

01:50 Speaker 15: Will you please make the Voder say for our Eastern listeners, “Good evening radio audience.”?

01:55 Speaker 16: Good evening radio audience.

01:59 LH: To find out where speech technology started, I went to Saarland University in Germany, where two researchers had built a model of the first ever voice machine. It was originally created in the 18th Century by inventor, scientist, and impresario Wolfgang Von Kempelen.

[background noises]

02:24 LH: Hello.

02:24 Speaker 17: Hello.

02:25 LH: Good morning.

02:26 S1: Please come in.

02:26 LH: Thank you so much.

02:27 S1: I’m very pleased to meet you.

02:28 S1: Hello.

[background conversation]

02:30 Jürgen Trouvain: My name is Jürgen Trouvain. I’m a lecturer and researcher here at the Department of Computational Linguistics and Phonetics at Saarland University and I’m also interested in the history of speech communication devices, like the one of von Kempelen, for example. Kempelen was both a good showman and a very good scientist, but he was really like, sort of a genius, a real engineer, because he was interested in building things which can function and can help also people.

03:03 Fabian Brackhane: My name is Fabian Brackhane.

03:04 LH: What do you think the relationship was between von Kempelen’s original inspiration and the organ?

03:11 FB: It’s a very curious thing, because there is a stop in the pipe organ called “vox humana.”


03:24 FB: When this stop was invented in the 17th century, it should be a representation of the human voice playing the organ.

03:39 LH: So, they wanted to take the vox humana from a musical note, something you’d find in compositions at the time, to actually be able to produce human speech.

03:53 FB: Exactly. Yes. But Kempelen knew very well that this stuff couldn’t be the solution to get a speech synthesis.

[background music]

04:07 S1: Three, PAT the Parametric Artificial Talker of Walter Lawrence, 1953.

04:14 S1: What did you say before that?

04:18 LH: And so, we’re looking at von Kempelen’s speech machine. [chuckle] The door of which has just fallen off. It looks like a small bird house. Yeah. So, we’re taking the lid off the box, which houses the speech machine. And so, Fabian is putting one hand through one hole with his elbow on the bellows, which represent the lungs and his other hand is coming underneath the rubber cone. Which, what does the rubber cone represent?

04:53 FB: The mouth.

04:54 LH: The mouth. So, it’s hand under the mouth piece.

04:59 S3: Mama Mama.

05:03 S1: Ooh, it’s creepy. Sorry.


05:05 S3: Papa Papa.

05:10 FB: So, it’s… These are the both best words he/she could say it.

05:17 S3: Mama.

05:19 FB: So, you have the nose to be opened.

05:23 S3: Papa.

05:25 LH: So, Fabian is moving his hand rapidly over the mouthpiece and using two fingers over the nostrils effectively, while pressing down with his elbow on the lungs. Fabian is actually mouthing the words “mama” and “papa” while the machine is saying them.


05:45 S1: Four, The “OVE” cascade formant synthesizer of Gunnar Fant, 1953.

05:51 S7: How are you? I love you.

05:59 Bernd Möbius: I might be able to find out whether Lucy is able to…

06:02 LH: Should we see… Should we see, perhaps like in…

06:03 FB: So, there’s your instructor.

06:05 LH: Right.

06:06 FB: If you want to say “em,” you have to close the mouth and the nostrils have to be opened.

06:12 LH: The nostrils are open, front [06:12] ____.

06:13 FB: And if you want to say “ah,” you have to move the hand backwards. So, just mah, mah, while I’m pressing them…

06:22 LH: While pressing…

06:23 S3: Mm… Mama… Mam…


06:23 LH: I did that with three syllables. [chuckle] I’ll try with two this time.

06:34 S3: Mama…

06:37 LH: Right and what about papa? How would I do papa?

06:39 FB: The same way but you have to close the nostrils. Well…

06:44 LH: Okay. So…

06:44 S3: Pa-pa-paaaaa.


06:50 LH: Let’s see if I can just do it with two syllables this time.

06:53 S3: Pa-paa…

06:56 LH: Can I get her to say anything else or will I be… Would I be able to make it say any other words?

07:03 FB: If you don’t cover the mouth, it’s an A.

07:07 S3: Ah…

07:09 S1: And the more you cover the mouth, the vowel quality changes.

07:13 S3: Ahh… A… B… Mm…


07:28 FB: He knew that the missing tongue was a very important thing, and in his book he urged his readers to develop this machine further, but nobody could build it with a tongue, with teeth, so that it could speak more than these few, very few things.


07:57 LH: It seems to me that his aim was actually to give a voice to people who couldn’t speak. And so, he must have hoped for further development of his machine ’cause he can’t have imagined that, it would just be mama and papa or those short sentences. He must have had in mind, this idea that people would be able to speak freely, mechanically.

08:15 JT: And there was a plea in that book Fabian mentioned, please read out that means, researchers and the later generations, please, go on with the development of that machine. So, we’re still trying to do that here.


08:32 S1: 16, Output from the first computer-based phonemic-synthesis-by-rule program, created by John Kelly and Louis Gerstman, 1961.

08:44 S1: To be or not to be, that is the question.

08:49 LH: It would be really nice to get a sense of the progression from a mechanical to electrical to computer solutions to providing a voice for people who can’t speak.

09:01 BM: I’m not sure whether that was actually a smooth transition from mechanical systems like [09:09] ____ to the first electrical ones. I only know that, all of a sudden, that’s how it looks. My name is Bernd Möbius. I am the Professor of Phonetics and Phonology at Saarland University. In the 1930s, there was an electrical system around, the so-called Voder, done by Homer Dudley, that was demonstrated at the World Fair in New York, I believe in 1937.

09:35 S1: For example, Helen, will you have the Voder say, “She saw me”?

09:41 Speaker 21: She saw me.

09:42 S1: That sounded awfully flat, how about a little expression? Say the sentence in answer to these questions. “Who saw you?”

09:49 S2: She saw me.

09:51 S1: Whom did she see?

09:52 S2: She saw me.

09:55 S1: What did she, see you or hear you?

09:57 S2: She saw me.

09:59 BM: During the demonstration at the World Fair, there was a female operator of the system who played the device a little bit like a church organ.

10:09 S1: About how long did it take you to become an expert in operating the Voder?

10:12 Speaker 22: It took me about a year of constant practice. This is about the average time required in most cases.


10:23 S2: She saw me. Who saw me? She saw me. She saw me. Who saw me? She saw me.

10:37 JT: We have to go back to the or is the floor next to the top, the top floor.

10:42 LH: I’m now just getting into an elevator, which probably I can talk to. So, does it speak English?

10:47 JT: Hopefully, yes.

10:51 S2: Okay. Hello, elevator. It doesn’t say hello back.

10:58 JT: You must be patient with that. It’s a machine. Maybe with German.

11:01 S?: Hello [German]

11:01 Speaker 23: Hi there, where can I take you?

11:08 LH: The third floor. Third floor.

11:14 S2: Okay, I’m bringing you to the third floor. Bye, bye.

11:18 LH: Bye now.

11:19 S1: 19. Rules to control a low-dimensionality articulatory model, by Cecil Coker, 1968.

11:28 S2: [11:28] ____. You are listening to the voice of a machine.

11:39 Speaker 24: I’m Eva Lizotte [11:39] ____, and I’m a PhD student working in articulatory synthesis. The actual situation right now is that it’s very hard to simulate women’s voices ’cause they have slightly different characteristics, and if you just tune up the F0, the fundamental frequency or the pitch of the voice, it starts sounding really artificial. What you actually have to do is also alter the articulation. So when I or when we speak an “ah,” it’s different from a male long vocal tract “ah.” So you cannot easily interpolate the articulation.

12:19 LH: Because of course it’d be awful for women not only to be using a speech synthesizer, but then, to be coming out with a man’s voice.

12:25 S2: Yeah.


12:26 LH: I mean, that would constitute… That would be a real loss of identity.

12:29 S2: Yeah. Exactly.

12:31 Speaker 25: This is the result of trying to imitate a female voice by increasing the pitch.


12:37 S1: 24, the first full text-to-speech system, done in Japan by Noriko Umeda et al., 1968.

12:47 S5: Once upon a time, there lived a king and queen who had no children.

12:55 S1: But I think it’s also important to think of children, for example, growing up and at first having to speak with an adult’s voice. Even if the sex were the same, it would be awful, I think…

13:08 LH: Definitely very important just for making friends. It’s gonna be very hard for a child speaking with an adult’s voice to actually communicate with kids of their own age.

13:17 S2: Yeah.

13:18 JT: But at the moment we don’t know very much about the speaking voice of children becoming adults, for example. What’s really happening during the maturation of the vocal folds.

13:29 LH: So, the aim is to create speech machines which can grow up with somebody.

13:32 JT: That would be really nice. Then you would have to have real knowledge about what’s going on in your voice during the life span, at least of the first, say, 20 years or so.


13:47 S1: 21, sentence-level phonology incorporated in rules by Dennis Klatt, 1976.

13:55 Speaker 26: It was the night before Christmas, when all through the house, not a creature was stirring, not even a mouse.

14:04 LH: Can you see that people who don’t maybe know, who Dennis Klatt is, could you put him in context?

14:09 JT: Yeah, he’s definitely one of the pioneers of speech synthesis, in the technological sense, but also in providing an interface for non-experts who could basically type in text and get synthetic speech out of the system, which wasn’t possible before I think.

14:27 S2: Before Klatt, you would actually have to be a specialist in order to be able to input what you wanted to say.

14:33 JT: Exactly.

14:33 LH: Okay. Laura can you hear me?

14:36 S2: I can hear you. Can you hear me?

14:37 LH: Yes. I’ve got you. That was fantastic. This is Dr Laura Fine, the daughter of Dennis Klatt. Dennis Klatt is really the father of the modern speech machine. He created DECtalk, the system which takes text inputted by the user and turns it into speech. Dennis Klatt also produced the definitive history of speech devices, which includes a collection of recordings of each device throughout the 20th century.

15:01 S2: He really was interested in making a natural and intelligible system. So, the most important qualities of a speech synthesis system are really the naturalness and the intelligibility. And he was very much interested in making those of high quality. One of his unique contributions was that he used not only his understanding from an engineering standpoint and a speech production standpoint, but he also supported his analysis with perception data. How do people interpret speech, and what is it in the listener that helps them determine: is this a child, is this a female, is this a male? What cues are important? And that really helped him to make an intelligible system that incorporated different age speakers and different genders.


15:47 S6: Do I sound like a boy or a girl?

15:51 S2: My mother came across this drawing that my father made of the different speakers. In the center, we have Perfect Paul. This is a picture of my father.

16:01 Speaker 27: I am Perfect Paul, the standard male voice.

16:04 S2: And then, this is beautiful Betty which is the standard female voice. And that is a picture that he drew of my mother.

16:13 Speaker 28: I am beautiful Betty, the standard female voice. Some people think I sound a bit like a man.


16:22 S2: This is Kit the kid, who’s a 10-year old child. So, this is a picture of me.

16:27 Speaker 29: My name is Kit the kid and I am about 10-years old.

16:31 S2: With my nice short hair cut, as a child.

16:33 LH: Oh, is that you?

16:34 S2: I was a lab rat. As a child, I spent a lot of time at MIT. My father had a candy drawer. I spent hours with him at MIT, in his laboratory and he took snippets of my voice and that helped to develop the child’s voice.

16:51 LH: I love that they’re called the DECtalk gang.

16:54 S2: The DECtalk gang.

16:55 LH: That is a great… That is a great title.

16:57 S2: So, there was my father in later years and underneath the caption says, Huge Harry. Kind of older gentleman’s voice.

17:04 S9: I am Huge Harry, a very large person with a deep voice. I can serve as an authority figure.

17:12 LH: Laura, I have to tell you something, Perfect Paul, sounds just like my dad.

17:17 S2: I mean, I think that’s amazing.

17:18 LH: Is Perfect Paul based on your father’s voice?

17:21 S2: Yes.

17:22 LH: Which therefore means that, my father is actually speaking with your father’s voice.

17:27 S2: It’s amazing, he would be so, so thrilled.

17:30 LH: I think one of the things that strikes me about your father is his humanity: that he was obviously an amazing scientist who managed to do something that has had a very profound impact on people’s day-to-day lives, but also that he had quite a sense of humour.

17:45 S2: He did.


17:47 LH: Is it true that he gave his synthesizer the ability to sing, “Happy birthday to you”?

17:53 S2: He did.

17:54 S2: Happy birthday to you. Happy birthday to you. Happy birthday dear…

18:03 S2: One of the ironies is, as a 40-year-old man, he began to be somewhat hoarse, because he had thyroid cancer. And he had had a thyroidectomy, but his vocal cords were affected by the disease. And so, he spoke in later years with a raspy voice. And I think he understood all too well your father’s challenges in terms of communication.

18:29 LH: So, he had a real sense himself of what it would actually be like to find that you had no voice.

18:36 S2: Yes, my father unfortunately passed away at age 50, way too young. And he knew that he had a terminal illness really, when I was quite young. He knew that he would not be around perhaps to see me graduate from college. But he was always so optimistic. I think it’s been such an amazing experience for me to talk to you about how your father’s life has been transformed by my father’s research. And I had never really thought before that my father’s voice lives on.


19:11 S1: 33, The Klattalk system by Dennis Klatt of MIT which formed the basis for Digital Equipment Corporation’s DECtalk system, 1983.

19:24 S2: According to the American Speech and Hearing Association, there are over one million people in the United States who are unable to speak for one reason or another.

19:37 Speaker 30: I will show you the way that you can write using my eyes.

19:41 Speaker 31: At first, when people meet me as someone who is unable to speak, they seem to assume that you have some form of mental deficiency.

19:49 S3: I will show you the way that you can write using my eyes.

19:52 LH: This is [19:53] ____ Michael Cubis. And Michael lost his voice from a stroke some years ago.

19:56 Speaker 32: Some people will talk to me as if I have a learning disability. I find this quite funny as some of them [20:02] ____ the most ridiculous way. Some of them catch on fairly fast and realize that I’m perfectly sane. Others continue to act this way though, which is funny and completely bizarre.


20:20 S3: People are quite anxious about how to approach someone with a disability. And that’s what Michael does, he puts people at their ease. So, it is easy to communicate with him.

20:30 LH: Mick Donegan’s speciality is eye gaze technology: using the movements of the eye to generate text, which can then be turned into speech. Could you explain a bit more to us about gaze control, about the kind of technology that we have just had a conversation with Michael [20:49] ____?

20:50 S3: It’s a system, it’s based on a very powerful camera system combined with low level infra-red lights. The actual technology has been around probably two or three decades, but the significant change that’s happened this century, is that systems began to cope with significant involuntary movement. That means that the significant numbers of people with cerebral palsy, for example, who have involuntary movement, suddenly that group of people were able to use the system. People with MS who have involuntary movement.


21:23 S1: 11, The DAVO articulatory synthesizer developed by George Rosen at MIT, 1958.

21:31 S4: A, B, C, D, E, F, G, H, I, J, K…

21:36 S3: When I first tried Michael with eye gaze technology, we used just a lower case system and Michael was very unhappy about that. He was insistent that I put in capital letters, full stops, commas, semicolons, because it’s really important for him to show everyone that he’s a fully literate guy who is able to speak independently and at the highest literacy level.

21:56 S4: When we know our A, B, C…

22:02 LH: Mick, I wonder if you could tell us a bit about how you see the future of this technology developing?

22:07 S3: I’ve just finished being an advisor for a European project on brain-computer interface and disability. And for me, that’s a technology that excites me because for those people who are completely locked in, who can’t even move their eyes, then there is no other way to go, other than to use a brain computer interface. At the moment, you know it’s kind of inconvenient, because for the best signal… Well, in fact, for the best signal, you need an implant. But the second best signal [chuckle] is to actually wear a cap and for that [22:31] ____ gel on it, etcetera. But there are various dry caps being developed that have a reasonable signal as I understand it.

22:39 LH: I’m always asked how to talk to my father, and it would be great to know what advice you would give to people who are not familiar with speech machines, but who would like to have a conversation with you?

22:49 Speaker 33: I would ask them not to ask long questions and be patient because it can take a long time to answer. Also, please bear in mind that it can be very tiring for those using speech output devices.


23:06 Speaker 34: The question of whether I would change my voice given the opportunity is a difficult one. And I suddenly have an opportunity.

23:14 LH: This is acclaimed film-maker, Simon Fitzmaurice, who has lost his voice through MND.

23:20 S3: This voice, my voice is a generic one that came with the computer, turning an Irish man into an American overnight. But it has become my voice.

23:33 S?: Yeah. This is actually something that we have in mind as a real application: for people who know that there’s a chance they will lose their voice to record themselves, such that experts will be able to build a speech synthesiser that has that person’s voice.

23:51 S3: There are two key issues in the question of changing my voice: what I think about my voice, and what those closest to me think and feel about my voice. And I can tell you what my children feel straightaway. They find the idea of me changing my voice completely abhorrent. Just recently, I was testing out another computer when I glimpsed, out of the corner of my eye, my two little boys standing outside the door, their heads close together, whispering… They are four and six years of age. They are whispering and looking in my direction. It turns out they are discussing the strange voice coming out of this different computer. Later, back on my own computer, it’s bedtime and my six-year-old comes to give me a kiss. I type up “Goodnight” on my screen. “No. Say it.” I say it, “Goodnight.” He turns to his brother at the door, “You see, I told you. It’s the same.” Someone’s voice is part of their identity, integral to their perceived makeup. It’s funny though: I feel less protective of my computer voice than others, probably because my voice inside my head is what is familiar to me, my thoughts, not the voice that expresses them.

25:20 S3: Recently, I came across a video on YouTube of a doctor in Sweden with motor neurone disease, and there it was, my voice out of someone else’s computer, identical. It was a little unnerving. So, I decided to see if I could get some semblance of my old spoken voice back, uniquely mine. I’ve been working with a company in Edinburgh, CereProc, the world leaders in synthetic speech, who have built a synthetic voice out of old recordings of my spoken voice. I was lucky enough to have a recording of me reading some of my poetry, and other recordings. However, because of the lack of data in comparison to someone who would deliberately bank their voice, my synthetic voice is limited by the amount of original material. As a solution, CereProc are now in the process of using my father’s voice as a similar source from which to fill in the missing DNA and to build a harmonias rounded voice.

26:23 Speaker 35: Harmonious rounded voice. I await the results.

26:27 S3: I await the results.

26:27 S3: So, the question remain…

26:29 S3: The question remains…

26:30 S3: Will I change my voice?

26:31 S3: Will I change my voice. And more importantly…

26:34 S3: Will my children allow it?

26:36 S3: Will my children allow it?


26:40 S1: 30, The MIT MITalk system by Jonathan Allen, Sheri Hunnicutt, and Dennis Klatt, 1979.

26:49 Speaker 36: Speech is so familiar, a feature of daily life that we rarely pause to define it.

26:56 S1: End of the demonstration. These recordings were made by Dennis Klatt, on November 22nd 1986.

27:04 LH: Amazingly, we’ve progressed from Von Kempelen’s 18th century machine which had a limited vocabulary to being able to recreate the exact voice that was lost and give it expression, meaning and modulation in a way that mimics the naturally produced voice. Soon, speech technology users will be able to make their voices smile.

27:26 S1: Klatt’s Last Tape was presented by Lucy Hawking.

27:29 S6: Do I sound like a boy or a girl.

27:31 S?: The recordings were made available by the Acoustical Society of America.

27:35 S4: A, B, C, D, E, F…

27:37 S?: The sound design was by Nick Romero.

27:40 S7: How are you? I love you.

27:43 S?: It was produced by Julian Mayers.

27:45 S8: Ha-ha-ha.

27:46 S?: It was a Sweet Talk production for BBC Radio 4.

27:51 S2: Thank you for listening and good luck on all your cosmic journeys.


Photo Credit: Attribution Some rights reserved by lwpkommunikacio

Talk Shop Conference 2013

An image of two purple speech bubbles, one with "talk" the other with "shop" written in white.

Talk Shop 2013 takes place 21st of June at the Daventry Court Hotel.

Friday 21st of June, 2013, will see Trabasack once again attending the annual Talk Shop – the national Speech & Language and Occupational therapy conference.

trabasack lap tray with microphone

Trabasack can be used to mount a microphone to encourage speech

The Talk Shop fair is a one day conference that brings together Speech and Language and Occupational Therapists from around the country to discuss ideas, ignite creativity in the field and keep up-to-date with available resources.

Talk Shop is the ideal location for therapists to meet up with others in the field, and gives them an opportunity to discuss their teaching and therapy methods, share stories and learn how others help their patients get the most from therapy.

By providing a forum for those in the SLT and OT profession, Talk Shop can help keep the field of communication therapy fresh and creative. As each patient in need of communication therapy will have their strengths and weaknesses, many therapist will have unique stories to tell, and having a chance to chat and share experiences can help provide new approaches for speech and language therapy.

TalkShop Workshops 2013

This year Talk Shop will be providing 4 unique and in-depth workshops for parents, carers, SLTs and OTs to take part in.

Apps for use in Therapy

With the fast-changing technology that is now available for use in communication therapy, Talk Shop have chosen to present a workshop dedicated to iPads and apps as communication and sensory aids. This workshop will be hosted by Richard Hirstwood, well known for his passionate and experienced approach to multi-sensory therapy. He will be talking about how to use iPads for children and adults with additional needs, to engage, motivate and help connect with those who have communication issues. He will also share ideas for creating multi-sensory experiences for children using toys and environments, as well as touch-screen technology. For a sneak peek of Richard’s work, you can visit his website

Auditory Processing – ‘The Importance of a Full Sensory Assessment’

The next workshop on offer is Auditory Processing – The importance of a Full Sensory Assessment. Alan Heath, head of the workshop, has taken part in a number of Talk Shop events over the years, and is back again to discuss how the complex mix of all 5 senses allows a child or adult with additional needs to understand the world around them. He will talk about how issues with processing one of the senses can impact upon the processing of the other four, and in turn general daily functioning. For more information on Alan’s work, visit his website

An Introduction to TalkTools Oral Placement Therapy for Feeding and Speech

Next up is the introduction to TalkTools Oral Placement Therapy workshop. TalkTools products and systems were developed in the USA and are specifically targeted at helping therapists aid patients with speech and feeding issues. The workshop includes information on motor and sensory issues that can affect speech and feeding, and therapy techniques that utilise oral sensory/motor tools. Helen Woodrow is heading the workshop, and is an accredited level 4 TalkTools Therapist, making her the most experienced TalkTools therapist in Europe. You can find out more about Helen and her TalkTools experience by visiting her website

We have been using TalkTools with our son who has Dravet Syndrome. They have really helped him with his eating and drinking and we shall continue to tell other parents about them.

‘How do you SLOT in? Joint SLT and OT working’

The final workshop available on the day is an in-depth look at what TalkShop is all about. The workshop is headed by Hayley and Jess, a Speech and Language Therapist and an Occupational Therapist respectively; they are highly experienced in their fields. Hayley and Jess are currently combining their skills and experience to create a new independent therapy practice called “We Do Therapy”. They will provide an interactive presentation covering how they met and came to work together, why setting up “We Do Therapy” was important to them, and plenty of hints and tips on how to work collaboratively on projects to achieve desired goals. For more information follow Hayley and Jess on Twitter: @WeDoTherapy

Exhibitors and Learning Zones at TalkShop 2013

As well as a fantastic range of workshops for SLT and OT professionals, the TalkShop conference also includes a large selection of exhibitors each showcasing their products and communication aids. It is here that Trabasack will be demonstrating their multi-use lap tray bag and media mount, providing helpful ideas on how to get the most out of your Trabasack in relation to communication and sensory aids.

Media Mount holding a communication switch and a 'hello' symbol

Trabasack being used for symbols and switches

The Learning Zones offer different environments for experimenting with and seeing various equipment and technology in action. This year sees four zones on offer – Tech Zone, featuring the latest in assistive, speech and interactive technologies. Then the Sensory Zone provides an area dedicated to providing the latest in engaging sensory equipment and experiences. The Classroom Zone is a ‘mock’ classroom which will showcase the most inclusive and innovative furniture and school equipment on the market. Lastly the Design Zone will allow you to see ideas that are still in development, get involved with prototypes and take part in discussion on how to develop innovative therapy tools.

Finally, there will be a “Day in the Life” presentation where companies and experts examine the daily equipment needs of children with additional needs, covering everything from waking, hoisting and feeding to travelling and bathing. This presentation will demonstrate some of the products on offer from many of the exhibitors and will help provide you with ideas on new equipment that may help your own child.

Image of someone with a Talk Shop tote bag on their shoulder, browsing items for sale on a table.

Each attendee will receive a Talk Shop bag filled with resources and ideas.

TalkShop Venue and Ticket Bookings

TalkShop 2013 will take place on Friday, 21st of June at the Daventry Court Hotel, Northamptonshire. Doors open at 9:15am and the workshops and exhibits are available throughout the day until closing at 4:45pm. There is a large car park available for attendees and tickets are available for £55 per person. To book a place at TalkShop 2013 simply fill in the online form here or contact Louise Scrivener via phone 07881 523804 or email


The video below is a little taste of the kind of information Alan Heath will provide during his sensory and iPad app workshop:

QR Codes Communication Ideas

QR Voice and using QR Codes for Communication

How QR Codes can be used for communication

Communication Aids QR Code

Try this QR Code

Quick Response (QR) Codes appear everywhere. You’ll see them in magazines, on bus stop advertisements and pretty much anywhere you can reach with a smartphone. They have become an integral part of modern mobile society, but what exactly do they do and how can they be used to aid communication? We look at QR Voice and symbol boards.

What are QR Codes?

QR Codes are simply a particular type of small barcode which is extremely quick to read with the right device and can hold large volumes of information in comparison to traditional barcodes. You can scan a QR Code with most smartphones, tablets and also purpose-built QR reading devices. Once scanned, a code can link to a webpage, sound bite, video or other digital information source.

How can QR Codes be used for speech and communication?

This is where QR Voice comes in. As mentioned, any digital information source can be encoded into a QR barcode, and this includes verbal responses, statements and more. QR Voice encodes a text message into a QR code; that code can then be stored on a smartphone or tablet and used multiple times. Once created, the QR Voice clip can be scanned regularly to aid communication for those who are non-verbal, or who have specific times when verbal speech becomes difficult.

The QR Voice site was developed for general use, but it can easily be adapted as an extremely simple AAC device that’s completely free for anybody to access. You could programme in specific phrases such as ‘I’m thirsty’ and ‘I’m hungry’, plus simple ‘Yes’ and ‘No’ utterances, and support the user to select specific codes when trying to communicate through their smartphone or tablet.
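To make the idea concrete, here is a minimal sketch of how a set of phrases could be prepared for a QR code generator. The `build_payload` helper and the choice of plain URL-encoded text are illustrative assumptions on our part; the exact format QR Voice uses internally isn't documented here.

```python
# Illustrative sketch: preparing short communication phrases for a QR
# code generator. The payload format (plain URL-encoded text) is an
# assumption; QR Voice's real scheme may differ.
from urllib.parse import quote

MAX_LEN = 100  # QR Voice messages are limited to around 100 characters


def build_payload(message: str) -> str:
    """URL-encode a phrase so it can be embedded in a QR code."""
    if len(message) > MAX_LEN:
        raise ValueError("message too long to encode")
    return quote(message)


# A small set of everyday phrases, one QR code each
phrases = ["I'm thirsty", "I'm hungry", "Yes", "No"]
payloads = {p: build_payload(p) for p in phrases}
```

Each payload could then be fed to any QR generator (online or via a library) to produce a printable code for the board.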

QR codes on a Symbol Board

You could create a symbol board using these QR codes. Usually symbol boards have letters or pictures, but using QR codes and a smartphone takes the board to another level of sophistication. A QR symbol could be used with QR Voice to create small sentences that are spoken when the code is read, or the symbol could lead to a website or image online.

Symbol board mount, using a trabasack to secure a symbol board on the lap

A Trabasack Mini Connect could be used to hold a QR code symbol board

With a little practice, the system should be quite simple to get used to. For more advanced users of technology, messages of up to 100 characters can be crafted, allowing for short conversations where possible. Other uses could include a series of instructions, or a set of sentences, weblinks or images for a talk or presentation.

This short video shows QR Voice in action:

We think Trabasack would make an ideal bag for carrying QR code symbol boards and a smartphone for communication. Please send us your pictures or videos if you have used a Trabasack in this way. We will send you a free T-shirt!

Trabasack at Talk Shop 2011

Trabasack at Talk Shop 2011

This Friday, September 30th, Trabasack will be at Talk Shop’s National Speech & Language and Occupational Therapy fair. 2011 is the national year of communication which aims to help highlight the importance of communication, speech and language. We are very pleased to be part of an event embracing this.


Talk Shop communication fair

The Communication Champion, Jean Gross, will be speaking at the event and we are very much looking forward to hearing about her work as a champion of the needs of children with communication difficulties.

This one-day event will be attended by Speech and Language Therapists and Occupational Therapists and their assistants and students. There will be a trade area where we will have a stand alongside some of our old friends such as Guy from Disabled Gear. There will also be a resource sharing area, workshops and discussion groups running throughout the day.


The event is being held between 9am and 4.30pm in Derby’s Yew Lodge Best Western Hotel in Kegworth, East Midlands. So it is a venue very near to us. The conference facilities will give the professionals in both speech and language and occupational therapy a chance to communicate, share ideas, products and developments.
As one of the many exhibitors at the event, we will be taking the opportunity to demonstrate Trabasack and its uses to the many professional attendees. We are hoping that they in turn will be able to pass on information about the quality of our product and its uses as a communication aid mount or for speech therapy tools and educational toys.
As well as the other exhibitors showing off their products and services, a number of workshops are available. From Working Effectively in an Inclusive Classroom, for professionals who work in a mainstream educational capacity, to Sound Foundations – The Power of Music & Sound, the range of workshops is really diverse and bound to interest a range of different delegates.

If you do decide to attend please say hello, we will be the ones making notes at the workshops on a Trabasack!


While doing some research for the conference I found this very interesting new video for the Hello campaign:

Earlier this year, Wendy Lee, Professional Director at The Communication Trust, interviewed 7 children with speech, language and communication needs about their lives, their experiences at school and what it’s like to have a communication difficulty. The Way We Talk is a new film from the Hello campaign showing how speech, language and communication needs can appear in some children, through the words of Oliver (aged 8), Attiyyah (15), Luke (4), Jamie (15), Barnaby (6), Aiden (7) and Alex (6).

[flowplayer src=’’ width=512 height=288 autoplay=false]

Trabasack is available from these Communication Aid companies (to add your company to the list, please email duncan{at}