
How Does Stephen Hawking Talk?

The official film poster for The Theory of Everything

With the release of the Hollywood movie The Theory of Everything, audiences are looking back on the early life of Professor Stephen Hawking, the famed theoretical physicist known for his impressive body of work as well as for living with degenerative motor neuron disease. The film reached the UK on January 1st and has been popular so far.

With this renewed interest in Professor Hawking, his work and his life, we thought we’d take a closer look at his communication aids and how he is actually able to speak. With no verbal skills remaining, Hawking has relied on computer-based speech synthesis for many years, a topic we touched on previously in our post on Klatt’s Last Tapes, a BBC Radio documentary that featured Hawking’s daughter and discussed his role in the development of speech synthesis. Here we’re taking it a step further and asking in more depth: exactly how does Stephen Hawking talk? Before that, we’ll take a brief look at Motor Neuron Disease and how it affects those living with it.

What is Motor Neuron Disease?

Trabasaxon Liam with Stephen Hawking

Motor Neuron Disease (MND) is a progressive disease that affects the nerves in the brain and spinal cord. Professor Hawking received his diagnosis aged 21. Many individuals live with the disease across the UK and around the world, including our Trabasaxon pal Liam Dwyer (pictured with Hawking), yet it remains a disease of which there is surprisingly little awareness.

MND affects different people in different ways. It can affect the way individuals walk, talk, eat, drink and breathe, but it’s very rare for all the effects to come on at once or in any particular order, and not all individuals with MND get every symptom. There is no cure for MND and symptoms are managed on an individual basis. This video shows Liam discussing MND in depth and how he works to raise awareness, using his own speech synthesiser:

Stephen Hawking and MND

As we said, Stephen Hawking received his diagnosis of MND aged 21 and soon began to require crutches to walk, and later a wheelchair. He first began using his computer speech synthesiser in the 1980s and, although there have been many developments since its first installation, the system remains very similar.

Hawking also uses a wheelchair and requires nursing support due to MND, and whilst he originally didn’t want to focus on his disability, he began working in the disability sector in the 1990s, providing a role model and an example of what can be achieved, however severe a disability. He is committed to the protection of disabled people’s rights and got his family involved in the viral Ice Bucket Challenge in 2014, supporting MND awareness.

How does Stephen Hawking Talk?

Stephen Hawking Speech Synthesis

Professor Hawking giving a speech at NASA

Now, back to the main question: since opting for speech synthesis, how has Stephen Hawking managed to speak? Hawking cannot physically type on a keyboard, so his input to the speech synthesiser is based entirely on facial movements. It’s a system which was developed with Hawking in mind but can be adapted for other users with similar needs.

Hawking communicates via a computer system which is mounted on his wheelchair and powered by the same batteries that keep his power chair going. The speech synthesis works through a specific programme called EZ Keys, which gives Hawking an on-screen software keyboard. A cursor moves automatically across the keyboard in rows and columns, and Hawking selects a character by moving his cheek to stop the cursor on the one he needs.
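The row/column scanning described above is easy to sketch in code. The Python below is a simplified illustration of single-switch scanning, not the actual EZ Keys software: the letter grid and the two-press selection model are assumptions made purely for demonstration.

```python
# Minimal sketch of single-switch row/column scanning, as used by
# on-screen keyboards like EZ Keys. The grid and selection logic are
# illustrative assumptions, not the real EZ Keys implementation.

GRID = [
    list("ABCDEF"),
    list("GHIJKL"),
    list("MNOPQR"),
    list("STUVWX"),
    list("YZ_.,?"),   # '_' stands in for the space bar
]

def select_with_switch(grid, row_steps, col_steps):
    """Return the character chosen by two switch activations.

    The cursor first steps through rows; the first switch press locks a
    row after `row_steps` steps. The cursor then steps through that
    row's columns, and the second press (after `col_steps` steps) picks
    the character. Counts wrap around, as a real scanner's cursor would.
    """
    row = grid[row_steps % len(grid)]
    return row[col_steps % len(row)]

def type_word(grid, presses):
    """Spell a word from a list of (row_steps, col_steps) switch timings."""
    return "".join(select_with_switch(grid, r, c) for r, c in presses)
```

With this grid, `type_word(GRID, [(2, 0), (0, 4)])` selects row 2 column 0 (“M”) then row 0 column 4 (“E”), spelling “ME”. The point of the sketch is how slow the channel is: every character costs two well-timed presses, which is why prediction and stored phrases matter so much.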

The technology is ingenious yet seems so simple. Hawking’s cheek movements are detected by an infrared switch mounted on his glasses, and that single switch is his only connection to the computer. EZ Keys also offers word prediction, meaning Hawking often types only one or two letters before being offered the word he needs, speeding up the process and making it less laborious than it could be.
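Word prediction of this kind can be sketched as a prefix lookup over a frequency-ranked dictionary: after each selected letter, the most likely completions are offered so a whole word costs only one more switch press. The word list and frequencies below are invented for illustration; EZ Keys’ real dictionary and ranking method are not public.

```python
# Illustrative prefix-based word prediction, in the spirit of EZ Keys'
# word completion. The word list and frequency counts are made up.

FREQUENCIES = {
    "the": 500, "there": 300, "theory": 120,
    "speak": 90, "physics": 80, "speech": 60, "physicist": 40,
}

def predict(prefix, freq=FREQUENCIES, n=3):
    """Return up to n candidate words for the typed prefix,
    most frequent first."""
    matches = [w for w in freq if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -freq[w])[:n]
```

Typing just “th” here yields `["the", "there", "theory"]`, so a six-letter word like “theory” can be produced with two letters plus one selection rather than six full scanning cycles.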

To save time, Hawking also has a bank of stored sentences and phrases for regular use, helping conversation flow and allowing him to give speeches from pre-prepared sentences and statements. Hawking has tried other switch-access methods for his speech synthesis, including brain-controlled interfaces, but cheek movements have proved the most consistent and effective for his needs.
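A stored phrase bank is, at its simplest, a lookup from a short trigger to a full sentence. The triggers and phrases below are invented examples; Hawking’s actual stored phrases and abbreviations are personal to him and not documented publicly.

```python
# Sketch of a stored-phrase bank: short, quick-to-type triggers expand
# into full sentences. All triggers and phrases here are invented.

PHRASES = {
    "ty": "Thank you very much.",
    "gm": "Good morning, it is a pleasure to be here.",
    "rpt": "Could you repeat the question, please?",
}

def expand(token, phrases=PHRASES):
    """Expand a trigger into its stored phrase; pass ordinary words
    through unchanged so normal typing still works."""
    return phrases.get(token, token)

def speak(tokens, phrases=PHRASES):
    """Join a mix of triggers and ordinary words into one utterance."""
    return " ".join(expand(t, phrases) for t in tokens)
```

Because the bank falls back to the typed token when no trigger matches, prepared material and live scanning input can be freely mixed in one utterance.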

This video gives a concise and straightforward explanation of how Stephen Hawking talks, in his own words:

Stephen Hawking’s speech-synthesis setup is completely unique to his needs. Some of the more common speech synthesisers on the market, used regularly by people with a range of different disabilities, include the Lightwriter, the Eyegaze Edge and the many software and technology options from CereProc, a company that specialises in text-to-speech and more innovative forms of speech synthesis.

The Theory of Everything

The Theory of Everything received much critical acclaim and has been a real success, with both lead actors praised for their sensitive and genuine portrayals. The film is an adaptation of the memoir Travelling to Infinity: My Life with Stephen by Jane Wilde Hawking, Hawking’s ex-wife, to whom he remains close. It portrays the period in which Hawking received and learned to manage his diagnosis, as well as his ground-breaking work in his field.

We asked Liam Dwyer (follow Liam on Twitter) to review the film for us:

I thought the acting and the way Stephen Hawking was played was amazing. Eddie Redmayne played the part so well I thought it was Stephen Hawking. Felicity Jones played Jane great and she showed just what a wife/carer has to go through looking after a person. It was good to hear her side of the story too.

Here is the official film trailer to get a taste for it:

Stephen Hawking’s life and work is remarkable and this new film is a testament to that. It’s also fantastic to see how the developments in speech synthesis that he trials and tests are advancing the science for people in general, providing many more people with the opportunity to speak.

You may be interested in a revealing BBC interview with Jane Hawking about her life as Stephen’s wife and carer and her book Travelling to Infinity, on which the film is based.

Comments on this post from Assistive Technology Professionals:

Simon Churchill

He ‘talks’ using a voice output communication aid. For more information about them see speech generating devices. The film The Theory of Everything fudges the software he uses, as he uses a scanning technique which generates words at a far slower rate than was seen in the film, and the scanning software was not shown, presumably because the general public might not understand its use. A brilliant film, but it does deviate from the truth somewhat in this regard.

Hector Minto

Agreed Simon Churchill. Not sure I would describe it as advanced.

Denis Anson

It should also be observed that the system that Stephen Hawking uses is highly personalized, and may not be useful to anyone else on the planet. Hawking is arguably the most brilliant mind living, and, when he could still vocalize to some extent, would compose entire technical papers and books in his head, then dictate them to the one or two assistants who could understand him. His system 1) has his highly idiosyncratic vocabulary in it, and 2) uses abbreviations that he has learned, that most of us probably couldn’t make sense of.

A number of years ago, he spoke at University of Washington, while I was on faculty there. I was not able to attend the talk, but the reports that I heard were that, for his presentation, which was prepared in advance, he communicated at normal speeds. But for the question and answer session, the audience had to wait for him to compose his answers. Because he was Hawking, they would wait, but it was very slow.

David Selover

All of your comments are accurate. This is a constant debate in the AT world. The AAC device that he uses is specific to him, as all AAC devices should be, because every person’s voice is unique to them. There are a plethora of devices out there; one size does not fit all.

Charlotte White’s Musical Fight

After the popularity of our recent post, Klatt’s Last Tapes, we have made the second in a series of videos profiling fascinating assistive technology stories:

Charlotte White’s Musical Fight is a BBC Radio 4 documentary that provides an intimate and in-depth look into the life of a young woman called Charlotte White, who, after an accident in her early teens, was left almost entirely paralysed.

The documentary looks back on Charlotte’s experiences post-accident; how she felt patronised by the immediate rehabilitation therapies she was offered, how she still desired to make music and express her creativity and the struggle to find her place as a teenager in mainstream society.

Video: Charlotte White’s Musical Fight

(For a video transcript, scroll to the bottom or use YouTube captions.)

In spite of her setback, Charlotte showed determination to continue advancing the musical skills she had shown such promise with as a young child, and with the help of assistive technology and the Drake Music Project, she was given a very modern way to let her creative side shine. Charlotte is now a professional classical musician and composer.

Drake Music

Drake Music is a charitable organisation that gives those with disabilities the opportunity to create music using assistive and adaptive technology, helping to provide a creative outlet to many who would otherwise struggle to use ordinary instruments or learn music via typical methods.

Founded in 1988 by Adele Drake, Drake Music is a nationwide initiative with regional bases in London, Manchester and Bristol. Their ever-growing team of technicians, teachers and advocates work in partnership with numerous schools, universities and local authorities to provide musical opportunities, both creative and educational, to disabled people across the country.

Charlotte speaks of how her introduction to Drake Music was tentative at first, based upon her previous experiences of music therapy. However, it didn’t take her long to realise that Drake Music was far more innovative and beneficial than the traditional therapies she had already dismissed, and with patience, understanding and ground-breaking assistive technology, she soon found a way to create music again.

Image of Charlotte White smiling, wearing a red cardigan and patterned dress

Charlotte speaks candidly and openly about her post-accident experiences, and how Drake Music changed her outlook.

“When I became disabled, I was introduced to music therapy. Music therapy is literally someone sitting in front of you banging a drum or playing a guitar, and you’re meant to tell them all your worries about life or you’re meant to be really happy because someone’s banging a drum in your face.

[I found that to be] patronising and very boring and completely pointless. And I expected Drake to be like that, but it wasn’t at all. Drake Music gave you the opportunity to play independently, rather than just sitting there listening like a lemon.”


Through Charlotte White’s Musical Fight, we are introduced to a strong-willed, determined young woman, brimming with creativity and promise, who with the help of the Drake Music Project, defies all opposition in continuing to sate her creative needs through the use of assistive technology, and the support of staff at Drake Music.

Enable Us

Charlotte has set up her own website at Enable Us:

Enable Us has been set up as a result of difficulties that my family, friends and I have come across over the years. The overall aim of the site and the project is to empower individuals with impairments, preventing society from disabling people and stopping them from fulfilling their potential.

We have also heard that Charlotte is working on a project using music and a certain revolutionary instrument… but we cannot say more at this stage. We are very excited about it! Watch this space!

Charlotte and Trabasack

We were very pleased to hear that Charlotte has recently become a big fan of the Trabasack and our new media mount accessory, describing it:

“I love my Trabasack, the velcro thing is great, especially for drinks. I’ve been using it for cooking, work and all sorts!”

Please comment below the transcript and share if you have enjoyed the video.

Video Transcript

00:01 S?: Now on Radio Four, we’ve the touching story of a disabled student and her struggle to play music. Josie D’Arby presents, “Charlotte White’s Musical Fight.”


00:22 Josie D’Arby: In 2008, a video clip appeared on the internet of a teenage girl performing the prelude to Bach’s Cello Suite. Nothing remarkable about this, you may think. Until you learn that the musician, Charlotte White, was playing every crotchet and quaver using only the slightest movements of her head and thumbs.


00:51 JD: This performance proved to be a defining moment in Charlotte’s rehabilitation, but it also raised questions about how musical talent and achievement are assessed. Questions that have yet to be answered.


01:17 JD: Well, I’m just arriving at the home of Charlotte, which is in a small village in Buckinghamshire, where I’m going to meet her and her mother, and just find out how much music has actually changed their lives.

[background music]

01:43 JD: Charlotte, when did you first start playing music?

01:46 Charlotte White: When I was about six years old, I had regular piano lessons like all my friends did at school.

01:52 JD: Were you having examinations?

01:55 CW: I never did exams. My mom wanted us to play for fun rather than to play to achieve something.

02:01 JD: In those early days, did you enjoy doing the piano? Were you loving it?

02:06 CW: Not particularly. It was more something I did because we were all expected to do it. I didn’t start enjoying music until later on in life.

02:13 JD: So can I ask you just to go back to your accident really, would you be able to tell us what happened?

02:18 CW: When I was 11 years old, I used to ride a lot. I competed on a pony. And for a period of a year, I constantly fell off my pony for no apparent reason. The last time, I was in the stable yard holding my rabbit and guinea pig. And I fell over backwards and hit my head, and everything went downhill from there.

02:39 JD: And what was the diagnosis back then? Was it something that they expected you to recover from or what did they tell you could have happened?

02:46 CW: I don’t have a full diagnosis. I’ve got diagnoses which cover some of my problems, but not all of my problems. They’re constantly finding new things out, even now, 11 years on.

02:58 S?: And not surprisingly, this had huge consequences on Charlotte’s quality of life.

03:06 CW: For a long period of time, my life had been about exercise, physiotherapy, occupational therapy, speech therapy and that was it. That was drummed into me day in, day out, day in. And all I was expected to do was achieve and get physically stronger, which wasn’t happening a lot of the time. So that was quite depressing that I was doing all this work and not getting much out of it. And that was the only life I knew. A lot of my friends had moved on by then. They were having fun at school, enjoying life, where I was just having physio, physio, physio. I would only see physios. I’d only see speech therapists. I’d only see people who were meant to make my life better, and improvement, but it never seemed to happen.

03:46 S?: After the accident, Charlotte gradually lost all movement in her body. She spent five years in and out of hospital, and eventually went into a period of rehabilitation, regaining movement in her head and then gradually her fingers. At 16, Charlotte began attending St. Rose’s School in Stroud. It was there that she was introduced to the Drake Music Project, an organization that uses technology to help people with disabilities participate in music.

04:14 CW: Doug came up, and I had an option of a cooking class or going to meet Doug and see what Drake Music was about.

04:20 JD: Did you think back to your piano days at six, and think “I have a feel for music.” Did you know that you had a feel?

04:27 CW: When I became disabled, I was introduced to music therapy. Music therapy is literally someone sitting in front of you banging a drum or playing a guitar, and you’re meant to tell them all your worries about life or you’re meant to be really happy because someone’s banging a drum in your face.

04:43 JD: And what… You found that patronizing or what?

04:46 CW: Incredibly patronizing and very boring and completely pointless. And I expected Drake to be like that, but it wasn’t at all. Drake Music gave you the opportunity to play independently, rather than just sitting there listening like a lemon.


05:02 JD: And did that affect your attitude towards it? Tell me about your very first lessons.

05:07 CW: At the time, I had a huge sensitivity to light. Therefore, I wore dark glasses. And spent a lot of time in sort of a half lit room playing music and Doug getting me to interact with him to begin with, and then learning the basics and chords and beats. We listened to a lot of Robbie Williams.

05:28 JD: Was that educational? Or…


05:30 CW: It became educational. [laughter] Very surprisingly.

05:37 Doug Bott: We were working one-to-one, in the dark, very quietly because at the time, she was very sensitive to light. So the only light in the room was the glare off my laptop screen. And the music we were playing was so quiet, that actually the whirr of the fan on the laptop was almost louder than the music at points.

05:57 S?: Doug Bott was the first person to work with Charlotte to create music.

06:01 DB: Sitting on the table we have what we call a ‘magic arm’, a piece of equipment which can fix any piece of technology in just about any position around a person’s body, and attached by Velcro to this arm is a fairly unspectacular-looking black rectangular box, which is a magnetic motion sensor. So, it emits a small magnetic field, and you can assign pretty much anything that you want to that magnetic field. So, in Charlotte’s case, we assigned about seven or eight notes to it, and she was able to make very small head movements in order to play those musical notes. Then she had one switch, a very small switch, on each thumb. One of the switches did a very simple task, which was to turn the sound that she was playing on and off, so that if she wanted to move her head without playing music, she could.

07:01 DB: The other switch, controlled with her other thumb, changed the configuration of notes available to her on the motion sensor that she was playing with her head. So, to liken this to playing a guitar: it’s as if the right hand that a guitarist would normally use to finger-pick the notes, to pick out the individual notes, was her head moving in and out of the motion sensor to pick the notes. And then the guitarist’s left hand, which changes the chord shapes on the fretboard of the guitar, the role of the left hand was taken by the switch that Charlotte was using to change the configuration of notes available to be played by her head.

07:42 JD: What was your first impression of Charlotte?

07:45 DB: My first impressions, somebody who was interested in classical music which not many of the young people I was working with at the time were. Somebody who is interested very much in working on her own in her own way. So yeah, the early sessions were very much about finding out what she was interested in and also how physically and practically she was going to create music, perform it, learn about it, compose it.

08:20 JD: At what point did you think she has got something special?

08:28 DB: I think it was just before, a few weeks before, the first time she actually performed in public. I’d been very careful not to put too much pressure on her to move forward and to achieve. I was very happy for her to go at her own pace. But she knew there was a concert coming up in school, and she announced that she wanted to be a part of that, that she wanted to perform in it. Given the rate at which we had been working in the previous months, I was a bit nervous, because I didn’t really think that she would be able to get the piece together in time to be able to perform it, but she did. She really knuckled down and applied herself and practised an awful lot outside of our sessions, which was quite a thing, because the equipment that she was using at the time, I wasn’t able to leave in school. So, when she was practising by herself, she was doing it entirely in her own head, making the movements from memory without the equipment. So, yeah, that’s when I realized she had something special, because the music was in her head.


09:47 CW: That was very scary. I was outside waiting to go on. Like, “No, no, no, no. I’m not gonna do this.” And Doug was like, “Yes, yes, you are.” Like, “No I’m not.” He was like, “Just calm down and relax. If you don’t wanna do it, you don’t have to.” I was like, “You are not meant to say that.” [laughter] And eventually I got on the stage, and Doug came on with me because I wanted him there, and I performed in front of everyone. I got really shaky and nervous, as I had never performed in front of people before then. And it went reasonably well, I think, and the piece came out maybe a bit too fast, but it went well enough. Everyone seemed to enjoy it, and quite a few people were surprised, I think.

10:29 JD: Did you have family and friends in the audience?

10:31 CW: My aunt was there and my mum.

10:35 S?: And for Charlotte’s mum, Tessa, seeing her daughter’s transformation was nothing short of remarkable.

10:41 Speaker 4: It was fantastic and she is really very good. She had been through such a rotten time and it just gave her something that she could achieve, and it was just wonderful as a mother to see her doing so. That’s why I am gonna cry.


11:00 S4: [11:01] ____. [laughter] It gave her something which she could achieve and be successful at. And as a parent, it was just wonderful to see that the determination she had actually was successful and she was good at it. It was very good.


11:24 JD: Has the music changed Charlotte’s life?

11:29 S4: I think it was the achievement of being able to play. Performing in front of people was, I think, incredibly nerve-wracking for Charlotte, so the fact that she managed to do it gave her a little confidence, which I think also then helped in other spheres of her life, so academically and probably socially as well. And I do think it’s helped her realize that she can achieve anything she wants to if she puts her mind to it.

11:56 JD: Relative to your memories of playing the piano, playing music in this way, does it feel similar if that makes any sense?

12:06 CW: I think it was very different. I practised a lot. I don’t really remember practising much when I played the piano. I enjoyed it. I wanted to achieve at it because it made people see me as a person rather than a disabled person who they made presumptions about.

12:21 DA: First I heard about Charlotte when Jonathan Westrup from Drake posted a video clip of Charlotte playing on the Teaching Music website.

12:29 S?: David Ashworth is a freelance educational consultant who specializes in music and technology.

12:34 David Ashworth: The performance was significant because… Well there were two things. One was it showed someone who obviously had severe disabilities, but who was actually able to overcome those to play a standard piece of repertoire and I’d never seen that before.

12:48 JD: How did it compare in relation to say a traditional cellist?

12:53 DA: Well, that’s interesting. If you were to listen to just the audio, you would find Charlotte’s performance wanting. The quality of the sound, the phrasing, the timing that you get with a professional musician playing a real cello, all the expressive qualities, is in a league of its own. Then you hear what Charlotte’s doing, and it’s nowhere near the same level. However, when you watch the video clip and see what she’s doing, it then becomes very powerful. It makes you realize that actually music is about more than listening. It’s about the whole contextual thing, if you like, and not just me but other commentators, who’ve been on the website, seen the clip and left comments, have found it a deeply moving experience, hearing someone play a piece of Bach in that way.

13:36 JD: There is an argument that Charlotte’s performance is akin to being given a keyboard with only the right notes on it. How would you react to that?

13:43 DA: That’s an interesting one. In fact, there are conventional instruments which, if you like, only have the right notes, but in fact it’s a bigger thing than that. I think right notes are only part of the picture. We tend to get obsessed with people playing the right notes. The pitch of a note becomes all-important, but there’s far more to music than the actual pitches of the notes that you play. And what was so interesting about Charlotte’s performance was that you could see, you could witness, the mental and the physical engagement, and also the musical engagement as well and, well, the spiritual engagement if you like, and that was the powerful thing to me. So to reduce music to a conversation about how you access the right pitches is only part of the picture. You look at that clip of Charlotte and what’s really… The most powerful bit for me is right at the end when she stops playing: there is a moment’s pause, and then she breaks into a big broad grin. And you know, she knows, she’s made something musically significant, that she’s achieved something musically significant there.


15:01 DB: The principle behind the way that we use assistive music technology is almost the opposite to a conventional musical instrument. So with a conventional musical instrument, the instrument itself is fixed and the musician has to master that instrument and has to almost subordinate themselves to the demands of that instrument. Whereas what assistive music technology does is to take a person and their particular interests, their physical needs, and create a musical instrument, a way of playing music which is absolutely right for that person. Not just physically and musically, but also in terms of ensuring that there’s an appropriate challenge.

15:45 JD: Where does the technology end and the skill of the musician begin?

15:51 DB: That’s quite a difficult question to answer. It completely depends upon the individual musician, but I could probably answer that in terms of conventional musical instruments. If you take a piano for example there are all kinds of elements of a piano, which are already assistive. The keys are ordered on the keyboard from low to high. They’re tuned according to a convention, equal temperament. They’re tuned to concert pitch. I dare say that if you went into a music exam having prepared all your piano pieces and the examiner was to tell you, “Oh by the way, today in order to test you a little bit further we’ve rearranged all of the notes on the piano keyboard and retuned it, but if you’re a good pianist then you should be able to handle that.” That gives maybe some kind of an impression. All musical instruments are assistive in some way because they are set up in a certain way. The difference with assistive music technology is that it varies from person-to-person.

16:50 Jonathan Westrup: It’s set up so the sound starts working about there, so that distance. You can change the distance at which it starts actually triggering. You can make it trigger from here onwards, so you can do something quite big or you can do something very small. So as I’m pulling away from the device, [music] and as I move my hand further away, [music] it plays up the scale.


17:13 S?: Jonathan Westrup from Drake Music demonstrated some of the technology they use at St. Rose’s School in Stroud.

17:21 JW: The actual device itself looks like a small red torch, and it emits an invisible beam, and when you break the beam with any part of your body or whatever, it will trigger sound, and you can set up what that sound is. At the moment we’ve got a cello here which we could just play a little bit. I’m just moving my hand now in front of it, [music] so you can hear now, that’s the scale. [music] Say the student’s got a very wide motion, for example, if they can swing their left arm, that’s a big movement they’ve got, then it could still pick up the sound, rather than the small, fine motor movements which other students might want to use with different equipment, but that’s quite good for big movements. It does take as much time to master as any other instrument, really. Because then, like you’re finding, you need to kind of find… [music] Try to find a little riff there. [music] I’m not a master, by any means.

18:19 S?: Aileen [18:20] ____ runs music classes for disabled students in the Norwegian city of Tromso. Their Arctic winters are long and dark. And in January, the city celebrates the end of the polar nights with a large cultural festival. Having seen Charlotte perform, Aileen invited her to compose music for the festival.

18:38 Speaker 7: It’s the darkest period in Tromso, when we have no sun. It’s also a way of bringing life to the city, having a big music festival with musicians coming from all over the world. All kinds of music are performed there, from big symphony orchestras to small jazz ensembles, and rock bands in the evenings. So it’s a very diverse music festival.

19:03 JD: And can you describe how her compositions were performed?

19:07 S7: Before the performance, it was quite a long project with months of her composing and sending files to Norway, speaking on phone about what we wanted with the music and how it should fit with the dancers. Charlotte was also very clear on… She wanted acoustic instruments. So we had musicians from the symphony orchestra of Tromso to do a recording of her music. [music] The performance at the Northern Lights Music Festival was outdoor in minus 10. [music] This was in the town square of Tromso and it was packed with people around there, and the scene was made up by ice and snow sculptures. And they had proper lighting and dancers dancing to the music. So it was quite magic to hear the music in that setting.


20:24 CW: I really wanted to pursue grades, I wanted to pursue music at college, but unfortunately establishments who grade musicians wouldn’t recognize it. Examining boards wouldn’t recognize it, and therefore, I couldn’t progress.

20:39 JD: Do you understand why they won’t recognize it? Do you think that’s fair?

20:42 CW: They’re very traditional in the way they recognize any examination, and the way that Drake Music students play music is very different. They either need to set up an examination specifically for music technology which can be qualified at the same level, or they need to accept it as it is. We’re meant to be in an equal society; therefore everyone should be equally graded.

21:07 S?: Charlotte’s achievements were recognized when she received a Bronze Arts Award from Trinity College London. In a statement, Trinity College went on to say, “Although there is no specific campaign to encourage the use of assistive technology, we have taken great interest in Charlotte’s achievement and profiled her story both on our website and in other print materials and press articles. We hope that this has actively encouraged others working with assistive technology to see how Arts Award could work for them.” The music examining boards are consistent in their approach, in so far as they don’t accredit music performed electronically, but as Doug Bott explains, it’s early days.

21:47 DB: If Charlotte had come to us in 20 years’ time, then I would fully expect that she would have been able to have had her achievements accredited either through the formal school music curriculum or through instrumental exams. Whether that’s through the Associated Board of the Royal Schools of Music or anyone else. At the moment, it’s very new territory for everybody I think. There are young disabled people who have their achievements accredited in various ways. But one issue, which I think people tend to shy away from talking about and which I’m quite happy to talk about, is that there’s a very big issue around the nature of people’s different disabilities. So differently disabled people access music in different ways and some of those means of access, whether it’s through Braille music or whether it’s through British Sign Language, some of those means of access are perhaps more able to slot in to the existing accreditation frameworks. Other forms of access, for example assistive music technology which is particularly useful for people who face physical barriers to music, these means of access haven’t really been tried and tested yet.

23:07 DB: We’re talking, a fair bit at the moment, to the Associated Board and they’re quite open about the fact that currently they don’t accredit any kind of music produced electronically, let alone the kind of assistive technology that our students are using, but they’re very keen to engage with these kinds of developments. And what we’re currently in the very early stages of discussing with them, and also colleagues at Bath Spa University, are ways that you can accredit the quality of a musical performance in such a way that it’s not necessarily linked to the particular instrument that a person is playing. But what we’re arguing for is something which, to play devil’s advocate, takes it even further and says, “Okay, but what if you were to turn up to a piano exam to play the piano repertoire and you would say actually I’m not going to play on the piano today, I’m gonna play on a flute.” How would you examine that? Because that really is what we are dealing with. We’re dealing with people who are playing instruments which are unique to them and maybe they’re not even playing repertoire. Maybe they’re playing music which they themselves have created.

24:18 S?: And for music consultant, David Ashworth, Charlotte’s performance could be just the beginning.

24:23 DA: I’ve been working in special schools where I’ve seen young people making music using assistive technology and it’s always tended to be making music in its own terms and its own style, if you like. A lot of improvisation. And a lot of fairly cutting edge avant-garde sort of sounds, if you like. What makes Charlotte different is she was actually playing crotchets and quavers. She was playing the dots, if you like. She was playing a mainstream piece of music which we normally associate as being accessed by, if you like, a mainstream musician. And that was what was different. She actually had the audacity, if you like, to actually step into their world, and that was what made it so significant I think. Where Charlotte has been important, she’s been a catalyst, if you like, to get this debate really going, and I’m sure she will see it in that way and feel rightly proud of that achievement.


25:25 S?: Charlotte White chose to pursue her academic studies and gained a place at university studying social policy and criminology. Advancements in the availability and price of software, though, mean she may soon return to music. And for Doug Bott, that moment can’t come soon enough.

25:41 DB: As a composer, she was very instinctive. She’s extraordinary in terms of the fact that she has a really innate musical ability. I think that any music teacher or music educator who would come across her, whether she was a disabled person or not, would find her to be an outstanding student in terms of the way that she engages with learning, practising, and performing musical instruments. And in terms of the way that she engages with composition and the fact that it really comes from inside her rather than from her understanding of the rules of music.

26:27 CW: Music inspired me in the belief that I could achieve anything and a new belief in myself, which had pretty much gone for the most part, and that belief became sort of lit in every part of my life. It became lit like my physiotherapy and my occupational therapy, and my speech therapy. I became more enthusiastic and had much more of a drive to achieve, which I had slightly lost before then, and I did start achieving in all those areas much more than I had done. And wanting to break the barriers and do the same things as everyone else, rather than being bracketed as a disabled person who wouldn’t achieve.

27:12 CW: I’ve got ambition back of what I want to achieve in the future and then complete in the long run. I started to enjoy life as well and have fun, and start experiencing things that the average teenager does.

27:29 S?: Charlotte White’s musical flight was presented by Josie D’Arby and produced in Bristol by Toby Field. All the music in the program was either composed or performed by Charlotte.



Klatt’s Last Tapes: A History of Speech Synthesisers Video

Klatt’s Last Tapes: A History of Speech Synthesisers

Speech Synthesisers in Use

Stephen Hawking and his Speech Synthesiser

Speech synthesisers, and the technology involved in giving a voice to those who cannot speak, have an interesting and enthralling history. It’s an area of technology and science that has fascinated scientists and therapists from many fields but is rarely discussed in the mainstream. World-renowned physicist and cosmologist Stephen Hawking has made the presence of this technology more widely known.

Klatt’s Last Tapes was a one-off exclusive on BBC Radio 4 which looked into the work of Dennis Klatt, the American pioneer of text-to-speech machines. Klatt’s work is explored by Lucy Hawking, the daughter of Stephen, who during this video goes on a journey back through the history of speech machines. It really shows the ingenuity and creativity of the inventors and the quirky history of the predecessors of the machines that help her father communicate.



In the Beginning

Speech synthesisers have been produced and developed for over 200 years, beginning mechanically with Wolfgang von Kempelen’s speaking machine, which he built in 1769. Lucy Hawking visits Saarland University to see and try out a working replica of this primitive

wooden box with a mouthpiece and a bellows that was an early speech machine

Replica of the von Kempelen Speaking Machine

machine and learns more about von Kempelen’s dedication to finding a mechanical solution for people who were unable to speak. Von Kempelen found that the main problem with his machine was the lack of a tongue; this particular element of the speech system was beyond his abilities to recreate mechanically.

Mechanics to Electronics

Experts believe there was no smooth transition between mechanical and electrical speech synthesisers. The first known electrical system was the Voder, developed in the 1930s by Homer Dudley and demonstrated for all to see at the 1939 World’s Fair in New York. It was operated much like an organ, and it was remarked that it took about a year of constant practice to master the controls required for its use.

Problems in Speech Synthesis

Through speaking to experts in the field, Lucy Hawking explores some of the main problems that have been battled against since the first speech synthesisers were developed. Initially it was possible to create plausible male voices, but creating a female voice proved, and still proves, difficult. Simulating women’s voices is harder because of their different characteristics: simply raising the pitch makes them sound far more artificial than male voices. Articulation for the female voice is also different, and this is something even the most advanced computer systems have struggled with. It’s clear, as Hawking remarks in the show, that having to use a synthesised male voice would mean a huge loss of identity for women.
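A small numerical sketch (not from the programme; the formant values are rough textbook-style figures, assumed for illustration) shows why raising the pitch alone fails. Speeding a recording up multiplies the fundamental frequency (F0) and the vowel formants by the same ratio, but a real female voice has a much higher F0 with only moderately higher formants, because the vocal tract is shorter by a smaller proportion:

```python
# Rough illustrative values for the vowel "ah" (assumed, not measured here)
male = {"F0": 120.0, "F1": 730.0, "F2": 1090.0}           # Hz
female_target = {"F0": 210.0, "F1": 850.0, "F2": 1220.0}  # Hz

def speed_up(voice, ratio):
    """Naive pitch shift: resampling multiplies every frequency by `ratio`."""
    return {k: v * ratio for k, v in voice.items()}

# Scale the male voice so its F0 matches the female target (~1.75x)
ratio = female_target["F0"] / male["F0"]
shifted = speed_up(male, ratio)

# The formants overshoot badly, which is the "artificial" quality described
for band in ("F1", "F2"):
    print(f"{band}: naive shift gives {shifted[band]:.0f} Hz, "
          f"target {female_target[band]:.0f} Hz")
```

The overshoot in F1 and F2 is why, as the researchers explain, the articulation itself must be altered rather than the whole spectrum simply scaled up.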

Similarly, adult speech synthesisers have proved problematic for children. Speaking with an adult synthesised voice makes socialisation harder for children whose peers may find it harder to relate to them with an adult voice. The long term aim is to create personalised speech synthesis machines which grow with their user.

Dennis Klatt – The Father of Computerised Speech Synthesis

Dennis Klatt was the man who made a difference to speech synthesis. He was the pioneer of text-to-speech machines from a technological perspective and created an interface which, for the first time, allowed non-expert users to type in text and have it spoken. Before Klatt’s work, non-verbal individuals would need specialist support to be able to speak at all.
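The core idea behind such "by rule" systems can be sketched in a toy form (this is not Klatt’s actual rule set; the grapheme-to-phoneme table below is hypothetical and covers only a few sounds): text is rewritten into phoneme symbols by ordered rules, longest match first, so a user needs no phonetic expertise to produce speech.

```python
# Hypothetical letter-to-sound rules: (grapheme, ARPAbet-style phoneme).
# Longest grapheme wins, scanned left to right.
RULES = [
    ("ch", "CH"), ("sh", "SH"), ("th", "TH"), ("ee", "IY"),
    ("a", "AE"), ("e", "EH"), ("i", "IH"), ("o", "AA"), ("u", "AH"),
    ("p", "P"), ("t", "T"), ("k", "K"), ("s", "S"), ("m", "M"),
    ("n", "N"), ("l", "L"), ("r", "R"), ("d", "D"), ("c", "K"),
]

def to_phonemes(word):
    """Greedy longest-match letter-to-sound conversion."""
    word = word.lower()
    out, i = [], 0
    while i < len(word):
        for grapheme, phoneme in sorted(RULES, key=lambda r: -len(r[0])):
            if word.startswith(grapheme, i):
                out.append(phoneme)
                i += len(grapheme)
                break
        else:
            i += 1  # skip letters the toy table does not cover
    return out

print(to_phonemes("speech"))  # → ['S', 'P', 'IY', 'CH']
```

A real system like DECtalk layered stress, duration, and intonation rules on top of this conversion, plus a dictionary for the many English words that rules alone get wrong, but the user-facing step is the same: type text, get speech.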

Lucy Hawking discusses Klatt’s work with his daughter Dr Laura Fine during the show. Klatt invented DECtalk, the original system which could take text and turn it into speech. Klatt also produced a definitive history of speech devices which includes a collection of recordings from all the devices developed throughout the 20th century. It’s a hugely valuable resource for development as well as for posterity.

Klatt was dedicated to the production of a system for speech synthesis that was natural and intelligible. As Dr Fine explains he combined engineering and speech production research with people’s perceptions to create the end product. Perception data and the way people interpret speech is key to how successful a speech synthesiser is for regular conversation and socialisation.

Klatt created a range of different voices, entertainingly labelled the DECtalk Gang, which gave a choice to DECtalk users. Choices included Beautiful Betty, Kit the Kid and Perfect Paul. Stephen Hawking’s voice is very similar to Perfect Paul.

Eye Gaze Speech Synthesisers

The show tells us that over 1 million people in America are unable to speak for a range of reasons. Lucy Hawking then goes on to talk to Michael Cubis, who lost his voice after a stroke. He controls his speech synthesiser through gaze control, which is increasingly where text-to-speech technology is heading.

Eye gaze technology uses movement of the eyes to generate text. Speaking to Mick Donegan, a specialist in the field, Hawking discusses how the technology works and how it has developed. The technology itself has been around for about 30 years, but the systems have developed a lot in the 21st century. New speech synthesisers are now sophisticated enough to be used by individuals who live with involuntary movement, perhaps muscle spasms or shakes. People living with conditions such as cerebral palsy and multiple sclerosis are now able to access gaze-controlled text-to-speech machines as well as games and leisure pursuits.
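A minimal sketch (assumed values, not a real eye-tracker API) of the dwell-selection idea behind gaze-controlled keyboards: a key is only "pressed" when the gaze stays inside it for a set dwell time, which is also what lets modern systems filter out brief involuntary movements such as spasms or shakes.

```python
DWELL_MS = 800   # assumed dwell threshold before a key fires
SAMPLE_MS = 50   # assumed tracker sample interval

def select_keys(samples):
    """samples: sequence of key names (or None) per tracker sample."""
    typed, current, held_ms = [], None, 0
    for key in samples:
        if key == current and key is not None:
            held_ms += SAMPLE_MS
            if held_ms == DWELL_MS:    # threshold reached: fire exactly once
                typed.append(key)
        else:                          # gaze moved: restart the dwell timer
            current, held_ms = key, 0
    return typed

# A brief involuntary flick to "B" types nothing; only the long, deliberate
# dwell on "A" at the end produces a keypress.
gaze = ["A"] * 10 + ["B"] * 3 + ["A"] * 20
print(select_keys(gaze))  # → ['A']
```

Real systems add smoothing of the gaze point and adjustable dwell times per user, but the principle of trading speed for robustness against involuntary movement is the same.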

Initially the machines were developed without punctuation or even capital letters, but Donegan tells Hawking that this was met with disappointment by Michael Cubis, who was insistent that proper speech, with the proper markers, is key to his identity and to expressing himself as a fully literate, intelligent person.

The Future

Mick Donegan continues to discuss the future of speech synthesisers and recent research is even looking into how they can provide speech to people living with Locked-In syndrome.

The ideal way of giving someone their speech back is through implants, which is obviously an area that needs more research, but Donegan asserts that caps which can pick up brain signals are the best option currently available.

Speech Synthesisers and Identity

Hawking looks a little at how a speech synthesiser gives or takes away someone’s identity by chatting to Irish director Simon Fitzmaurice. Fitzmaurice lost his voice to motor neuron disease but was provided with a new one through his speech synthesiser – a new American voice.

For Fitzmaurice’s family, the synthesiser’s American voice has become synonymous with him, with his children unnerved by changes to it on other computer systems and programmes. Despite this, Fitzmaurice has been participating in research alongside CereProc, a leading synthetic speech company, to build him a new voice.

CereProc have used recordings of Fitzmaurice’s voice, and even data from his father’s voice, to produce a speech synthesiser which mimics how he used to sound. This is fascinating technology, and the show suggests that if you live with a disease where you may lose your voice, there is now scope to make recordings in advance to try and save that part of your identity in the long run.

We thought we’d end this piece with a bit of friendly advice from Michael Cubis. When asked how to talk to someone using a speech machine, he replied:

“I would ask them not to ask long questions and be patient because it can take a long time to answer. Also please bear in mind that it can be very tiring for those using speech output devices.”


Please share and comment

If you enjoyed this video, please embed it on your sites or share it. We would also love to hear your comments below the video transcription.


Klatt’s Last Tapes Radio Show Transcript:

00:01 Speaker 1: We’ve comedy in half an hour when Richie Webb and Nick Walker star as the Hobby Bobbies. Before that, here on BBC Radio 4, Lucy Hawking traces the development of speech synthesis in Klatt’s Last Tapes.

00:16 Speaker 2: You are listening to the voice of a machine.

00:20 Speaker 3: Mama, mama.

00:24 Speaker 4: A, B, C, D, E, F, G…

00:29 Speaker 5: Once upon a time, there lived a king and queen who had no children.

00:34 Speaker 6: Do I sound like a boy or a girl?

00:37 Speaker 7: How are you? I love you.

00:40 S2: I do not understand what the words mean when I read them.

00:45 Speaker 8: Ha-ha-ha.

00:47 Speaker 9: I can serve as an authority figure.

00:50 Speaker 10: What did you say before that?

00:53 Speaker 11: Can you understand me even though I am whispering?

00:56 Speaker 12: To be or not to be, that is the question.

01:01 Lucy Hawking: My name is Lucy Hawking and I have been regularly chatting to a user of speech technology, my father Stephen, for the past 28 years. I write adventure stories for primary aged children about astronomy, astrophysics and cosmology. When I go to schools, I always talk about my father’s use of speech technology and I tell the kids that even though my father may sound robotic, when I play them a clip of him talking, I ask them to remember that actually it’s a real man talking to them. And it’s a man who’s using a computer to give himself back the voice that his illness has taken away from him.

01:42 Speaker 14: Development of speech synthesizers. One, The Voder of Homer Dudley, 1939.

01:50 Speaker 15: Will you please make the Voder say for our Eastern listeners, “Good evening radio audience.”?

01:55 Speaker 16: Good evening radio audience.

01:59 LH: To find out where speech technology started, I went to Saarland University in Germany, where two researchers had built a model of the first ever voice machine. It was originally created in the 18th Century by inventor, scientist, and impresario Wolfgang Von Kempelen.

[background noises]

02:24 LH: Hello.

02:24 Speaker 17: Hello.

02:25 LH: Good morning.

02:26 S1: Please come in.

02:26 LH: Thank you so much.

02:27 S1: I’m very pleased to meet you.

02:28 S1: Hello.

[background conversation]

02:30 Jürgen Trouvain: My name is Jürgen Trouvain. I’m a lecturer and researcher here at the Department of Computational Linguistics and Phonetics at Saarland University and I’m also interested in the history of speech communication devices, like the one of von Kempelen, for example. Kempelen was both a good showman and a very good scientist, but he was really like, sort of a genius, a real engineer, because he was interested in building things which can function and can help also people.

03:03 Fabian Brackhane: My name is Fabian Brackhane.

03:04 LH: What do you think the relationship was between von Kempelen’s original inspiration and the organ?

03:11 FB: It’s a very curious thing, because there is a stop in the pipe organ called “vox humana.”


03:24 FB: When this stop was invented in the 17th century, it should be a representation of the human voice playing the organ.

03:39 LH: So, they wanted to take the vox humana from a musical note, something you’d find in compositions at the time, to actually be able to produce human speech.

03:53 FB: Exactly. Yes. But Kempelen knew very well that this stuff couldn’t be the solution to get a speech synthesis.

[background music]

04:07 S1: Three, PAT the Parametric Artificial Talker of Walter Lawrence, 1953.

04:14 S1: What did you say before that?

04:18 LH: And so, we’re looking at von Kempelen’s speech machine. [chuckle] The door of which has just fallen off. It looks like a small bird house. Yeah. So, we’re taking the lid off the box, which houses the speech machine. And so, Fabian is putting one hand through one hole with his elbow on the bellows, which represent the lungs and his other hand is coming underneath the rubber cone. Which, what does the rubber cone represent?

04:53 FB: The mouth.

04:54 LH: The mouth. So, it’s hand under the mouth piece.

04:59 S3: Mama Mama.

05:03 S1: Ooh, it’s creepy. Sorry.


05:05 S3: Papa Papa.

05:10 FB: So, it’s… These are the both best words he/she could say it.

05:17 S3: Mama.

05:19 FB: So, you have the nose to be opened.

05:23 S3: Papa.

05:25 LH: So, Fabian is moving his hand rapidly over the mouthpiece and using two fingers over the nostrils effectively, while pressing down with his elbow on the lungs. Fabian is actually mouthing the words “mama” and “papa” while the machine is saying them.


05:45 S1: Four, The “OVE” cascade formant synthesizer of Gunnar Fant, 1953.

05:51 S7: How are you? I love you.

05:59 Bernd Möbius: I might be able to find out whether Lucy is able to…

06:02 LH: Should we see… Should we see, perhaps like in…

06:03 FB: So, there’s your instructor.

06:05 LH: Right.

06:06 FB: If you want to say “em,” you have to close the mouth and the nostrils have to be opened.

06:12 LH: The nostrils are open, front [06:12] ____.

06:13 FB: And if you want to say “ah,” you have to move the hand backwards. So, just mah, mah, while I’m pressing them…

06:22 LH: While pressing…

06:23 S3: Mm… Mama… Mam…


06:23 LH: I did that with three syllables. [chuckle] I’ll try with two this time.

06:34 S3: Mama…

06:37 LH: Right and what about papa? How would I do papa?

06:39 FB: The same way but you have to close the nostrils. Well…

06:44 LH: Okay. So…

06:44 S3: Pa-pa-paaaaa.


06:50 LH: Let’s see if I can just do it with two syllables this time.

06:53 S3: Pa-paa…

06:56 LH: Can I get her to say anything else or will I be… Would I be able to make it say any other words?

07:03 FB: If you don’t cover the mouth, it’s an A.

07:07 S3: Ah…

07:09 S1: And the more you cover the mouth, the vowel quality changes.

07:13 S3: Ahh… A… B… Mm…


07:28 FB: He knew that the missing of the tongue was very important thing and in his book, he wrote to his readers, to invent this machine forward, but nobody could invent it with the tongue, with teeth, so that, it could speak more than this few, very few things.


07:57 LH: It seems to me that his aim was actually to give a voice to people who couldn’t speak. And so, he must have hoped for further development of his machine ’cause he can’t have imagined that, it would just be mama and papa or those short sentences. He must have had in mind, this idea that people would be able to speak freely, mechanically.

08:15 JT: And there was a plea in that book Fabian mentioned, please read out that means, researchers and the later generations, please, go on with the development of that machine. So, we’re still trying to do that here.


08:32 S1: 16, Output from the first computer-based phonemic-synthesis-by-rule program, created by John Kelly and Louis Gerstman, 1961.

08:44 S1: To be or not to be, that is the question.

08:49 LH: It would be really nice to get a sense of the progression from a mechanical to electrical to computer solutions to providing a voice for people who can’t speak.

09:01 BM: I’m not sure whether that was actually a smooth transition from mechanical systems like [09:09] ____ to the first electrical ones. I only know that, all of a sudden, that’s how it looks. My name is Bernd Möbius. I am the Professor of Phonetics and Phonology at Saarland University. In the 1930s, there was an electrical system around, the so-called Voder, did by Homer Dudley, that was demonstrated at the World Fair in New York, I believe in 1937.

09:35 S1: For example, Helen, will you have the Voder say, “She saw me”?

09:41 Speaker 21: She saw me.

09:42 S1: That sounded awfully flat, how about a little expression? Say the sentence in answer to these questions. “Who saw you?”

09:49 S2: She saw me.

09:51 S1: Whom did she see?

09:52 S2: She saw me.

09:55 S1: What did she, see you or hear you?

09:57 S2: She saw me.

09:59 BM: During the demonstration at the World Fair, there was a female operator of the system who played the device a little bit like a church organ.

10:09 S1: About how long did it take you to become an expert in operating the Voder?

10:12 Speaker 22: It took me about a year of constant practice. This is about the average time required in most cases.


10:23 S2: She saw me. Who saw me? She saw me. She saw me. Who saw me? She saw me.

10:37 JT: We have to go back to the or is the floor next to the top, the top floor.

10:42 LH: I’m now just getting into an elevator, which probably I can talk to. So, does it speak English?

10:47 JT: Hopefully, yes.

10:51 S2: Okay. Hello, elevator. It doesn’t say hello back.

10:58 JT: You must be patient with that. It’s a machine. Maybe with German.

11:01 S?: Hello [German]

11:01 Speaker 23: Hi there, where can I take you?

11:08 LH: The third floor. Third floor.

11:14 S2: Okay, I’m bringing you to the third floor. Bye, bye.

11:18 LH: Bye now.

11:19 S1: 19. Rules to control a low-dimensionality articulatory model, by Cecil Coker, 1968.

11:28 S2: [11:28] ____. You are listening to the voice of a machine.

11:39 Speaker 24: I’m Eva Lizotte [11:39] ____, and I’m a PhD student and working in articulatory synthesis. The actual situation right now, is that, it’s very hard to simulate women’s voices ’cause they have a slightly different characteristics and if you just tune up the F0, the fundamental frequency or the pitch of the voice, it starts sounding really artificial and what you actually have to do, you have, also to alter the articulation. So when “ah”, when I or when we speak an “ah,” it’s different from a male long vocal tract “ah.” So, you have… You can not easily interpolate the articulation.

12:19 LH: Because of course it’d be awful for women not only to be using a speech synthesizer, but then, to be coming out with a man’s voice.

12:25 S2: Yeah.


12:26 LH: I mean, that would constitute… That would be a real loss of identity.

12:29 S2: Yeah. Exactly.

12:31 Speaker 25: This is result of trying to imitate a female voice by increasing the pitch.


12:37 S1: 24, the first full text-to-speech system, done in Japan by Noriko Umeda et al., 1968.

12:47 S5: Once upon a time, there lived a king and queen who had no children.

12:55 S1: But I think it’s also important to think of children for example, growing up and of course at the beginning to speak with an adult’s voice, even the sex would be the same, would be awful I think…

13:08 LH: Definitely very important just for making friends. It’s gonna be very hard for a child speaking with an adult’s voice to actually communicate with kids of their own age.

13:17 S2: Yeah.

13:18 JT: But at the moment we don’t know very much about the speaking voice of children becoming adults, for example. What’s really happening during the maturation of the vocal folds.

13:29 LH: So, the aim is to create speech machines which can grow up with somebody.

13:32 JT: That would be really nice. Then you would have shown real knowledge about what’s going on in your voice during life span, at least, of a first say, 20 years or so.


13:47 S1: 21, sentence-level phonology incorporated in rules by Dennis Klatt, 1976.

13:55 Speaker 26: It was the night before Christmas, when all through the house, not a creature was stirring. Not even a mouse.

14:04 LH: Can you see that people who don’t maybe know, who Dennis Klatt is, could you put him in context?

14:09 JT: Yeah, he’s definitely one of the pioneers of speech synthesis, in the technological sense, but also in providing an interface for non-experts who could basically type in text and get synthetic speech out of the system, which wasn’t possible before I think.

14:27 S2: Before Klatt, you would actually have to be a specialist in order to be able to input what you wanted to say.

14:33 JT: Exactly.

14:33 LH: Okay. Laura can you hear me?

14:36 S2: I can hear you. Can you hear me?

14:37 LH: Yes. I’ve got you. That was fantastic. This is Dr Laura Fine, the daughter of Dennis Klatt. Dennis Klatt is really the father of the modern speech machine. He created DECtalk, the system which takes text, inputted by the user, and turns it into speech. Dennis Klatt also produced the definitive history of speech devices which includes a collection of recordings of each device throughout the 20th century.

15:01 S2: He really was interested in making a natural and intelligible system. So, the most important qualities of a speech synthesis system are really the naturalness and the intelligibility. And he was very much interested in making those of high quality. One of the unique contributions was that, he used not only his understanding from an engineering standpoint and a speech production standpoint, but he also asked for analysis with perception data. How do people interpret speech and what is it in the listener that helps them determine, is this a child, is this a female, is this a male? What cues are important? And that really helped him to make an intelligible system that incorporated different age speakers and different genders.


15:47 S6: Do I sound like a boy or a girl?

15:51 S2: My mother came across this drawing that my father made of the different speakers. In the center, we have Perfect Paul. This is a picture of my father.

16:01 Speaker 27: I am Perfect Paul, the standard male voice.

16:04 S2: And then, this is beautiful Betty which is the standard female voice. And that is a picture that he drew of my mother.

16:13 Speaker 28: I am beautiful Betty, the standard female voice. Some people think I sound a bit like a man.


16:22 S2: This is Kit the kid, who’s a 10-year old child. So, this is a picture of me.

16:27 Speaker 29: My name is Kit the kid and I am about 10-years old.

16:31 S2: With my nice short hair cut, as a child.

16:33 LH: Oh, is that you?

16:34 S2: I was a lab rat. As a child, I spent a lot of time at MIT. My father had a candy drawer. I spent hours with him at MIT, in his laboratory and he took snippets of my voice and that helped to develop the child’s voice.

16:51 LH: I love that they’re called the DECtalk gang.

16:54 S2: The DECtalk gang.

16:55 LH: That is a great… That is a great title.

16:57 S2: So, there was my father in later years and underneath the caption says, Huge Harry. Kind of older gentleman’s voice.

17:04 S9: I am Huge Harry, a very large person with a deep voice. I can serve as an authority figure.

17:12 LH: Laura, I have to tell you something, Perfect Paul, sounds just like my dad.

17:17 S2: I mean, I think that’s amazing.

17:18 LH: Is Perfect Paul based on your father’s voice?

17:21 S2: Yes.

17:22 LH: Which therefore means that, my father is actually speaking with your father’s voice.

17:27 S2: It’s amazing, he would be so, so thrilled.

17:30 LH: I think, one of the things that strikes me about your father is his humanity and that he was obviously an amazing scientist, who managed to do something that has had a very profound impact on people’s day-to-day lives. And but also that he had quite a sense of humour.

17:45 S2: He did.


17:47 LH: Is it true that he gave his synthesizer the ability to sing, “Happy birthday to you”?

17:53 S2: He did.

17:54 S2: Happy birthday to you. Happy birthday to you. Happy birthday dear…

18:03 S2: One of the ironies is, as a 40-year-old man, he began to be somewhat hoarse, because he had thyroid cancer. And, he had had a thyroidectomy, but his vocal cords were affected by the disease. And so, he spoke in later years with a raspy voice. And I think he understood all too well your father’s challenges in terms of communication.

18:29 LH: So, he had a real sense himself of what it would actually be like to find that you had no voice.

18:36 S2: Yes, my father unfortunately passed away at age 50, way too young. And he knew that he had a terminal illness really, when I was quite young. He knew that he would not be around perhaps to see me graduate from college. But he was always so optimistic. I think it’s been such an amazing experience for me to talk to you about how your father’s life has been transformed by my father’s research. And I had never really thought before that my father’s voice lives on.


19:11 S1: 33, The Klattalk system by Dennis Klatt of MIT which formed the basis for Digital Equipment Corporation’s DECtalk system, 1983.

19:24 S2: According to the American Speech and Hearing Association, there are over one million people in the United States who are unable to speak for one reason or another.

19:37 Speaker 30: I will show you the way that you can write using my eyes.

19:41 Speaker 31: At first, when people meet me as someone who is unable to speak, they’d seem to assume that you have some form of mental deficiency.

19:49 S3: I will show you the way that you can write using my eyes.

19:52 LH: This is [19:53] ____ Michael Cubis. And Michael lost his voice from a stroke some years ago.

19:56 Speaker 32: Some people will talk to me as if I have a learning disability. I find this quite funny as some of them [20:02] ____ the most ridiculous way. Some of them catch on fairly fast and realize that I’m perfectly sane. Others continue to act this way though, which is funny and completely bizarre.


20:20 S3: People are quite anxious about how to approach someone with a disability. And that’s what Michael does, he puts people at their ease. So, it is easy to communicate with him.

20:30 LH: Mick Donegan’s speciality is an eye gaze technology, and that means, using the movements of the eye in order to generate text, which can then be turned into speech. Could you explain a bit more to us about gaze control, about the kind of technology that we have just had a conversation with Michael [20:49] ____?

20:50 S3: It’s a system, it’s based on a very powerful camera system combined with low level infra-red lights. The actual technology has been around probably two or three decades, but the significant change that’s happened this century, is that systems began to cope with significant involuntary movement. That means that the significant numbers of people with cerebral palsy, for example, who have involuntary movement, suddenly that group of people were able to use the system. People with MS who have involuntary movement.


21:23 S1: 11, The DAVO articulatory synthesizer developed by George Rosen at MIT, 1958.

21:31 S4: A, B, C, D, E, F, G, H, I, J, K…

21:36 S3: When I first tried Michael with eye gaze technology, we used just a lower case system and Michael was very unhappy about that. He was insistent that I put capital letters, full stops, commas, semicolons, because it’s really important for him to show everyone that he’s a fully literate guy who is able to speak independently and in the highest literacy level.

21:56 S4: When we know our A, B, C…

22:02 LH: Mick, I wonder if you could tell us a bit about how you see the future of this technology developing?

22:07 S3: I’ve just finished being an advisor for a European project on brain-computer interface and disability. And for me, that’s a technology that excites me because for those people who are completely locked in, who can’t even move their eyes, then there is no other way to go, other than to use a brain computer interface. At the moment, you know it’s kind of inconvenient, because for the best signal… Well, in fact, for the best signal, you need an implant. But the second best signal [chuckle] is to actually wear a cap and for that [22:31] ____ gel on it, etcetera. But there are various dry caps being developed that have a reasonable signal as I understand it.

22:39 LH: I’m always asked how to talk to my father, and it would be great to know what advice you would give to people who are not familiar with speech machines, but who would like to have a conversation with you?

22:49 Speaker 33: I would ask them not to ask long questions and be patient because it can take a long time to answer. Also, please bear in mind that it can be very tiring for those using speech output devices.


23:06 Speaker 34: The question of whether I would change my voice given the opportunity is a difficult one. And I suddenly have an opportunity.

23:14 LH: This is acclaimed film-maker, Simon Fitzmaurice, who has lost his voice through MND.

23:20 S3: This voice, my voice is a generic one that came with the computer, turning an Irish man into an American overnight. But it has become my voice.

23:33 S?: Yeah. This is actually something that we have in mind as a real application: for people who know that there’s a chance they will lose their voice to record themselves, such that experts will be able to build a speech synthesiser that has that person’s voice.

23:51 S3: There are two key issues in the question of changing my voice: what I think about my voice, and what those closest to me think and feel about my voice. And I can tell you what my children feel straightaway. They find the idea of me changing my voice completely abhorrent. Just recently, I was testing out another computer when I glimpsed, out of the corner of my eye, my two little boys standing outside the door, their heads close together, whispering… They are four and six years of age. They are whispering and looking in my direction. It turns out they are discussing the strange voice coming out of this different computer. Later, back on my own computer, it’s bedtime and my six-year-old comes to give me a kiss. I type “Goodnight” on my screen. “No. Say it.” I say it, “Goodnight.” He turns to his brother at the door, “You see, I told you. It’s the same.” Someone’s voice is part of their identity, integral to their perceived makeup. It’s funny, though: I feel less protective of my computer voice than others, probably because my voice inside my head is what is familiar to me, my thoughts, not the voice that expresses them.

25:20 S3: Recently, I came across a video on YouTube of a doctor in Sweden with motor neuron disease, and there it was, my voice out of someone else’s computer, identical. It was a little unnerving. So, I decided to see if I could get some semblance of my old spoken voice back, uniquely mine. I’ve been working with a company in Edinburgh, CereProc, world leaders in synthetic speech, who have built a synthetic voice out of old recordings of my spoken voice. I was lucky enough to have a recording of me reading some of my poetry, and other recordings. However, because of the lack of data in comparison to someone who would deliberately bank their voice, my synthetic voice is limited by the amount of original material. As a solution, CereProc are now in the process of using my father’s voice as a similar source from which to fill in the missing DNA and to build a harmonious rounded voice.

26:23 Speaker 35: Harmonious rounded voice. I await the results.

26:27 S3: I await the results.

26:27 S3: So, the question remain…

26:29 S3: The question remains…

26:30 S3: Will I change my voice?

26:31 S3: Will I change my voice. And more importantly…

26:34 S3: Will my children allow it?

26:36 S3: Will my children allow it?


26:40 S1: 30, The MIT MITalk system by Jonathan Allen, Sheri Hunnicut, and Dennis Klatt, 1979.

26:49 Speaker 36: Speech is so familiar, a feature of daily life that we rarely pause to define it.

26:56 S1: End of the demonstration. These recordings were made by Dennis Klatt, on November 22nd 1986.

27:04 LH: Amazingly, we’ve progressed from Von Kempelen’s 18th century machine which had a limited vocabulary to being able to recreate the exact voice that was lost and give it expression, meaning and modulation in a way that mimics the naturally produced voice. Soon, speech technology users will be able to make their voices smile.

27:26 S1: Klatt’s Last Tape was presented by Lucy Hawking.

27:29 S6: Do I sound like a boy or a girl?

27:31 S?: The recordings were made available by the Acoustical Society of America.

27:35 S4: A, B, C, D, E, F…

27:37 S?: The sound design was by Nick Romero.

27:40 S7: How are you? I love you.

27:43 S?: It was produced by Julian Mayers.

27:45 S8: Ha-ha-ha.

27:46 S?: It was a Sweet Talk production for BBC Radio 4.

27:51 S2: Thank you for listening and good luck on all your cosmic journeys.

28:01 S1: I’m a bit concerned about that last bit, but while I’ve still got a job, I’ll introduce Peter White to tell us about You and Yours in half an hour. Peter.

28:07 Speaker 37: Yeah. We’re pretty concerned up here too. It’s claimed over 200,000 people who lost money when the life assurance company, Equitable Life, collapsed 10 years ago, could end up with no compensation at all. The Public Accounts Committee has blamed the Treasury for not getting a grip on the scheme. We’ll be looking at what can be done before the current deadline runs out, next spring. Wales has cut its use of carrier bags by a massive three-quarters by imposing a charge. England still says, “It’s not ready… ”


Photo Credit: Attribution Some rights reserved by lwpkommunikacio

Joanna Grace’s Sensory Story Project

Sensory Communication – Sensory Stories

Hello everyone, my name is Joanna Grace and I write sensory stories for children with profound and multiple learning disabilities. I’m currently running a project on Kickstarter to create a set of these stories that families could use – please check it out, we only have a few days left!

Sensory stories have many things to offer children, one of which is the opportunity to develop communication. I’ll explain, but first I should tell you what a sensory story is!

What Are Sensory Stories?

Image representing the 5 senses - smell, sound, touch, taste and sight

Joanna’s Sensory Stories engage children’s 5 senses.

Sensory stories are constructed out of a combination of sensory experiences and text.

I aim to write stories in fewer than ten sentences. You might think you can’t get much of a story into so little text, but think of how much a poet can convey in a haiku, and of the adage ‘A picture speaks a thousand words’, and you’ve a start on imagining what could be in a sensory story.

I seek out rich sensory experiences to put into my stories. These needn’t be expensive things; it’s just a matter of viewing the world creatively and spotting things that would make a good experience. This can get you a few funny looks as you sniff things in shops, or feel them, but it’s a lot of fun. I aim to put at least one experience from each of the five famous senses into a story (did you know you actually have seven senses?)

Why sensory stimulation?

Your brain needs sensory input in order to develop and lay down neural pathways. An able-bodied child can access a wide range of sensory stimuli for themselves; a child with physical disabilities will need help to access a range of stimuli. Sensory stories are a fun way of providing this support.

Communication Support for Children with Additional Needs

Sensory stories can support communication in children with profound and multiple learning disabilities in a number of ways:

Encouraging engagement

Image of Joanna twisting a blue household duster to simulate the sound of the wind blowing through grass

Joanna uses a number of surprising yet familiar objects to illustrate her stories via the senses.

Researchers have found that some of the passivity they observe in individuals with profound and multiple learning disabilities is not down to the disability itself but to a learned helplessness that leaves the individual disengaged with the world. When you think about it, it is easy to see how, if you couldn’t easily access the world around you, you might begin to see it as not relevant to you and turn inwards, seeking stimulation from within. In some cases this can also include self-harm as a means of gaining stimulation. By introducing sensory experiences to individuals with profound and multiple learning disabilities you can encourage them to become interested in objects and people. This is a great first step towards communication.

Communication skills

Storytelling is a wonderful form of communication that our ancestors enjoyed and that future generations will enjoy. It’s a way we bond ourselves together and form our identities. By sharing a story in a sensory way, you can include someone who accesses the world in a purely sensory way in the experience of storytelling. Aspects of the process of telling the story also support individuals in learning the skills involved in communication. For example, the turn-taking nature of sharing the story (I say the words, then you experience the stimuli) echoes the turn-taking nature of conversation: it’s your turn to speak, my turn to listen, then my turn to speak, your turn to listen.

Expressing preferences

Image of Joanna in a living room, placing her hand upon a piece of textured foam

Joanna explains that even the most simple of objects can provide important sensory experiences.

People who care for individuals with profound and multiple learning disabilities try hard to personalise that care in the way the individual would choose for themselves were they able to express themselves. Choices are made on our best discernment of what the individual with profound and multiple learning disabilities would want. Through sharing a sensory story with someone and noting their reactions carefully over time, you can learn things like: they prefer the smell of lemons to the smell of roses, or they enjoy the bang of a drum more than the ringing of a bell. These small insights can be used to personalise their care in a way that will be meaningful to them, for example by purchasing a citrus shower gel rather than a floral one, or by using a drum as an alarm clock rather than a buzzer. Though small, these things are immensely valuable to a person’s quality of life.

Supporting Joanna’s Sensory Story Project

I want sensory stories to be available for families to share at home; that’s my motivation for the project. The project ends at 5:22am EDT on May 21st, so please have a look before then. In exchange for backing the project you receive a reward of your choosing; there are many things on offer, including sensory stories themselves. Come and join us.

To read more about Joanna’s Sensory Story Project and for further information on how to get involved in her Kickstarter project, click here to visit the Sensory Play Tray blog.

Sensory Stories are vital for reaching out to children with additional needs, especially those with communication issues who find it hard to express their understanding of the world around them through speech. Technology has progressed in leaps and bounds over the past decade, and now gives children with communication issues a new and immediate way to express their needs and wants through touch-screen interaction, rather than relying on speech.

After you’ve checked out Joanna’s Kickstarter project, why not have a look through our informative posts that cover some fantastic apps to aid communication and our compendium of iPad apps that use augmentative and alternative communication to aid self-expression?

iPad helps American Boy find his Voice

iPad helps American Boy find his Voice

The benefits of iPad apps and technology for those living with disabilities proven again

Hunter benefits from iPad AAC support

Hunter in a speech therapy session

The iPad is popular with almost everybody, and we are convinced it helps and supports learning and communication for people with disabilities. Hunter Harrison is a five-year-old boy who uses his iPad to communicate. Hunter lives with a neuromuscular disability which affects his motor abilities, including those needed for verbal communication. Despite this, Hunter is learning to read, knows his numbers, letters, colours and shapes, and will be attending mainstream school in September.

Hunter needs a communication system that works: it’s clear he has the ability to flourish in a mainstream classroom environment. This view is shared by Jane Kleinert from the University of Kentucky, who has been working with Hunter. She highlights how popular the iPad has been for use in classrooms, particularly with pupils with autism. The adaptability of the device is one of its most popular features.

Access to AAC Devices Limited, despite iPad affordability

iPad and Proloquo2go

iPad featuring Proloquo2go AAC app

Research in the US has shown that less than 50% of children who require AAC support have access to it. We don’t have statistics for the UK, but we doubt they are significantly different. Access to AAC devices is essential for supporting communication development in children with disabilities. Professor Kleinert and a UK colleague are working together on an initiative to build communication systems for children with disabilities. The scheme has provided Hunter with his own iPad, loaded with the popular Proloquo2Go app. The app has allowed Hunter to find new ways to communicate, but over time it has also led to improvements in his oral speech.

Unfortunately, in America the leading funding options won’t supply iPads, as they restrict their funds to dedicated instruments designed for communication, a category the iPad doesn’t fit. However, dedicated AAC devices are often heavy and extremely expensive. The iPad, of course, has many portability and cost advantages, and the success Hunter has achieved is something every child should have access to. This video shows Hunter in action:

Trabasack can be used successfully as a low-cost iPad or communication aid mount; for more info, click here.

Communication Aids: Communication Cards

Communication Aids: Communication Cards

If you have problems with verbal communication, perhaps due to learning difficulties, deafness, cerebral palsy, or stroke, you may already have your usual communication aids or methods. But what if the person you’re trying to communicate with doesn’t understand BSL or Makaton, or your Lightwriter’s batteries are flat? Or you may just have communication problems occasionally, perhaps due to fatigue, or when there’s too much noise.


Communication Cards, from Stickman Communications, are the answer. These sturdy laminated communication aids cover most situations in a non-clinical, light-hearted (but never offensive) way. In the bank, the post office, shops, and a multitude of other situations, Communication Cards will be invaluable.

You start by ordering a Starter Pack, which has a “Thank You” card on a keyring style holder. Then you add whatever other cards are going to be helpful for you. Your selection will come as a set on the keyring.

There is a wide selection of Communication Cards available, including one for writing your own message and one for your personal data. Some are specific to certain conditions – for instance one has a brief description of Hypermobility Syndrome – but most of them could be used by any of us.


Each card is 11cm by 7.8cm, and your set will come with stickers to put on the cards, so you can easily find the one you need. They can slip into a bag or handbag, or even into your pocket, or clip onto a wheelchair using a carabiner clip.

Communication Cards have only been available for a short time, but as this tweet shows, their popularity is growing fast by word of mouth.

@ Do you know @ she makes amazing communication cards you can flash at ppl if you need? Do you live with anyone?
Imogen May

New Communication Card designs are coming out all the time, so keep an eye on the website to see what’s new! These innovative and attractive communication aids are great fun, and can be a lifesaver in an emergency. A highly recommended product.

Cartoonist and communication card designer Hannah Ensor signs a book at a book signing event.

Communication card graphic artist Hannah Ensor at a recent book signing event

Visit for the full range of products or follow @stickmancrips to keep up with all Hannah’s news.

Honduran Electronics Whizz Creates Homemade Eye-Tracking Computer Interface

Honduran Electronics-Whizz Creates Homemade Eye-Tracking Computer Interface

Cruz demonstrating his Eyeboard System

Honduran teenager Luis Cruz has used his electronics and programming know-how to create a new homemade eye tracker. The device allows users with motor disabilities to enter text into their computer with eye movements rather than keystrokes. The device, known as the Eyeboard system, is not a new concept, but Cruz has taken things a step further: he’s managed to create a pair of spectacles incorporating his technology for an affordable price (less than £200). The hope is that this homemade eye tracker will make easier communication affordable in developing countries and areas where those with such needs may currently have no means of communication.

A hands-free, eye-tracking-controlled computer can truly revolutionise the lives of people with specific motor disabilities. It can be the easiest and most effective aid to communication for these individuals, and Cruz’s technology is further widening access to communication aids. Up until this point, eye-tracking computer interfaces have been too expensive for most, retailing at over £6,000. If Cruz can get this new development made as cheaply for others, it is really going to help people around the world who need eye-tracking equipment.

Homemade Eye Tracker

Over the last twelve months Cruz has been refining and perfecting the Eyeboard system. His system uses electrooculography: electrodes around the eyes pick up the small voltage changes produced as the eyes move, and these signals drive a basic eye-tracking computer interface. Cruz’s system is deliberately basic, hence its low cost, but it allows letters to be input via eye movement rather than typed, which is a big achievement for a very affordable system. Cruz developed both the hardware device and the software needed to turn the eye movements into letters, and therefore communication.
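The article doesn’t document how the Eyeboard actually turns eye movements into letters, so here is a deliberately simplified, hypothetical sketch of the general idea: once the raw electrode signal has been amplified and filtered into a clean horizontal “left/right” value, a highlight can be moved along a letter row, with a steady (neutral) gaze selecting the highlighted letter. The threshold and dwell values below are invented for the example and are not Cruz’s actual parameters.

```python
# Illustrative sketch only, not the real Eyeboard software.
# Assumes a pre-filtered horizontal eye signal where values above
# +THRESH mean "look right", below -THRESH mean "look left", and a
# run of near-zero (neutral) samples selects the highlighted letter.

THRESH = 0.5   # hypothetical normalised voltage threshold
DWELL = 3      # consecutive neutral samples needed to select

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def decode(samples):
    """Turn a sequence of horizontal-signal samples into typed letters."""
    pos = 0        # index of the currently highlighted letter
    neutral = 0    # run length of neutral samples seen so far
    out = []
    for s in samples:
        if s > THRESH:            # eyes right: advance the highlight
            pos = (pos + 1) % len(LETTERS)
            neutral = 0
        elif s < -THRESH:         # eyes left: move the highlight back
            pos = (pos - 1) % len(LETTERS)
            neutral = 0
        else:                     # neutral gaze: count towards a dwell-select
            neutral += 1
            if neutral == DWELL:
                out.append(LETTERS[pos])
                neutral = 0
    return "".join(out)

# Looking right twice and then holding a neutral gaze selects "C".
print(decode([0.8, 0.8, 0, 0, 0]))  # -> "C"
```

The point of the sketch is that, once the analogue front end delivers a clean signal, the decoding step can be very simple, which is part of why such a device can be built so cheaply.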

Open Source Eye Gaze Technology

The Eyeboard system is a very new development and not yet widely available. Despite this, Cruz is confident he can produce the hardware cheaply, with his prototype spectacles costing under £200. He has also decided to release his software as open source, to accelerate the development of tools that further speed up users’ communication. Cruz’s development is groundbreaking and could be rolled out to significantly improve the communication and daily lives of people across the world.
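One example of the kind of open-source “speed-up” tool others could build on top of an eye-typing system is word prediction: suggesting likely completions so the user selects a whole word instead of spelling it out letter by letter. This is a generic illustration, not part of the Eyeboard software, and the word list and usage counts are invented for the example.

```python
# Illustrative sketch: frequency-ranked word prediction for eye typing.
from collections import Counter

# Hypothetical usage counts, e.g. learned from the user's past messages.
usage = Counter({"hello": 12, "help": 30, "here": 7, "thanks": 20})

def predict(prefix, k=3):
    """Return up to k of the user's most-used words starting with `prefix`."""
    matches = [w for w in usage if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -usage[w])[:k]

# After typing "he" with eye movements, the three likeliest words
# can be offered as single-selection targets.
print(predict("he"))  # -> ['help', 'hello', 'here']
```

Even a crude predictor like this can cut the number of selections per word dramatically, which matters when each selection takes a deliberate eye movement.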

This video further explains Cruz’s homemade eye tracker device and how beneficial it can be for the future of communication:

Barriers to Communication – Part II

Barriers to Communication – Part II


We recently posted the first part of our friend and customer Markinsutton‘s blog post discussing the barriers to communication he has faced, and now we’re back with the second and final part. We left off where Mark had talked us through the difficulties he has faced with spoken and written language…

“With just touching on those two topics of communication, I want to highlight a few more barriers. Some are linked to what I have just written, so sorry if I repeat myself. As I have said, I have a hearing loss and this presents a problem. People can’t see deafness, or understand just how confusing it is for me and others to have to try to process the sounds we have heard into a sentence, and then translate that into a reply. Even those who know me best and have a great understanding of deafness I have found to be the worst at constructing sentences that I will be able to understand at the correct pitch, volume and tone. Putting too much into a sentence leaves me confused and having to ask again, as I am still trying to process the first part of the speech before being able to process the second part. It’s not that I haven’t heard them; it’s just that I cannot concentrate that long to take in all the information. Like I have said, speech is a complex language structure, and having to understand it when you have a hearing loss just makes it so much harder. There are many factors that can affect this. The environment is the main one for me, as it is for most people, with a hearing loss or not. Try to listen to a conversation when there is a lot of background noise or lots going on around you; it’s very difficult.

This brings me on to a form of communication I have found to be fairly effective for me, but which presents so many barriers that it’s not as good as it could be. That is sign language. Sign language is very expressive, and personally I feel it has a much simpler grammar structure than spoken or written language. The barriers become apparent when there is no one around you who can sign, or when you lack the ability to form the hand shapes needed for the complex signs. My motor skills are very poor, so I prefer to watch sign language rather than sign myself, as it gives me a much simpler view of what has been said to me. There are also the usual barriers of needing to be able to see the person, and of their own ability to sign. Unless you get a professional interpreter, it is sometimes very hard to follow someone who is using sign language, as mistakes are common. The other part of using sign language over other forms of communication is the heightened awareness of gesture and facial expression.

Body language is one of the truest forms of communication, I find, as people say so much with their bodies and don’t even realize it. This presents a whole new barrier, as it is very confusing to see someone say one thing and then say a different thing with their body. People often lie with what they say but find it very hard to lie with their bodies, and facial expressions give it away. I use this a lot, but it leaves me feeling confused and isolated, as I don’t know whether what I have heard and what the person’s body is saying are the same thing. 90% of my work is with people who have little or no verbal speech and little or no hearing. Not having the ability to understand grammar and English, they use gesture and facial expressions to communicate their needs and wants. My best conversations have been with these people who use this form of communication, which demonstrates to me that the more complex you make communication, the less effective it becomes.”

Mark makes some fantastic comments, and we particularly want to reiterate his concluding point: “the more complex you make communication the less effective it becomes”. This is why, at Trabasack, we are committed to supporting and advocating accessible technology and design-for-all concepts. We thank Mark for allowing us to share his post.

Barriers to Communication – Part I

The Barriers to Communication – Part 1

with Thanks to Markinsutton

Mark at The Reading Festival

At Trabasack we are keen always to be at the forefront of developments and new technology in assistive communication and communication aids, and we also value the opinions of those who experience and overcome communication difficulties due to a range of differences and conditions.

Markinsutton is a great friend of ours and also a Trabasaxon, so when we read his latest blog post discussing the barriers to communication, we felt it was important to share it on this blog for a wider audience:

“Someone recently asked me: what are the barriers to communication? Well, I’m going to knock myself out and tell you just what barriers I face and how they affect me on a personal level. I may also touch on other barriers that don’t affect me, but which I can see causing a problem for those I work with.

Communication has always been my biggest barrier in life, and one that I have struggled to overcome since I was a little boy with no speech. Growing up, I found that speech is only one part of the barrier, and really hasn’t been the hardest to overcome. Most of what we say is total rubbish anyway, and I have communicated better with people around me without the use of speech. Before I move on to the methods I use for my communication, I thought it would be good to touch on the barriers with speech.

Speech and the spoken language!

The real problem with speech is the speed it works at; there is no delay, no time to correct what you have said. Once you have opened your mouth and started to speak a word, you have already expressed so much information. For someone like me who has a problem with speech, this becomes even more difficult. Take a simple word that most people would use to open a dialogue with another person: “hello”. The first barrier for me is remembering to swallow so I don’t end up choking on my own saliva when I open my mouth, or worse still, end up dribbling saliva down the front of me, which would portray a whole different message to the person I am saying hello to. The next barrier is just how I say the word. Tone, pitch and volume are all concepts that, with a hearing loss, are alien to me. Most people say that I have a London accent, but understanding accents and how people talk is another barrier I do not understand. When do you say “hello”, and why? Does the person really want to make conversation with me, and if so, why? Should I reply with a hello back, and what do I say next? Should I ask “how are you?” You get the picture. The spoken language has a whole set of rules that are very different depending on who you are with, where and when. I could write a whole book on the barriers I face with speech alone, but I am sure there are hundreds of books out there on this topic. All I know is that sometimes it’s really too much effort, and it is a hard and complex form of communication to manage within the split second you open your mouth.

Written language: there are many forms of written language, and I am aware that I have not touched on many other barriers of communication before talking about this subject. The reason I have jumped from speech to written language is that they are the two forms of communication people understand best, but they are far from the two I use most. For example, the form of communication I use most is silence. The number of times I have done nothing to get a message across is far greater than with any other form of communication I use, and it is much more effective than trying something else. The biggest barrier for me, and for most people, with the written language is spelling and grammar. Just how do we construct a sentence? What words do we use? How do we put them together? There is also the other great barrier with the written language: the ability and time to read it. If you have got this far in reading this passage on communication, you are doing well. I am one who likes to write, as I find it gives me time to express my ideas and thoughts on paper. I also feel it can be the most effective form of communication. You only have to ask William Shakespeare that!

Another barrier with written text is the speed of trying to turn it into the spoken word. In my experience this has never worked well. AAC devices are great for expressing your needs and wants, but that is pretty much all. Having a conversation via an AAC device is next to impossible; the speed at which one can input text and translate it into speech is far too slow. As chairman of a charity that supports children with AAC devices, I have noticed this is becoming easier with new technology, but it still has a long way to go before it can replace speech. Then there is the biggest barrier with the written form of communication: the ability to read it. There are so many people today who are unable to read; this is why I send most of my blogs and emails in audio format as well. The age of texting (SMS) has made this worse, in my view, as people have tried to cram too much into a text message and information gets lost along the way. Text messages have opened up a whole new world, allowing us to communicate on a much more equal level, but they present a whole new set of rules and barriers in their own way.


Drop by to read the next part of Mark’s story.

Trabasack is available from these Communication Aid companies (to add your company to the list, please email duncan{at}