An unsuspecting grandmother received a shock after Apple’s AI left her an X-rated message. 

Louise Littlejohn, 66, from Dunfermline in Scotland, had received an innocuous voice message from a car dealership in Motherwell on Wednesday. 

But Apple’s AI-powered Visual Voicemail tool – which gives users text transcriptions of voice messages – completely botched the transcription.

The jumbled text left on her iPhone asked if she had ‘been able to have sex’ and called her a ‘piece of s***’. 

Confusingly, it said: ‘Just be told to see if you have received an invite on your car if you’ve been able to have sex.’ 

It continued: ‘Keep trouble with yourself that’d be interesting you piece of s*** give me a call.’  

Mrs Littlejohn described Apple’s dodgy AI transcription as ‘obviously quite inappropriate’ but thankfully saw the funny side. 

‘Initially I was shocked – astonished – but then I thought that is so funny,’ she told BBC News. 


Apple’s AI-powered Visual Voicemail service gives users text transcriptions of voice messages – but an unsuspecting grandmother received a shock when a botched transcription was filled with obscenities (file photo)

‘The garage is trying to sell cars, and instead of that they are leaving insulting messages without even being aware of it. It is not their fault at all.’ 

The actual voicemail, from Lookers Land Rover garage in Motherwell, had invited the grandmother to an upcoming event.

The Lookers Land Rover staff member had said the event ran ‘between the sixth and tenth’ of March, but the AI apparently heard part of this as ‘been able to have sex’.

Even more bizarrely, it appears the technology heard ‘feel free to’ as ‘you piece of s***’ while largely rendering the rest of the message unintelligible.  

According to an expert, the AI tool may have struggled with the worker’s Scottish accent or background noise at the garage. 

‘All of those factors contribute to the system doing badly,’ said Peter Bell, a professor of speech technology at the University of Edinburgh. 

‘The bigger question is why it outputs that kind of content. 

‘If you are producing a speech-to-text system that is being used by the public, you would think you would have safeguards for that kind of thing.’ 

Apple says: 'Transcription is limited to voicemails in English received on your iPhone with iOS 10 or later'

Apple’s AI-powered Visual Voicemail service gives users text transcriptions of voicemails – but Apple admits that an accurate transcription ‘depends on the quality of the recording’

The original voice message 

‘Hi Mrs Littlejohn, it is ____ here from Lookers Land Rover in Lanarkshire. I hope you are well. 

Just a wee call to see if you have received your invite to our new car INAUDIBLE event that we do have on between the sixth and tenth of March. 

Just a wee call to see if it is something you were looking to come along to, and to see if we can confirm an appointment slot that would be suitable for yourself. 

If it is something you would be interested in, feel free to give me a call on ____, ask for myself ____ INAUDIBLE. Thank you.’

Apple says on its website that Visual Voicemail is limited to voicemails in English received on an iPhone running iOS 10 or later. 

However, it admits that even in English an accurate transcription ‘depends on the quality of the recording’. 

Unfortunately, it seems AI and speech recognition systems have more difficulty understanding some accents than others.

According to a report by TechTarget, these systems are not trained on a sufficiently varied range of audio data, which can be especially frustrating in day-to-day interactions such as automated customer service calls.

Apple and Lookers Land Rover garage declined to comment on the controversy, but it’s not the first time the trillion-dollar tech giant has been involved in an AI cock-up.

Last month, iPhone users discovered a voice-to-text glitch that transcribed the word ‘Trump’ when they said the word ‘racist’. 

The tech giant was forced to respond to the scandal, with a spokesperson admitting it was ‘aware of an issue’, and rushed to roll out a fix.

And earlier this year, Apple pulled a new iPhone feature after just three months as users slammed it for spreading misinformation. 

The BBC filed a complaint with Apple after the tech giant’s AI generated a false headline stating Luigi Mangione had shot himself

Apple removed its AI notification summaries for news and entertainment apps after the system falsely summarised a news article.

The summary of the BBC article suggested that Luigi Mangione, 26, the alleged assassin of the CEO of UnitedHealthcare, had shot himself. 

AI technology fabricating information – a problem often described as ‘hallucination’ within the industry – has become a frequent occurrence.

Last year, Google’s AI Overviews tool gave out botched – and highly dangerous – advice, including using ‘gasoline to make a spicy spaghetti dish’, eating rocks and putting glue on your pizza. 

AI tools like ChatGPT and Google’s Gemini are ‘irrational’ and prone to making simple mistakes, study finds 

You might expect AI to be the epitome of cold, logical reasoning. 

But experts suggest that artificial intelligence tools are even more illogical than humans.

Researchers from University College London put seven of the top AIs through a series of classic tests designed to assess human reasoning.

Even the best-performing AIs were found to be irrational and prone to simple mistakes, with most models getting the answer wrong more than half the time. 

Some of the tools even refused to answer logic questions on ‘ethical grounds’, despite the questions being entirely innocent. 
