AI... Why your family needs a Safe Word now!

By Rob Neely

I was privileged to be a speaker at the inaugural Scams Summit held in Sydney this week, and it brought forward some key learnings, the most important of which was that your family will need a secret safe word, and you need it now, this week.

Matt Barrie, an award-winning technology entrepreneur and Chief Executive of Freelancer Limited, warned a stunned room that AI (artificial intelligence) is no longer some sci-fi, Big Brother-type stuff.

Now, people can scam your loved ones with “your” voice in phone calls, tricking them into handing over thousands of dollars by making them believe that you’re in danger.

On top of that, and it’s already available, AI can reproduce you, yes, the real you, in a Zoom call.

It can make you smile, it can make you walk and talk, and the people at the other end will not know it’s not you. Forget about working from home in an office-appropriate top with pyjamas on under the desk; AI can now attend meetings for you while you are at the beach, and I kid you not.

But more on that later.

I am using this blog to warn everyone that if you do not recognise the number calling your phone, then do not answer it.

Let it go to voicemail; if the caller is genuine, they will leave a message.

The reason is that all the scammers need from you to create your voice is the word “Hello”.

Only months ago (March 2023), scammers needed to engage you in some conversation to capture enough audio to clone your voice; now they just need you to answer and say “hello”.

Scammers will use audio files of your voice and upload them into programs that can replicate how you sound and make “you” say anything they wish.

Imposter scams have been around for a long time; however, with generative AI it’s impossible to tell if it’s your son, your daughter, your mum or your dad. The AI on the end of the line can hold full conversations in real time (think Siri, but with your mum’s genuine voice), and AI can now add intonation to the voice, making it sound happy, like “Happy Friday!”, or sad, like “Sorry about your loss.”

On the sidelines of the summit, I saw a video of a woman explaining how her 82-year-old grandma received a call and could hear “her” in the background, “distraught, screaming, as if I had been kidnapped.”

She went on to say that, luckily, her grandmother called the police, who told her to ring her granddaughter back, which she did, and thankfully everything was fine.

In fact, my own mother (who is aware of different scams because of what I do) received a similar call and realised it was most likely a scam; however, it still prompted her to call the Australian Federal Police before she called me. At 85, my mother was more concerned that if they knew her number, perhaps they knew where she lived.

Meanwhile, the scammers are hoping that their victims will panic and not call anyone, and will instead give thousands away to ensure the safety of their family and friends.

Another speaker at the Summit, Detective Inspector Warren Lysaght of NSW Police, said artificial intelligence cybercrime is hard to stop.

Make no mistake, readers: you would be horrified if you understood how believable and accurate someone’s AI voice can be.

This is why having a safe word with those close to you matters: it lets you know whether a call is genuine.

A safe word is a word that only your immediate family knows. Don’t write it down, don’t leave it in the notes on your phone, and don’t email it or send it via WhatsApp or Messenger to your family members. Create it and pass it on face to face.

And at the same time, make sure that your family knows to ask the caller (scammer) on the other end of the line, “What is the family safe word?”

AI cannot know your safe word, so the scam will come undone.

Make sure your kids, whether they are youngsters or young adults, do not share the family safe word with their friends; it has to be a one-off word that your family will forever remember.

It’s not just me warning you about AI. Apple co-founder Steve Wozniak is among the tech creators who have recently warned about the dangers of AI and its ability to make scams and misinformation harder to detect. He said this month, “AI is so intelligent, it’s open to the bad players, the ones that want to trick you about who they are.”

So how do you tell a real person from a voice clone? The best response is to distrust the voice and verify the story. And the easiest way to do that is to contact the relative directly. If you don’t get a response, try to get in touch with them through another family member or friend.

Alongside this, a rise in “deepfake” videos is posing an immediate concern, and not just from scammers.

In simple terms, deepfakes are AI-generated videos and images that can alter or fabricate the reality of people, events, and objects.

This technology is a type of artificial intelligence that can create or manipulate images, videos, and audio that look and sound realistic but are not authentic.

Deepfake technology is becoming more sophisticated and accessible every day. It can be used for various purposes, such as in entertainment, education, research, or art.

However, it can also pose serious risks to individuals and society, such as spreading misinformation, violating privacy, damaging reputation, impersonating identity, and influencing public opinion.

As stated earlier in this piece, AI can now attend Zoom meetings on your behalf, and your co-workers would not be able to tell it’s not you.

I will cover this subject in another article very soon.