… unless carriers like AT&T open up to researchers and improve their networks. While the FCC is applying pressure in this regard, any decent solution is likely years away, and may be hamstrung by net neutrality-esque message discrimination concerns. Nonetheless, fake robocalls and texts are a crisis.
Borrowing from my last post:
Right now, any robocall hacker in the world can instantly take over your phone’s screen, knocking you out of your mobile gaming experience, disrupting you as you check out at the store, or breaking your concentration as you try to type out an email.
And to quickly recap recent disinformation campaigns in the US:
Fake news dominated the US during the 2016 presidential election. Facebook’s top 20 fake news stories in the final three months of the election generated more engagement than the top 20 stories from real outlets.
Such efforts are organized and widespread, ranging from individuals looking to profit to state-run operations staffed by hundreds of specialists whose sole job is to sow discord by spreading lies. One of the most prolific has been the Internet Research Agency. A primary goal has been to discourage voter turnout.
Carriers have not faced the pressure that social media companies have
The onus to tackle fake news has largely been placed on social media platforms, because that is where the evidence of it was most visible. The efficacy of these post-election efforts varies by platform; Twitter, for example, has done a poorer job than Facebook in taking action on fake posts.
But social networks like Facebook, Twitter, and YouTube are public forums with authenticated users. Contrast them with WhatsApp, old-school text messages, and phone calls, which we can call dark networks.
Dark networks are impossible to scrutinize at face-value, because information is shared privately. In India, where WhatsApp is massively popular, outsiders can only grasp the problem’s size by observing society-scale symptoms.
Robocalls and texts: spoofable and targetable
Calls and texts are the most dangerous communication medium as far as election interference is concerned because they allow spoofable and targetable communications. Any hacker in the world can (trivially) pretend to be calling or texting from someone else’s number.
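To see why spoofing is trivial, consider how caller ID actually works. In SIP, the signaling protocol behind most VoIP robocalling, the number displayed to the recipient comes from a `From` header that the sender simply writes into the message; classic SIP and SS7 do nothing to verify it. The sketch below builds a minimal, simplified SIP INVITE to illustrate this (the phone numbers and the `carrier.example` domain are made up for illustration):

```python
def build_sip_invite(claimed_number: str, target_number: str) -> str:
    """Build a minimal (simplified) SIP INVITE.

    The From header, which recipients' phones render as the caller ID,
    is entirely sender-chosen: the protocol does not authenticate it.
    """
    return "\r\n".join([
        f"INVITE sip:{target_number}@carrier.example SIP/2.0",
        # Whatever number the sender claims here is what the
        # recipient's phone displays as the caller.
        f"From: <sip:{claimed_number}@carrier.example>",
        f"To: <sip:{target_number}@carrier.example>",
        "Call-ID: example-call-1",
        "CSeq: 1 INVITE",
        "",
    ])

# A sender who controls the software controls the "caller ID":
msg = build_sip_invite("15551234567", "15559876543")
assert "From: <sip:15551234567@" in msg
```

This is exactly the gap the STIR/SHAKEN effort aims to close, by having carriers cryptographically attest to the originating number, but deployment depends on the carriers upgrading their networks.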
Why are calls and texts the most dangerous?
• leave no public trace that the communication ever happened (unlike Facebook or even WhatsApp). Carriers have this information but don’t share it.
• don’t rely on viral sharing to reach the end user
• have a 100% chance of annoying the recipient (great for pissing off potential voters)
• because they are spoofable, they trick the recipient into blaming a specific party
• the protocol, in its current carrier-level implementations, is incapable of permanently banning bad actors who reoffend
• by volume, they are among the most widespread form of fake communication
• Regarding texts: unlike robocalls, which Google has been addressing with its Call Screen feature, texts have no obvious way of gatekeeping/filtering robotexts, and clever AI alone can’t solve this issue.
• Regarding texts: unlike robocalls (which most people don’t answer), texts will almost always show up on a lock screen, delivering the message. Blocking specific numbers doesn’t help because the protocol is so broken that robotexters can send from any number.
A true story from personal experience: I donated $5 to a political candidate before the midterms. In the days leading up to the election, I then received dozens of SMS messages from the campaign, even after telling them to stop.
It’s probably the campaign’s fault for over-messaging me, but on the other hand, it may have been state-level actors armed with numbers from a hacked database. Who can say? Voter rolls and donor databases have repeatedly been targeted by nation-state hackers. In any case, I didn’t donate to the candidate again.
With voice-mimicking AI basically here, allowing convincing conversations in real time, we must, as a country and an industry, get ahead of this and demand that carriers upgrade their networks to disallow spoofable communications.