Astroscreen raises $1M to detect social media manipulation with machine learning

In an era of social media manipulation and disinformation, we could sure use some help from innovative entrepreneurs. Social networks are now critical to how the public consumes and shares the news. But they were never built for informed debate; they were built to reward virality. That leaves them open to manipulation for commercial and political gain.

Fake social media accounts – bots (automated) and ‘sock puppets’ (human-run) – can be used in a highly organized way to spread and amplify minor controversies or fabricated, misleading content, eventually influencing genuine influencers and even news organizations. Brands are especially exposed to this threat: with up to 60% of a company’s market value tied up in its brand, disinformation campaigns designed to discredit one can cause costly and lasting damage.

Astroscreen is a startup that uses machine learning and human disinformation analysts to detect social media manipulation. It has now secured $1M in initial funding to develop its technology, and its pedigree suggests it at least has a shot at pulling it off.

Its techniques include coordinated activity detection, linguistic fingerprinting, and fake account and botnet detection.
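To make the first of those concrete, here is a minimal, hypothetical sketch of coordinated activity detection: it flags any piece of text that many distinct accounts push out within a short time window. The function name, thresholds and data format are illustrative assumptions, not a description of Astroscreen’s actual system.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def coordinated_groups(posts, window_minutes=10, min_accounts=5):
    """Group posts by normalised text, then flag any text that many
    distinct accounts pushed out within a short time window."""
    by_text = defaultdict(list)
    for account, text, ts in posts:          # ts is a datetime
        by_text[text.strip().lower()].append((account, ts))

    flagged = []
    for text, items in by_text.items():
        items.sort(key=lambda item: item[1])
        accounts = {account for account, _ in items}
        span = items[-1][1] - items[0][1]
        if len(accounts) >= min_accounts and span <= timedelta(minutes=window_minutes):
            flagged.append((text, sorted(accounts)))
    return flagged

# Example: five accounts posting the same slogan within 80 seconds get flagged.
t0 = datetime(2019, 5, 1, 12, 0)
posts = [(f"user{i}", "Brand X is a scam!", t0 + timedelta(seconds=20 * i))
         for i in range(5)]
print(coordinated_groups(posts))
```

Real systems would of course look at far more than exact text matches (retweet timing, shared infrastructure, account creation patterns), but the basic idea of grouping by coordinated behaviour is the same.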

The funding round was led by Speedinvest, Luminous Ventures, the UCL Technology Fund (which is managed by AlbionVC in collaboration with UCLB), AISeed, and the London Co-investment Fund.

Astroscreen CEO Ali Tehrani previously founded a machine-learning news analytics company which he sold in 2015 before fake news gained widespread attention. He said: “While I was building my previous start-up I saw at first-hand how biased, polarising news articles were shared and artificially amplified by vast numbers of fake accounts. This gave the stories high levels of exposure and authenticity they wouldn’t have had on their own.”

Astroscreen’s CTO Juan Echeverria, whose Ph.D. at UCL was on fake account detection on social networks, made headlines in January 2017 with the discovery of a massive botnet managing some 350,000 separate accounts on Twitter.

Tehrani also thinks the social networks themselves are effectively holed below the waterline on this issue: “Social media platforms themselves cannot solve this problem because they’re looking for scalable solutions to maintain their software margins. If they devoted sufficient resources, their profits would look more like a newspaper publisher than a tech company. So, they’re focused on detecting collective anomalies – accounts and behavior that deviate from the norm for their userbase as a whole. But this is only good at detecting spam accounts and highly automated behavior, not the sophisticated techniques of disinformation campaigns.”

Astroscreen takes a different approach, combining machine learning and human intelligence to detect contextual (rather than collective) anomalies – behavior that deviates from the norm for a specific topic. It monitors social networks for signs of disinformation attacks and alerts brands at the earliest stages of an attack, giving them time to mitigate the negative effects.
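As a rough illustration of that distinction, the toy sketch below uses made-up numbers and a simple z-score (purely as an assumption about how one might frame it) to show how an account’s posting rate can look unremarkable against the platform-wide norm yet stand out sharply against the norm for a single topic.

```python
import statistics

def z_score(value, sample):
    """How many standard deviations `value` lies above the sample mean."""
    mean = statistics.mean(sample)
    stdev = statistics.pstdev(sample) or 1.0   # avoid division by zero
    return (value - mean) / stdev

# Made-up posts-per-day rates for a sample of accounts across the whole platform...
platform_rates = [2, 3, 5, 8, 12, 20, 35, 60, 90, 150]
# ...and for accounts discussing one specific topic (say, a single brand).
topic_rates = [1, 1, 2, 2, 3, 3, 4, 4, 5, 6]

suspect_rate = 40  # posts per day from one account on that topic

print("collective z-score:", round(z_score(suspect_rate, platform_rates), 2))  # ~0.0
print("contextual z-score:", round(z_score(suspect_rate, topic_rates), 2))     # ~23
# Against the whole user base the account looks ordinary; against the norm
# for this topic it is a glaring outlier.
```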

Lomax Ward, partner at Luminous Ventures, said: “The abuse of social media is a significant societal issue and Astroscreen’s defence mechanisms are a key part of the solution.”
