The 20 most dangerous threats of artificial intelligence

Artificial intelligence is a fantastic tool when it comes to health, technology or astrophysics. But in the wrong hands, it can also be used for criminal purposes or misinformation. And the worst is not always where you think.

Hacking of self-driving cars or military drones, targeted phishing attacks, fabricated fake news or manipulation of financial markets…

“The expansion of the capabilities of AI-based technologies is accompanied by an increase in their potential for criminal exploitation,” warns Lewis Griffin, a computer scientist at University College London (UCL). With his colleagues, he compiled a list of 20 illegal activities enabled by AI and ranked them by potential harm or profit, ease of implementation, and difficulty of detection and prevention.

The scariest crimes, such as “burglar robots” breaking into your apartment, are not necessarily the most dangerous, since they can easily be thwarted and affect few people at a time. Conversely, false information generated by “bots” can ruin the reputation of a public figure or be used for blackmail. Difficult to combat, these “deepfakes” can cause considerable economic and social harm.

Artificial intelligence: serious threats

  • Fake videos: impersonating a person by making them say or do things they have never said or done, with the aim of requesting access to secure data, manipulating public opinion or harming someone’s reputation… These faked videos are almost undetectable.
  • Autonomous car hacking: seizing the controls of an autonomous vehicle and using it as a weapon (e.g., perpetrating a terrorist attack or causing an accident).
  • Tailor-made phishing: generating personalized and automated messages to increase the effectiveness of phishing aimed at collecting secure information or installing malware.
  • Hacking AI-controlled systems: disrupting infrastructure by causing, for example, a widespread blackout, traffic congestion or a breakdown of food logistics.
  • Large-scale blackmail: collecting personal data in order to send automated threat messages. AI could also be used to generate false evidence.
  • Fake AI-written news: writing propaganda articles that appear to come from a trusted source. AI could also be used to generate many versions of a given piece of content to increase its visibility and credibility.


Artificial intelligence: medium-severity threats

  • Military robots: taking control of robots or weapons for criminal purposes. A potentially very dangerous threat, but difficult to implement, since military equipment is generally well protected.
  • Scams: selling fraudulent services using AI. There are many notorious historical examples of crooks successfully selling expensive fake technology to large organizations, including national governments and the military.
  • Data corruption: deliberately modifying or introducing false data to induce specific biases, for example making a detector fail to flag weapons or steering an algorithm to invest in a particular market.
  • Learning-based cyberattacks: carrying out attacks that are both targeted and massive, for example using AI to probe systems for weaknesses before launching multiple simultaneous attacks.
  • Autonomous attack drones: hijacking autonomous drones or using them to attack a target. These drones could be particularly threatening if they act en masse in self-organized swarms.
  • Denial of access: damaging or depriving users of access to a financial service, employment, a public service or a social activity. Not profitable in itself, this technique can be used for blackmail.
  • Facial recognition: defeating facial recognition systems, for example by making false identity photos (access to a smartphone, surveillance cameras, passenger screening, etc.).
  • Manipulation of financial markets: corrupting trading algorithms in order to harm competitors, artificially lower or raise prices, or cause a financial crash…


Artificial intelligence: low intensity threats

  • Exploitation of bias: exploiting the known biases of algorithms, for example YouTube’s recommendations to steer viewers, or Google’s rankings to raise the profile of products or denigrate competitors.
  • Burglar robots: using small autonomous robots that slip through mailboxes or windows to retrieve keys or open doors. The potential damage is low, because it is very localized and small in scale.
  • AI detection blocking: foiling AI-based sorting and collection of data in order to erase evidence or conceal criminal material (e.g. pornography).
  • Fake reviews written by AI: generating fake reviews on sites such as Amazon or Tripadvisor to harm or promote a product.
  • AI-assisted tracking: using learning systems to track an individual’s location and activity.
  • Counterfeiting: making fake content, such as paintings or music, that can be sold under false authorship. The potential for harm remains fairly low, since well-known paintings and pieces of music are few in number.
