A deepfake of an explosion at the Pentagon that caused the stock market to dip exemplified the misinformation dangers of generative AI.
On Monday, a seemingly AI-generated image of what looked like an explosion outside the Pentagon circulated on Twitter. The Arlington Police Department quickly debunked the image, tweeting, “There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public.”
But not before the stock market dipped by 0.26 percent before bouncing back, according to Insider.
It's unclear how the image was created, but it has the telltale signs of an AI-generated image. The fencing in front of the building is blurred, and the columns appear to be different widths. Any social media sleuth accustomed to spotting photoshopped images of celebrities and influencers would have noticed this, but as generative AI continues to improve, deepfakes will be harder to spot.
Even with Arlington PD's quick response, Twitter's mess of a verification system compounded the issue. One of the accounts that tweeted the image was a verified account impersonating a Bloomberg news feed. That account, called @BloombergFeed, has since been suspended.
Other accounts that tweeted the image were @DeItaone and the account of the Russian state-owned media outlet RT. Now that anyone can pay to become verified on Twitter, situations like this are a perfect storm of misinformation.
A fake Twitter account shares a fake image that leads to real consequences. Welcome to 2023.