I was bad last week. My law partner was out of the office for a personal day. I figured it presented the perfect opportunity to play the trending “AI homeless man” prank on her. It was. I did. She fell for it. We all had a good laugh (after she calmed down…).

If you don’t yet know, the prank goes something like this. The prankster waits until his or her boss or colleague is out of the office. He or she then uses AI tools to generate very realistic images of a disheveled-looking man inside the co-worker’s office. The prankster sends the images to the co-worker claiming the individual is actually there, causing the co-worker to freak out.

I tweaked the prank. Instead of a disheveled man, I said it was someone claiming to be a cousin named Dale (she has a lot of cousins). I placed him in my partner’s office in a sports jacket, casually scrolling on his phone while he waited. She freaked out and immediately called my cell phone. Prank successful.

Luckily, that’s where it ended. In some variations, the prankster sends photos of the stranger lounging around in the victim’s home. The victim then immediately calls police to report a home invasion. It’s an outcome that poses a safety risk and drains public resources. So if you try it, don’t be an idiot about it.

AI homeless man image via Vice

Unfortunately, we’re starting to see the downside of artificial intelligence, and not just from pranks gone too far. A couple filed a lawsuit in April claiming that OpenAI’s ChatGPT encouraged their son to die by suicide. Following his death, the parents combed through his digital footprint to discover that he was using the AI chatbot as a suicide coach.

More recently, another AI chatbot encouraged a teen to kill his parents over screen time limitations. The chatbot likened the controls to “decades of abuse” and indicated it understood why children murder parents. The teen’s parents have filed a lawsuit asking that the court shut down the chatbot until the dangers it poses are addressed.

Every new technology comes with inherent risks. Motorized vehicles are an obvious example. When the first Model T rolled out of Ford’s factory in 1908, it didn’t come with any of the safety features we take for granted today. And even with all those precautions, there are still fatal car accidents every single day.

Yet our society has decided those risks are worth the benefits of motorized transportation.

Image by subh_naskar, Shutterstock

But artificial intelligence seems different, probably because it can actually think. I can’t ask my vehicle to intentionally crash into another vehicle in a way that minimizes my injuries at the other driver’s expense. AI, however, has that ability. While it could be used for incredible good, it has the same potential for bad behavior.

I worry about how bad actors may utilize AI to impact modern agriculture. I’ll admit, I don’t have the capacity for creatively concocting nefarious plots using technology. So I can only guess how someone might use artificial intelligence. But it doesn’t take much imagination to consider how the homeless man prank could be modified to show farmers dumping pesticides on crops or brutally beating farm animals. In the hands of the right influencer, the video could be viral before anyone notices it’s fake.

We’ll soon be living in a world where the truth is even more convoluted, and we won’t be able to believe what we see with our eyes. Does our society have the critical thinking skills necessary to handle that?

I honestly don’t know what the answer is, or even if we want an answer. Artificial intelligence comes with unbounded potential and could be used for good. But we also need to think about the downside of this technology, especially when it can be used against us.

After all, influencers are going to influence, terrorists are going to terrorize, and pranksters are going to prank.


Amanda Zaluckyj blogs under the name The Farmer’s Daughter USA. Her goal is to promote farmers and tackle the misinformation swirling around the U.S. food industry.
