
The Shipwreck Rose: Artificial Stupidity

Thu, 10/14/2021 - 09:39

I’m old enough to remember when the voice of a computer — a computer with a voice — was the stuff of science fiction. Hal 9000 from “2001: A Space Odyssey” talked in tones that were sinister in their detachment: It was obvious from his breathy, overly calm, overly reassuring voice (he sounded like a yoga instructor) that Hal had ill intentions.

It’s 2021 and the voices of artificial intelligence that call our landlines attempting grand larceny never sound as human as Hal did. The archvillain phoning and bothering me this month is “Olivia from the cancellation department,” who talks in a metallic voice so robotically disturbing it makes the hair on your arms stand up. But other than that, I find that what computers actually have to say — the content of their messages — is usually more silly than sinister. We haven’t been overtaken by machines quite yet. The poems, stories, and songs penned by A.I. stink, and that is reassuring.

Here, for example, is a non-gem of a limerick written by the inhuman brain of a system called LimGen. LimGen can rhyme — sort of? — but she doesn’t have an Irish sense of humor, does she?


There was a loud waitress named Jacque,

Who poured all her coffee in a shake.

But the moment she stirred,

She was struck by a bird,

Then she saw it fly towards the lake.


The first country song composed by artificial intelligence was “You Can’t Take My Door.” Botnik Studios, a “human-machine entertainment” company, programmed a recurrent neural network to listen to endless hours of Nashville country-pop songs like “Knockin’ Boots” and “Cry Pretty,” analyzing and imitating them and producing the immortal chorus:

No you can’t take my door

I don’t wanna love you anymore

Won’t let my heart be my face

Barbed whiskey good and whiskey straight

No you can’t take my door


Artificial intelligence is still in the gawky throes of adolescence, trying to become itself — not yet fully individuated, not talking much sense, its voice cracking and squeaking — and while it does that, it is simultaneously creating portraits of each of us, of its human operators and consumers, that parallel our own real selves but that are, again, frequently so off base as to be farcical. I am talking about the portrait of us created by algorithms that track our computer searches and social media habits, recording everything we do. I’m sure the A.I. folks have a name for these shadow identities, these hologram selves — is there a name yet for the shadow self? I just mean, we each have a profile that exists out there in the cloud, a not-really-us that is the person Netflix and Amazon are addressing when they suggest a book or recommend a movie, that Facebook is talking to when it hand-picks an advertisement for our feed.

I’m old enough to remember when you could go whole hours, conceivably days, without being subjected to an advertisement. All you had to do was turn off the television and put down the newspaper. Today we swim in a nonstop stream of forever advertising that is delivered to us (or, to my point, delivered to our shadow identity) via email, social media, and streaming platforms. If you keep a “smart device” on your bedside table — like an Amazon Echo, which offers bedtime stories and weather reports — you will be receiving targeted ads even as you drift to sleep.

We mostly ignore the sales pitches that flow into our inboxes. These marketing emails are written by humans, of course (by junior public-relations staff raised on Slender Man and “Hannah Montana”), but they are targeted to our shadow selves. We’re getting them because we accidentally clicked and joined something we didn’t mean to click and join, or just because Google is trying to guess what we’ll buy next. Here are some of the inadvertently comical subject headers from marketing emails I’ve received recently:

“How often should you really shower?”

“I just found out canned pumpkin isn’t pumpkin at all.”

“Is your child a Chucky?”

“How to keep bagged salad fresh for as long as possible.” (Ew.)

“8 Corn Mazes Near Orlando for Fun with the Kids.” (I can’t think of anything less fun. They grow corn in central Florida?)

“Save $40 on pure vanilla extract!” (How much are we spending on vanilla extract that we need to save $40 on it?)

“Princess Charlene visits hospital for ‘final procedure’ before she can return to Monaco.” (There’s a princess named Charlene?)

For whatever reason, probably a wayward campaign donation during the recent crisis in which we temporarily saved American democracy, I constantly get emails from candidates in states where I cannot vote and never have voted. Research must have indicated that grim non sequiturs are an effective way to get the attention of left-leaning political donors, because these emails unfailingly have doomy subject headings designed to provoke anxiety. Here are some 100-percent-genuine headers from actual emails I received this month from an Eeyore of a candidate running for office in Virginia:

“Sorry, I have some bad news.”

“Are we blowing this?”

“I wish I had better news.”

“I can’t stop worrying about this.”

“Sorry to be emailing so late.”

“Sorry to bother you so early in the morning.”

“Sorry.”

“Is anyone even reading this?” (No.)

And, more than once: “This is my last email.” (It wasn’t.)

I’ve been a Facebook user since my days in rural Canada, when my kids were small. I’m not proud to admit it, but it was a lifeline connecting me to old friends in far places, back when I lived out in the cold Atlantic among the rocks and pines, and — like all of us who remain caught in the Facebook trap — I have had more than a few good chuckles over the weird things that Facebook A.I. thinks I’d want to buy or do.

Around 2014, Facebook thought I might need a selection of the gold-embellished red-felt fezzes sported by the International Brotherhood of Shriners. Not long after, in 2015, Facebook ads tried to sell me the satin aprons and white gloves worn as clandestine ceremonial regalia by the Freemasons. I swear I’m not making this up. In 2016, Facebook suggested I “like” a page for “Luciferian Voudo Witchcraft.” (I beg your pardon, Facebook? Who, sir, do you think I am?) Circa 2017, Facebook kept suggesting I join online dating sites for “Christian seniors.” I’m not sure what I had been clicking on to give the algorithms the idea I was a Christian Freemason and practicing Satanist enjoying my twilight years, but I was going through a divorce at the time.

More recent targeted ads make it clear that A.I. still doesn’t understand me. Only last week, the internet thought I might want to buy a brown slouchy sweatshirt with the words “Coordinator of This Entire Sh!tshow” printed on the front in a handwriting font popularized at T.J. Maxx in 2005. It’s true that I am the coordinator of this entire sh!tshow, but I don’t wear brown, and hell would freeze over before I left the house wearing any item of clothing emblazoned with either a humorous slogan or an inspirational quote.

Still, you can observe the algorithms steadily improving. One recent Facebook ad had me truly pegged. It was downright creepy: Yes, Mark Zuckerberg, yes, I would indeed like to buy tea towels from a British outfit called the Radical Tea Towel Company featuring the face of Tom Paine or Frederick Douglass. Yes, please.

We may be in trouble.

