According to “Newsweek” and MSNBC, Stanford University researchers have reported startling accuracy in predicting sexual orientation using artificial intelligence (AI) analysis of facial features in photographs.
(I wonder if this also works on PAINTINGS? Could the Mona Lisa’s enigmatic smile have involved thinking, “Only 500 more years until Ellen gets a talk show”?)
The researchers feel that some members of the LGBT community have been “shooting the messenger” in their reactions to the study. They say they were merely exploring the biological influences on human sexual orientation, while offering a cautionary tale about potential misuse of technology. Nonetheless, algorithms that could reveal whether someone (a suitor, an employee, a stranger) is gay or straight are creating a firestorm.
A broad cross-section of Americans is concerned that such technology could fall into “the wrong hands,” but who exactly constitutes “the right hands”? Homeland Security? Are they expecting an invasion of show tunes and lumberjack shirts? (“A stereotype just crashed into the Empire State Building!”)
Whatever became of good old-fashioned methods of determining sexual orientation? (“She didn’t like my hairy back or the 48 payments I owe on my ‘72 Ford Pinto. Gotta be a lesbian!”)
What a strange, contradictory world we’ve carved out for ourselves in the 21st century! We want antibiotic-free meat, pesticide-free vegetables and all-natural remedies — but we let artificial INTELLIGENCE pick who gets bullied, blackmailed and fired.
But don’t kid yourself into thinking this is just about sexuality. Computers can also make educated guesses about IQ, political leanings, weaknesses, your real opinion of your boss and more.
People who are not breaking the law or carrying a contagious disease are entitled to their own timetable for divulging (or not divulging) their secrets. Who came up with the notion that we must create a real-life version of Wonder Woman’s golden Lasso of Truth? What’s next? (“Cool! You just genetically engineered a giant seahorse like the ones Aquaman rides!”)
There’s no good scenario here. I would fear Orwellian technology that is 100 percent accurate in analyzing sexual orientation and other traits. But it will be almost as bad if people become desensitized to the disclaimers (“Nine percent chance the analysis is WRONG? Sure, sure, whatever”) and start making life-changing ASSUMPTIONS about other people.
The homily used to be “When you assume, you make an ass out of you and me.” Now it’s “When you assume, you make a jailer out of you and a prisoner out of me.”
Let’s picture a government agent faced with a moral quandary. (“The computer is 91 percent sure this guy is a potential terrorist who needs to be locked up for life; but, based on his eyebrows, there’s a 98 percent probability he would buy a boatload of my daughter’s Girl Scout cookies if I let him go…”)
We already face chilling privacy issues with algorithms in the hands of just a few agencies or corporations. Quality control will go right out the window when every Tom, Dick and Harry (yeah, right! You ain’t foolin’ nobody, “Harry”) starts mass-producing software for interpreting images.
It’ll be like those shoddy knockoff eclipse-viewing glasses. (“Sir, I really don’t think we have a case for termination. First, the snapshot is the only evidence. Second, the suspect has been with the company for 20 years. Third, the suspect the AI scanned is the company COFFEE POT!”)
Danny Tyree welcomes email responses at firstname.lastname@example.org and visits to his Facebook fan page “Tyree’s Tyrades.” Danny’s weekly column is distributed exclusively by Cagle Cartoons Inc. newspaper syndicate.