Can Pit Bulls Be Trusted?
The dog in the photo above is Eddie. I know that because he’s mine. He can be trusted. I know that, too, because he’s mine. In fact, he can be trusted so completely that the cat in the photo trusts him. The cat’s name is Sammy. I know that because he’s mine, as well.
I know Eddie can be trusted because I know how he was trained — with repetition, with positive reinforcement, with rewards, with love, and with foresight. Sammy knows Eddie can be trusted because Eddie lets him be the boss. That’s all Sammy really cares about.
I know Eddie’s not a Pit Bull. But I don’t think there’s anything special about Pit Bulls. I’ve known 40-pound Pit Bulls that attacked and mauled entire families. I’ve known 75-pound Pit Bulls that are more sweetly docile and unconditionally affectionate than Eddie and Sammy.
Is it possible the brains of Pit Bulls contain synapses that fire randomly, inexplicably, and chaotically, causing the dogs to go off without cause or provocation? Sure it is. Anything’s possible. Is it likely? Not so much. It’s more likely that vicious, aggressive Pit Bulls — or any errant pets for that matter — are owned by people who didn’t train them properly.
Who’s a Good Boy?
I’m on about all this because I read a LinkedIn post called “Can Artificial Intelligence Be Trusted?” I took the answer to be self-evident. But the author took 850 words to say, “It depends.” Here are 68 of them:
Any AI tool can be hijacked by not-so-well-meaning humans (including political extremists, Russians, or angry ex-spouses) and it can be “taught” to behave like a racist, sexist, bully, or any other dysfunctional personality type. A more serious problem … has to do with the degree to which our AI-driven chatbots and expert systems might simply “learn” the biases, misconceptions, and mistakes that characterize human beings’ very non-artificial intelligence.
Is it possible that AI tools contain electrons that fire randomly, inexplicably, and chaotically, causing the tools to go off without cause or provocation? Sure it is. Anything’s possible. Is it likely? Not so much. It’s more likely that racist, sexist, bullying, or otherwise dysfunctional AI tools were programmed by people who probably have no business training Pit Bulls, either.
I’m No Savant
Is it possible the things I take to be common sense could actually be complex postulations of labyrinthine logic? Sure it is. Anything’s possible. Is it likely? Not so much.
If you don’t want your AI tools to come back to bite you, train them like you’d train your Pit Bull. That’s just common sense.
Even Eddie and Sammy know that.
Selfie © Eddie and Sammy. All rights reserved.