We continue from Twitter and Elon Musk Part 2.
(CNN) Musk suspends journalists from Twitter. Quoting,
Elon Musk suspended several journalists who cover him from Twitter, alleging they violated his new policy against disclosing his jet’s location.
Doxing is hate speech: it wishes ill on a group or an individual, and it calls for the same sanction as any other hate speech. We could hope that, personally touched, Musk will come to realize this. The principal obstacle is that Musk, like many who are supremely talented in one concentrated way, may think the world is populated by minor versions of himself. If Musk does realize it, what are his options?
Part 2 discusses bot detection, which stands in front of the far more difficult problem of post moderation. Let’s finish and carry the wisdom forward. The Turing Test is pass-fail. If your silicon interrogator is well trained, it may work pretty well. False negatives, “not a human”, are inevitable; an idiot may be mistaken for a machine. False positives are just as inevitable: a bot that pushes the state of the art may appear witty, urbane, and sophisticated, backing out of a carefully laid trap with a conversational gambit of its own.
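To make the pass-fail character concrete, here is a minimal sketch in Python. It assumes a hypothetical upstream model that produces a single “humanness” score; the names and the 0.5 threshold are invented for illustration.

```python
# A minimal sketch of a pass/fail bot screen. The humanness score and the
# 0.5 threshold are assumptions standing in for a trained interrogator model.
from dataclasses import dataclass

@dataclass
class Verdict:
    is_human: bool   # the Turing Test's binary outcome: pass or fail
    score: float     # the continuous evidence the verdict collapses

def screen(humanness_score: float, threshold: float = 0.5) -> Verdict:
    """Collapse a continuous score into a pass/fail verdict."""
    return Verdict(is_human=humanness_score >= threshold, score=humanness_score)

print(screen(0.31))  # is_human=False: a terse, clumsy human mistaken for a machine (false negative)
print(screen(0.88))  # is_human=True: possibly a state-of-the-art bot slipping through (false positive)
```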
It gets harder. A bot can have a human component. This is likely to be the case when a bot solves a pictorial CAPTCHA, a “Completely Automated Public Turing test to tell Computers and Humans Apart”. You know these as the 3×3 board with “Click on each square that contains a traffic light.” This is very hard for a computer to solve, and trivial for a human. Poor people in Third World countries solve these as piecework, for a few mils per CAPTCHA.
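For concreteness, grading such a grid might look like the toy sketch below; the cells marked correct are a made-up answer key, where a real service would derive them from labeled imagery.

```python
# A toy grader for the 3x3 grid CAPTCHA. CORRECT_CELLS is a hypothetical
# answer key; real services build it from labeled images.
CORRECT_CELLS = {(0, 1), (1, 1), (2, 2)}   # invented traffic-light squares, as (row, col)

def grade(selected_cells: set) -> bool:
    """Pass only if the user clicked exactly the traffic-light squares."""
    return selected_cells == CORRECT_CELLS

print(grade({(0, 1), (1, 1), (2, 2)}))  # True  -- trivial for a human
print(grade({(0, 0), (2, 2)}))          # False -- still hard for an unaided computer
```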
This workaround works for bots because the pictorial CAPTCHA is independent of culture; any human, anywhere, can solve it. The conversational test is not. Now our problem becomes “Is the subject a Francophone?” Our silicon interrogator must be recast in multiples, one for each culture. There are many. In the Battle of the Bulge, not knowing baseball trivia could get you shot.
In an initially active mode, the interrogator has these advantages (a minimal sketch follows the list):
- It initiates.
- It can choose the subject.
- It can guide the conversation.
- It can make demands.
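Here is one way those levers might look in code. The ask() callback, the subject handle, the topics, and the timing and length heuristics are all assumptions made for illustration, not a real interrogation protocol.

```python
# A sketch of an *active* interrogator. ask() is a hypothetical callback that
# relays a question to the subject and returns (reply, seconds to reply).
import random
from typing import Callable, Tuple

AskFn = Callable[[str], Tuple[str, float]]

def interrogate(ask: AskFn, subject_id: str) -> bool:
    """Return True if the subject looks human. The interrogator holds every lever:
    it picked this subject, it initiates, it guides, and it makes demands."""
    topic = random.choice(["baseball", "a childhood smell", "last week's weather"])
    reply, delay = ask(f"Tell me about {topic}, in your own words.")        # it initiates, on its own topic
    reply2, delay2 = ask("Now contradict what you just said, on purpose.")  # it guides, and it demands
    too_fast = delay < 0.5 or delay2 < 0.5              # instant essays are suspicious
    too_polished = len(reply) > 800 and len(reply2) > 800
    return not (too_fast or too_polished)

# Usage: wire ask() to the chat channel; here, a canned reply for demonstration.
print(interrogate(lambda q: ("Well, it rained Tuesday, I think.", 4.2), subject_id="@some_account"))  # True
```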
Now we come to post moderation. The above list is void. The interrogator’s role is reduced to the passive, vigilant watchfulness of a monitor, which puts more stress on the heuristics of AI post moderation. Here, the user is in control. The test result is more than pass/fail; there are graded choices (a minimal sketch follows the list). In order of severity:
- Warning.
- Post annotated.
- Post deleted.
- User suspended.
- Call the cops!
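The ladder could be wired to a classifier roughly as below. The risk score, the flag for a credible and specific threat, and every threshold are assumptions; a real system would tune them with care.

```python
# The severity ladder as code. The risk score, the threat flag, and the
# thresholds here are invented for illustration.
from enum import Enum

class Action(Enum):
    NONE = 0
    WARNING = 1
    ANNOTATE = 2
    DELETE = 3
    SUSPEND = 4
    CALL_THE_COPS = 5

def escalate(risk: float, specific_threat: bool) -> Action:
    """Map a post's assessed risk to the least severe adequate action."""
    if specific_threat:
        return Action.CALL_THE_COPS
    if risk >= 0.9:
        return Action.SUSPEND
    if risk >= 0.7:
        return Action.DELETE
    if risk >= 0.5:
        return Action.ANNOTATE
    if risk >= 0.3:
        return Action.WARNING
    return Action.NONE

print(escalate(0.55, specific_threat=False))  # Action.ANNOTATE
```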
This requires that our AI moderator actually know things. Suppose a user writes,
I hate pumpkins. I'm going to smash every pumpkin I see. Watch out, pumpkins. I'm coming for you. Your time is over! Big Pumpkin is taking a broomstick from Newark at XX:XX.
It could be a joke. Or “pumpkin” could be a recently minted code word of a hate group or individual. How much does the monitor have to know to make the distinction? There are three answers, views of AI that have almost nothing to do with each other; a toy contrast of the first two follows the list:
- Symbolic. The moderator is a conventional computer, in which the world is represented by symbols manipulated by logic that we supply and can watch and understand. Rules, like a legal code, are required.
- Neural. The moderator is a silicon neural net trained on pumpkin-esque examples, knowing nothing of rules. We do not supply the logic, and the net cannot tell us how it decides. If it works, we live in happy ignorance.
- Miscellaneous pattern classifiers of lesser power.
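To make the contrast tangible, here is a toy pass over the pumpkin post. The rule list is invented, and the “neural” side is only a stub standing in for a trained net whose reasoning we could not write down here.

```python
# A toy contrast of the first two views, run on the pumpkin post. The rules
# and the stubbed neural score are illustrative assumptions, not real moderation.
POST = ("I hate pumpkins. I'm going to smash every pumpkin I see. "
        "Watch out, pumpkins. I'm coming for you. Your time is over! "
        "Big Pumpkin is taking a broomstick from Newark at XX:XX.")

def symbolic_moderator(text: str) -> bool:
    """Rules we supply and can read, like a legal code: flag a post that combines
    a violent verb, a direct address to the target, and a time or place."""
    t = text.lower()
    violent = any(w in t for w in ("smash", "destroy", "hunt"))
    addressed = "coming for you" in t or "watch out" in t
    scheduled = "newark" in t or "at xx:xx" in t
    return violent and addressed and scheduled

def neural_moderator(text: str) -> float:
    """Stand-in for a net trained on pumpkin-esque examples. It returns a score,
    not reasons; in reality its logic could not be written out here."""
    return 0.97  # hypothetical output of the trained model

print(symbolic_moderator(POST))  # True  -- and we can point to the exact rule
print(neural_moderator(POST))    # 0.97  -- confident, but silent about why
```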
Symbolic is precise and obtuse. Neural is imprecise, and what we think of as imaginative. Neither is fit to steward human beings. I would like to think this horror limns only the far future.
It’s not. It’s the sound of a freight train horn, when your car is stalled on a grade crossing.