Nemtsov Murder, Analysis Notes

Some types of open source intelligence require more diligence than thought. A few years back, one analyst diligently counted the weapons systems appearing in photos of the Syrian conflict, revealing a continuing flow of Russian arms, denied, of course, by the Russians.

The fuzziness of the initial Yemen picture places the problem at the opposite pole. If you are unfortunate enough to witness a brawl outside your window, with the fighters ranging all over the street, the first question of your roving eye is, “Who is fighting whom?” This question comes before all others of conflict, as the most basic “What’s happening?”, which must be answered to know what further to look at. More than most problems, the first look requires resort to the primitive wiring, the gestalts, that inhabit the minds of most of us.

The analysis of the Nemtsov murder presents another difficulty. More than most, it appears to include elements of “judgment”, which has come under close scrutiny from Philip Tetlock. According to the Wikipedia article, which he probably would have corrected if it were in error, “Tetlock’s conclusion is that even crude extrapolations are more reliable than human predictions in every domain his study observed, confirming the claims of other psychological research.”

The answer to whether Putin murdered Nemtsov lies in the future. So it appears that, in deciding whether Putin ordered the murder of Nemtsov, we should dehumanize the analysis as much as possible. This is fine with me. It provides a reason to proceed with objectification of the procedure as much as possible. And if the result appears crudely unsubtle, so what? At least it’s better than human judgment.

Solving real-world problems seems to involve mainly flat logic. Deductive reasoning, the complicated chaining of the classic murder mystery novel, finds little application. I can’t think of any, although the machine on which I am typing this is an agglomeration of the most complicated logic chaining imaginable. So why isn’t it a big part of human existence? Because humans aren’t very good at it. Most problems are created by other humans. It is useless to analyze a problem they created with a system of reasoning they do not themselves use. Note the use of a Sheffer stroke in the last sentence. Did you enjoy it?

The mental defect of the conspiracy nut could result from the use of classical logic, which has no mechanism of weights. Things are either true or false. The digital domain, which includes both classical logic and computers, is unforgiving of errors. A single error, caused either by something mistaken for a fact or by a cosmic ray flipping a RAM cell, propagates. The result is typically a blue screen, a trip to jail, a mental institution, or skid row.

So it is plausible that problems involving “complicated judgment” should be approached by logically simple methods. The secret sauce is in how the elements are weighted. Benjamin Franklin, a folk-avatar of good judgment, advocated adding the pros and cons, with some estimation of the relative importance of each. But why would such a simple scheme work so well?
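Franklin’s tally can be sketched in a few lines of code. A minimal sketch; the items and weights below are invented for illustration, not drawn from any real analysis:

```python
# A minimal sketch of Franklin's tally: list the pros and cons,
# assign each a rough weight of relative importance, and compare sums.
# The items and weights here are invented purely for illustration.

pros = {"strong motive": 3, "history of similar acts": 2}
cons = {"no direct evidence": 3, "plausible rival suspects": 2}

def franklin_tally(pros, cons):
    """Return the signed balance: positive favors the pros."""
    return sum(pros.values()) - sum(cons.values())

balance = franklin_tally(pros, cons)
print(balance)  # 0 here: the invented weights happen to cancel
```

The scheme’s entire sophistication lives in the weights, which is the point of the paragraphs that follow.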

The folkloric “trust your first instinct” has been shown to be a fallacy. But there is something close by in the instinctual space: the way the problem is partitioned into facts. Each of the pros and cons of Franklin’s tally is taken as independent of the others. This is what gives value to the sum. It is an important mathematical concept. N linear equations in N unknowns can be solved if each equation is independent of the others. If an equation is not independent, it is a restatement of what is already known, analogous to the fallacious overweighting of a particular fact.
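The point about independence can be made concrete with a small linear system. A minimal sketch, using an invented 3x3 example: when the equations are independent the determinant is nonzero and a unique solution exists; replace one equation with a restatement of the others and the determinant vanishes.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Three independent equations in three unknowns: nonzero determinant,
# so the system pins down a unique answer.
A = [[1, 2, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(det3(A))  # 3

# Replace the third equation with the sum of the first two -- a mere
# restatement of what is already known. The determinant vanishes and
# the system no longer determines a unique solution.
B = [A[0], A[1], [A[0][j] + A[1][j] for j in range(3)]]
print(det3(B))  # 0
```

The dependent row adds nothing, just as a fact counted twice adds nothing but distortion to Franklin’s sum.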

This suggests that the brain may have a natural facility to partition what is known about a problem into facts tending towards equal weight. But if you’ve listened to enough rants, you know that a basic technique of argument, of thought, and of just plain venting is to say and think the same thought over and over, with minor variations. So the proposed natural facility of partitioning is easily damaged by personality and emotional needs. One cause may be cognitive dissonance, the stress caused by the desire to protect investment in a belief system.

Until the posthumous publication of the famous theorem of Thomas Bayes, nothing could do better than such a sum. With advances continuing to the present, a machine can be programmed to make better decisions than the same machine performing a simple sum. The key concept, which attempts to meld probability theory with Franklin’s tally, is to sum all the ways an event could happen, including all the possible chains of cause-and-effect, with each element of the sum weighted by the chance it is correct. This kind of logic is now found at all levels, from the basic path-integral formulation of quantum mechanics all the way to the models that predict power grid stability and financial market behavior. But you, as a human, must be cautious. The mathematical models we create, empowered by Bayes, are riddled with errors of their own, painfully confronted every time the power grid goes dark or a financial market melts down.
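At its simplest, this summing of weighted paths is the law of total probability: P(event) is the sum, over every chain of cause-and-effect, of the chance that chain is the true one times the chance of the event given that chain. A minimal sketch; the paths and numbers are invented for illustration:

```python
# The law of total probability: sum all the ways an event could happen,
# each path weighted by the chance it is the correct one.
#   P(event) = sum over paths of P(path) * P(event | path)
# The hypothetical paths and probabilities below are invented.

paths = [
    # (probability this chain of cause-and-effect is the true one,
    #  probability of the event given that chain)
    (0.5, 0.9),
    (0.3, 0.2),
    (0.2, 0.05),
]

p_event = sum(p_path * p_given for p_path, p_given in paths)
print(round(p_event, 3))  # 0.52
```

Note that the path probabilities must cover all possibilities and sum to one; get those weights wrong and the elegant machinery confidently produces the wrong answer, which is the caution above.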

Some publications describe a method taught to analysts in the U.S. intelligence community that employs the summing of probabilistic paths to a future event. Perhaps, with some event spaces, such sophistication produces superior results.

But you could do worse than follow old Ben Franklin’s method. If you do better, let me know.