L. Neil Smith's
THE LIBERTARIAN ENTERPRISE
Number 639, October 2, 2011

"9/11 was a private sector crime."


Libertarian Law: The Math of Probable Cause
by DataPacRat
[email protected]

Attribute to L. Neil Smith's The Libertarian Enterprise

At about 1:45pm today (+/- 5 minutes), I had a thought which may very well be entirely novel, explanatory, and predictive. I've spent the last few hours poking at it without finding any obvious flaws, so now I'm seeing whether I can explain it.

0. Intro.

Combining a few disparate bits of math may give us some useful conclusions about the nature of legal systems, and why some systems are more effective and stable than others.

Don't worry if you don't like numbers; nothing here is more complicated than multiplication, and I'll work through that for you.

1. Laplace's Sunrise Formula.

There's a sort of math called "Bayesian Induction" which can help you figure out how strongly you should hold a belief given a collection of evidence of various strengths; and Bayesianism seems to be the closest we can come to Solomonoff Induction, the most accurate method possible if we had unlimited computing power. However, the Bayesian approach involves 'updating' one's beliefs based on new evidence; it doesn't say how confident one should be in one's beliefs before performing a Bayesian update. Fortunately, there's another bit of math that covers that, known as Laplace's Sunrise Formula, or the Rule of Succession.

It was originally created to answer the question, "Knowing only how many times the sun has risen, what are the odds of it rising tomorrow?", but it can also be applied to a coin with an unknown bias somewhere between 100% heads and 100% tails, or to a bag containing chips of one or more colours. It has two inputs: the total number of tests so far (the number of days, or coin flips, or chips drawn from the bag), and the number of successful tests (the number of sunrises, or flips that came up heads, or blue chips). It has one output: the probability that the next test will be successful. For large numbers of trials, the formula gives at least roughly the expected odds; if you flip a coin a million times and it comes up heads 500,000 times, the formula gives you approximately 50% odds that the next flip will also be heads. But the interesting part is that it also applies to low numbers of trials, to single tests, and even to none at all.

The formula is: P(next trial succeeds) = (successes + 1) / (total trials + 2)

If I have no prior evidence for how a coin is weighted, then before I make my first test, the formula gives me (0+1) / (0+2) = 1/2 = 50% odds for heads, which agrees with common sense: if you don't know, then all options are equally likely. After the first flip, if it's heads, the expected odds shift to (1+1) / (1+2) = 2/3 that the next will be heads; if it's tails, the odds shift to (0+1) / (1+2) = 1/3. This is a rather interesting case, because experiments have been done with creatures evolved under the pressure of natural selection to make the best possible choice in such tests, and they do, in fact, treat these as the actual odds, implying that these are the most accurate odds available.
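
For anyone who'd like to play with the numbers themselves, here is a minimal sketch of the formula in Python; the function name and the choice of exact fractions are my own for this sketch, not part of Laplace's original presentation:

    from fractions import Fraction

    def rule_of_succession(successes, trials):
        """Laplace's Rule of Succession: the probability that the
        next trial succeeds, given `successes` out of `trials`."""
        return Fraction(successes + 1, trials + 2)

    print(rule_of_succession(0, 0))  # 1/2: no evidence, 50% either way
    print(rule_of_succession(1, 1))  # 2/3: one flip, one head
    print(rule_of_succession(0, 1))  # 1/3: one flip, one tail
    # Large samples converge on the observed frequency:
    print(float(rule_of_succession(500000, 1000000)))  # ~0.5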

2. Probable Cause.

The math in this part doesn't have anything to do with the previous part—they don't come together until the next part.

The standard of proof in civil lawsuits is usually 'preponderance of the evidence': if it's more likely than not that you broke my window, you can be deemed responsible for paying for it. However, for criminal charges, the standard is 'beyond a reasonable doubt', and every member of a jury can be required to agree that that standard has been met.

Some clever people have analyzed such court cases, and in general this standard of proof seems to be met when jurors are at least 75% confident. That is, on average, if a jury thinks there's at least a 25% chance someone is innocent, it doesn't convict.

3. Putting 'Em Together.

Several conceptions of libertarian legal systems suggest eliminating the 'criminal' category altogether, treating all harm done to others as civil torts. While working on the fundamentals of a libertarian code of laws, I was considering the same, but I wanted to be sure that doing so would not eliminate some vital piece of social infrastructure, even if the actual contribution made to society wasn't necessarily what most people thought it was, in the same way that one of the most important uses of a free market is to establish what the prices of things are.

So I wondered, why does a criminal conviction require a higher standard of proof than that required to pay civil compensation? And I realized that at a 75% certainty level, the most important information created by a jury isn't necessarily whether or not the accused actually committed a crime—but whether or not they will do so in the future.

Some clever people working with SETI have determined that our own existence does not provide any evidence for or against the existence of life on other planets; if we did not exist, then we would not be around to ask the question in the first place. In a parallel case, we can never really know when someone is presented with a good opportunity to commit a criminal act and refrains from doing so; we can only know of those cases in which, presented with such an opportunity, somebody does perform a criminal act. Therefore, using Laplace's formula, we cannot assume that an accused has had any other such opportunities; we can only act on the basis that they have had the one opportunity that we know about, and that they took advantage of it. Plugging one success out of one trial into the formula gives (1+1) / (1+2) = 2/3, so we can be two-thirds confident that, if presented with a similar opportunity in the future, they will again take advantage of it.

If we combine that 2/3rds confidence with the 75% confidence required for a jury conviction, we then see that the information generated by a criminal trial is our confidence that a convicted person will reoffend:

2/3 * 3/4 = 6/12 = 1/2 = 50%
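
As a sanity check, here's the same computation in Python with exact fractions; the variable names are mine, chosen for this sketch:

    from fractions import Fraction

    # 2/3: confidence that the convict will take a similar opportunity
    # again, from the Rule of Succession with one known opportunity
    # and one known offense.
    p_reoffend_given_guilty = Fraction(1 + 1, 1 + 2)

    # 3/4: the approximate confidence level at which juries convict.
    p_guilty = Fraction(3, 4)

    # Confidence that a person convicted at that standard will reoffend.
    print(p_guilty * p_reoffend_given_guilty)  # 1/2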

This implies that a criminal conviction's value isn't in determining whether the accused committed a previous crime, but in demonstrating that they are more likely than not to commit another one in the future. This is unexpected and somewhat startling. It could very well be false; but, if true, it has certain further implications.

For example, it could be suggested that if this information is important, then the closer any given society adheres to this standard, the better the foundations of its free market will be, and thus the more wealthy, prosperous, and stable that society will be.

For another example, this could suggest the main rationale for criminal punishments beyond civil recompense, the reason some punishments end up being used more than others, and which punishments should be chosen in a given legal system to accomplish any particular goal. Fines in excess of simple damages could be expected to be useful for paying the damages of the convict's future criminal actions. Imprisonment could remove the convict's opportunity to reoffend, and rehabilitation could remove the desire. Both of those, plus corporal punishment and the death penalty, could be based on the theory that increasing the negative consequences makes a future offense less likely. That doesn't mean this latter theory is correct; if it isn't, then it could be expected that the more stable and prosperous a society is, the less often these punishments will be applied.

4. Libertarian Law.

In libertarian terms, the usual list of what actions count as an 'initiation of force' is "force, fraud, and threat". The latter is usually interpreted as a person stating that they are going to commit an act of force or fraud, but it is possible that this is not the most useful interpretation. If, using the best evidence-gathering and analysis techniques possible, you learned that it was at least 50% likely that an individual was going to initiate force against you, then it just might be moral to use the necessary amount of retaliatory force against that individual to limit their freedom to initiate force against you. If it was at least 50% likely that the individual was going to initiate force against someone in a society, then even without knowing who the particular target would be, it might be moral for the members of that society to act in their common defense.

Or maybe it isn't—but now, at least, the question can be asked, with a better knowledge of its mathematical underpinnings.

Addendum:

As an example of a prediction, based on the idea that criminal law's stability comes from its use as a way to identify the people who threaten to cause harm in the future: in addition to using data from instances where one individual did cause harm to others, the system could also use data from instances where an individual tried to cause harm but did not succeed. For example, if I try to steal your wallet and fail, then I haven't caused you any damages; but if it can be proven beyond a reasonable doubt that I made the attempt, then that would provide the same information about my future willingness to steal as a successful attempt would have. Thus the more successful legal systems will likely treat 'attempted crimes' and 'conspiracy to commit crimes' as being on a similar level to actual acts of causing harm.

As a second example, if it can be demonstrated that harm was done, but not by any conscious attempt, then the person who caused the harm may still be liable for the actual damages; but there may not be any reason to expect them to be any more likely to commit such acts in the future than anyone else who suffers an accident, and so the more successful legal systems will likely take the perpetrator's intentions into account. It is also plausible that there could be still another category: people who cause harm to others not by conscious intent, but by a disregard for the consequences of their actions severe enough that they are as likely to repeat their behaviour as a more ordinary criminal.

The above are, admittedly, somewhat 'retrodictive' predictions. For a true prediction, I offer this: as we gather more data about how the human brain works, and how minds go about making decisions, the most successful criminal legal systems will be those which take advantage of that data to determine which minds are most likely to cause harm in the future, and which use the countering methods with the best statistics at preventing such re-offenses, rather than methods based on any particular political ideology or philosophy about how criminals 'should' be treated.
