By J R – https://www.flickr.com/photos/jmrosenfeld/3639249316, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=37298033

Some of the surplus military equipment left over from America's foreign adventures was given to American police forces. While this might have seemed like a good idea at the time, it led to infamous images of war-ready police squaring off against unarmed civilians. These are the sorts of images one would expect in a dictatorship but is not supposed to see in a democracy.

These images helped start a debate about the appropriateness of police equipment, methods and operations. The Obama administration responded by putting some restrictions on the military hardware that could be transferred to the police, although many of the restrictions were on gear that the police had, in general, never requested. In his first term, Trump decided to lift the Obama ban, and then-attorney general Jeff Sessions touted this as a rational response to crime and social ills. As Sessions saw it, “(W)e are fighting a multi-front battle: an increase in violent crime, a rise in vicious gangs, an opioid epidemic, threats from terrorism, combined with a culture in which family and discipline seem to be eroding further and a disturbing disrespect for the rule of law.” Perhaps Sessions believed that arming the police with tanks and grenade launchers would help improve family stability and shore up discipline. With Trump’s promise to forcibly deport millions of migrants, we are likely to see a militarized police force operating alongside the actual military.

While it might be tempting to dismiss Trump and Sessions as having engaged in a mix of macho swagger and the view that bigger guns solve social ills, there is a real issue about what equipment is appropriate for the police.

The key factor in determining the appropriate armaments for police is the role the police are supposed to play in society. In a democratic state aimed at the good of the people (the classic Lockean state), the role of the police is to protect and serve the people. On this view, the police need armaments suitable for combating domestic threats to life, liberty and property. These threats would usually involve engaging untrained and unarmored civilian opponents equipped with light arms (such as pistols and shotguns). As such, the appropriate weapons for the police would also be light arms and body armor.

Naturally enough, the possibility of unusual circumstances must be kept in mind. Since the United States is awash in guns, the police do face well-armed opponents. The police might have to go up against experienced (or fanatical) opponents, perhaps within a fortified defensive position. They are also sometimes called upon to go up against rioters. In such cases, the police would justly require riot gear and military-grade equipment. However, this equipment should be restricted to specially trained units, such as SWAT.

It might be objected that all police should be equipped with this sort of equipment, just in case they need it. I certainly see the appeal of this. A rational combat mindset is to be ready for anything and to meet resistance with overwhelming force. But that points to the problem: to the degree that the police adopt a combat mindset, they move away from being police and towards being soldiers. Given the difference between the two missions, having police operating like soldiers with military equipment is a danger to civil society. Defeating an enemy in war is different from protecting and serving.

There is also the problem that military equipment is more dangerous than standard police weapons. While a pistol can kill, automatic weapons can do much more damage. The police, unlike soldiers, are presumed to be engaging fellow citizens and the objective is supposed to be to use as little force as possible. They are supposed to be policing rather than subjugating.

But the view that the police should serve and protect the good of the people is not the only possible view. As can be seen around the world, some states regard the police as tools of repression and control. These police operate like the military, only with their fellow citizens as the enemy. If the police are regarded as tools of the ruling class, existing to maintain its law and order, then a militarized police force makes sense. A military serves the will of the rulers as an army against the people of other countries; a repressive police force has the same basic role, but different targets.

It could be argued that while this is something practiced by repressive states, it is also suitable for a democratic state. Jeff Sessions characterizes policing as a battle, and one could argue he is right. As Trump likes to say, there are enemies within America, and one might think these enemies must be defeated in the war on crime. On this view, the police are to engage these enemies in a way analogous to the military engaging a foreign foe, and thus it makes sense that they would need military-grade equipment. They are a military force serving military objectives. This lines up with the criticism that the police often operate as an occupying army in poor neighborhoods, though on this view that is a feature rather than a flaw, as that is the function of the police.

While I do think the militarization of the police impacts their behavior (I would be tempted to use a tank if I had one), my main concern is not with what weapons the police have access to, but with the attitude and moral philosophy behind how they are armed. That is, my concern is not so much that the police have the weapons of an army, but that they are regarded more as an army to be used against citizens than as protectors of life, liberty and property. As this is being written, the police have been deployed against striking Amazon workers, and critics point to this as an example of how the police force serves as a domestic army for the rich.

https://dukeroboticsys.com/

Taking the obvious step in drone technology, Duke Robotics developed a small armed drone called the Tikad. Israel has also developed a sniper drone that it is using in Gaza. These drones differ from earlier armed drones, like the Predator, in that they are small and relatively cheap. As with many other areas of technology, the main innovations are in ease of use and lower cost. This makes small armed drones more accessible than previous drones, which is both good and bad.

On the positive side, the military and police can deploy more drones and reduce human casualties (at least for the drone users). For example, the police could send a drone in to observe and possibly engage during a hostage situation and not put officers in danger.

On the negative side, the lower cost and ease of use mean that armed drones are easier for terrorists, criminals and oppressive states to deploy. The typical terrorist group cannot afford a drone like the Predator and might have difficulty finding people who can operate and maintain such a complicated aircraft. But smaller armed drones can be operated and serviced by a broader range of people. This is not to say that Duke Robotics should be criticized for doing the obvious, as people have been thinking about arming drones since drones were invented.

Inexpensive gun drones raise the usual concerns associated with remotely operated weapons. The first concern is that drone operators can be more aggressive than forces that are physically present and at risk of the consequences of engaging in violence. However, it can also be argued that an operator is less likely to be aggressive because they are not in danger and the literal and metaphorical distance will allow them to respond with more deliberation. For example, a police officer operating a drone might elect to wait longer to confirm that a suspect is pulling a gun than they would if they were present. Then again, they might not; this is a matter of training and reaction, with the practical challenge of training officers to delay longer when operating a drone while not delaying too long in person.

A second concern is accountability. A drone allows the operator anonymity and assigning responsibility can be difficult. In the case of the military and police, this can be addressed by having a system of accountability. After all, military and police operators would usually be known to the relevant authorities. That said, drones can be used in ways that are difficult to trace to the operator and this would be true in the case of terrorists. The use of drones would allow terrorists to attack from safety and in an anonymous manner, which are matters of concern.

However, it must be noted that while the first use of a gun-armed drone in a terrorist attack would be something new, it would not be significantly different from the use of a planted bomb or other distance weapons. Such bombs allow terrorists to kill from a safe distance and make it harder to identify the attacker. But, just as with bombs, the authorities would be able to investigate the attack and stand some chance of tracing a drone back to the terrorist. Drones are in some ways less worrisome than bombs, as a drone can be seen and is limited in how many targets it can engage. In contrast, a bomb can be hidden and can kill many in an instant, without a chance of escape or defense. A gun drone is also analogous in some ways to a sniper rifle in that it allows engagement at long range. However, the drone affords far more range and safety than even the best sniper rifle.

In the United States, it is currently illegal to arm your drone. While the people have the right to keep and bear arms, this does not extend to operating armed drones. The NRA does not seem interested in fighting for the right to arm drones, but that could change.

In closing, there are legitimate concerns about cheap and simple gun drones. While they will not be as radical a change as some might predict, they will make it easier and cheaper to engage in violence at a distance and to kill anonymously. As such, they will make ideal weapons for terrorists and oppressive governments. However, they do offer the possibility of reduced human casualties, if used responsibly. In any case, their deployment is inevitable, so the meaningful questions are about how they should be used and how to defend against their misuse. The question of whether they should be used is morally interesting, but pragmatically irrelevant since they are already being used.

Since the US is experiencing a drone panic as this is being written, I’ll close with a few rational points. First, of course people are seeing drones. As comedians have pointed out, you can buy them at Walmart. Drones are everywhere. Second, people are regularly mistaking planes and even stars for drones. Third, as has been pointed out and as should be obvious, if a foreign power were secretly operating drones in the US, then they would turn the lights off. Fourth, no harm seems to have been done by the drones, so it is a panic over nothing. But it is reasonable to be concerned with what drones are being used for as corporations and the state are not always acting for the public good.

 

Imagine a twenty-sided die (a d20, as it is known to gamers) being rolled. Ideally, the die has a 1 in 20 chance of rolling a 20 (or any other specific number). It is natural to think of the die as a locus of chance, a random number generator whose roll cannot be predicted. While this is an appealing view of dice, there is a question about what random chance amounts to.

One way to look at the matter is that if a d20 is rolled 20 times, then one of those rolls will be a 20. Obviously enough, this is not true. As any gamer will tell you, the number of 20s rolled while rolling 20 times varies. This can be explained by the fact that dice are imperfect and roll some numbers more than others. There are also the influences of the roller, the surface on which the die lands and so on. As such, a d20 is not a perfect random number generator. But imagine there could be a perfect d20 rolled under perfect conditions. What would occur?

One possibility is that each number would come up within the 20 rolls, albeit at random. As such, every 20 rolls would guarantee a 20 (and only one 20), thus accounting for the 1 in 20 chances of rolling a 20. This seems problematic. There is the obvious question of what would ensure that each of the twenty numbers were rolled once (and only once). Then again, that this would occur is only a little weirder than the idea of chance itself.
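For what it is worth, the tidy one-of-each outcome would be astronomically unlikely for a fair die. A quick calculation shows this (a minimal Python sketch; the arithmetic is just 20!/20^20, since only the 20! orderings of the twenty faces, out of 20^20 equally likely sequences of rolls, contain each number exactly once):

```python
from math import factorial

# Of the 20**20 equally likely sequences of 20 rolls, only the 20!
# orderings of the twenty faces contain each number exactly once.
p = factorial(20) / 20**20
print(f"P(each face exactly once in 20 rolls) = {p:.2e}")  # ~2.32e-08
```

So even if chance somehow “owed” us one of each number, a fair d20 would deliver that tidy distribution in fewer than three of every hundred million sets of twenty rolls.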

But a small number of random events (such as rolling a d20 only twenty times) will often deviate from what probability dictates. It is also well established that as the number of rolls increases, the outcomes will more closely match the expected results. This principle is known as the law of large numbers. As such, getting three 20s or no 20s in a series of 20 rolls would not be surprising. But as the number of rolls increases, the results will come closer to the expected 1 in 20 outcome for each number. So, the 1 in 20 odds of getting a 20 with a d20 does not seem to mean that 20 rolls guarantee one and only one 20; it means that with enough rolls, about 1 in 20 of all the rolls will be 20s. This does not say much about how chance works beyond noting that chance seems to play out “correctly” over large numbers.
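This convergence is easy to see by simulation. Below is a minimal sketch (assuming Python's random module is an acceptable stand-in for a perfect d20) showing the fraction of 20s drifting toward the expected 0.05 as the number of rolls grows:

```python
import random

# Fraction of rolls that come up 20 in n simulated d20 rolls; by the law
# of large numbers this fraction should approach 1/20 = 0.05 as n grows.
for n in (20, 200, 20_000, 2_000_000):
    twenties = sum(1 for _ in range(n) if random.randint(1, 20) == 20)
    print(f"{n:>9,} rolls: fraction of 20s = {twenties / n:.4f}")
```

With twenty rolls the fraction bounces around wildly; with two million it sits very close to 0.05.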

One way to look at this is that if there were an infinite number of d20 rolls, then 5% of the infinite number of rolls would be 20s. One might wonder what 5% of infinity would be; would it not be infinite as well? Since infinity is such a mess, a more manageable approach would be to use the largest finite number (which presumably has its own problems) and note that 5% of that number of d20 rolls would be 20s.

Another approach would be to say that the 1 in 20 chance means that if all the 1 in 20 chance events were collected into sets of twenty, each set would contain one occurrence of each outcome. Using dice as an example, if all the d20 rolls in the universe were known (perhaps by God) and collected into sets of numbers, they could be divided up into sets of twenty with each number appearing once in each set. So, while my 20 rolls would not guarantee a 20, there would be one 20 out of every 20 rolls in the universe. There is still the question of how this would work. One possibility is that random events are not truly random, and this ensures the proper distribution of events such as dice rolls.

It could be claimed that chance is a bare fact, that a perfect d20 rolled in perfect conditions would have a 1 in 20 chance of producing a specific number. On this view, the law of large numbers might fail. If chance were real, it would not be impossible for results to be radically different than predicted. That is, there could be an infinite number of rolls of a perfect d20 with no 20 ever being rolled. One could even imagine that since a 1 can be rolled on any roll, someone could roll an infinite number of consecutive 1s. Intuitively this seems impossible. It is natural to think that in an infinity every possibility must occur (and perhaps do so perfectly in accord with the probability). But this would only be a necessity if chance worked a certain way, perhaps that for every 20 rolls in the universe there must be one of each result. Then again, infinity is a magical number, so perhaps this guarantee is part of the magic.

While there are ongoing efforts to revise the story of the Confederate States of America from one of slavery to one of states’ rights, secession from the Union was because of slavery. At the time of secession, the Confederacy’s leaders explicitly said this was their primary motivation. This is not to deny there were other motivations, such as concerns about states’ rights and economic factors. But the Confederacy’s moral and economic foundation was slavery. This is a rejection of the principle that all men are created equal, a rejection of the notion of liberty, and an abandonment of the idea that the legitimacy of government rests on the consent of the governed. In short, the Confederacy was an explicit rejection of the professed values of the United States, other than white supremacy.

While the Confederacy lost and the Union was re-formed, its values survived and are now manifested by the alt-right and, increasingly, the right in general. This is shown by their defense of Confederate monuments, their use of Confederate flags, and their racism. They are aware of the moral foundations of their movement.

While the value system of the Confederacy embraced white supremacy and accepted slavery as a moral good, it did not accept genocide. That is, the Confederacy advocated enslaving blacks rather than exterminating them. Extermination was something the Nazis eventually embraced.

The Nazis took over the German state and plunged the world into war. Like the Confederate states, the Nazis embraced the idea of white supremacy and rejected equality and liberty. The Nazis also made extensive use of slave labor. Unlike the Confederate states, the Nazis engaged in a systematic effort to exterminate those they regarded as inferior. This does mark a moral distinction between the Confederate States of America and Nazi Germany. This is a distinction between degrees of evil.

While the Nazis were once regarded by most Americans as a paradigm of evil, many in the alt-right embrace their values, and some do so explicitly and openly, identifying as neo-Nazis. Some claim they do not want to exterminate what they say are other races but want to have racially pure states. For example, some antisemites on the right support Israel because they see it as a Jewish state, a place where all the Jews should be. In their ideal world, each state would be racially pure. This is why the alt-right is sometimes also known as white nationalism. The desire to have racially pure states can be seen as morally better than the desire to exterminate, but this is a distinction between evils rather than one between good and bad.

Based on the above, the modern alt-right (and increasingly the American right) is the inheritor of the Confederate States of America and Nazi Germany. While this might seem a matter of mere historical interest, it has important implications. One is that it provides grounds for regarding members of the alt-right as on par with members or supporters of ISIS or other hostile foreign terrorist groups. This is in contrast with seeing the alt-right as an entirely domestic matter.

Those who join or support ISIS (and other such groups) are seen as different from domestic hate groups. This is because ISIS (and other such groups) are foreign and in conflict with the United States. This applies even when the ISIS supporter is an American who lives in America. This perceived difference has numerous consequences, including legal ones. It also has consequences for free speech. While advocating the goals and values of ISIS in the United States would be a threat and could result in criminal charges, the alt-right is protected by the right to free speech. This is illustrated by the fact that the alt-right can get permits to march in the United States, while ISIS supporters and similar groups cannot. One can imagine the response if ISIS or Hamas supporters applied for a permit or engaged in a march.

While some hate groups are truly domestic in that they are not associated with foreign organizations at war with the United States, the alt-right cannot make this claim, at least not to the degree that it is connected to the Confederate States of America and the Nazis. Both were foreign powers at war with the United States. As such, the alt-right should be seen as on par with other groups that affiliate themselves with foreign groups engaged in war with the United States.

An obvious reply is that the Confederacy and the Nazis were defeated and no longer exist. On the one hand, this is true. The Confederacy was destroyed, and the states rejoined the United States. The Nazis were defeated and while Germany still exists, it is not controlled by the Nazis. At least not yet. On the other hand, the Confederacy and the Nazis do persist in the form of groups that preserve their values and ideology here in the United States. To use the obvious analogy, the defeat of ISIS and its territorial losses did not end the group. It will persist as long as it has supporters, and the United States has not switched to a policy of tolerating ISIS members and supporters simply because ISIS no longer has territory.

The same holds true for those supporting or claiming membership in the Confederacy or the Nazis. They are supporters of foreign powers that are enemies of the United States and are thus on par with ISIS supporters and members in that they are agents of the enemy. This is not to say that the alt-right is morally equivalent to ISIS in terms of its actions; overall, ISIS is worse. But what matters in this context is the expression of allegiance to the values and goals of a foreign enemy, something ISIS supporters and alt-right members who embrace the Confederacy or the Nazis have in common.

Briefly put, right-to-try laws give terminally ill patients the right to try experimental treatments that have completed Phase 1 testing but have yet to be approved by the FDA. Phase 1 testing involves assessing the immediate toxicity of the treatment. This does not include testing its efficacy or its longer-term safety. Roughly put, passing Phase 1 just means that the treatment does not immediately kill or significantly harm patients.

On the face of it, no sensible person would oppose the right-to-try. The idea is that people who have “nothing to lose” should be given the right to try treatments that might help them. The bills and laws use the rhetorical narrative that right-to-try laws give desperate patients the freedom to seek medical treatment that might save them, and that this is done by getting the FDA and the state out of the way. This is powerful rhetoric that appeals to compassion, freedom and a dislike of the government. As such, it is not surprising that few people dare oppose the right-to-try. However, the matter deserves proper critical consideration.

One way to look at it is to consider an alternative reality in which the narrative is spun with a different rhetorical charge, a negative spin rather than a positive one. Imagine, for a moment, if the rhetorical engines had cranked out a tale of how the bills would strip away the protection of the desperate and dying and allow predatory companies to use them as guinea pigs for untested treatments. If that narrative had been sold, people would probably oppose such laws. But rhetorical narratives, positive or negative, are logically inert and irrelevant to the merits of the right-to-try. How people feel about the proposals is also logically irrelevant. What is needed is a cool examination of the matter.

On the positive side, the right-to-try does offer people the chance to try treatments that might help them. It is hard to argue that terminally ill people do not have a right to take such risks. That said, there are still some concerns.

One concern is that there is already an established mechanism allowing patients access to experimental treatments: the FDA has a system that approves most such requests. Somewhat ironically, when people argue for the right-to-try by using examples of people successfully treated by experimental methods, they are showing that the existing system already allows such access. This raises the question of why the laws are needed and what they change.

The main change is usually to reduce the role of the FDA. Without such laws, requests to use experimental methods must go through the FDA (which seems to approve most requests). If the FDA routinely denied such requests, then the laws would seem needed. However, the FDA does not seem to be the problem, as it generally does not block the use of experimental methods for the terminally ill. This leads to the question of what is limiting patient access.

The main limiting factors are those that impact almost all treatment access: cost and availability. While right-to-try laws grant the negative right to choose experimental methods, they do not grant the positive right to be provided with those methods. A negative right is a liberty: one is free to act upon it but is not provided with the means to do so; the means must be acquired by the person. A positive right is an entitlement: the person is free to act and is provided with the means of doing so. In general, right-to-try laws do little or nothing to ensure that treatments are provided. For example, public money is usually not allocated to pay for them. As such, the right-to-try is like the right-to-healthcare: you are free to get it if you can pay for it. Since the FDA does not block access to experimental treatments, the bills and laws would seem to do little or nothing new to benefit patients. That said, the general idea of the right-to-try seems reasonable and is already practiced. While few are willing to bring them up in public discussions, there are some negative aspects to the right-to-try. I will turn to some of those now.

One obvious concern is that terminally ill patients do have something to lose. Experimental treatments could kill them earlier or they could cause suffering. As such, it does make sense to have limits on the freedom to try. At least for now it is the job of the FDA and medical professionals to protect patients from such harms even if the patients want to roll the dice.

This concern can be addressed by appealing to freedom of choice, provided patients can provide informed consent. This does create a problem: as little is known about the treatment, the patient cannot be well informed about the risks and benefits. But, as I have argued often elsewhere, I accept that people have a right to make such choices, even if these choices are self-damaging. I apply this principle consistently, so I accept that it grants the right-to-try, the right to get married, the right to eat poorly, the right to use drugs, and so on.

The usual counters to such arguments from freedom involve arguments about how people must be protected from themselves, arguments that such freedoms are “just wrong” or arguments about how such freedoms harm others. The idea is that moral or practical considerations override the freedom of the individual. This can be a reasonable counter, and a strong case can be made against allowing people the right to engage in a freedom that could harm or kill them. However, my position on such freedoms requires me to accept that a person has the right-to-try, even if it is a bad idea. That said, others have an equally valid right to try to convince them otherwise and the FDA and medical professionals have an obligation to protect people, even from themselves.

A philosophical problem is determining what can, and perhaps more importantly cannot, be owned. There is considerable dispute over this subject and an example is the debate over whether people can be owned. A more recent example is the debate over ownership of genes. While each dispute needs to be addressed on its own merits, it is worth considering the broader question of what can and what cannot be property. It must be noted that this is not just about legal ownership.

Addressing this subject begins with the foundation of ownership, which justifies the claim that one owns something. This is the philosophical problem of property. Most people are probably unaware this is a philosophical problem as people tend to accept the current system of ownership, though people do criticize its particulars. But, to simply assume the existing system of property is correct (or incorrect) is to beg the question and the problem of property needs to be addressed without simply assuming it has been solved.

One practical solution to the problem of property is to argue that property is a convention. This can be a formal convention (such as laws), an informal convention (such as traditions) or a combination of both. One reasonable view is property legalism: ownership is defined by the law. On this view, whatever the law defines as property is property. Another reasonable view is property relativism: ownership is defined by cultural practices (which can include the laws). Roughly put, whatever the culture accepts as property is property. These approaches correspond to the moral theories of legalism (the view that the law determines morality) and ethical relativism (the view that culture determines morality).

The conventionalist approach seems to have the virtues of being practical and of avoiding mucking about in philosophical disputes. If there is a dispute about what (or who) can be owned, the matter is settled by the courts, by force of arms or by force of persuasion. There is no question of what view is right as winning makes the view right. While this approach does have its appeal, it is not without problems.

Trying to solve the problem of property with the conventionalist approach does lead to a dilemma: the conventions are either based on some foundation or they are not. If the conventions are not based on a foundation other than force (of arms or persuasion), then they are arbitrary. If this is the case, the only reasons to accept such conventions are practical, such as to avoid harm (such as being killed) or to profit.

If the conventions have a foundation, then the problem is determining what it might be. One approach is to argue that people have a moral obligation to obey the law or follow cultural conventions. While this would provide a basis for a moral obligation to accept the conventions, these conventions would still be arbitrary. Roughly put, those under the conventions would have a reason to accept whatever conventions exist, but no reason to accept a specific convention over another. This is analogous to the ethics of divine command theory, the view that what God commands is good because He commands it and what He forbids is evil because He forbids it. As should be expected, the “convention command” view of property suffers from problems analogous to those suffered by divine command theory, such as the arbitrariness of the commands and the lack of justification beyond obedience to authority.

One classic moral solution to the problem of property is offered by utilitarianism. On this view, the theory of property that creates more positive value than negative value for the morally relevant beings would be the morally correct practice. It does make property a contingent matter since radically different conceptions of property can be thus justified depending on the changing balance of harms and benefits. So, for example, while a capitalistic conception of property might be justified at a certain place and time, that might shift in favor of a socialist conception. As always, utilitarianism leaves the door open for intuitively horrifying practices that manage to fulfill that condition. However, this approach also has an intuitive appeal in that the view of property that creates the greatest good would be the morally correct view of property.

A classic attempt to solve the problem of property is offered by John Locke. He begins with the view that God created everyone and gave everyone the earth in common. While God does own us, He is cool about it and effectively lets each person own themselves. As such, I own myself and you own yourself. From this, as Locke sees it, it follows that each of us owns our labor.

For Locke, property is created by mixing one’s labor with the common goods of the earth. To illustrate, suppose we are washed up on an island owned by no one. If I collect wood and make a shelter, I have mixed my labor with the wood, thus making the shelter my own. If you make a shelter with your labor, it is thus yours. On Locke’s view, it would be theft for me to take your shelter and theft for you to take mine.

This labor theory of ownership quickly runs into problems, such as working out a proper account of mixing of labor and what to do when people are born on a planet on which everything is already claimed and owned. However, the idea that the foundation of property is that each person owns themselves is an intriguing one and does have some interesting implications about what can (and cannot) be owned. One implication would seem to be that people are owners and cannot be owned. For Locke, this would be because each person is owned by themselves, and ownership of other things is conferred by mixing one’s labor with what is common to all.

It could be contended that people create other people by their labor (literally in the case of the mother) and thus parents own their children. A counter to this is that although people do engage in sexual activity that results in the production of other people, this should not be considered labor in the sense required for ownership. After all, the parents just have sex and then the biological processes do all the work of constructing the new person. One might also play the metaphysical card and contend that what makes the person a person is not manufactured by the parents but is something metaphysical like the soul or consciousness (for Locke, a person is their consciousness and the consciousness is within a soul).

Even if it is accepted that parents do not own their children, there is the obvious question about manufactured beings that are like people such as intelligent robots or biological constructs. These beings would be created by mixing labor with other property (or unowned materials) and thus would seem to be things that could be owned. Unless, of course, they are owners like humans.

One approach is to consider them analogous to children. It is not how children are made that makes them unsuitable for ownership, it is what they are. On this view, people-like constructs would be owners rather than things to be owned. The intuitive counter is that people-like manufactured beings would be property like anything else that is manufactured. The challenge is, of course, to show that this would not entail that children are property. After all, considerable resources and work can be expended to create a child (such as IVF, surrogacy, and perhaps someday artificial wombs), yet intuitively they would not be property. This does point to a rather important question: is it what something is that makes it unsuitable to be owned or how it is created?

 

Before getting into the discussion, I should note that I am not a medical professional and that what follows should be met with due criticism; you should consult an expert before embarking on changes to your exercise or nutrition practices. Or you might die. Probably not. But maybe.

As any philosopher will tell you, while the math used in science is deductive (the premises are supposed to guarantee the conclusion with certainty) scientific reasoning is inductive (the premises provide some degree of support for the conclusion that is less than complete). Because of this, science suffers from what philosophers call the problem of induction. In practical terms, this means that no matter how careful the reasoning and no matter how good the evidence, the inference can still be false. The basis is that inductive reasoning involves a “leap” from the premises/evidence (what has been observed) to the conclusion (what has not been observed). Put bluntly, inductive reasoning always has a chance to lead to a false conclusion. But this appears unavoidable as life seems inductive.

Scientists and philosophers have tried to make science entirely deductive. For example, Descartes believed he could find truths that he could not doubt and then use valid deductive reasoning to generate a true conclusion with absolute certainty. Unfortunately, this science of certainty is the science of the future and (probably) always will be. So, we are stuck with induction.

The problem of induction applies to the sciences that study nutrition, exercise and weight loss and the conclusions made in these sciences can always be wrong. This helps explain why recommendations change relentlessly.

While there are philosophers of science who would disagree, science is a matter of trying to figure things out by doing the best we can at the time. This is limited by the available resources (such as technology) and human epistemic capabilities. As such, whatever science presents now is almost certainly at least partially wrong, though the wrongs often get reduced over time (and sometimes they increase). This is true of all the sciences. Consider, for example, the changes in physics since Thales got it started. This also helps explain why recommendations about diet and exercise change often.

While science is sometimes idealized as a field of pure reason outside of social influences, science is also a social activity. Because of this, science is influenced by social factors and human flaws. For example, scientists need money to fund their research and can be vulnerable to corporations looking to “prove” claims that are in their interest. As another example, scientific subjects can become issues of political controversy, such as race, evolution and climate change. This politicization tends to be bad for science and anyone who does not profit from manufacturing controversy. As a final example, scientists can be motivated by pride and ambition to fake or modify their findings. Because of these factors, the sciences dealing with nutrition and exercise are, to a meaningful degree, corrupted and this makes it difficult to make a rational judgment about which claims are true. One excellent example is how the sugar industry paid scientists at Harvard to downplay the health risks presented by sugar and play up those presented by fat. Another illustration is the fact that the food pyramid endorsed by the US government has been shaped by the food industries rather than being based entirely on good science.

Given these problems it might be tempting to abandon mainstream science and go with whatever food or exercise ideology one finds appealing. That would be a bad idea. While science suffers from these problems, mainstream science is better than the nonscientific alternatives. They tend to have all the problems of science without any of its strengths. So, what should one do? The rational approach is to accept the majority opinion of qualified and credible experts. One should also keep in mind the above problems and approach the science with due skepticism.

So, what does the best science of today say about weight loss? First, humans evolved as hunter-gatherers and getting enough calories was a challenge. Humans tend to be very good at storing energy in the form of fat which is one reason the calorie rich environment of modern society contributes to obesity. Crudely put, it is in our nature to overeat because that once meant the difference between life and death.

Second, while exercise does burn calories, it burns far less than many imagine. For most people, most of the calorie burning is a result of the body staying alive. As such, while exercising more could help a person lose weight, the calorie impact of exercise is surprisingly low. That said, you should exercise (if you can) if only for the health benefits.
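A rough back-of-the-envelope comparison shows why the calorie impact of exercise is lower than many expect. The sketch below uses two common approximations rather than anything from the research discussed here: the Mifflin-St Jeor equation for resting energy use and the rule of thumb that running costs about one kilocalorie per kilogram per kilometer; the example person is hypothetical, and none of this is medical advice.

```python
def resting_kcal_per_day(weight_kg: float, height_cm: float,
                         age: int, male: bool = True) -> float:
    """Mifflin-St Jeor estimate of resting energy use (kcal/day)."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if male else -161)

def running_kcal(weight_kg: float, distance_km: float) -> float:
    """Rule of thumb: running costs roughly 1 kcal per kg per km."""
    return weight_kg * distance_km

# Hypothetical example: an 80 kg, 178 cm, 40-year-old man who runs 5 km.
resting = resting_kcal_per_day(80, 178, 40)  # ~1700 kcal/day just staying alive
run = running_kcal(80, 5)                    # ~400 kcal for the whole run
print(f"Resting: ~{resting:.0f} kcal/day; 5 km run: ~{run:.0f} kcal")
```

On these rough numbers, a five-kilometer run adds only about a quarter of what the body burns simply staying alive for a day.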

Third, hunger is a function of the brain, and the brain responds differently to different foods. Foods high in protein and fiber create a feeling of fullness that tends to turn off the hunger signal. Foods with a high glycemic index (like cake) tend to stimulate the brain to cause people to consume more calories. As such, manipulating your brain is an effective way to increase the chance of losing weight. Interestingly, as Aristotle argued, habituation to foods can train the brain to prefer foods that are healthier. You can train yourself to prefer things like nuts, broccoli and oatmeal over cookies, cake, and soda. This takes time and effort but can be done.

Fourth, weight loss has diminishing returns: as one loses weight, one’s metabolism slows, and less energy is needed. As such, losing weight makes it harder to lose weight, which is something to keep in mind.  Naturally, all these claims could be disproven tomorrow, but they seem reasonable now.

 

Central to our American mythology is the belief that a person can rise to the pinnacle of success from the depths of poverty. While this does happen, poverty presents an undeniable obstacle to success. Tales within this myth of success present an inconsistent view of poverty: the hero is praised for overcoming the incredible obstacle of poverty while it is also claimed that anyone with gumption should be able to succeed. The achievement is thus claimed to be heroic yet easy and expected.

Outside of myths, poverty is difficult to overcome. There are the obvious challenges of poverty. For example, a person born into poverty will not have the same educational opportunities as the affluent. As another example, they will have less access to technology such as computers and high-speed internet. As a third example, there are the impacts of diet and health care. These necessities are expensive, and the poor have less access to good food and good care. There is also research by scientists such as Kimberly G. Noble that suggests a link between poverty and brain development.

While the most direct way to study the impact of poverty on the brain is by imaging the brain, this is expensive. However, this research shows a correlation between family income and the surface area of parts of the cortex. For children whose families make under $50,000 per year, there is a strong correlation between income and the surface area of the cortex. While greater income is correlated with greater cortical surface area, the apparent impact is reduced once income exceeds $50,000 a year. This suggests, but does not prove, that poverty has a negative impact on the development of the cortex and that this impact is proportional to the degree of poverty.

Because of the cost of direct research on the brain, most research focuses on cognitive tests that indirectly test the brain. Children from lower income families perform worse than their more affluent peers in their language skills, memory, self-control and focus. This performance disparity cuts across ethnicity and gender.

As would be expected, there are individuals who do not conform to this general correlation: there are children from disadvantaged families who perform well on the tests and children from advantaged families who do poorly. Knowing the economic class of a child does not automatically reveal their individual capabilities. However, there is a correlation in terms of populations rather than individuals. This needs to be remembered when assessing anecdotes of people successfully rising from poverty. As with all appeals to anecdotal evidence, they do not outweigh statistical evidence.

To use an analogy, boys tend to be stronger than girls but knowing that Sally is a girl does not mean that Sally is certainly weaker than Bob the boy. An anecdote about how Sally is stronger than Bob also does not show that girls are stronger than boys; it just shows that Sally is unusual in her strength. Likewise, if Sally lives in poverty but does exceptionally well on the cognitive tests and has a normal cortex, this does not prove that poverty does not have a negative impact on the brain. This leads to the question as to whether poverty is a causal factor in brain development.

As the saying goes, correlation is not causation. To infer that because there is a correlation between poverty and cognitive abilities there must be a causal connection would be to fall victim to a causal fallacy. One possibility is that the correlation is a mere coincidence and there is no causal connection. Another possibility is that there is a third factor that causes both, so that poverty and the cognitive deficits are both effects.
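The third-factor possibility is easy to illustrate with a toy simulation (the variables here are hypothetical stand-ins, not a model of the actual research): a hidden factor Z drives both X and Y, so X and Y end up correlated even though neither causes the other.

```python
import random

random.seed(1)

# Hidden factor Z drives both X and Y; X and Y have no causal link to each other.
z = [random.gauss(0, 1) for _ in range(100_000)]
x = [zi + random.gauss(0, 1) for zi in z]  # X = Z + independent noise
y = [zi + random.gauss(0, 1) for zi in z]  # Y = Z + independent noise

def corr(a: list, b: list) -> float:
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(f"corr(X, Y) = {corr(x, y):.2f}")  # about 0.50, with no direct causation
```

In this toy setup, changing X would do nothing to Y; only the hidden factor matters. Sorting out which causal structure actually fits the poverty data is precisely what a controlled experiment is for.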

There is also the possibility that the causal connection has been reversed. That is, it is not poverty that increases the chances a person has less cortical surface (and corresponding capabilities). Rather, it is having less cortical surface area that is a causal factor in poverty.

This view does have some appeal. As noted above, children in poverty tend to do worse on tests for language skills, memory, self-control and focus. These are the capabilities that are useful for success, and people who are less capable will tend to be less successful. Unless, of course, they are simply born into “success.” To use an analogy, there is a correlation between running speed and success in track races. It is not losing races that makes a person slow. It is being slow that causes a person to lose races.

Despite the appeal of this interpretation, to rush to the conclusion that it is cognitive abilities that cause poverty would be as much a fallacy as just rushing to the conclusion that poverty must influence brain development. Both views appear plausible, and it is possible that causation is going in both directions. The challenge is to sort the causation. The obvious approach is to conduct the controlled experiment suggested by Noble: providing an experimental group of low-income families with an income supplement and providing the control group with a relatively tiny supplement. If the experiment is conducted properly and the sample size is large enough, the results would be statistically significant and provide an answer to the question of the causal connection.

Intuitively, it makes sense that an adequate family income would have a positive impact on the development of children. After all, adequate income would allow access to adequate food, care and education. It would also tend to have a positive impact on family conditions, such as reducing emotional stress. This is not to say that just “throwing money at poverty” is a cure-all; but reducing poverty is a worthwhile goal regardless of its connection to brain development. If it does turn out that poverty has a negative impact on development, then those who claim to be concerned with the well-being of children should be motivated to combat poverty. It would also serve to undercut another American myth: that the poor are stuck in poverty simply because they are lazy. If poverty has the damaging impact on the brain it seems to have, then this would help explain why poverty is such a trap.

 

While there are many moral theories, two of the best known are utilitarianism and deontology. John Stuart Mill presents the paradigm of utilitarian ethics: the morality of an action is dependent on the happiness and unhappiness it creates for the morally relevant beings. Moral status, for this sort of utilitarian, is defined in terms of the being’s capacity to experience happiness and unhappiness. Beings count to the degree they can experience these states. A being that could not experience either would not count, except to the degree that what happened to it affected beings that could experience happiness and unhappiness. Of course, even a being that has moral status merely gets included in the utilitarian calculation. As such, all beings are means to the ends of maximizing happiness and minimizing unhappiness.

Kant, the paradigm deontologist, rejects the utilitarian approach.  Instead, he contends that ethics is a matter of following the correct moral rules. He also contends that rational beings are ends and are not to be treated merely as means to ends. For Kant, the possible moral statuses of a being are binary: rational beings have status as ends, non-rational beings are mere objects and are thus means. As would be expected, these moral theories present two different approaches to the ethics of slavery.

For the classic utilitarian, the ethics of slavery would be assessed in terms of the happiness and unhappiness generated by the activities of slavery. On the face of it, an assessment of slavery would seem to result in the conclusion that slavery is morally wrong. After all, slavery typically generates unhappiness on the part of the enslaved. This unhappiness is not only a matter of the usual abuse and exploitation a slave suffers, but also the general damage to happiness that arises from being regarded as property rather than a person. While the slave owners are clearly better off than the slaves, the practice of slavery is often harmful to the happiness of the slave owners as well; one might argue they deserve such suffering and could avoid it by not being slave owners. As such, the harms of slavery would seem to make it immoral on utilitarian grounds.

For the utilitarian the immorality of slavery is contingent on its consequences: if enslaving people creates more unhappiness than happiness, then it is wrong. However, if enslaving people were to create more happiness than unhappiness, then it would be morally acceptable. A reply to this is to argue that slavery, by its very nature, would always create more unhappiness than happiness. As such, while the evil of slavery is contingent, it would always turn out to be wrong.

An interesting counter is to put the burden of proof on those who claim that such slavery would be wrong. That is, they would need to show that a system of slavery that maximized happiness was morally wrong. On the face of it, showing that something that creates more good than bad is still bad would be challenging. However, there are numerous appeal-to-intuition arguments that aim to do just that. The usual approach is to present a scenario that generates more happiness than unhappiness but intuitively seems wrong, or at least makes one feel morally uncomfortable. Ursula K. Le Guin’s classic short story “The Ones Who Walk Away from Omelas” is often used in this role, for it asks us to imagine a utopia that exists at the cost of the suffering of one person. There are also other options, such as arguing within the context of another moral theory. For example, a natural rights theory that included a right to liberty could be used to argue that slavery is wrong because it violates rights, even if it happened to be a happiness-maximizing slavery.

A utilitarian can also “bite the bullet” and argue that even if such slavery might seem intuitively wrong, this is a mere prejudice on our part, most likely fueled by examples of the unhappy slaveries that pervade history. While utilitarian moral theory can obviously be applied to the ethics of slavery, it is not the only word on the matter. As such, I now turn to the Kantian approach.

As noted above, Kant divides reality into two distinct classes of beings. Rational beings exist as ends and to use them solely as means would be, for Kant, morally wrong. Non-rational beings, which includes non-human animals, are mere objects. Interestingly, as I have noted in other essays and books, Kant argues that animals should be treated well because treating them badly can incline humans to treat other humans badly. This, I have argued elsewhere, gives animals an ersatz moral status.

On the face of it, under Kant’s theory the very nature of slavery would make it immoral. If persons are rational beings and slavery treats people as objects, then slavery would be wrong. First, it would involve treating a rational being solely as a means. After all, it is difficult to imagine that enslaving a person is consistent with treating them as an end rather than just as a means. Second, it would also seem to involve a willful category error by treating a rational being (which is not an object) as an object. Slavery would thus be fundamentally incoherent because it purports that non-objects (people) are objects.

Since Kantian ethics do not focus on happiness and unhappiness, even a deliriously happy system of slavery would still be wrong for Kant. Kant does, of course, get criticized because his system relegates non-rational beings into the realm of objects, thus lumping together squirrels and stones, apes and asphalt, tapirs and twigs and so on. As such, if non-rational beings could be enslaved, then this would not matter morally (unless doing so impacted rational beings in negative ways). The easy and obvious reply to this concern is to argue that non-rational beings could not be enslaved because slavery is when people are taken to be property and non-rational beings are not people on Kant’s view. Non-rational animals can be mistreated and harmed, but they cannot be enslaved.

It is, of course, possible to have an account of what it is to be a person that extends personhood beyond rational beings. For example, opponents of abortion often contend the zygote is a person despite its obvious lack of rationality. Fortunately, it would be easy enough to create a modification of Kant’s theory in which what matters is being a person (however defined) rather than being a rational being.

Thus, utilitarian ethical theories leave open the possibility that slavery could be morally acceptable while under a Kantian account slavery would always be morally wrong.