While there are ongoing efforts to revise the story of the Confederate States of America from one of slavery to one of states' rights, secession from the Union was because of slavery. At the time of secession, the Confederate leaders explicitly said this was their primary motivation. This is not to deny there were other motivations, such as concerns about states' rights and economic factors. But the Confederacy's moral and economic foundation was slavery. This is a rejection of the principle that all men are created equal, a rejection of the notion of liberty, and an abandonment of the idea that the legitimacy of government rests on the consent of the governed. In short, the Confederacy was an explicit rejection of the professed values of the United States, other than white supremacy.

While the Confederacy lost and the Union was restored, its values survived and are now manifested by the alt-right and, increasingly, the right. This is shown by their defense of Confederate monuments, their use of Confederate flags, and their racism. They are aware of the moral foundations of their movement.

While the value system of the Confederacy embraced white supremacy and accepted slavery as a moral good, it did not accept genocide. That is, the Confederacy advocated enslaving blacks rather than exterminating them. Extermination was something the Nazis eventually embraced.

The Nazis took over the German state and plunged the world into war. Like the Confederate states, the Nazis embraced the idea of white supremacy and rejected equality and liberty. The Nazis also made extensive use of slave labor. Unlike the Confederate states, the Nazis engaged in a systematic effort to exterminate those they regarded as inferior. This does mark a moral distinction between the Confederate States of America and Nazi Germany, but it is a distinction between degrees of evil.

While the Nazis were once regarded by most Americans as a paradigm of evil, many in the alt-right embrace their values and some do so explicitly and openly, identifying as neo-Nazis. Some claim they do not want to exterminate those they regard as other races but instead want racially pure states. For example, some antisemites on the right support Israel because they see it as a Jewish state; a place where all the Jews should be. In their ideal world, each state would be racially pure. This is why the alt-right is sometimes also known as the white nationalists. The desire to have pure states can be seen as morally better than the desire to exterminate, but this is a distinction in evils rather than one between good and bad.

Based on the above, the modern alt-right (and increasingly the American right) is the inheritor of the Confederate States of America and Nazi Germany. While this might seem a matter of mere historic interest, it has important implications. One is that it provides grounds for regarding members of the alt-right as on par with members or supporters of ISIS or other foreign terrorist groups hostile to the United States. This is in contrast with seeing the alt-right as entirely domestic.

Those who join or support ISIS (and other such groups) are seen as different from domestic hate groups. This is because ISIS (and other such groups) are foreign and in conflict with the United States. This applies even when the ISIS supporter is an American who lives in America. This perceived difference has numerous consequences, including legal ones. It also has consequences for free speech. While advocating the goals and values of ISIS in the United States would be treated as a threat and could result in criminal charges, the alt-right is protected by the right to free speech. This is illustrated by the fact that the alt-right can get permits to march in the United States, while ISIS supporters and similar groups cannot. One can imagine the response if ISIS or Hamas supporters applied for a permit or engaged in a march.

While some hate groups are truly domestic in that they are not associated with foreign organizations at war with the United States, the alt-right cannot make this claim. At least it cannot to the degree that it is connected to the Confederate States of America and the Nazis. Both were foreign powers at war with the United States. As such, the alt-right should be seen as on par with other groups that affiliate themselves with foreign groups engaged in war with the United States.

An obvious reply is that the Confederacy and the Nazis were defeated and no longer exist. On the one hand, this is true. The Confederacy was destroyed, and the states rejoined the United States. The Nazis were defeated and while Germany still exists, it is not controlled by the Nazis. At least not yet. On the other hand, the Confederacy and the Nazis do persist in the form of groups that preserve their values and ideology here in the United States. To use the obvious analogy, the defeat of ISIS and its territorial losses did not end the group. It will persist as long as it has supporters, and the United States has not switched to a policy of tolerating ISIS members and supporters simply because ISIS no longer has territory.

The same holds true for those supporting or claiming membership in the Confederacy or the Nazis. They are supporters of foreign powers that are enemies of the United States and are thus on par with ISIS supporters and members in that they are agents of the enemy. This is not to say that the alt-right is morally equivalent to ISIS in terms of its actions. Overall, ISIS is worse. But what matters in this context is the expression of allegiance to the values and goals of a foreign enemy—something ISIS supporters and alt-right members who embrace the Confederacy or Nazis have in common.

Briefly put, right-to-try laws give terminally ill patients the right to try experimental treatments that have completed Phase 1 testing but have yet to be approved by the FDA. Phase 1 testing involves assessing the immediate toxicity of the treatment. This does not include testing its efficacy or its longer-term safety. Roughly put, passing Phase 1 just means that the treatment does not immediately kill or significantly harm patients.

On the face of it, no sensible person would oppose the right-to-try. The idea is that people who have “nothing to lose” should be given the right to try treatments that might help them. The bills and laws are sold with the rhetorical narrative that right-to-try laws would give desperate patients the freedom to seek medical treatment that might save them, and that this would be done by getting the FDA and the state out of the way. This is powerful rhetoric that appeals to compassion, freedom and a dislike of the government. As such, it is not surprising that few people dare oppose the right-to-try. However, the matter does deserve proper critical consideration.

One way to look at it is to consider an alternative reality in which the narrative is spun with a different rhetorical charge, a negative spin rather than a positive one. Imagine, for a moment, if the rhetorical engines had cranked out a tale of how the bills would strip away the protection of the desperate and dying to allow predatory companies to use them as guinea pigs for their untested treatments. If that narrative had been sold, people would probably be opposed to such laws. But rhetorical narratives, positive or negative, are logically inert and are irrelevant to the merits of the right-to-try. How people feel about the proposals is likewise logically irrelevant. What is needed is a cool examination of the matter.

On the positive side, the right-to-try does offer people the chance to try treatments that might help them. It is hard to argue that terminally ill people do not have a right to take such risks. That said, there are still some concerns.

One concern is that there is already an established mechanism allowing patients access to experimental treatments. The FDA already has a system that approves most requests. Somewhat ironically, when people argue for the right-to-try by using examples of people successfully treated by experimental methods, they are showing that the existing system already allows such access. This raises the question of why the laws are needed and what they change.

The main change is usually to reduce the role of the FDA. Without such laws, requests to use experimental methods must go through the FDA (which seems to approve most requests). If the FDA routinely denied requests, then such laws would seem needed. However, the FDA does not seem to be the problem, as it generally does not roadblock the use of experimental methods for the terminally ill. This leads to the question of what is limiting patient access.

The main limiting factors are those that impact almost all treatment access: costs and availability. While right-to-try laws grant the negative right to choose experimental methods, they do not grant the positive right to be provided with those methods. A negative right is a liberty: one is free to act upon it but is not provided with the means to do so; the means must be acquired by the person. A positive right is an entitlement: the person is free to act and is provided with the means of doing so. In general, the right-to-try does little or nothing to ensure that treatments are provided. For example, public money is usually not allocated to pay for them. As such, the right-to-try is like the right to healthcare: you are free to get it if you can pay for it. Since the FDA does not roadblock access to experimental treatments, the bills and laws would seem to do little or nothing new to benefit patients. That said, the general idea of the right-to-try seems reasonable and is already practiced. While few are willing to bring them up in public discussions, there are some negative aspects to the right-to-try. I will turn to some of those now.

One obvious concern is that terminally ill patients do have something to lose. Experimental treatments could kill them sooner or cause needless suffering. As such, it does make sense to have limits on the freedom to try. At least for now, it is the job of the FDA and medical professionals to protect patients from such harms, even if the patients want to roll the dice.

This concern can be addressed by appealing to freedom of choice, provided patients can give informed consent. This does create a problem: since little is known about the treatment, the patient cannot be well informed about the risks and benefits. But, as I have argued often elsewhere, I accept that people have a right to make such choices, even if these choices are self-damaging. I apply this principle consistently, so I accept that it grants the right-to-try, the right to get married, the right to eat poorly, the right to use drugs, and so on.

The usual counters to such arguments from freedom involve arguments about how people must be protected from themselves, arguments that such freedoms are “just wrong” or arguments about how such freedoms harm others. The idea is that moral or practical considerations override the freedom of the individual. This can be a reasonable counter, and a strong case can be made against allowing people the right to engage in a freedom that could harm or kill them. However, my position on such freedoms requires me to accept that a person has the right-to-try, even if it is a bad idea. That said, others have an equally valid right to try to convince them otherwise and the FDA and medical professionals have an obligation to protect people, even from themselves.

A philosophical problem is determining what can, and perhaps more importantly cannot, be owned. There is considerable dispute over this subject; one classic example is the debate over whether people can be owned. A more recent example is the debate over the ownership of genes. While each dispute needs to be addressed on its own merits, it is worth considering the broader question of what can and cannot be property. It must be noted that this is not just a question of legal ownership.

Addressing this subject begins with the foundation of ownership, that which justifies the claim that one owns something. This is the philosophical problem of property. Most people are probably unaware that this is a philosophical problem, since they tend to accept the current system of ownership, though they do criticize its particulars. But to simply assume the existing system of property is correct (or incorrect) is to beg the question; the problem of property needs to be addressed without assuming it has already been solved.

One practical solution to the problem of property is to argue that property is a convention. This can be a formal convention (such as laws), an informal convention (such as traditions), or a combination of both. One reasonable view is property legalism, the view that ownership is defined by the law. On this view, whatever the law defines as property is property. Another reasonable view is property relativism, the view that ownership is defined by cultural practices (which can include the laws). Roughly put, whatever the culture accepts as property is property. These approaches correspond to the moral theories of legalism (that the law determines morality) and ethical relativism (that culture determines morality).

The conventionalist approach seems to have the virtues of being practical and of avoiding mucking about in philosophical disputes. If there is a dispute about what (or who) can be owned, the matter is settled by the courts, by force of arms or by force of persuasion. There is no question of what view is right as winning makes the view right. While this approach does have its appeal, it is not without problems.

Trying to solve the problem of property with the conventionalist approach does lead to a dilemma: the conventions are either based on some foundation or they are not. If the conventions are not based on a foundation other than force (of arms or persuasion), then they are arbitrary. If this is the case, the only reasons to accept such conventions are practical, such as to avoid harm (such as being killed) or to profit.

If the conventions have a foundation, then the problem is determining what it might be. One approach is to argue that people have a moral obligation to obey the law or follow cultural conventions. While this would provide a basis for a moral obligation to accept the conventions, these conventions would still be arbitrary. Roughly put, those under the conventions would have a reason to accept whatever conventions exist, but no reason to accept a specific convention over another. This is analogous to the ethics of divine command theory, the view that what God commands is good because He commands it and what He forbids is evil because He forbids it. As should be expected, the “convention command” view of property suffers from problems analogous to those suffered by divine command theory, such as the arbitrariness of the commands and the lack of justification beyond obedience to authority.

One classic moral solution to the problem of property is offered by utilitarianism. On this view, the theory of property that creates more positive value than negative value for the morally relevant beings would be the morally correct practice. It does make property a contingent matter since radically different conceptions of property can be thus justified depending on the changing balance of harms and benefits. So, for example, while a capitalistic conception of property might be justified at a certain place and time, that might shift in favor of a socialist conception. As always, utilitarianism leaves the door open for intuitively horrifying practices that manage to fulfill that condition. However, this approach also has an intuitive appeal in that the view of property that creates the greatest good would be the morally correct view of property.

A classic attempt to solve the problem of property is offered by John Locke. He begins with the view that God created everyone and gave everyone the earth in common. While God does own us, He is cool about it and effectively lets each person own themselves. As such, I own myself and you own yourself. From this, as Locke sees it, it follows that each of us owns our labor.

For Locke, property is created by mixing one’s labor with the common goods of the earth. To illustrate, suppose we are washed up on an island owned by no one. If I collect wood and make a shelter, I have mixed my labor with the wood, thus making the shelter my own. If you make a shelter with your labor, it is thus yours. On Locke’s view, it would be theft for me to take your shelter and theft for you to take mine.

This labor theory of ownership quickly runs into problems, such as working out a proper account of mixing of labor and what to do when people are born on a planet on which everything is already claimed and owned. However, the idea that the foundation of property is that each person owns themselves is an intriguing one and does have some interesting implications about what can (and cannot) be owned. One implication would seem to be that people are owners and cannot be owned. For Locke, this would be because each person is owned by themselves, and ownership of other things is conferred by mixing one’s labor with what is common to all.

It could be contended that people create other people by their labor (literally in the case of the mother) and thus parents own their children. A counter to this is that although people do engage in sexual activity that results in the production of other people, this should not be considered labor in the sense required for ownership. After all, the parents just have sex and then the biological processes do all the work of constructing the new person. One might also play the metaphysical card and contend that what makes the person a person is not manufactured by the parents but is something metaphysical like the soul or consciousness (for Locke, a person is their consciousness and the consciousness is within a soul).

Even if it is accepted that parents do not own their children, there is the obvious question about manufactured beings that are like people such as intelligent robots or biological constructs. These beings would be created by mixing labor with other property (or unowned materials) and thus would seem to be things that could be owned. Unless, of course, they are owners like humans.

One approach is to consider them analogous to children. It is not how children are made that makes them unsuitable for ownership; it is what they are. On this view, people-like constructs would be owners rather than things to be owned. The intuitive counter is that people-like manufactured beings would be property like anything else that is manufactured. The challenge is, of course, to show that this would not entail that children are property. After all, considerable resources and work can be expended to create a child (such as IVF, surrogacy, and perhaps someday artificial wombs), yet intuitively children are not property. This does point to a rather important question: is it what something is that makes it unsuitable to be owned, or how it is created?

 

Before getting into the discussion, I should note that I am not a medical professional, so what follows should be met with due criticism and you should consult an expert before making changes to your exercise or nutrition practices. Or you might die. Probably not. But maybe.

As any philosopher will tell you, while the math used in science is deductive (the premises are supposed to guarantee the conclusion with certainty), scientific reasoning is inductive (the premises provide some degree of support for the conclusion that is less than complete). Because of this, science suffers from what philosophers call the problem of induction. In practical terms, this means that no matter how careful the reasoning and no matter how good the evidence, the inference can still be false. The reason is that inductive reasoning involves a “leap” from the premises/evidence (what has been observed) to the conclusion (what has not been observed). Put bluntly, inductive reasoning always has a chance of leading to a false conclusion. But this appears unavoidable, as life seems inductive.

Scientists and philosophers have tried to make science entirely deductive. For example, Descartes believed he could find truths that he could not doubt and then use valid deductive reasoning to generate a true conclusion with absolute certainty. Unfortunately, this science of certainty is the science of the future and (probably) always will be. So, we are stuck with induction.

The problem of induction applies to the sciences that study nutrition, exercise and weight loss and the conclusions made in these sciences can always be wrong. This helps explain why recommendations change relentlessly.

While there are philosophers of science who would disagree, science is a matter of trying to figure things out by doing the best we can at this time. This is limited by the available resources (such as technology) and human epistemic capabilities. As such, whatever science presents now is almost certainly at least partially wrong, though the errors often get reduced over time (sometimes they increase). This is true of all the sciences. Consider, for example, the changes in physics since Thales got it started. This also helps explain why recommendations about diet and exercise change so often.

While science is sometimes idealized as a field of pure reason outside of social influences, science is also a social activity. Because of this, science is influenced by social factors and human flaws. For example, scientists need money to fund their research and can be vulnerable to corporations looking to “prove” claims that are in their interest. As another example, scientific subjects can become issues of political controversy, such as race, evolution and climate change. This politicization tends to be bad for science and anyone who does not profit from manufacturing controversy. As a final example, scientists can be motivated by pride and ambition to fake or modify their findings. Because of these factors, the sciences dealing with nutrition and exercise are, to a meaningful degree, corrupted and this makes it difficult to make a rational judgment about which claims are true. One excellent example is how the sugar industry paid scientists at Harvard to downplay the health risks presented by sugar and play up those presented by fat. Another illustration is the fact that the food pyramid endorsed by the US government has been shaped by the food industries rather than being based entirely on good science.

Given these problems it might be tempting to abandon mainstream science and go with whatever food or exercise ideology one finds appealing. That would be a bad idea. While science suffers from these problems, mainstream science is better than the nonscientific alternatives. They tend to have all the problems of science without any of its strengths. So, what should one do? The rational approach is to accept the majority opinion of qualified and credible experts. One should also keep in mind the above problems and approach the science with due skepticism.

So, what does the best science of today say about weight loss? First, humans evolved as hunter-gatherers and getting enough calories was a challenge. Humans tend to be very good at storing energy in the form of fat which is one reason the calorie rich environment of modern society contributes to obesity. Crudely put, it is in our nature to overeat because that once meant the difference between life and death.

Second, while exercise does burn calories, it burns far less than many imagine. For most people, most of the calorie burning is a result of the body staying alive. As such, while exercising more could help a person lose weight, the calorie impact of exercise is surprisingly low. That said, you should exercise (if you can) if only for the health benefits.
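To make the point concrete, here is a rough back-of-the-envelope sketch in Python. All the calorie figures are my own assumed approximations for illustration (they are not from any study cited here, and real values vary widely by person and activity):

```python
# Rough, illustrative arithmetic; every figure below is an assumed
# approximation, not data from this essay.
resting_burn_per_day = 1600   # assumed resting metabolic burn, kcal/day
jog_30_min = 300              # assumed cost of a 30-minute jog, kcal
large_cookie = 220            # assumed calories in one large cookie, kcal

print(f"Jog as a share of resting daily burn: {jog_30_min / resting_burn_per_day:.0%}")
print(f"Cookies offset by one jog: {jog_30_min / large_cookie:.1f}")
```

On these assumed numbers, a half-hour jog burns only about a fifth of what the body burns just staying alive and barely offsets a single large cookie, which is the sense in which the calorie impact of exercise is surprisingly low.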

Third, hunger is a function of the brain, and the brain responds differently to different foods. Foods high in protein and fiber create a feeling of fullness that tends to turn off the hunger signal. Foods with a high glycemic index (like cake) tend to stimulate the brain in ways that lead people to consume more calories. As such, manipulating your brain is an effective way to increase the chance of losing weight. Interestingly, as Aristotle argued, habituation can train the brain to prefer foods that are healthier. You can train yourself to prefer things like nuts, broccoli and oatmeal over cookies, cake, and soda. This takes time and effort but can be done.

Fourth, weight loss has diminishing returns: as one loses weight, one’s metabolism slows, and less energy is needed. As such, losing weight makes it harder to lose weight, which is something to keep in mind.  Naturally, all these claims could be disproven tomorrow, but they seem reasonable now.

 

Central to our American mythology is the belief that a person can rise to the pinnacle of success from the depths of poverty. While this does happen, poverty presents an undeniable obstacle to success. Tales within this myth of success present an inconsistent view of poverty: the hero is praised for overcoming the incredible obstacle of poverty, while it is also claimed that anyone with gumption should be able to succeed. The achievement is thus claimed to be heroic yet easy and expected.

Outside of myths, poverty is difficult to overcome. There are the obvious challenges of poverty. For example, a person born into poverty will not have the same educational opportunities as the affluent. As another example, they will have less access to technology such as computers and high-speed internet. As a third example, there are the impacts of diet and health care; these necessities are expensive, and the poor have less access to good food and good care. There is also research by scientists such as Kimberly G. Noble that suggests a link between poverty and brain development.

While the most direct way to study the impact of poverty on the brain is by imaging the brain, this is expensive. However, research shows a correlation between family income and the size of some surface areas of the cortex. For children whose families make under $50,000 per year, there is a strong correlation between income and the surface area of the cortex. While greater income is correlated with greater cortical surface area, the apparent impact is reduced once income exceeds $50,000 a year. This suggests, but does not prove, that poverty has a negative impact on the development of the cortex and that this impact is proportional to the degree of poverty.

Because of the cost of direct research on the brain, most research focuses on cognitive tests that indirectly test the brain. Children from lower income families perform worse than their more affluent peers in their language skills, memory, self-control and focus. This performance disparity cuts across ethnicity and gender.

As would be expected, there are individuals who do not conform to this general correlation and there are children from disadvantaged families who perform well on the tests and children from advantaged families who do poorly. Knowing the economic class of a child does not automatically reveal what their individual capabilities are. However, there is a correlation in terms of populations rather than individuals. This needs to be remembered when assessing anecdotes of successful rising from poverty. As with all appeals to anecdotal evidence, they do not outweigh statistical evidence.

To use an analogy, boys tend to be stronger than girls but knowing that Sally is a girl does not mean that Sally is certainly weaker than Bob the boy. An anecdote about how Sally is stronger than Bob also does not show that girls are stronger than boys; it just shows that Sally is unusual in her strength. Likewise, if Sally lives in poverty but does exceptionally well on the cognitive tests and has a normal cortex, this does not prove that poverty does not have a negative impact on the brain. This leads to the question as to whether poverty is a causal factor in brain development.

As the saying goes, correlation is not causation. To infer that because there is a correlation between poverty and cognitive abilities there must be a causal connection would be to fall victim to a causal fallacy. One possibility is that the correlation is a mere coincidence and there is no causal connection. Another possibility is that there is a third factor that is causing both, so that poverty and the reduced cognitive abilities are both effects.
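To illustrate this reasoning step, here is a minimal simulation sketch in Python. The variables and numbers are entirely hypothetical; the point is only that a hidden common cause can produce a correlation between two measures even when neither causes the other:

```python
# Minimal sketch with purely hypothetical numbers: a hidden third factor Z
# drives both "poverty" and "test score", producing a correlation between
# them even though neither directly causes the other.
import random

random.seed(0)
z = [random.gauss(0, 1) for _ in range(10_000)]      # hidden common cause
poverty = [zi + random.gauss(0, 1) for zi in z]      # depends only on Z
score = [-zi + random.gauss(0, 1) for zi in z]       # depends only on Z

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Prints a strong negative correlation despite the absence of any direct causal link.
print(f"correlation: {corr(poverty, score):.2f}")
```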

There is also the possibility that the causal connection has been reversed. That is, it is not poverty that increases the chances a person has less cortical surface (and corresponding capabilities). Rather, it is having less cortical surface area that is a causal factor in poverty.

This view does have some appeal. As noted above, children in poverty tend to do worse on tests for language skills, memory, self-control and focus. These are the capabilities that are useful for success, and people who are less capable will tend to be less successful. Unless, of course, they are simply born into “success.” To use an analogy, there is a correlation between running speed and success in track races. It is not losing races that makes a person slow. It is being slow that causes a person to lose races.

Despite the appeal of this interpretation, to rush to the conclusion that it is cognitive abilities that cause poverty would be as much a fallacy as rushing to the conclusion that poverty must influence brain development. Both views appear plausible, and it is possible that causation runs in both directions. The challenge is to sort out the causation. The obvious approach is to conduct the controlled experiment suggested by Noble: providing an experimental group of low-income families with an income supplement and providing the control group with a relatively tiny supplement. If the experiment is conducted properly and the sample size is large enough, the results could reveal whether there is a statistically significant effect and provide an answer to the question of the causal connection.
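As a rough illustration of how the results of such an experiment might be assessed, here is a toy two-group comparison in Python. The group sizes, score scale and effect size are hypothetical stand-ins of my own, not Noble's actual design or data:

```python
# Toy sketch with hypothetical numbers: compare a cognitive-test outcome
# between families given a large income supplement and families given a
# small one, then estimate how far the difference stands out from noise.
import random
import statistics

random.seed(1)
n = 1000
supplement = [random.gauss(102, 15) for _ in range(n)]  # assumed treatment-group scores
control = [random.gauss(100, 15) for _ in range(n)]     # assumed control-group scores

diff = statistics.mean(supplement) - statistics.mean(control)
se = (statistics.pvariance(supplement) / n + statistics.pvariance(control) / n) ** 0.5
print(f"mean difference: {diff:.2f}, approximate z-score: {diff / se:.2f}")
# With a large enough sample, even a modest true effect yields a clear z-score.
```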

Intuitively, it makes sense that an adequate family income would have a positive impact on the development of children. After all, adequate income allows access to adequate food, care and education. It would also tend to have a positive impact on family conditions, such as reducing emotional stress. This is not to say that just “throwing money at poverty” is a cure-all; but reducing poverty is a worthwhile goal regardless of its connection to brain development. If it does turn out that poverty has a negative impact on development, then those who claim to be concerned with the well-being of children should be motivated to combat poverty. It would also serve to undercut another American myth, that the poor are stuck in poverty simply because they are lazy. If poverty has the damaging impact on the brain it seems to have, then this would help explain why poverty is such a trap.

 

While there are many moral theories, two of the best known are utilitarianism and deontology. John Stuart Mill presents the paradigm of utilitarian ethics: the morality of an action is dependent on the happiness and unhappiness it creates for the morally relevant beings. Moral status, for this sort of utilitarian, is defined in terms of the being’s capacity to experience happiness and unhappiness. Beings count to the degree they can experience these states. A being that could not experience either would not count, except to the degree that what happened to it affected beings that could experience happiness and unhappiness. Of course, even a being that has moral status merely gets included in the utilitarian calculation. As such, all beings are means to the ends of maximizing happiness and minimizing unhappiness.

Kant, the paradigm deontologist, rejects the utilitarian approach.  Instead, he contends that ethics is a matter of following the correct moral rules. He also contends that rational beings are ends and are not to be treated merely as means to ends. For Kant, the possible moral statuses of a being are binary: rational beings have status as ends, non-rational beings are mere objects and are thus means. As would be expected, these moral theories present two different approaches to the ethics of slavery.

For the classic utilitarian, the ethics of slavery would be assessed in terms of the happiness and unhappiness generated by the activities of slavery. On the face of it, an assessment of slavery would seem to result in the conclusion that slavery is morally wrong. After all, slavery typically generates unhappiness on the part of the enslaved. This unhappiness is not only a matter of the usual abuse and exploitation a slave suffers, but also the general damage to happiness that arises from being regarded as property rather than a person. While the slave owners are clearly better off than the slaves, the practice of slavery is often harmful to the happiness of the slave owners as well; one might argue they deserve such suffering and could avoid it by not being slave owners. As such, the harms of slavery would seem to make it immoral on utilitarian grounds.

For the utilitarian the immorality of slavery is contingent on its consequences: if enslaving people creates more unhappiness than happiness, then it is wrong. However, if enslaving people were to create more happiness than unhappiness, then it would be morally acceptable. A reply to this is to argue that slavery, by its very nature, would always create more unhappiness than happiness. As such, while the evil of slavery is contingent, it would always turn out to be wrong.

An interesting counter is to put the burden of proof on those who claim that such slavery would be wrong. That is, they would need to show that a system of slavery that maximized happiness was morally wrong. On the face of it, showing that something that created more good than bad is still bad would be challenging. However, there are numerous appeal-to-intuition arguments that aim to do just that. The usual approach is to present a scenario that generates more happiness than unhappiness, but intuitively seems to be wrong or at least makes one feel morally uncomfortable. Ursula K. Le Guin’s classic short story “The Ones Who Walk Away from Omelas” is often used in this role, for it asks us to imagine a utopia that exists at the cost of the suffering of one person. There are also other options, such as arguing within the context of another moral theory. For example, a natural rights theory that included a right to liberty could be used to argue that slavery is wrong because it violates rights, even if it happened to be a happiness-maximizing slavery.

A utilitarian can also “bite the bullet” and argue that even if such slavery might seem intuitively wrong, this is a mere prejudice on our part, most likely fueled by the examples of unhappy slaveries that pervade history. While utilitarian moral theory can obviously be applied to the ethics of slavery, it is not the only word on the matter. As such, I now turn to the Kantian approach.

As noted above, Kant divides reality into two distinct classes of beings. Rational beings exist as ends, and to use them solely as means would be, for Kant, morally wrong. Non-rational beings, which include non-human animals, are mere objects. Interestingly, as I have noted in other essays and books, Kant argues that animals should be treated well because treating them badly can incline humans to treat other humans badly. This, I have argued elsewhere, gives animals an ersatz moral status.

On the face of it, under Kant’s theory the very nature of slavery would make it immoral. If persons are rational beings and slavery treats people as objects, then slavery would be wrong. First, it would involve treating a rational being solely as a means. After all, it is difficult to imagine that enslaving a person is consistent with treating them as an end rather than just as a means. Second, it would also seem to involve a willful category error by treating a rational being (which is not an object) as an object. Slavery would thus be fundamentally incoherent because it purports that non-objects (people) are objects.

Since Kantian ethics do not focus on happiness and unhappiness, even a deliriously happy system of slavery would still be wrong for Kant. Kant does, of course, get criticized because his system relegates non-rational beings into the realm of objects, thus lumping together squirrels and stones, apes and asphalt, tapirs and twigs and so on. As such, if non-rational beings could be enslaved, then this would not matter morally (unless doing so impacted rational beings in negative ways). The easy and obvious reply to this concern is to argue that non-rational beings could not be enslaved because slavery is when people are taken to be property and non-rational beings are not people on Kant’s view. Non-rational animals can be mistreated and harmed, but they cannot be enslaved.

It is, of course, possible to have an account of what it is to be a person that extends personhood beyond rational beings. For example, opponents of abortion often contend the zygote is a person despite its obvious lack of rationality. Fortunately, it would be easy enough to create a modification of Kant’s theory in which what matters is being a person (however defined) rather than being a rational being.

Thus, utilitarian ethical theories leave open the possibility that slavery could be morally acceptable while under a Kantian account slavery would always be morally wrong.

 

While slavery is still practiced around the world, it is now broadly seen as evil. While apologists for slavery are relatively few, there remains the question as to why slavery is evil.

It is tempting to define the wrongness of slavery in terms of exploitation and abuse. While such abuse and exploitation are wrong, they are not adequate in explaining the wrongness of slavery itself. This is because abuse and exploitation can exist apart from slavery, thus showing that these are not sufficient conditions for slavery. Being abused and exploited does not entail that one is a slave. Examples of such abuse and exploitation are, unfortunately, abundant. If you work for a living, you are most likely exploited but you are almost certainly not a slave. Countless people suffer abuse in relationships from the very people who should be kind to them.

Abuse and exploitation are also not necessary conditions of slavery. That is, a person can be enslaved without being abused or exploited. As noted in an earlier essay, there have been slaves who have enjoyed considerable power and status. Despite their status and power, such slavery is still wrong. As such, it is not the abuse or exploitation that makes slavery wrong.

This is not to say that abuse and exploitation do not matter. When present, they compound the basic evil of slavery and make the bad even worse. Slavery is also strongly connected to abuse and exploitation. The belief that enslaved people are property makes it easy for others to justify and get away with abuse and exploitation. While free people are abused or exploited, they usually have more legal protection than the enslaved. So, while the abuse and exploitation matter, slavery serves as an enabler of mistreatment, and this contributes to the wrongness of slavery.

What makes slavery morally wrong, then, is that it purports to transform people into objects that can be owned. The claim of ownership over another person is the denial of their personhood and all that goes with it. For those with liberal Lockean inclinations, this denial of personhood is a denial of the basic rights to life, liberty and property. Since a slave is supposed to be property, their life supposedly belongs to the owner. Hence, slaveowners usually see themselves as having the right to kill or harm their slaves. I do not deny that slaves are sometimes protected by laws and that slavery does come in degrees. But every form of slavery must assume that the owner has ownership over the life of the slave and can use compulsion to maintain slavery.

Slavery, by its very nature, is a violation of a person’s liberty. Slaves are denied freedom of choice and denied agency. As the owner sees it, they have the right to make decisions for their property, such as what work they do, who they have sex with, and what faith they might follow. This is not to say that slaves do not have some freedom or that free people are completely free. It is to say that the freedoms of slaves are limited and often restricted to minor decisions. As noted above, slavery does admit of degrees, and in the past some favored or high-status slaves might enjoy considerable liberty. For example, a Mamluk ruler might enjoy greater liberty than a non-slave in their empire. It can be objected that such a slave would be a slave in name only. After all, a person of such status and power would be far better off than most despite being a slave. The challenge to those who argue that slavery is inherently wrong is to show that such an exalted slave is still wronged by their slavery. One approach is to appeal to the intuition that however exalted, the slave is still a slave. That is, they are regarded as property rather than as a free person, and this is inherently wrong.

Being regarded as property, slaves often cannot own property of their own. After all, being owned entails that their owner owns what they own. There are, of course, exceptions to this and sometimes slaves are paid for their work and can even eventually buy themselves out of slavery. While this does show, once again, that there are diverse types of slavery, the idea that a person should need to buy themselves seems absurd on the face of it.

Thus, while slavery does enable a multitude of evils, the core evil of slavery is the belief that a person can be owned as an object.

 

The term “robot” and the idea of a robot rebellion were introduced by Karel Capek in Rossumovi Univerzální Roboti. “Robot” is derived from the Czech term for “forced labor” which was itself based on a term for slavery. Robots and slavery are thus linked in science-fiction. This leads to a philosophical question: can a machine be a slave? Sorting this matter out requires an adequate definition of slavery followed by determining whether the definition can fit a machine.

In simple terms, slavery is the ownership of a person by another person. While slavery is often seen in absolute terms (one is either enslaved or not), there are degrees of slavery in that the extent of ownership can vary. For example, a slave owner might grant their slaves some free time or allow them some limited autonomy. This is analogous to being ruled under a political authority in that there are degrees of being ruled and degrees of freedom under that rule.

Slavery is also often characterized in terms of forcing a person to engage in uncompensated labor. While this account does have some appeal, it is flawed. After all, it could be claimed that slaves are compensated by being provided with food, shelter and clothing. Slaves are sometimes even paid wages and there are cases in which slaves have purchased their own freedom using these wages. The Janissaries of the Ottoman Empire were slaves yet were paid and enjoyed a socioeconomic status above many of the free subjects of the empire.  As such, compelled unpaid labor is not the defining quality of slavery. However, it is intuitively plausible to regard compelled unpaid labor as a form of slavery in that the compeller purports to own the laborer’s time without consent or compensation.

Slaves are also often presented as powerless and abused, but this is not always the case. For example, the Mamluk slave soldiers were treated as property that could be purchased, yet enjoyed considerable status and power. The Janissaries, as noted above, also enjoyed considerable influence and power. Meanwhile, there are free people who are powerless and routinely abused. Thus, being powerless and abused is neither necessary nor sufficient for slavery. As such, the defining characteristic of slavery is the claiming of ownership; that is, that the slave is property.

Obviously, not all forms of ownership are slavery. My running shoes are not enslaved by me, nor is my smartphone. This is because shoes and smartphones lack the moral status required to be considered enslaved. The matter becomes more controversial when it comes to animals.

Most people accept that humans have the right to own animals. For example, a human who has a dog or cat is referred to as the pet’s owner. But there are people who take issue with the ownership of animals. While some philosophers, such as Kant and Descartes, regard animals as objects, other philosophers argue they have moral status. For example, some utilitarians accept that the capacity of animals to feel pleasure and pain grants them moral status. This is typically taken as a status that requires their suffering be considered rather than one that morally forbids their being owned. That is, it is seen as morally acceptable to own animals if they are treated well. There are even people who consider any ownership of animals to be wrong, but their use of the term “slavery” for the ownership of animals seems more a metaphor than a considered philosophical position.

While I think that treating animals as property is morally wrong, I would not characterize the ownership of most animals as slavery. This is because most animals lack the status required to be enslaved. To use an analogy, denying animals religious freedom, the freedom of expression, the right to vote and so on does not oppress animals because they are not the sort of beings that can exercise these rights. This is not to say that animals cannot be wronged, just that their capabilities limit the wrongs that can be done to them. So, while an animal can be wronged by being cruelly confined, it cannot be wronged by denying it freedom of religion.

People, because of their capabilities, can be enslaved. This is because the claim of ownership over them is a denial of their rightful status. The problem is working out exactly what it is to be a person and this is something that philosophers have struggled with since the origin of the idea of persons. Fortunately, I do not need to provide such a definition when considering whether machines can be enslaved and can rely on an analogy to make my case.

While I believe that other humans are (usually) people, thanks to the problem of other minds I do not know that they really are people. Since I have no epistemic access to their (alleged) thoughts and feelings, I do not know if they have the qualities needed to be people or if they are just mindless automatons exhibiting an illusion of the personhood that I possess. Because of this, I must use an argument by analogy: these other beings act as I do, I am a person, so they are also people. To be consistent, I need to extend the same reasoning to beings that are not human, which would include machines. After all, without cutting open the apparent humans I meet, I have no idea whether they are organic beings or machines. So, the mere appearance of being organic or mechanical is not relevant; I must judge by how the entity functions. For all I know, you are a machine. For all you know, I am a machine. Yet it seems reasonable to regard both of us as people.

While machines can engage in some person-like behavior now, they cannot yet pass this analogy test. That is, they cannot consistently exhibit the capacities exhibited by a known person, namely me. However, this does not mean that machines could never pass this test. That is, they might someday behave in ways that would be sufficient to be accepted as a person if that behavior were exhibited by an organic human.

A machine that could pass this test would merit being regarded as a person in the same way that humans passing this test merit this status. As such, if a human person can be enslaved, then a robot person could also be enslaved.

It is, of course, tempting to ask if a robot with such behavior would really be a person. The same question can be asked about humans, thanks to that problem of other minds.

 

A common theme of dystopian science fiction is the enslavement of humanity by machines. Emma Goldman, an anarchist philosopher, also feared human servitude to the machines. In one of her essays on anarchism, she asserted that:

Strange to say, there are people who extol this deadening method of centralized production as the proudest achievement of our age. They fail utterly to realize that if we are to continue in machine subserviency, our slavery is more complete than was our bondage to the King. They do not want to know that centralization is not only the death-knell of liberty, but also of health and beauty, of art and science, all these being impossible in a clock-like, mechanical atmosphere.

When Goldman was writing in the 1900s, the world had just entered the industrial age, and the technology of today was but a dream of visionary writers. The slavery she envisioned was not of robot masters ruling over humanity, but humans compelled to work long hours in factories, serving the machines to serve the human owners of these machines. That this is still applicable today needs no argument.

The labor movements of the 1900s helped reduce the extent of this servitude, at least in Western countries. As the rest of the world industrialized, the story of servitude to the machine played out over and over. While the point of factory machines was to automate work so that a few could do the work of many, it is only recently that “true” automation has taken place, that is, having machines do the work instead of humans. For example, robots that assemble cars do what humans used to do. As another example, computers instead of human operators now handle phone calls.

In the eyes of utopians, this progress was supposed to free humans from tedious and dangerous work, allowing them freedom to engage in creative and rewarding labor. The reality is a dystopia. While automation has replaced humans in some tedious, low paying and dangerous jobs, automation has also replaced humans in what were once considered good jobs. Humans also continue to work in tedious, low paying and dangerous jobs because human labor is still cheaper or more effective than automation. For example, fast food chains do not use robots to prepare food. This is because cheap human labor is readily available. The dream that automation would free humanity remains a dream. Machines have mostly pushed humans out of jobs into other jobs, sometimes ones more suited for machines. If human well-being were considered important, this would not be happening.

Humans still work jobs like those condemned by Goldman. But, thanks to technology, humans are even more closely supervised and regulated by machines. For example, there is software designed to monitor employee productivity. As another example, some businesses use workplace cameras to watch employees. Obviously enough, this can be dismissed as not being enslavement by the machines, and defenders would say it is good human resource management, ensuring that human workers are operating efficiently. At the command of other humans, of course.

One technology that looks like servitude to the machine is warehouse picking, such as that done by Amazon employees. Amazon and other companies have automated some of the picking process, making use of robots for various tasks. But, while a robot might bring shelves to human workers, the humans are the ones picking the products for shipping. Since humans tend to have poor memories and get bored with picking, the human pickers have themselves been automated: they are told by computers what to do, and then they tell the computers what they have done. That is, the machines are the masters, and the humans are doing their bidding.

It is easy enough to argue that this sort of thing is not enslavement by machines. First, the computers controlling the humans are operating at the behest of the owners of Amazon who are (presumably) humans. Second, humans are paid for their labors and are not owned by the machines (or Amazon). As such, any enslavement of humans by machines is metaphorical.

Interestingly, the best case for human enslavement by machines can be made outside of the workplace. Many humans are now ruled by their smartphones and tablets, responding to every beep and buzz of their masters, ignoring those around them to attend to the demands of the device, and living lives revolving around the machine.

This can be easily dismissed as a metaphor. While humans are said to be addicted to their devices, they do not meet the definition of “slaves.” They willingly “obey” their devices and could turn them off. They are free to do as they want; they just do not want to disobey their devices. Humans are also not owned by their devices; rather, they own their devices. But it is reasonable to consider that humans are in a form of bondage: their devices have, by the design of other humans, seduced people into making them the focus of their attention and have thus become the masters.