Military science fiction often includes powered exoskeletons, also known as exoframes, exosuits or powered armor. A basic exoskeleton is a powered framework providing the wearer with enhanced strength. In movies such as Edge of Tomorrow and video games such as Call of Duty: Advanced Warfare, the exoskeleton provides improved mobility and carrying capacity but does not provide much armor. In contrast, powered armor provides the benefits of an exoskeleton while also providing protection. The powered armor of Starship Troopers, The Forever War, Armor and Iron Man all serve as classic examples of this sort of gear. The Space Marines of Warhammer 40K and the Sisters of Battle also wear powered armor. While the Sisters are “normal” humans, the Space Marines are enhanced super soldiers.

Because the exoskeletons of fiction provide soldiers with enhanced strength, mobility and carrying capacity, it makes sense that real militaries are interested in exoskeletons. While exoskeletons have yet to be deployed on the battlefield, there are already some ethical concerns about this sort of augmentation of soldiers.

On the face of it, using exoskeletons in warfare seems morally unproblematic. An exoskeleton is analogous to any other vehicle, with the exception that it is worn rather than driven. A normal car or even a bicycle provides a person with enhanced mobility and carrying capacity and this is not immoral. In terms of the military context, an exoskeleton would be comparable to a Humvee or a truck, both of which seem morally unproblematic as well.

It might be objected that the use of exoskeletons would give wealthier nations an unfair advantage in war. The easy and obvious response is that, unlike in sports and games, gaining an “unfair” advantage in war is not immoral. After all, there is no moral expectation that combatants will engage in a fair fight rather than taking advantage of such things as technology and numbers.

It might be objected that the advantage provided by exoskeletons would encourage countries that had them to engage in aggression they would not otherwise engage in. The obvious reply is that, despite the hype of video games and movies, any exoskeleton available soon would most likely not provide a great advantage to infantry. As such, the use of exoskeletons would not seem morally problematic in this regard.

Another possible concern is what might be called the “Iron Man Syndrome” (to totally make something up). The idea is that soldiers equipped with exoskeletons might become overconfident (seeing themselves as being like Iron Man) and put themselves and others at risk. After all, unless there are some amazing advances in armor technology that are unmatched by weapon technology, soldiers in powered armor will still be vulnerable to weapons capable of taking on light vehicle armor (which exist in abundance). However, this could be easily addressed by training and experience.

A second point of possible concern is what could be called the “ogre complex” (also totally made up). An exoskeleton that dramatically boosts a soldier’s strength might encourage some people to act as bullies and abuse civilians or prisoners. While this might be a legitimate concern, it can be addressed by proper training and discipline.

There are, of course, the usual peripheral issues associated with new weapons technology that could have moral relevance. For example, it is easy to imagine a nation wastefully spending money on exoskeletons. However, such matters are not specific to exoskeletons and would not be moral problems for the technology as such.

Given the above, augmenting soldiers with exoskeletons poses no new moral concerns and is morally comparable to providing soldiers with trucks, tanks and planes.

Once and future presidential candidate Mike Huckabee expressed his concern about the profanity flowing from the mouths of New York Fox News ladies: “In Iowa, you would not have people who would just throw the f-bomb and use gratuitous profanity in a professional setting. In New York, not only do the men do it, but the women do it! This would be considered totally inappropriate to say these things in front of a woman. For a woman to say them in a professional setting that’s just trashy!”

In response, Erin Gloria Ryan posted a piece on Jezebel.com. As might be suspected, the piece utilized the language that Mike dislikes and she started off with “listen up, cunts: folksy as balls probable 2016 Presidential candidate Mike Huckabee has some goddamn opinions about what sort of language women should use. And guess the fuck what? You bitches need to stop with this swearing shit.” While the short article did not set a record for OD (Obscenity Density), the author did make a good go at it.

I am not much for swearing. In fact, I used to say, “swearing is for people who don’t know how to use words well.” That said, I do recognize there are proper uses for swearing.

While I generally do not favor swearing, there are exceptions in which it is not only permissible, but necessary. For example, when I was running college cross country, one of the other runners was looking super rough after a run. The coach asked him how he felt, and he said, “I feel like shit coach.” The coach corrected him by saying “no, you feel like crap.” He replied, “No, coach, I feel like shit.” And he was completely right. Inspired by the memory of this exchange, I will endeavor to discuss proper swearing. I am, of course, not developing a full theory of swearing.

I do agree with some of what Huckabee said, namely the criticism of swearing in a professional context. My professional context is academics: I do my professional thing in front of students and other faculty, which is not exactly a place where gratuitous f-bombing would be appropriate or even useful. It would also make me appear sloppy and stupid, as if I could not express ideas or keep the attention of the class or colleagues without the cheap shock theatrics of swearing.

I am open to the idea that such swearing could be appropriate in certain professional contexts. That is, that the vocabulary of swearing would be necessary to describe professional matters accurately and doing so would not make a person seem sloppy, disrespectful or stupid. Perhaps Fox News and Jezebel.com are such places.

While I was raised with certain patriarchal views, I have shed most of them but must confess I retain a psychological residue. Hearing a woman swear feels worse to me than hearing a man swear, but I know this just confirms I am an old man. If it is appropriate for a man to swear, the same right of swearing applies equally to a woman. I am gender neutral about swearing, at least in principle.

Outside of the professional setting, I have a general opposition to casual and repetitive swearing. The main reason is that I look at words and phrases as tools. As with any tool, they have suitable and proper uses. While a screwdriver could be used to pound in nails, that is a poor use.  While a shotgun could be used to kill a fly, that is excessive and will cause needless collateral damage. Likewise, swear words have specific functions and using them poorly can show not only a lack of manners and respect, but a lack of artistry.

In general, the function of swear words is to serve as dramatic tools. They are supposed to shock and to convey something strong, such as great anger. To use them casually and constantly is like using a scalpel to cut everything from paper to salami. While it will work, the blade will grow dull from repeated use and will no longer function well when needed for its proper task. So, I reserve my swear words not because I am prudish, but because if I wear them out, they will not serve me when I really need them most. For example, if I were to say “we are fucked” all the time for any minor problem, then when a situation in which we are well and truly fucked arrives, I will not be able to use that phrase effectively. But, if I save it for when the fuck really hits the fan, then people who know me will know that it has gotten truly serious, for I will have broken out the “it is serious” words.

As another example, swear words should be saved for when a powerful insult or judgment is needed. If I were to constantly call normal people “fuckers” or describe not-so-bad things as being “shit”, then I would have little means of describing very bad people and very bad things. While I generally avoid swearing, I do need those words from time to time, such as when someone really is a fucker or something truly is shit. Which is often the case these days.

Of course, swear words can be used for humorous purposes. This is not really my sort of thing, but their shock value can serve well to make a strong point or get a laugh. However, if the words are too worn by constant use, then they can no longer serve their purpose. And, of course, it can be all too easy and inartistic to get a laugh simply by being crude. True artistry involves being able to get laughs using the same language one would use in front of grandpa in church. Of course, there is also artistry to swearing, but that requires more than just doing it all the time.

I would not dream of imposing my preferences on others. Folks who wish to communicate using an abundance of swear words have every right to do so, just as someone is free to pound nails with a screwdriver or whittle with a scalpel. However, it does bother me a bit that these words are being dulled and weakened by excessive use. If this keeps up, we will need to make new words and phrases to replace them.

As a fan of fantasy, science fiction, and superheroes I have no difficulty in suspending disbelief for seemingly impossible things like wizards, warp drives and Wonder Woman. But, when watching movies and TV shows, I find myself being critical of the very unlikely. As a philosopher, I find this interesting and in need of an explanation. I will use examples from the Hobbit movies and the Flash TV show, because they vex me even years later.

The Hobbit movies include the standard fare in fantasy: wizards, magic swords, immortal elves, dragons, enchanted rings, and other such things that are (most likely) impossible in the actual world.  The Flash features a superhero who, in the opening sequence, explicitly claims to be the impossible. I, as noted above, have no problem accepting these aspects of fantasy and superhero “realities.”

Given my ready acceptance of the impossible, it seems odd that I am critical of other aspects of these movies and the TV show. In the case of the first Hobbit movie, my main complaint is about the encounter with goblins and their king. I have no issue with goblins, but with physics. I am not a physicist, but I am familiar with falling and gravity, and those scenes were so implausible that they prevented me from suspending my disbelief.

In the case of the second Hobbit movie, I have issues with the barrel ride and the battle between the dwarfs and Smaug. In the case of the barrel ride, the events were so wildly implausible that I could not accept them. Ironically, the moves were too awesome and the fight too easy. It was like watching a video game being played in “god mode”: there was no feeling of risk, and the outcome was assured.

In the case of the battle with Smaug, every implausible step had to work perfectly to result in Smaug being in exactly the right place to have the gold “statue” spill on him. Paradoxically, the incredible difficulty of this made it seem too easy. Since every one of these incredibly unlikely steps worked perfectly, the whole thing looked completely scripted. I had no feeling that any step could have failed. Obviously, every part of a movie is, by definition, scripted. But if the audience feels this, then the movie is doing a poor job.

In the case of The Flash, I have two main issues. The first is with how Flash fights his super opponents. It is established in the show that Flash can move so fast that anyone without super speed is effectively motionless relative to him. For example, in one episode he simply pulls all the keys from the Royal Flush Gang’s motorcycles, and they can do nothing. However, when he fights a powerful villain, he is suddenly unable to use that ability. For example, when fighting the non-speedsters Captain Cold and Heatwave he runs around, barely able to keep ahead of their attacks. But these two villains are just normal guys with special guns. They have no super speed or ability to slow the Flash. Given the speed shown in other scenes, the Flash would be able to zip in and take their guns, just as he did with the keys. Since no reason was given as to why this would not work, the battles seem contrived, as if the writers could not think of a good reason why Flash would be unable to use an established ability, but just made it happen to fill up time with a fight.

The second issue is with the police response to the villains. In the same episode where Flash fights Captain Cold and Heatwave, the police confront the two villains yet are utterly helpless, until one detective manages a lucky shot that puts the heat gun out of operation. The villains, however, easily get away. Yet the fancy weapons are very short range, do not really provide any defensive powers, and the users are just normal guys. As such, the police could have simply shot them, something real police are obviously willing to do. But, for no apparent reason, they do not. The only reason would seem to be that the writers could not come up with a plausible reason why they would not, yet needed them to hold their fire. This, of course, is not unique to the Flash or these villains. The most obvious example is the Joker. He is just a guy, and it makes no sense, beyond his value as an IP, why he has not been shot. Now that I have set the stage, it is time to turn to philosophy.

In the Poetics Aristotle discusses the possible, the probable and the impossible. As he notes, a plot is supposed to go from the beginning, through the middle and to the end with plausible events. He does consider the role of the impossible and contends that “the impossible must be justified by artistic requirements, higher reality, or received opinion” and that “a probable impossibility is preferable to an improbable possibility.”

In the case of the impossibilities of the Hobbit movies and the Flash TV show, these are justified by the artistic requirements of the fantasy and superhero genres: they, by their very nature, require the impossible. In the case of the fantasy genre, the impossibilities of magic and the supernatural must be accepted. Of course, it is easy to accept these things since it is not actually certain that the supernatural is impossible.

In the case of the superhero genre, the powers of heroes are usually impossible. However, they make the genre what it is. So to accept stories of superheroes is to willingly accept the impossible as plausible in that context. Divergence from reality is acceptable because of this.

Some of the events in the show I was critical of are not actually impossible, just incredibly implausible. For example, it is not impossible for the police to simply decide to just not use rifles against a criminal armed with a flamethrower. However, accepting this requires accepting that while the police in the show are otherwise like police in our world, they differ in one critical way: they are incapable of deploying snipers against people armed with exotic weapons. It is also not impossible that a person would make a life or death fight easier for the person trying to kill them by not using their abilities. However, accepting these things requires accepting things that do not improve the aesthetic experience, but rather detract by requiring the audience to accept the implausible without artistic justification.

To be fair, there is one plausible avenue of justification for these things. Aristotle writes that “to justify the irrational, we appeal to what is commonly said to be.” In the comics from which the Flash TV show is drawn, the battles between heroes and villains almost always go that way. So, the show mostly matches the comic reality. Likewise for the police. In the typical comic police are ineffective and rarely kill villains with sniper rifles, even when they easily could do so. As such, the show could be defended on the grounds that it is just following the rules of comics aimed at kids. That said, I think the show would be better if the writers were able to come up with reasonable justifications for why the Flash cannot use his full speed against the villain of the week and why the police are so inept against normal people with fancy guns.

In the case of the Hobbit movies, accepting the battle in the goblin caves would require accepting that physics is different in those scenes than it is everywhere else in the fantasy world. However, Middle Earth is not depicted elsewhere as having such wonky physics and the difference is not justified. In regard to the barrel ride battle and the battle with Smaug, the problem is the probability. The events are not individually impossible, but accepting them requires accepting the incredibly unlikely without justification or need. Those who have read the book will know that those events are not in the book and are not needed for the story. Also, there is the problem of consistency: the spectacular dwarfs of the barrels and Smaug fight are also the seemingly mundane dwarfs in so many other situations. Since these things detract from the movie, they should not have been included. Also, the Hobbit should have just been one movie.

There are two issues here. The first is determining the worst thing that a person should express. The second is determining the worst thing that a person should be allowed to express. While these might seem to be the same issue, they are not. There is a distinction between what a person should do and what it is morally permissible to prevent a person from doing. The focus will be on using the coercive power of the state in this role.

As an illustration of the distinction, consider the example of a person lying to her boyfriend about playing video games when she was supposed to be doing yard work. She should not lie to him (although there are exceptions). However, the police should not be sent to coerce her into telling the truth about this. So, she should not lie about playing video games, but the state should allow her to do this.

This view can be disputed and there are those who argue in favor of complete freedom from the state (such as anarchists) and those who argue that the state should control almost every aspect of life (totalitarians). However, the idea that there are some matters that are not the business of the state is an intuitively plausible position. What follows will rest on this assumption and the challenge will be to sort out these two issues.

A plausible and appealing approach is to take a utilitarian stance and use Mill’s principle of harm to determine the worst thing a person should express and should be allowed to express. The right of free expression is limited by the right of others not to be harmed in their life, liberty and property without adequate justification.

In the case of the worst thing that a person should express, I am speaking in the context of morality. There are, of course, non-moral meanings of “should.” To use the most obvious example, there is the pragmatic “should”: what a person should do in serving their practical self-interest. For example, a person should not tell her boss what she really thinks of him if doing so would cost her a job she desperately needs. To use another example, there is also the “should” of etiquette: what a person should do or not do to follow the social norms. For example, a person should not go without pants at a formal wedding, even to express his opposition to the tyranny of pants.

Returning to morality, it is reasonable to weigh the harm the expression generates against the right of free expression. Obviously enough, there is not an exact formula for calculating the worst thing a person should express, and this will vary according to the circumstances. For example, the worst thing one should express to a young child is different from the worst thing one should express to a jaded adult. In terms of the harms, these would include such things as offending the person, scaring them, insulting them, and so on for harms that can be inflicted by mere expression.

While people do not have a right not to be offended, people do have a right not to be unjustly harmed by other people. To use an obvious example, men should not catcall women who do not want to be subject to this verbal harassment. This sort of behavior certainly offends, upsets and even scares many women and the right to free expression does not give men a moral pass that exempts them from what they should or should not do.

To use another example, people should not intentionally and willfully insult deeply held beliefs simply for the sake of insulting or provoking the person. While the person does have the right to mock the belief of another, the right of expression is not a moral free pass to be abusive.

As a final example, people should not engage in trolling. While a person does have the right to express his views to troll others, this is wrong. Trolling is, by definition, done with malice and contributes nothing of value to the conversation. As such, it should not be done.

While I have claimed that people should not unjustly harm others by expressing themselves, I have not made any claims about whether people should be allowed to express themselves in these ways. It is to this that I now turn.

If the principle of harm is a reasonable principle (which can be debated), then a plausible approach would be to use it to sketch out some boundaries. The first rough boundary was just discussed: this is the boundary between what people should express and what people should (morally) not. The second rough boundary begins at the point where other people should be allowed to prevent a person from expressing themselves and ends just before the point at which the state has the moral right to use its coercive power to prevent such expression.

This area is the domain of interactions between people that does not fall under the authority of the state yet still allows for preventing people from expressing their views. For example, people can be justly prevented from expressing their views in the workplace without the state being involved. To use a specific example, the administrators of my university have the right to prevent me from expressing certain things in my classes, although the specific restrictions can be debated. To use another example, a group of friends would have the right, among themselves, to ban someone from their group for saying racist, mean and spiteful things. As a final example, a blog administrator would have the right to ban a troll from her site.

The third boundary is the point at which the state can justly use its coercive power to prevent a person from engaging in expression. As with the other boundaries, this should be set (roughly) by the degree of harm that the expression would cause others. There are many easy and obvious examples where the state would be right to limit expression: threats of murder, damaging slander, incitements to violence against the innocent, and similar such unquestionably harmful expressions.

Matters can, of course, get complicated quickly. Consider, for example, a person who does not call for the murder of Democrats but tweets his approval when they are assassinated. While this seems to be something a person should not do, it is not clear that it crosses the boundary that would allow the state to justly prevent the person from expressing this view. If the approval does not create sufficient harm, then it would seem to not warrant coercive action by the state.

As another example, consider the expression of racist views via social media. While people should not say such things, as long as they do not engage in actual threats, then it would seem that the state does not have the right to silence the person. This is because the expression of racist views (without threats) would not seem to generate enough harm to warrant state coercion. Naturally, it could justify action on the part of the person’s employer, friends and associates: they might be fired and shunned. Or might have been in the before time.

As a third example, consider a person who mocks the dominant or even official religion of a state. While the rulers of such states usually think they have the right to silence such an infidel, it is not clear that such mockery would create enough unjust harm to warrant silencing the person. Being an American, I think that it would not, since I believe in both freedom of religion and the freedom to mock religion. There is, of course, the concern that such mockery could provoke others to harm the mocker, thus warranting the state to stop the person for their own protection. However, the fact that people will act wrongly in response to expressions would not seem to warrant coercing the person into silence.

In general, I favor erring on the side of freedom: unless the state can show that silencing expression is needed to prevent real and unjust harm, the state does not have the moral right to silence expression.

I have merely sketched out a general outline of this matter and have presented three rough boundaries about what people should say and what they should be allowed to say. Much more work would be needed to develop a full and proper account.

Interestingly, the free 2-year college movement began with Republican Governor Bill Haslam of Tennessee. Other states followed his lead, but the Trumpian war on education raises questions about the fate of free college. But is offering free 2-year college a good idea? Having some experience in education, I will endeavor to assess this question.

First, there is no such thing as a free college education (in this context). Rather, free education for a student means the cost is shifted to others. After all, staff, faculty and administrators cannot work for free. The facilities of the schools need to be constructed and maintained. And so on, for all the costs of education.

One proposed way to make education free for students is to shift the cost to “the rich”, a group that is easy to target but somewhat harder to define. As might be suspected, I think this is a good idea. One reason is that I believe that education is one of the best investments a person and society can make. This is why I am fine with paying property taxes that go to K-12 education, although I have no children of my own. In addition to my moral commitment to education, I also look at it pragmatically: money spent on education means spending less on prisons and social safety nets. Of course, there is still the question of why the cost should be shifted to “the rich.”

One obvious answer is that they, unlike the poor and the tattered remnants of the middle class, have an overabundance of money. As economists have noted, an ongoing trend is wages staying stagnant while capital is doing well. This is illustrated by the stock market rebounding from the last crash while workers are doing worse than before that crash.

There is also the need to address income inequality. While many might reject arguments grounded in compassion or fairness, there are purely practical reasons to shift the cost. One is that the rich need the rest of us to keep the wealth, goods and services flowing to them (they need us far more than we need them, since we do not need them at all). Another is social stability. Maintaining a stable state requires the citizens to believe that they are better off with the status quo than they would be if they revolted. While deceit and force can keep citizens in line, these have limits to their effectiveness. It is in the pragmatic interest of the rich to help restore the faith of the middle class. One alternative is being put against the wall after the revolution. But in 2024 they seem to have decided to gamble on force and deceit to keep their wealth safe.

Second, the reality of education has changed over the years. In the not-so-distant past, a high-school education was sufficient for a decent job. I am from a small town in Maine and remember that people could get decent jobs at the paper mill with just a high school degree (or even without one). While there are still some decent jobs like that, they are increasingly rare.

While it might be a slight exaggeration, the two-year college degree seems somewhat equivalent to the old high school degree. That is, it is roughly the minimum education needed to have a good shot at a decent job. As such, the reasons that justify free (for students) public K-12 education would now justify free (for students) K-14 public education. And, of course, arguments against free (for the student) K-12 education would also apply. As a side note, I also support free trade schools, and these offer a good chance of getting a decent job.

Some might claim that the reason the two-year degree seems to be the new high school degree is because education has been declining. But there is also the fact that the world has changed. While I grew up during the decline of the manufacturing economy, we are now in the information economy (even manufacturing is high tech now) and more education is needed to operate in this economy.

It could, of course, be argued that a better solution would be to improve K-12 education so that a high school degree would once again suffice for a decent job in the information economy. This would, obviously enough, lessen the need to have free two-year college. This is certainly an option worth considering.

Third, the cost of college has grown absurdly since I was a student in the 1980s. Rest assured, though, that this has not been because of increased pay for professors. This has been partially addressed by a complicated and bewildering system of financial aid and loans. However, free two-year college would certainly address this problem in a simpler way.

That said, there is a concern that this would not reduce the cost of college. As noted above, it would merely shift the cost. A case can be made that this would increase the cost of college for those who are paying for it. One can argue that schools would have less incentive to keep costs down if the state was paying the bill.

It can be argued that it would be better to focus on reducing the cost of public education in a rational way that focuses on the core mission of colleges, namely education. One reason for the increase in college tuition is the massive administrative overhead that exceeds what is needed to run a school. Unfortunately, since the administrators are the ones who make financial choices, it is unlikely that they will thin their own numbers or reduce their salaries. While state legislatures increasingly apply magnifying glasses to the academic aspects of schools, administrators seem to get less attention. Perhaps because of some interesting connections between the state legislatures and upper-level school administrators. One obvious exception is the dismantling of the administrative apparatus and jobs in what the current regime defines as DEI.

Fourth, while conservative politicians have been critical of the state “giving away free stuff” to people who are not rich, liberals have also been critical of free two-year college. While liberals tend to favor the idea of the state giving people free stuff, some have taken issue with free stuff being given to everyone. After all, the usual proposal is not to make two-year college free for those who cannot afford it, but to make it free for everyone.

It is tempting to criticize free two-year college for everyone. While it makes sense to assist those in need, it can be argued that it is unreasonable to expend resources on people who can afford college. That money could be used to, for example, help people in need pay for four-year colleges. It can also be argued that the well-off would exploit the system.

One easy and obvious reply is that the same could be said of free (for the student) K-12 education. As such, the reasons that exist for free public K-12 education (even for the well-off) would apply to a free two-year college plan.

In regard to the well-off, they can already elect to go to lower cost state schools. However, the wealthy tend to pick the more expensive schools and usually opt for four-year colleges. The right-wing elites who bash colleges tend to be graduates of elite colleges and tend to send their children to such schools. As such, I suspect that there would not be an influx of rich students into two-year programs trying to “game the system.” Rather, they would tend to continue to go to the most prestigious four-year schools their money can buy.

Finally, a proposal for the rich to bear the cost of “free” college should be looked at as an investment. The rich “job creators” will benefit from having educated “job fillers.” But the rich prefer that someone else pay the cost for them, such as how companies like Walmart rely on the state to provide food stamps and Medicaid to keep their underpaid workers alive.

It can also be argued that the college educated tend to get better jobs, which will grow the economy. Most of this growth will go to the rich. There would also be an increase in tax revenues, and although the rich are loath to pay taxes, they rely on the rest of us doing so. As such, the rich might find that an involuntary investment in education would provide an excellent return.

Overall, “free” two-year college seems to be a good idea, although one that will require proper implementation. Free four-year college funded by an “investment” by the rich is also a good idea, for the same reasons.

It has been argued that everyone is a little bit racist. Various studies have shown that black Americans are treated differently than white Americans. Examples of this include black students being more likely to be suspended, blacks being arrested at a higher rate than whites, and job applications with “black sounding” names being less likely to get callbacks than those with “white sounding” names. Interestingly, studies have shown that the alleged racism is not confined to white Americans; black Americans also seem to share this racism.

One study involved a simulator in which a participant takes on the role of a police officer and must decide to shoot or holster their weapon when confronted by a simulated person. The study indicated that participants, regardless of race, shoot more quickly at blacks than whites and are more likely to shoot an unarmed black person than an unarmed white person. There are, of course, many other studies and examples that support the claim that everyone is a little bit racist.

Given the evidence, it would seem reasonable to accept that everyone is a little bit racist. However, there seems to be something problematic with claiming that everyone is racist, even if it is the claim that the racism is of the small sort.

One point of logical concern is that inferring that all people are at least a little racist based on such studies would be problematic. Rather, what should be claimed is that the studies indicate the presence of racism and that these findings can be generalized to the entire population. But this can be dismissed as a quibble about inductive logic.

Some might take issue with this claim because being accused of racism is offensive. Some, as also might be suspected, would take issue with this claim because they claim that racism has ended in America, hence people are not racist. Not even a little bit. Others might complain that the accusation is a political weapon that is wielded unjustly. I will not argue about these matters, but will instead focus on another concern, that of the concept of racism.

In informal terms, racism is prejudice, antagonism or discrimination based on race. Since studies show that people have prejudices linked to race and engage in discrimination along racial lines, it seems reasonable to accept that everyone is at least a bit racist.

To use an analogy, consider the matter of lying. A liar, put informally, is someone who makes a claim that she does not believe with the intention of getting others to accept it as true. Since people engage in this behavior, everyone is a little bit of a liar. That is, everyone has told a lie.

Another analogy would be to being an abuser. Presumably each person has been at least a bit mean or cruel to another person. This entails that everyone is at least a little bit abusive. The analogies could continue almost indefinitely, but it will suffice to stop with the result that we are all racist, abusive liars.

On the one hand, the claim is true. I have been prejudiced. I have lied. I have been mean to people I love. The same is likely to be true of even the very best of us. Since we have lied, we are liars. Since we have abused, we are abusers. Since we have prejudice and have discriminated based on race, we are racists.

On the other hand, the claim is problematic. After all, to judge someone to be a racist, an abuser, or a liar is to make a strong moral judgment of the person. For example, imagine the following conversation:

 

Sam: “Your friend Sally seems cool. You know her well, what is she like?”

Kelly: “She is a liar and a racist.”

Sam: “But…she seems so nice.”

Kelly: “She is. In fact, she’s one of the best people I know.”

Sam: “But you said she is a liar and a racist.”

Kelly: “Oh, she is. But just a little bit.”

Sam: “What?”

Kelly: “When she was in college, she lied to a creepy guy to avoid going on a date. She also said that when she was five, she briefly thought white people were racists and would not be friends with them. So, she is a liar and a racist.”

Sam: “I don’t think you know what those words mean.”

 

The point is, of course, that terms like “racist”, “abuser” and “liar” have a proper moral usage. Because these are such strong terms, they should be applied in cases to which they fit. For example, while anyone who lies is technically a liar, the designation of being a liar should only apply to someone who routinely engages in that behavior. That is, a person who has a moral defect in regard to honesty. Likewise, anyone who has a prejudice based on race or who has discriminated based on race is technically a racist. However, the designation of racist should be reserved for those who have the relevant moral defect. That is, racism is their way of being, as opposed to having some bias. As such, using the term “racist” (or “liar”) in claiming that “everyone is a little bit racist” (or “everyone is little bit of a liar”) either waters down the moral term or imposes too harsh a judgment on the person.

So, if the expression “we are all a little bit racist” should not be used, what should replace it? My suggestion is to speak instead of people being subject to biases. While saying “we all have biases” is less attention grabbing than “we are all a little bit racist”, it is a more honest description.

While the murders of twelve people at Charlie Hebdo were morally unjustifiable, one of the killers did attempt, in advance, to justify the attack. The justification offered was that the attack was in accord with Islamic law. Since I am not a scholar of Islam, I will not address the issue of whether this is true or not. As an ethicist, I will address the matter of moral justification for the killings.

From the standpoint of the killers, the attack on Charlie Hebdo was presumably punishment. In general, punishment is aimed at retaliation for wrongs done, to redeem the wrongdoer, or for deterrence (this is the RRD model of punishment). Presumably the killers were aiming at both retaliation and deterrence and not redemption. From a moral standpoint, both retaliation and deterrence are supposed to be limited by a principle of proportionality.

In the case of retaliation, the punishment should correspond to the alleged crime. The reason for this is that disproportionate retaliation would not “balance the books” but instead create another wrong that would justify retaliation in response. This, of course, assumes that retaliation is justifiable in general, which can be questioned.

In the case of deterrence, there is also a presumption in favor of proportionality. The main reason is the same as for retaliation: excessive punishment would, by definition, create another wrong. A standard counter to this is to argue that excessive punishment is acceptable on the grounds of its deterrence value on the view that the greater the punishment, the greater the deterrence.

While this does have a certain appeal, it also runs counter to common moral intuitions. For example, blowing up a student’s car for parking in a faculty parking space at a university would deter students, but would be excessive. As another example, having the police execute people for speeding would tend to deter speeding, but this certainly seems unacceptable.

There is also the standard utilitarian argument that excessive punishment used for deterrence would create more harm than good. For example, allowing police to summarily execute anyone who resisted arrest would deter resistance, but the harm to citizens and society would outweigh the benefits. As such, it seems reasonable to accept that punishment for the purpose of deterrence should be proportional to the offense. There is, of course, still concern about the deterrence factor. A good guiding principle is that the punishment that aims at deterrence should be sufficient to deter, yet proportional to the offense. Deterring the misdeed should not be worse than the misdeed.

In the case of the people at Charlie Hebdo, their alleged offense was their satire of Mohammad and Islam via cartoons. On the face of it, death is a disproportionate punishment. After all, killing someone is vastly more harmful than insulting or offending someone.

A proportional response would have been something along the lines of creating a satirical cartoon of the staff, publishing an article critical of their cartoons or protesting these cartoons. That is, a proportional response to the non-violent expression of a view would be the non-violent expression of an opposing view.  Murder is obviously a vastly disproportionate response.

It could be replied that the punishment was proportional because of the severity of the offense. The challenge is, obviously enough, arguing that the offense was severe enough to warrant death.  On the face of it, no cartoon would seem to merit death. After all, no matter how bad a cartoon might be, the worst it can do is offend a person and this would not warrant death. However, it could be argued that the offense is not against just anyone but against God. That is, the crime is blasphemy or something similar. This would provide a potential avenue for justifying a penalty of death. It is to this that I now turn.

Religious thinkers who believe in Hell face the challenge of justifying eternal damnation. As David Hume noted, an infinite punishment for what must be finite offenses is contrary to our principles of justice. That is, even if a person sinned for every second of their life, they could not do enough evil to warrant an infinitely bad, infinitely long punishment. However, there is a clever reply to this claim.
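Hume's point can be given a rough formalization. The following sketch is my own gloss on the arithmetic of the argument, not anything drawn from Hume's text:

```latex
% A finite life contains finitely many sins, each of finite badness,
% so even summing them all, the total evil of a life is finite:
\[
  \text{total evil} \;=\; \sum_{i=1}^{n} b_i \;<\; \infty
  \qquad (n \text{ finite},\ b_i \text{ finite for all } i)
\]
% An infinite punishment therefore exceeds any such finite total,
% violating the proportionality that justice demands:
\[
  P \;=\; \infty \;>\; \sum_{i=1}^{n} b_i
\]
```

Edwards' clever reply, considered next, is to deny that each offense is finitely bad: if sins are measured against an infinitely good victim, each one is infinitely bad, and the proportion is restored.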

In his classic sermon “Sinners in the Hands of an Angry God”, Jonathan Edwards says of sinners that “justice calls aloud for an infinite punishment of their sins.” He justifies the infinite punishment of sin on the grounds that since God is infinitely good, any sin against God is infinitely bad. As such, the punishment is proportional to the offense: infinite punishment for an infinitely bad crime.

It could be contended that creating cartoons mocking Mohammed and Islam are sins against an infinitely good God, thus warranting an infinite punishment and presumably justifying killing (which is much less than infinite punishment). Interestingly, the infinite punishment for sins would render punishing of sinners here on earth pointless for two reasons. First, if the sinner will be punished infinitely, then punishing him here would not increase his punishment. So, there is no point to it.  Second, if the sinner is going to be punished divinely, then punishment here would also be pointless. To use an analogy, imagine if someone proposed having a pre-legal system in which alleged criminals would be tried and, if found guilty, be given pointless sentences (such as being mildly scolded). The alleged criminals would then go on to the real legal system for their real punishment. This pre-legal system would be a pointless waste of time and resources. Likewise, if there is divine justice for sins, then punishing them here would be a pointless waste of time.

This, obviously enough, assumes that God is real, that He punishes and that He would punish people for something as minor as a cartoon. Attributing this to God would present Him as a petty and insecure God who is overly concerned about snarky cartoons. People are most likely to react violently to mere mockery when they are strong enough to punish, but weak enough to be insecure. God should not be enraged by cartoons. But I could be wrong. If I am, God will take care of matters in the afterlife and there is thus no reason to kill cartoonists.

If God does not exist, then the cartoons obviously cannot have offended God. In this case, the offense would be against people who believe in fiction. While they might be angry at being mocked, killing the cartoonists would be like enraged Harry Potter fans killing a cartoonist for mocking Daniel Radcliffe with a snarky cartoon. While they might be devoted to the world of Harry Potter and be very protective of Daniel Radcliffe, offensive cartoons mocking a real person and a fictional world would not warrant killing a cartoonist.

As such, if God is real, then He will deal with any offense against Him. As such, there is no justification for people seeking revenge in His name. If He is not real, then the offense is against the make-believe, and this does not warrant killing. Either way, such killings would be completely unjustified.

After the 2015 Charlie Hebdo murders in France, the discussion of group responsibility heated up. Some contended that all Muslims were responsible for the actions of the killers. Most people did not claim that all Muslims supported the killings, but there was a tendency to put a special burden of responsibility upon Muslims as a group. Some people did claim that the murders were evidence that Islam itself is radical and violent. This sort of “reasoning” is, obviously enough, the same sort used to condemn all Christians or Republicans based on the actions of a few.

To infer an entire group has a certain characteristic (such as being violent or prone to terrorism) based on the actions of a few involves committing the fallacy of hasty generalization. This “reasoning” also often includes the fallacy of suppressed evidence in that evidence contrary to the claim is ignored. For example, to condemn Islam as violent based on the actions of terrorists would be to ignore the fact that most Muslims are as peaceful as people of other faiths, such as Christians and Jews.

It might be objected that a group can be held accountable for the misdeeds of its members even when those misdeeds are committed by a few and even when these misdeeds are supposed to not match the beliefs of the group. For example, if I were to engage in sexual harassment while on the job, Florida A&M University can be held accountable for my actions. Thus, it could be argued, all Muslims are accountable for the killings in France and these killings provide just more evidence that Islam itself is a violent and murderous religion.

In reply, Islam (like Christianity) is not a monolithic faith with a single hierarchy over all Muslims. After all, there are diverse sects of Islam and many Muslim hierarchies. For example, the Muslims of Saudi Arabia do not fall under the hierarchy of the Muslims of Iran. 

As such, treating all of Islam as an organization with a chain of command and a chain of responsibility that extends throughout the entire faith would be absurd. To use an analogy, sports fans sometimes go on violent rampages after events. While the actions of violent fans should be condemned, the peaceful fans are not accountable for those actions. After all, while the fans are connected by their being fans of a specific team this is not enough to create accountability. As such, to condemn all of Islam based on what happened in France would be both unfair and unreasonable.

This, of course, raises the question of the extent to which even an organized group is accountable for its members. One intuitive guide is that the accountability of the group is proportional to the authority the group has over the individuals. For example, while I am a philosopher and belong to the American Philosophical Association, other philosophers have no authority over me. As such, they have no accountability for my actions. In contrast, my university has authority over my work life as a professional philosopher and hence can be held accountable should I, for example, sexually harass a student or co-worker.

The same principle should be applied to Islam (and any faith). Being a Muslim is analogous to being a philosopher in that there is a recognizable group. As with being a philosopher, merely being a Muslim does not make a person accountable for all other Muslims any more than being a Christian makes one accountable for the actions of every other Christian across time and space.

But just as I am employed by a university, a Muslim can belong to an analogous organization, such as a mosque or ISIS. To the degree that the group has authority over the individual, the group is accountable. So, if the killers in France were acting as members of ISIS or Al-Qaeda, then the group would be accountable. However, while groups like ISIS and Al-Qaeda might delude themselves into thinking they have legitimate authority over all Muslims, they obviously do not. After all, they are opposed by most Muslims.

So, with a religion as vast and varied as Islam, it cannot reasonably be claimed that there is a central earthly authority over its members, and this serves to limit the collective responsibility of the faith. Naturally, the same would apply to other groups with a similar lack of overall authority, such as Christians, conservatives, liberals, Buddhists, Jews, philosophers, runners, and satirists.

A look back at the American economy shows a “pastscape” of exploded economic bubbles. These include the housing bubble and the dot-com technology bubble. We are probably blowing up a new bubble.

In “The End of Economic Growth?” Oxford’s Carl Frey discusses the new digital economy and presents the value of select digital companies relative to the number of people they employ. One example is Twitch, which streams videos of people playing games (and people commenting on people playing games). Twitch was purchased by Amazon for $970 million. Twitch had 170 employees. Facebook bought WhatsApp for $19 billion. WhatsApp employed 55 people at the time of this acquisition. In an interesting contrast, IBM employed 431,212 people in 2013.
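To make the contrast concrete, here is a quick back-of-the-envelope calculation in Python using the purchase prices and headcounts just cited (IBM's market value is not listed above, so it is omitted):

```python
# Value per employee at acquisition, using the figures cited above.
companies = {
    "Twitch": (970_000_000, 170),      # Amazon purchase price, employees
    "WhatsApp": (19_000_000_000, 55),  # Facebook purchase price, employees
}

for name, (price, employees) in companies.items():
    print(f"{name}: ${price / employees:,.0f} per employee")
# Twitch: $5,705,882 per employee
# WhatsApp: $345,454,545 per employee
```

Roughly $5.7 million per employee for Twitch and $345 million per employee for WhatsApp; a firm employing 431,212 people could not come close to such ratios.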

While it is tempting to explain the impressive value to employee ratio in terms of grotesque over-valuation, there are other factors involved. One, as Frey notes, is that digital businesses often require relatively little capital. WhatsApp started out with $250,000 and this was actually rather high for an app as the average cost to develop one was $6,453 at the time. As such, a relatively small investment can create a huge return.

Another factor is an old one, namely the efficiency of technology in replacing human labor. The development of the plow reduced the number of people required to grow food, the development of the tractor reduced it even more, and the refinement of mechanized farming has enabled an even more dramatic reduction in labor. While it is true that people must do work to create digital companies (writing the code, for example), much of the “labor” is automated and done by computers rather than people. AI companies also hold forth the promise of eliminating even more human labor, although one should consider the potential risk of letting AI do critical work.

A third factor is the digital aspect. Companies like Facebook, Twitch and WhatsApp do not make objects that need to be manufactured, shipped and sold. As such, they do not (directly) create jobs in these areas. These companies do make use of existing infrastructure: Facebook does need companies like Comcast to provide internet connection and companies like Apple to make the devices. But, rather importantly, they do not employ the people who work for Comcast and Apple (and even these companies employ relatively few people).

One of the most important components of the digital aspect is the multiplier effect. To illustrate this, consider two imaginary businesses in the health field. One is a walk-in clinic which I will call Nurse Tent. The other is a health app called RoboNurse. If a patient goes to Nurse Tent, the nurse can only tend to one patient at a time, and they can only work so many hours per day. As such, Nurse Tent will need to employ multiple nurses (as well as the support staff). In contrast, the RoboNurse app can be sold to billions of people and does not require the sort of infrastructure required by Nurse Tent. If RoboNurse takes off as a hot app, the developer could sell it for millions or even billions.
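To illustrate the multiplier effect in a minimal way, here is a toy sketch. Nurse Tent and RoboNurse are, as said, imaginary, and every number in the sketch is invented for illustration:

```python
# Toy contrast between a labor-bound clinic and a digital app.
# All numbers are invented for illustration.

def clinic_patients_per_day(nurses: int, patients_per_nurse: int = 20) -> int:
    """A walk-in clinic scales linearly with staff: serving more patients means hiring."""
    return nurses * patients_per_nurse

def app_users_served(servers: int, users_per_server: int = 1_000_000) -> int:
    """A digital product scales with cheap infrastructure rather than headcount."""
    return servers * users_per_server

print(clinic_patients_per_day(10))  # 200 patients/day from ten employees
print(app_users_served(10))         # 10,000,000 users from ten rented servers
```

Doubling Nurse Tent's capacity means doubling payroll; doubling RoboNurse's capacity means renting more servers.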

Nurse Tent could, of course, become a franchise (the McDonald’s of medicine). But being labor intensive and requiring considerable material outlay, it will not be able to have the value to employee ratio of a digital company like WhatsApp or Facebook. It would, however, employ more people. That said, the odds are that most of these employees will not be well paid. While the digital economy is producing millionaires and billionaires, wages for labor are low. This helps to explain why the overall economy is doing great while most workers are worse off than before the last bubble.

It might be wondered why this matters. There are, of course, the usual concerns about the terrible inequality of the economy. However, there is also the concern that a new bubble is being inflated, a bubble filled with digits. There are some good reasons to be concerned.

First, as noted above, digital companies seem grotesquely overvalued. While the situation is not exactly like the housing bubble, overvaluation should be a matter of concern. After all, if the value of these companies is effectively just “hot digits” inflating a thin skin, then a bubble burst seems likely. This can be countered by arguing that the valuation is accurate or even that all valuation is essentially a matter of belief and if we believe, all will be fine. Until, of course, it is no longer fine. The economy, like religion, is faith based.

Second, the current digital economy increases the income inequality mentioned above, widening the gap between the rich and the poor. Laying aside the fact that such a gap historically leads to social unrest and revolution, there is the more immediate concern that the gap will cause the bubble to burst. The economy cannot, one might assume, endure without a solid middle and base to help sustain the top of the pyramid.

This can be countered by arguing that the new digital economy will eventually spread the wealth. Anyone can make an app, anyone can create a startup, and anyone can be a millionaire. While this does have some appeal, there is the fact that while it is true that (almost) anyone can do these things, it is also true that most people will fail. One just needs to consider all the failed startups and the millions of apps that are not successful.

There is also the obvious fact that civilization requires more than WhatsApp, Twitch and Facebook and people need to work outside of the digital economy (which lives atop the non-digital economy). Perhaps this can be handled by an underclass of people beneath the digital (and financial) elite, who toil away at low wages to buy smartphones so they can update their status on Facebook and watch people play games via Twitch. This is, of course, just a standard sci-fi dystopia.

While college students have been completing student evaluations of faculty since the 1960s, the importance of these evaluations has increased. There are various reasons for this. One is a conceptual shift towards the idea that a college is a business and students are customers. On this model, student evaluations of faculty are part of the customer satisfaction survey process. Second is an ideological shift in education: education is increasingly seen as a private good and in need of quantification. This is also tied to the notion that the education system is, like a forest, a worker, or an oilfield, a resource to be exploited for profit. Student evaluations provide a cheap method of assessing the value provided by faculty and, best of all, provide numbers.

Obviously, I agree with the need to assess performance. As a gamer and runner, I am obsessed with measuring my athletic and gaming performances and I am comfortable with letting that obsession spread into my professional life. I want to know if my teaching is effective, what works, what does not, and what impact I am having. Of course, I want to be sure the assessment methods are useful. Having been in education for decades, I do have concerns about the usefulness of student evaluations of faculty.

The first and most obvious concern is that students are, almost by definition, not experts in assessing education. While they obviously take classes and observe faculty, they usually lack any training in assessment. Having students evaluate faculty could be seen as on par with having sports fans assessing coaching. While fans and students can have strong opinions, this does not qualify them to provide meaningful professional assessment.

Using the sports analogy, this can be countered by pointing out that while a fan might not be a professional at assessing coaching, they usually know good or bad coaching when they see it. Likewise, a student who is not an expert at education can still recognize good or bad teaching.

A second concern is the self-selection problem. While all students have access to the evaluation forms and can easily go to Rate My Professors, those who take the time to complete the evaluation will usually have stronger feelings about the professor. These feelings can distort the results so that they are more positive or more negative than they should be. The counter to this is that the creation of such strong feelings is itself relevant to the assessment of the professor. A practical way to counter the bias is to ensure that most (if not all) students in a course complete the evaluations, as the sketch below illustrates.
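Here is a minimal sketch of how self-selection can distort an average. The response model and all numbers are invented for illustration, not drawn from any study of actual evaluations:

```python
# Toy simulation of self-selection bias in course evaluations.
import random

random.seed(42)

# "True" opinions of 200 students on a 1-5 scale, centered at 3.5.
opinions = [min(5.0, max(1.0, random.gauss(3.5, 1.0))) for _ in range(200)]

def responds(score: float) -> bool:
    # Students with strong feelings (far from a neutral 3) are more likely to respond.
    return random.random() < 0.2 + 0.25 * abs(score - 3)

survey = [s for s in opinions if responds(s)]

print(f"True mean of all {len(opinions)} students: {sum(opinions) / len(opinions):.2f}")
print(f"Survey mean of {len(survey)} respondents: {sum(survey) / len(survey):.2f}")
```

Because the respondents over-represent strong opinions, the survey mean drifts away from the class's true mean; pushing the response rate toward 100% shrinks the gap.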

Third, people often base their assessments on irrelevant factors. These include such things as age, gender, appearance, and personality. The concern is that these factors make evaluations a popularity contest: professors who are liked will be evaluated as better than professors who are not as well liked. There is also the concern that students tend to give younger professors and female professors worse evaluations than older professors and male professors, and these sorts of gender and age biases lower the credibility of evaluations.

A stock reply to this is that these factors do not influence students as strongly as critics might claim. So, for example, a professor might be well-liked yet still get poor evaluations in regard to certain aspects of the course. There are also those who question the impact of alleged age and gender bias.

Fourth, people often base assessments on irrelevant factors about the course, such as how easy it is, their grade, or whether they like the subject. Not surprisingly, it is commonly held that students give better evaluations to professors who they regard as easy and downgrade those they see as hard.

Given that people often base assessments on irrelevant factors (a standard problem in critical thinking), this is a real concern. Anecdotally, my own experience indicates that student assessments vary based on irrelevant factors they explicitly mention. I have a 4.0 on Rate My Professors, but there are inconsistencies between evaluations. Some students claim that my classes are incredibly easy (“he is so easy”), while others claim they are incredibly hard (“the hardest class I have ever taken”). I am also described as being both very boring and very interesting, both helpful and unhelpful, and so on. This sort of inconsistency in evaluations is common and raises concerns about the usefulness of such evaluations.

But it can be claimed that the information is still useful. Another counter is that appropriate methods of statistical analysis can be used to address this concern. Those who defend evaluations point out that students tend to be generally consistent in their assessments. Of course, consistency in evaluations does not entail accuracy.

To close, there are two final general concerns about evaluations of faculty. One is the concern about values. That is, what is it that makes a good educator? This is a matter of determining what it is that we are supposed to assess and to use as the standard of assessment. The second is the concern about how well the method of assessment works.

In the case of student evaluations of faculty, we do not seem to be very clear about what we are trying to assess, nor do we seem to be entirely clear about what counts as being a good educator. In the case of the efficacy of the evaluations, to know whether they measure well we would need some other means of determining whether a professor is good or not. But, if there were such a method, then student evaluations would seem unnecessary because we could just use those methods. To use an analogy, when it comes to football, we do not need to have the fans fill out evaluation forms to determine who is a good or bad athlete: there are clear, objective standards in regard to performance.