While all states allow concealed carry, most states forbid carrying guns on school grounds. Over the years, the Republican rulers of my adopted state of Florida have considered bills that would allow concealed carry on the campuses of the state’s public universities. Other states have already passed such laws. While there is the issue of whether this is a good idea, my focus is on professors who might refuse to allow guns in their classrooms and offices.

While I am not a lawyer, it seems likely that professors lack the legal authority to impose such bans; I will leave the legal matters to the experts. Instead, I will focus on the moral aspects of such bans.

One moral argument in favor of the professors is based on an assumption that they have the right to ban things they regard as morally offensive from their classrooms and offices. So, a professor who is morally opposed to guns could refuse to allow them. This is analogous to religious freedom arguments used to justify a business not providing coverage of contraception or denying services to same-sex couples. The idea in all these cases is that the moral interest of one person overrides that of another, thus justifying one person's rights overriding another's. In the case of guns, it is the right of the professor to teach and hold office hours in a gun-free environment that overrides the right of others to carry guns.

A reply to this argument, as can be used in the religious "freedom" cases, is that the professor is not justified in restricting the right of students: their right to carry a weapon trumps the professor's right to be in a weapon-free zone. This would be somewhat like how the right of a same-sex couple to marry trumps the "right" of religious people to live in a country without same-sex marriage.

Another reply is to draw an analogy aimed at showing the absurdity of such a professorial ban. Imagine a professor who has a deep and abiding moral opposition to birth control and wants to ban it from her classroom and office. This includes birth control that is being "concealed" in the body (for example, a woman on the pill). While the professor cannot see it, she claims that its mere presence is morally intolerable to her. But it would be absurd to claim that the professor has the right to ban the presence of birth control. A similar argument could be made with smart phones: a professor can forbid their use in class because they can be disruptive and be used to cheat, but she cannot refuse to allow students to have them. As such, professors do not seem to have the right to ban guns simply because they are morally offended by them.

A better moral argument is the safety argument: a professor could be concerned about people being shot (intentionally or accidentally). Some of my colleagues have claimed there would be a chilling effect if guns were allowed on campus: people might be afraid to discuss contentious issues out of fear of being shot. Some also suggested that professors might be inclined to grade more leniently to avoid being shot.

There are legitimate safety concerns about allowing guns on campus. However, there are two obvious points to consider. The first is that guns are already allowed in many places, and people do not seem especially disinclined to engage in contentious discussions or to neglect their jobs because someone might shoot them with a legally carried gun. As such, unless campuses are simply special places, this concern does not warrant a special ban on campus carry. Put another way, if guns are allowed almost everywhere else, then without a relevant difference argument, they should be allowed on campuses. Secondly, as I point out to my colleagues, people can easily carry guns illegally on campus. If someone intends to kill a professor over a bad grade or a heated discussion (which has happened), they can simply bring a gun to campus illegally to shoot someone illegally. In contrast with prison-style K-12 schools, college campuses are usually open places. A professor's ban on guns would not provide a greater degree of safety, even if the professor were able to enforce such an almost certainly illegal ban.

Interestingly, the state legislatures that pass concealed carry on campus laws almost always forbid people from bringing guns into the legislature where they work. While this shows inconsistency, it does not show the law is wrong. It does, however, point towards a relevant difference argument: perhaps the campus is relevantly like the legislature. My view is that whatever arguments the state legislature advances for allowing guns on campus should also apply to carrying guns into the legislature. If legislators are worried they might be shot, then the same concern would apply to campuses and, one must think, everywhere else.

While I have been playing video games since the digital dawn of gaming, it was not until I completed Halo 5 that I gave some philosophical consideration to video game cut scenes. For those who are not familiar with cut scenes, they are non-interactive movies within a game. They are used for a variety of purposes, such as providing backstory, showing the consequences of the player's actions, or providing information (such as how adversaries or challenges work).

The reason Halo 5 motivated me to write this is an unfortunate one: Halo 5 made poor use of cut scenes, and I will argue for this claim as part of my sketch of a philosophical cut scene theory. Some in gaming, including director Guillermo Del Toro and game designer Ken Levine, have spoken against the use of cut scenes. In support of their position, a reasonable argument can be presented.

One fundamental difference between a game and a movie is the distinction between active and passive involvement. In a typical movie, the audience members merely experience the movie as observers and do not influence the outcome. In contrast, gamers experience the game as participants in that they have some control over the events. A cut scene, or in-game movie, changes the person from being a player to being an audience member. This is analogous to taking a person playing baseball and moving them into the bleachers to watch the game. They are, literally, taken out of the game. While many enjoy watching sports, the athlete is there to play and not to be part of the audience. Likewise, while watching a movie can be enjoyable, a gamer is there to game and not to be an audience member. To borrow from Aristotle, games and movies each have their own proper pleasures, and mixing them together can harm the achievement of this pleasure.

Aristotle, in the Poetics, is critical of using spectacle (such as what we would now call special effects) to produce the emotional effect proper to tragedy. He contends that this should be done by the plot. Though this is harder to do, the effect is better. In the case of a video game, the use of cinematics can be regarded as an inferior way of bringing about the intended experience of a game. The proper means of bringing about the effect should lie within the game itself, so that the player is playing and not merely observing. As such, cut scenes should be absent from games, or, at the very least, kept to a minimum.

One way to counter this argument is to draw an analogy to tabletop role-playing games such as D&D, Pathfinder and Call of Cthulhu. Such games typically begin with something like a video game's opening cinematic: the game master sets the stage for the adventure to follow. During play, there are often important events that take considerable game world time but would be boring to play out in real time. For example, a stock phrase used by most game masters is "you journey for many days," perhaps with some narrative about events that are relevant to the adventure, such as the party members becoming friends along the way. There are also other situations in which information needs to be conveyed, or stories told, that do not need to be played out because doing so would not be enjoyable or would be needlessly time consuming. A part of these games is shifting from active participant to briefly taking on the role of the audience. However, this is rather like being on the bench listening to the coach rather than being removed from the field and put into the bleachers. While one is not actively playing at that moment, it is still an important part of the game, and the player knows they will be playing soon.

In the case of video games, the same sort of approach would also seem to fit, at least in games that have story elements that are important to the game (such as plot continuity, background setting, maintaining some realism, and so on) yet would be tedious, time consuming or beyond the mechanics of the game to play through. For example, if the game involves the player driving through a wasteland to the ruins of a city she wishes to explore, then a short cut scene that illustrates the desolation of the world would be appropriate. After all, driving for hours through a desolate wasteland would be very boring.

Because of the above argument, I do think that cut scenes can be a proper part of a video game, if they are used well. This requires, but is not limited to, ensuring that the cut scenes are necessary and that the game would not be better served by either deleting the events or addressing them through game play. It is also critical that the player does not feel they have been put into the bleachers, although a benched feeling can be appropriate. As a rule, I look at cut scenes as analogous to narrative in a tabletop role-playing game: a cut scene in a video game is fine if narrative would be fine in an analogous situation in a tabletop game.

Since I was motivated by Halo 5's failings, I will use it as an example of the bad use of cut scenes. Going with my narrative rule, a cut scene should not contain things that would be more fun to play than to watch, unless there is some greater compelling reason why it must be a cut scene. Halo 5 routinely breaks this rule. A rather important sub-rule is that major enemies should be dealt with in game play and not simply defeated in a cut scene. Halo 5 broke this rule right away. In Halo 4, Jul 'Mdama was built up as a major enemy. As such, it was rather surprising that he was knifed to death in a cut scene near the start. This would be like setting out to kill a dragon in Dungeons & Dragons and having the dungeon master allow you to fight its orc and goblin minions, but then just say "Fred the fighter hacks down the dragon. It dies" in lieu of playing out the fight with the dragon. Throughout Halo 5 there were cut scenes to which my gaming buddy and I said, "huh, that would have been fun to play rather than watch." That, in my view, is a mark of bad choices about cut scenes.

The designers also made the opposite sort of error: making players engage in tedious "play" that would have been far better served by short cut scenes. For example, there are parts where the player must engage in tedious travel (such as ascending a damaged structure). While it would have been best to make this interesting, it would have been less bad to have a quick cut scene of the Spartans scrambling to safety. The worst examples, though, involved "game play" in which the player remains in first-person shooter view but cannot use any combat abilities. For example, in one section of play the goal is to walk around trying to find various people to "talk" to. The conversations are scripted: when you reach the person, the non-player character says a few things and your character says something back. There are no dialogue choices. These should have been handled by short cut scenes. After all, when playing a first-person shooter, I do not want to walk around unable to shoot while I trigger uninteresting recorded conversations. These games are supposed to be "shoot and loot," not "walk and talk."

To conclude, I take the view of cut scenes that Aristotle takes of acting: while some condemn all cut scenes and all acting (it was argued by some that tragedy was inferior to the epic because it was acted out on stage), it is only the poor use of cut scenes (and poor acting) that should be condemned. I do condemn Halo 5's cut scenes.

On an episode of the Late Show, host Stephen Colbert and Jane Lynch had an interesting discussion of guardian angels. Lynch, who starred as a guardian angel in “Angel from Hell”, related a story of how her guardian angel held her in a protective embrace during a low point of her life. Colbert, ever the rational Catholic, noted that he believed in guardian angels despite knowing they do not exist. The question of the existence of guardian angels is yet another way to consider the classic problem of evil.

In general terms, a guardian angel is a supernatural, benevolent being who serves as a personal protector. The nature of this guarding varies. For some, the guardian angel is supposed to serve in the classic "angel on the shoulder" role and provide good advice. For others, the angel provides a comforting presence. Some even claim that guardian angels take a very active role, such as reducing a potentially fatal fall to one that merely inflicts massive injury. My interest is, however, not with the specific functions of guardian angels, but with the question of their existence.

In the context of monotheism, a guardian angel is an agent of God. As such, this ties them into the problem of evil. The general problem of evil is the challenge of reconciling the alleged existence of God with the existence of evil. Some take this problem to decisively show that God does not exist. Others contend that it shows that God is not how philosophers envision Him in the problem, so that He is not omniscient, omnibenevolent or omnipotent. In the case of guardian angels, the challenge is to reconcile their alleged existence with evil.

There are presumably thousands or millions of cases each day in which a guardian angel could have saved the day with little effort. For example, a guardian angel could tell the police the location of a kidnapped or missing child. As another example, a guardian angel could keep a ladder from slipping. They could also do more difficult things, like preventing cancer from killing children or deflecting bullets away from school children. Since none of this ever happens, the obvious conclusion is that there are no guardian angels of this type.

However, as with the main problem of evil, there are ways to address this problem. One option, which is not available in the case of God, is to argue that guardian angels have very limited capabilities and are weak supernatural beings. Alternatively, they might operate under very restrictive rules. One problem with this reply is that weak angels are indistinguishable in their effects from non-existent angels. Another problem ties this into the broader problem of evil: why wouldn’t God deploy a better sort of guardian or give them broader rules? This, of course, just brings up the usual problem of evil.

Another option is that not everyone gets an angel. Jane Lynch, for example, might get an angel that hugged her. Alan Kurdi, the young boy who drowned trying to flee Syria, did not get a guardian angel. While this would be an explanation of sorts, it still just pushes the problem back: why would God not provide everyone in need with a guardian? We humans are, of course, limited in our resources and abilities, so everyone cannot be protected all the time. However, an omnipotent God does not face this challenge.

It is also possible to make use of a stock reply to the problem of evil and bring in the Devil. Perhaps Lucifer deploys his demonic agents to counter the guardian angels. So, when something bad happens to a good person, it is because her guardian angel was defeated by a demon. While this has a certain appeal, it would require a world in which God and the Devil are closely matched, thus allowing the Devil to defy God and defeat His other angels. This, of course, just brings in the general problem of evil: unless one postulates two roughly equal deities, God is on the hook for the Devil and his demons. Or rather, God’s demons since He created them.

Guardian angels fare no better than God in regard to the problem of evil. That said, the notion of benevolent, supernatural personal guardians predates monotheism. Socrates, for example, claimed to have a guardian who would warn him of bad choices (Stephen Colbert claims to have one as well).

These sorts of guardians were not claimed to be agents of a perfect being, and so avoid the problem of evil. Supernatural beings that are freelancers or who serve a limited deity can reasonably be expected to be limited in their abilities and it would make sense that not everyone would have a guardian. Conflict between opposing supernatural agencies also makes sense, since there is no postulation of a single supreme being.

While these supernatural guardians do avoid the problem of evil, they run up against the problem of evidence: there does not appear to be adequate evidence for the existence of such supernatural beings. In fact, the alleged evidence for them is better explained by alternatives. For example, a little voice in one’s head is better explained in terms of the psychological rather than the supernatural (a benign mental condition rather than a supernatural guardian). As another example, a fall that badly injures a person rather than killing them is better explained in terms of the vagaries of chance than in terms of supernatural intervention.

Given the above discussion, there seems to be little reason to believe in the existence of guardian angels. The world would be radically different if they did exist, so they do not. Or they do so little as to make no meaningful difference, which is hard to distinguish from them not existing at all.  

I certainly do not begrudge people their belief in guardian angels. If that belief leads them to make better choices and feel safer in a dangerous world, then it is a benign belief. I certainly have comforting beliefs as well, such as the belief that most people are basically good. Perhaps these beliefs are our guardian angels.

One stock argument against increasing taxes on the rich to address income inequality is a disincentive argument. The gist of the argument is that if taxes are raised on the rich, they will lose the incentive to invest, innovate, create jobs and so on. Most importantly, in terms of income inequality, the consequences of this disincentive will have the greatest impact on those who are not rich. For example, it has been claimed that the job creators would create fewer jobs and pay lower wages if they were taxed more to address income inequality. As such, such a tax increase would be both harmful and self-defeating: the poor would be no better off than they were before (and perhaps even worse off). Thus, there would seem to be good utilitarian moral grounds for not increasing taxes on the rich.

Naturally, there is the question of whether this disincentive effect is morally justifiable. If the rich would retaliate from spite, then the moral argument falls apart: while there would be negative consequences from such a tax increase, these consequences would be harms intentionally inflicted. In this scenario, not increasing taxes out of fear of retaliation would be morally equivalent to paying protection money so that criminals do not break things in one's business or home. While there could be a practical reason to do this, the criminals would be acting immorally.

If the rich responded not from spite but because the tax increase was an unfair burden inflicted upon them, then the ethics of the situation would be different. To use an obvious analogy, if wealthy customers at a restaurant were forced to pay some of the bills for the poor customers, it would be hard to fault them for leaving smaller tips. While the matter of what counts as a fair tax is controversial, one approach would be to define unfairness in terms of the taxes cutting too much into what the person is entitled to based on their efforts, ability and productivity relative to what they owe the country. This seems reasonable in that it provides room for debate and does not beg any obvious questions (after all, the amount one owes one’s country could be nothing).

Interestingly, the fairness argument would also apply to workers in regard to their salary. When a worker produces value, the employer pays the worker some of that value and keeps some of it. What the employer keeps can be seen as analogous to the tax imposed by the state on the rich. As with the taxes on the rich, there is the question of what is fair to take from workers. Bringing in the disincentive argument, if it works to justify imposing only a fair tax on the rich, it will also do the same for the less rich. So, those who argue against raising taxes on the rich using the disincentive argument should also accept that workers should be paid in accord with the same principles used to judge how much income should be taken from the rich.

The obvious counter to this approach is to break the analogy between the two situations: this would involve showing that the rich differ from other people in relevant ways or that taking income by taxes is relevantly different from taking money from employees. The challenge is, of course, to show that the differences really are relevant.

One way to justify income inequality is the incentive argument. The gist is that income inequality is necessary as a motivating factor: if people could not get rich, then they would not have the incentive to work hard, innovate, invent and so on. The argument requires the assumption that hard work, innovation, inventing and so on are good; an assumption that has some plausibility.

This argument does have considerable appeal. In terms of psychology, it is reasonable to make the descriptive claim that people are motivated by the possibility of gain. This view was held by Thomas Hobbes and others on the grounds that it matches the observed behavior of many (but not all) people. If this view is correct, then achieving the goods of hard work, innovation, invention and so on would require income inequality.

There is the counter that some people are motivated by factors other than financial gain. Some are motivated by altruism, by a desire to improve, by curiosity, by the love of invention, by the desire to create things of beauty, by the desire to solve problems, and by other motives that do not depend on income. These sorts of motivations do suggest that income inequality is not necessary as a motivating factor, at least for some people.

Since this is a matter of fact, it can (in theory) be settled by empirical research. It is worth noting that even if income inequality is necessary as a motivating factor, there remain other concerns, such as the question of how much income inequality is necessary and how much is morally acceptable.

Interestingly, the incentive argument is a two-edged sword: while it can be used to justify income inequality, it can also be used to argue against the sort of economic inequality that exists in the United States and many other countries. The argument is as follows.

While worker productivity has increased significantly in the United States, worker income has not kept pace with this productivity. This is a change from the past, when workers' income generally rose in proportion to increases in productivity. This explains, in part, why CEO (and upper management in general) salaries have seen a significant increase relative to worker income: the increased productivity of the workers generates more income for upper management than it does for the workers.

If it is assumed that gain is necessary for motivation and that inequality is justified by the results (working harder, innovating, producing and so on), then the workers should receive a greater proportion of the returns on their productivity. After all, if high executive compensation is justified on the grounds of its motivational power, then the same principle would apply to workers. They, too, should receive compensation proportional to their productivity, innovation and so on. If they do not, then the incentive argument would entail that they would have no incentive to be as productive, etc.

It could be argued that top management earns its higher income by being responsible for the increase in worker productivity, that the increase in worker productivity is not because of the workers but because of leadership being motivated by the possibility of gain. If this is the case, then the disparity would be justified by the incentive argument: the workers are more productive because the CEO is motivated to make them more productive so she can have an even greater income.

However, if the increased productivity is due mainly to the workers, then this counters the incentive argument: if workers are more productive than before with less relative compensation, then there does not seem to be the alleged connection between incentive and productivity required by the incentive argument. If workers will increase productivity while receiving less compensation relative to their productivity, then the same should hold for the top executives. While there are other ways to warrant extreme income inequality, the incentive argument has a problem.

One possible response is to argue that there are relevant differences between executives and workers such that executives need the incentive provided by extreme inequality while workers are motivated sufficiently by other factors (like being able to buy food). It could also be contended that the workers are motivated by the extreme inequality as well: perhaps they would not be as productive if they did not hold the (almost certainly false) belief that they will become rich.