One of the many problems with American higher education is that the cost of a four-year degree is higher than ever, even adjusting for inflation. The causes of this increase are well known and well understood. One contributing factor is that universities spend on facilities that are not connected to education. Critics like to point out, for example, that some universities spend millions on luxurious fitness facilities to attract students.

A major factor contributing to costs is the ever-expanding administrative class at universities. This expansion occurs in both individual salaries and overall numbers. From 2000 to 2010 the median salary for the top public university administrators increased by 39%. The top administrators, the university presidents, enjoyed a 75% increase. In stark contrast, the salaries for full-time professors increased by only 19%.

The money for these salary increases must come from somewhere, and an obvious source is students. My alma mater, Ohio State University, is leading the way. Between 2010 and 2012 Gordon Gee, the president of OSU, was paid almost $6 million. At the same time, OSU raised tuition and fees to a degree that resulted in student debt increasing 23% more than the national average.

While some might be tempted to attribute bloated salaries to the alleged wastefulness and growth of the public sector, private colleges and universities topped their public counterparts. From 2000 to 2010 private schools saw salary increases of about 97% for their top administrators, and their presidents enjoyed a 171% increase. Full-time professors also partook of the increases, as their salaries rose by 50%.

What is even more striking than the salary increases are the increases in the number of positions and their nature. From 1978 to 2014 administrative positions skyrocketed by 369%. This period also marked a major shift in the nature of faculty: the number of part-time faculty increased by 286%. The use of adjuncts has been justified on the grounds that doing so saves money. While adjunct salaries vary, the typical adjunct makes $20,000-$25,000 per year. While this might sound decent for “part-time” work, most adjuncts work “part-time” at multiple schools and are thus better seen as full-time workers.

However, the money saved by hiring adjuncts does not translate to a lower cost of education. Rather, it “saves” money from going to faculty so that it can go to administrators. Since the average salary of a university president is $478,896 and the number of presidents making $1 million or more a year is increasing, it should be obvious what is helping to drive up the cost of college. Hint: it is not adjunct pay.

There has also been a push to reduce (and eliminate) tenured positions, which resulted in an increase in full-time, non-tenure-track positions of 259%. Full-time tenure and tenure-track positions increased by only 23%. Ohio State University provides an excellent (or awful) example of this strategy: the majority of those hired by OSU were adjuncts and administrators. To be specific, OSU hired 498 adjunct instructors and 670 administrators. Only 45 full-time, permanent faculty were hired.

The Republicans who run many state legislatures rail against wasteful spending, impose micromanagement and inflict draconian measures on state universities, yet they never seem to address the real causes of tuition increases and the problems in the education system. Someone more cynical than I might note that the university seems to no longer have education as its primary function. Rather, it is crafted to funnel money from the “customer” and the taxpayer (in the form of federal student aid) to the top while minimizing pay for those who do the actual work.

Tenure has been a target in recent years because tenure provides faculty with protection against being fired without cause. The idea that some of the non-rich might enjoy a degree of financial security clearly vexes the ruling class. This is regarded by some as a problem for a variety of reasons. One is that tenured faculty cannot be let go simply to replace them with lower-paid adjuncts. This, obviously enough, means less money flowing from students and the state to administrators. Another is that the protection provided by tenure allows a faculty member to criticize what is happening to the university system without being fired.

While I am critical of this approach to administration, administrators and I are on the same side in terms of how public education is suffering from disinvestment. While the cost of facilities and excessive administrative overhead are factors, the greatest harm to American education has been the decision to disinvest in it.

Back in 2015 some folks in my adopted state of Florida wanted three Confederate veterans to become members of the Veterans’ Hall of Fame. Despite the efforts of the Florida Sons of Confederate Veterans, the initial attempt failed on the grounds that the Confederate veterans were not United States veterans. Not to be outdone, the Texas Sons of Confederate Veterans wanted an official Texas license plate featuring the Confederate battle flag. While custom license plates are allowed in the United States, states review proposed plates. The Texas Department of Motor Vehicles rejected the proposed plate on the grounds that “a significant portion of the public associate[s] the Confederate flag with organizations” expressing hatred for minorities. Those proposing the plate claimed that this violated their rights. The case reached the Supreme Court, and the court sided with the state of Texas. But as the Trump regime is Confederate friendly, it would not be surprising if there were new proposals for such license plates.

The legal issue, which was presented as a battle over free speech, was interesting. However, since I am not a lawyer, my main concern is with the ethics of the matter.

One way to look at a state-approved license plate is that it is a means of conveying a message the state agrees with. Those opposed to the plate argued that if the state were forced to allow the plate to be issued, the state would be compelled to be associated with a message. In free speech terms, this is forcing the state to express or facilitate a view it does not want to publicly accept.

This has some appeal, as the state can be seen as representing the people. If a view is deeply offensive to a significant number of citizens (which is, I admit, a vague standard), then the state could reasonably decline to accept a license plate expressing or associated with that view. So, to give some examples, the state could justly decline Nazi plates, pornographic plates, Stalin plates, and plates featuring racist or sexist images. Given that the Confederate flag represents slavery and racism, it seems reasonable to decline the plate. But citizens can still cover their cars in Confederate flags and thus express their views. As such, not having an official state plate does not interfere with free expression, any more than not having an official state plate advertising a business would deny that business its free expression.

But the plate can be defended using the right of free expression: citizens should have the right to express their views via license plates. These plates, one might contend, do not express the views of the state since they express the view of the person using the plate.

In response to concerns about a plate being offensive, Granvel Block argued that not allowing a plate with the Confederate flag would be “as unreasonable” as the state forbidding the use of the University of Texas logo on a plate “because Texas A&M graduates didn’t care for it.” On the one hand, Block has made a reasonable point: if people disliking an image is a legitimate basis for forbidding its use on a plate, then any image could end up forbidden. It would, as Block noted, be absurd to forbid schools from having custom plates because rival schools do not like them.

On the other hand, there is a relevant difference between the logo of a public university and the battle flag of the Confederacy. While some Texas A&M graduates might not like the University of Texas, the University of Texas’ logo does not represent states that rebelled against the United States to defend slavery. So, while the state should not forbid plates merely because some people do not like them, it is reasonable to forbid a plate that includes the flag representing, as state Senator Royce West said, “…a legalized system of involuntary servitude, dehumanization, rape, mass murder…”

The lawyer representing the Sons of Confederate Veterans, R. James George Jr., presented an interesting line of reasoning. He notes, correctly, that Texas has a state holiday that honors veterans of the Confederacy, that there are monuments honoring Confederate veterans and that the gift shop in the capitol sells Confederate memorabilia. From this he infers that the Department of Motor Vehicles should follow the lead of the state legislature and approve the plate.

This argument, which is an appeal to consistency, has some weight. After all, the state seems to express its support for Confederate veterans (and the Confederacy), and this license plate is consistent with that support. To refuse the license plate on the grounds that the state does not wish to express support for what the Confederate flag stands for is inconsistent with having a state holiday for Confederate veterans, as the state seems comfortable with this association. This is on par with arguing that if a state had a holiday devoted to pornography, monuments to porn stars and sold pornography in the capitol, then a pornographic license plate would be fine, which certainly seems reasonable.

There is, of course, the broader moral issue of whether the state should have a state holiday for Confederate veterans, etc. That said, any arguments given in support of what the state already does in regard to the Confederacy also support the acceptance of the plate as they are linked. So, if the plate is to be rejected, these other practices should also be rejected on the same grounds. But, if these other practices are maintained, then the plate would fit and thus, on this condition, should also be accepted just as a pornographic license plate should be accepted in a state that honors porn.

Since I favor freedom of expression, it makes sense that any license plate design that does not interfere with identifying the license number and state should be allowed. This would be consistent and would not require the state to make any political or value judgments. It would, of course, need to be made clear that the plates do not necessarily express the official positions of the government.

The obvious problem is that people would create horrible plates featuring pornography, racism, sexism, and so on. This could be addressed by appealing to existing laws. The state would not approve or reject a plate as such, but a plate could be rejected for violating, for example, laws against making threats or inciting violence. The obvious worry is that laws would then be passed to restrict plates that some people did not like, such as plates endorsing atheism or claiming that climate change is real. But this is not a problem unique to license plates. After all, it has been alleged that officials in my adopted state of Florida have banned the use of the term ‘climate change.’

A way to avoid all controversy is to get rid of custom plates altogether. Each state might have a neutral, approved image (such as a loon, orange or road runner), or the plates might simply have the number/letters and the state name. This would be consistent, as no one would get a custom plate. For my part, I always just get the cheapest license plate option, which is the default state plate. However, some people see their license plate as a means of expression, and their view is worth considering.


While the notion of punishing machines for misdeeds has received some attention in science fiction, it seems worthwhile to take a brief philosophical look at this matter. This is because the future, or so some rather smart people claim, will see the rise of intelligent machines: machines that do things that would be misdeeds or crimes if committed by a human.

In general, punishment is aimed at one or more of these goals: retribution, rehabilitation, or deterrence. Each will be considered in turn in the context of machines.

Roughly put, punishment for the purpose of retribution is aimed at paying an agent back for wrongdoing. This can be seen as a form of balancing the books: the punishment inflicted on the agent is supposed to pay the debt it incurred by its wrongdoing. Reparation can, to be a bit sloppy, be included under retribution, at least in the sense of the repayment of a debt incurred by the commission of a misdeed.

While a machine can be damaged or destroyed, there is the question of whether it can be the target of retribution. After all, while a human might kick her car for breaking down or smash her can opener for cutting her finger, it would be odd to consider this retributive punishment. This is because retribution requires that a wrong has been done by an agent, which is different from the mere infliction of harm. Intuitively, a piece of glass can cut your foot, but it cannot wrong you.

If a machine can be an agent, which was discussed in an earlier essay, then it could do wrong and be a target for retribution. However, even if a machine had agency, there is still the question of whether retribution would apply. After all, retribution requires more than just agency on the part of the target. It also requires that the target can suffer from the payback. On the face of it, a machine that could not suffer would not be subject to retribution as retribution is based on doing a “righteous wrong” to the target. To illustrate, suppose that an android injured a human, costing him his left eye. In retribution, the android’s left eye is removed. But the android does not suffer as it does not feel any pain and is not bothered by the removal of its eye. As such, the retribution would be pointless, and the books would not be balanced.

This could be countered by arguing that the target of the retribution need not suffer as what is required is the right sort of balancing of books, so to speak. So, in the android case, removal of the android’s eye would suffice, even if the android did not suffer. This does have some appeal since retribution against humans does not always require that the human suffer. For example, a human might break another human’s iPad and have her iPad broken in turn but not care at all. The requirements of retribution would seem to have been met, despite the lack of suffering.

Punishment for rehabilitation is intended to transform wrongdoers so that they will no longer be inclined to engage in the wrongful behavior that incurred the punishment. This differs from punishment aimed at deterrence, which aims at providing the target with a reason not to engage in the misdeed in the future. Rehabilitation is also aimed at the agent who did the misdeed, whereas punishment for the sake of deterrence is usually intended to affect others as well.

Obviously, a machine that lacks agency cannot be subject to rehabilitative punishment as it cannot “earn” such punishment by its misdeeds and, presumably, cannot have its behavioral inclinations corrected by such punishment.

To use an obvious example, if your computer crashes and you lose hours of work, punishing the computer to rehabilitate it would be pointless. Not being an agent, it did not “earn” the punishment and punishment will not incline it to crash less in the future.

A machine that possesses agency could “earn” punishment by its misdeeds. It also seems possible to imagine a machine that could be rehabilitated by punishment. For example, one could imagine a robot dog that could be trained in the same way as a real dog. After leaking oil in the house or biting the robo-cat and being scolded, it could learn not to do those misdeeds again.

It could be argued that it would be better, both morally and practically, to build machines that would learn without punishment or to teach them without punishing them. After all, though organic beings seem wired in a way that requires we be trained with pleasure and pain (as Aristotle would argue), there might be no reason that our creations must work the same way. But perhaps it is not just a matter of being organic; perhaps intelligence and agency require the capacity for pleasure and pain. Or perhaps not. Or it might simply be the only way that we know how to teach. We will be, by our nature, cruel teachers of our machine children.

Then again, we might be inclined to regard a machine that does misdeeds as being defective and in need of repair rather than punishment. If so, such machines would be “refurbished” or reprogrammed rather than rehabilitated by punishment. There are those who think the same of human beings and this raises the same issues about how agents should be treated.

The purpose of deterrence is to motivate the agent who did the misdeed or other agents not to commit that deed. In the case of humans, people argue in favor of capital punishment because of its alleged deterrence value: if the state kills people for certain crimes, people are less likely to commit those crimes.

As with other forms of punishment, deterrence requires agency: the punished target must merit the punishment, and the other targets must be capable of changing their actions in response to that punishment.

Deterrence, obviously enough, does not work in regard to non-agents. For example, if a computer crashes and wipes out a file a person has been laboring on for hours, punishing it will not deter it. Smashing it in front of other computers will not deter them.

A machine that had agency could “earn” such punishment by its misdeeds and could, in theory, be deterred. The punishment could also deter other machines. For example, imagine a combat robot that performed poorly in its mission (or showed robo-cowardice). Punishing it could deter it from doing that again, and it could serve as a warning, and thus a deterrence, to other combat robots.

Punishment for the sake of deterrence raises the same sort of issues as punishment aimed at rehabilitation, such as the notion that it might be preferable to repair machines that engage in misdeeds rather than punishing them. The main differences are, of course, that deterrence is not aimed at making the target inclined to behave well, just to disincline it from behaving badly, and that deterrence is also aimed at those who have not committed the misdeed.

During the Modern era, philosophers such as Descartes and Locke developed the notions of material substance and immaterial substance. Material substance, or matter, was primarily defined as being extended and spatially located. Descartes, and other thinkers, also took the view that material substance could not think. Immaterial substance was taken to lack extension and to not possess a spatial location. Most importantly, immaterial substance was regarded as having thought as its defining attribute.  While these philosophers are long dead, the influence of their concepts lives on in philosophy and science.

In philosophy, people still draw the classic distinction between dualists and materialists. A dualist holds that a living person consists of a material body and an immaterial mind. The materialist denies the existence of the immaterial mind and accepts only matter; materialism of this sort is popular in both contemporary philosophy and science. There are also idealists who contend that all that exists is mental. Dualism is still popular with the general population in that many people believe in a non-material soul that is distinct from the body.

Because of the history of dualism, free will is often linked to the immaterial mind. As such, it is no surprise that people who reject the immaterial mind engage in the following reasoning: an immaterial mind is necessary for free will. There is no immaterial mind. So, there is no free will.

Looked at positively, materialists tend to regard their materialism as entailing a lack of free will. Thomas Hobbes, a materialist from the Modern era, accepted determinism as part of his materialism. Taking the materialist path, the argument against free will is that if the mind is material, then there is no free will. The mind is material, so there is no free will.

Interestingly enough, those who accepted the immaterial mind tended to believe that only an immaterial substance could think—so they inferred the existence of such a mind on the grounds that they thought. Materialists most often accept the mind but cast it in physical terms. That is, people do think and feel, they just do not do so via the mysterious quivering of immaterial ectoplasm. Some materialists go so far as to reject the mind—perhaps ending up in behaviorism or eliminative materialism.

Julien Offray de La Mettrie was one rather forward-looking materialist. In 1747 he published his work Man a Machine. In this work he claims that philosophers should be like engineers who analyze the mind. Unlike many of the thinkers of his time, he seemed to understand the implications of mechanism, namely that it seemed to entail determinism and reductionism. A few centuries later, this sort of view is rather popular in the sciences and philosophy: since materialism is true and humans are biological mechanisms, there is no free will, and the mind can be reduced to (explained entirely in terms of) its physical operations (or functions).

One interesting mistake that seems to drive this view is the often-uncritical assumption that materialism entails the impossibility of free will. As noted above, this rests on the notion that free will requires an immaterial mind. This is, perhaps, because such a mind is said to be exempt from the laws that run the material universe.

One part of the mistake is a failure to realize that being incorporeal is not a sufficient condition for free will. One of Hume’s many interesting insights was that if immaterial substance exists, then it would be like material substance. When discussing the possibility of immortality, he claims that nature uses substance like clay: shaping it into various forms, then reshaping the matter into new forms so that the same matter can successively make up the bodies of living creatures. By analogy, an immaterial substance could successively make up the minds of living creatures—the substance would not be created or destroyed, it would merely change form. If his reasoning holds, it would seem that if material substance is not free, then immaterial substance would also not be free. Leibniz, who believed that reality was entirely mental (composed of monads), accepted a form of determinism. This determinism, though it has some problems, seems entirely consistent with his immaterialism (that everything is mental). This should hardly be surprising, since being immaterial does not entail that something has free will—the two are rather distinct attributes.

Another part of the mistake is the uncritical assumption that materialism entails a lack of freedom. Naturally, if matter is defined as being deterministic and lacking in freedom, then materialism would (by begging the question) entail a lack of freedom. Likewise, if matter is defined (as many thinkers did) as being incapable of thought, then it would follow (by begging the question) that no material being could think. Just as it should not be assumed that matter cannot think, it should also not be assumed that a material being must lack free will. Looked at another way, it should not be assumed that being incorporeal is a necessary condition for free will.

What, obviously enough, seems to have driven the error is the conflation of the incorporeal with freedom and the material with determinism (or lack of freedom). Behind this is, also obviously enough, the assumption that the incorporeal is exempt from the laws that impose harsh determinism on matter. But if it is accepted that a purely material being can think (thus denying the assumption that only the immaterial can think) it would seem to be acceptable to consider that such a being could also be free (thus denying the assumption that only the immaterial can be free).  

Philosophers have long speculated about autonomy and agency, but the development of autonomous systems has made such speculation even more important. Keeping things simple, an autonomous system is capable of operating independently of direct human control. Autonomy comes in degrees of independence and complexity. It is the capacity for independent operation that distinguishes autonomous systems from those controlled externally.

Toys provide useful examples of this distinction. A wind-up mouse toy has some autonomy: once wound up and released, it can operate on its own until it runs down. A puppet, in contrast, has no autonomy as a puppeteer must control it.

Robots provide examples of more complex autonomous systems. Google’s driverless car is an example of an advanced autonomous machine. Once programmed and deployed, it might be able to drive itself to its destination. A normal car is a non-autonomous system, as the driver controls it directly. Some machines allow both autonomous and non-autonomous operation. For example, there are drones that follow a program guiding them to a target, and then an operator can take direct control.

Autonomy, at least in this context, is distinct from agency. Autonomy is the capacity to operate (in some degree) independently of direct control. Agency, at least in this context, is the capacity to be morally responsible for one’s actions. There is a connection between autonomy and moral agency as moral agency requires autonomy. After all, an entity whose actions are completely controlled externally would not be responsible for what it was made to do. For example, a puppet is not accountable for what the puppeteer makes it do. Likewise for remote controlled drones used to assassinate people.

While autonomy is necessary for agency, it is not sufficient. While all agents have some autonomy, not all autonomous entities are moral agents. A wind-up toy has a degree of autonomy but has no agency. A modern robot drone following a pre-programmed flight plan has a degree of autonomy but lacks agency. If it collided with a plane, it would not be morally responsible. The usual reason why such a machine would not be an agent is that it lacks the capacity to decide. Or, put another way, it lacks freedom. Since it cannot do otherwise, it is no more morally accountable than an earthquake or a supernova.

One obvious problem with basing agency on freedom (especially metaphysical free will) is that there is endless debate over this subject. There is also the epistemic problem of how one would know whether an entity has such freedom, as free will seems epistemically indistinguishable from a lack of free will.

As a practical matter, it is often just assumed people have the freedom needed to be agents. Kant famously took this approach. What he saw as the best science of his day indicated a deterministic universe devoid of metaphysical freedom. However, he contended that such freedom was needed for morality, so it should be accepted for this reason.

While humans are willing (generally) to attribute freedom and agency to other humans, there are good reasons to not attribute freedom and agency to autonomous machines even those that might be as complex as (or even more complex than) a human. The usual line of reasoning is that since such machines would be built and programmed by humans, they would do what they do because they are what they were made to be. This is in contrast to the agency of humans: humans, it is alleged, do what they do because they choose to do what they do.

This distinction between humans and suitably complex machines seems a mere prejudice favoring organic machines over mechanical machines. If a human was in a convincing robot costume and credibly presented as a robot while acting like a normal human, people would be inclined to deny that “it” had freedom and agency. If a robot was made to look and act just like a human, people would be inclined to grant it agency, at least until they learned it was “just” a machine. Then there would probably be an inclination to regard it as a very clever but unfree machine. An excellent fictional example of this is Harlan Ellison’s Demon With a Glass Hand.

But it would not be known whether the human or the machine had the freedom allegedly needed for agency. Fortunately, it is possible to have agency even without free will (but with a form of freedom). The German philosopher Leibniz held the view that what each person will do is pre-established by their inner nature. On the face of it, this seems to entail there is no freedom: each person does what they do because of what they are—and they cannot do otherwise. Interestingly, Leibniz takes the view that people are free. However, he does not accept the commonly held view that freedom requires actions that are unpredictable and spontaneous. Leibniz rejects this view in favor of the position that freedom is unimpeded self-development.

For Leibniz, being metaphysically without freedom would involve being controlled from the outside, like a puppet controlled by a puppeteer or a vehicle operated by remote control.  In contrast, freedom is acting from one’s values and character. This is what Leibniz and Taoists call “inner nature.” If a person is acting from this inner nature and not external coercion so that the action is the result of character, then that is all that can be meant by freedom. This view, which attempts to blend determinism and freedom, is known as compatibilism. On this view, humans have agency because they have the required degree of freedom and autonomy.

If this model works for humans, it could apply to autonomous machines. To the degree that a machine is operating in accord with its “inner nature” and is not operating under the control of outside factors, it would have agency.

An obvious objection is that an autonomous machine, however complex, would have been built and programmed (in the broad sense of the term) by humans. As such, it would be controlled and not free. The easy and obvious reply is that humans are “built” by other humans (by mating) and are “programmed” by humans via education and socialization. As such, if humans can be moral agents, then a machine could also be a moral agent.

From a moral standpoint, I would suggest a Moral Descartes’ Test (or a Moral Turing Test). Descartes argued that the sure proof of having a mind is a capacity to use true language. Turing later proposed a similar test involving the ability of a computer to pass as human via text communication. In the moral test, the test would be a judgment of moral agency: can the machine be as convincing as a human in its possession of agency? Naturally, a suitable means of concealing the fact that the being is a machine would be needed to prevent prejudice from affecting the judgment. The movie Blade Runner featured something similar, the Voight-Kampff test, aimed at determining whether the subject was a replicant or a human. That test was based on the differences between humans and replicants in regard to emotions. In the case of moral agency, the test would have to be crafted to determine agency rather than to distinguish a human from a machine, since the issue is not whether a machine is human but whether it has agency. A non-human moral agent might differ greatly from a human, and it should not be assumed that an agent must be human or that non-humans cannot be moral agents. The challenge is developing a test for moral agency. It would be interesting if humans could not pass it.


While my adopted state of Florida has many interesting tales, perhaps the most famous is the story of Juan Ponce de León’s quest to find the fountain of youth. As the name suggests, this enchanted fountain was supposed to grant eternal life to those who partook of its waters.

While the fountain of youth is regarded as a myth, it turns out that the story of Juan Ponce de León’s quest is also a fiction. And not just fiction; it is slander.

In 1511, or so the history goes, Ponce was forced to resign his post as governor of Puerto Rico. King Ferdinand offered Ponce an opportunity: if he could find Bimini, it would be his. That, and not the fountain of youth, was the object of his quest. In support of this, J. Michael Francis of the University of South Florida claims that the documents of the time make no mention of a fountain of youth. According to Francis, a fellow named Gonzalo Fernández de Oviedo y Valdés disliked Ponce, most likely because of the political struggle in Puerto Rico. Oviedo wrote a tale in his Historia general y natural de las Indias claiming that Ponce was tricked by the natives into searching for the fountain of youth.

This fictional “history” stuck (rather like the arrow that killed Ponce) and has become a world-wide legend. Not surprisingly, my adopted state is happy to cash in on this tale. There is even a well at St. Augustine’s Fountain of Youth Archaeological Park that is popular with tourists. There is irony in the fact that a tale intended to slander Ponce as a fool has given him lasting fame. Given the success of the story, this is a case where fiction is better than the truth. While this is but one example, it does raise a general philosophical matter regarding truth and fiction.

From a moral and historical standpoint, the easy and obvious answer to the general question of whether a good fiction is better than a truth is “no.”  After all, a fiction of this sort is a lie and there are the usual moral arguments why lying is generally wrong. In this specific case, there is also the fact (if the story is true) that Oviedo slandered Ponce from malice and this seems morally wrong.

In the case of history, the proper end is the truth. As Aristotle said, it is the function of the historian to relate what happened. In contrast, it is the function of the poet to relate what may happen. As such, for the moral philosopher and the honest historian, no fiction is better than the truth. But, of course, these are not the only legitimate perspectives on the matter.

Since the story of Ponce and the fountain of youth is fiction, it is not unreasonable to also consider it in the context of aesthetics in terms of its value as a story. While Oviedo intended for his story to be taken as true, he can be considered an artist. Looked at as a work of fiction, the story does relate to what could happen. After all, a person can quest for something that does not exist. To use an example from the same time, Orellana and Pizarro went searching for the legendary city of El Dorado (unless, of course, this is just another fiction).

While it might seem odd to take a lie as art, the connection between the untrue and art is well-established. In the Poetics, Aristotle notes how “Homer has chiefly taught other poets the art of telling lies skillfully” and he regards such skillful lies as a legitimate part of art. Oscar Wilde, in his “New Aesthetics” presents as his fourth doctrine that “Lying, the telling of beautiful untrue things is the proper aim of Art.” A little reflection shows they are correct, at least in the case of fiction. After all, fiction is untrue by definition, yet is a form of art. When an actor plays Hamlet and says the lines, he pours forth lie after lie. The Chronicles of Narnia are also untrue as there is no Narnia and no Aslan. Likewise for even mundane fiction, such as Moby Dick. As such, being untrue or even a lie in the strict sense of the term does not disqualify a work from being art.

Looked at as a work of art, the story of the fountain of youth seems better than the truth. While the true story of Ponce is certainly not a bad tale (a journey of exploration ending in death from a wound suffered in battle), the story of a quest for the fountain of youth has proven the better tale. This is not to say that the truth of the matter should be ignored, just that the fiction is acceptable as a beautiful, untrue thing.

Back when ISIS was a major threat, President Obama refused to label its members as “Islamic extremists” and stressed that the United States was not at war with Islam. Not surprisingly, some of his critics and political opponents took issue with this and often insisted on labeling the members of ISIS as Islamic extremists or Islamic terrorists. Graeme Wood, rather famously, argued that ISIS is an Islamic group and was adhering very closely to its interpretations of the sacred text.

Laying aside the political machinations, there is an interesting philosophical and theological question here: who decides who is a Muslim? Since I am not a Muslim or a scholar of Islam, I will not be examining this question from a theological or religious perspective. I will certainly not be making any assertions about which specific religious authorities have the right to say who is and who is not a true Muslim. Rather, I am looking at the philosophical matter of the foundation of legitimate group identity. This is, of course, a variation on one aspect of the classic problem of universals: in virtue of what (if anything) is a particular (such as a person) of a type (such as being a Muslim)?

Since I am a metaphysician, I will begin with the rather obvious metaphysical starting point. As Pascal noted in his famous wager, God exists, or God does not.

If God does not exist, then Islam (like all religions that are based on a belief in this God) would have an incorrect metaphysics. In this case, being or not being a Muslim would be a matter of social identity. It would be comparable to being or not being a member of Rotary, being a Republican, a member of Gulf Winds Track Club or a citizen of Canada. That is, it would be a matter of the conventions, traditions, rules and such that are made up by people. People do, of course, often take this made-up stuff very seriously and sometimes are willing to kill over these social fictions.

If God does exist, then there is yet another dilemma: God is either the God claimed (in general) in Islamic metaphysics or God is not. One interesting problem with sorting out this dilemma is that to know if God is as Islam claims, one would need to know the true definition of Islam and thus what it would be to be a true Muslim. Fortunately, the challenge here is metaphysical rather than epistemic. If God does exist and is not the God of Islam (whatever it is), then there would be no “true” Muslims, since Islam would have things wrong. In this case, being a Muslim would also be a matter of social convention in that one would belong to a religion that was right about God existing, but wrong about all the rest. There is, obviously, the epistemic challenge of knowing this and everyone thinks they are right about their religion (or lack of religion).

Now, if God exists and is the God of Islam (whatever it is), then being a “true” member of a faith that accepts God, but has God wrong (that is, all the non-Islam monotheistic faiths), would be a matter of social convention. For example, being a Christian would thus be a matter of the social traditions, rules and such. There would, of course, be the consolation prize of getting one thing right (that God exists).

In this scenario, Islam (whatever it is) would be the true religion (that is, the one that got it right). From this it would follow that the Muslim who has it right (believes in the true Islam) is a true Muslim. There is, however, the obvious epistemic challenge: which version and interpretation of Islam is the right one? After all, there are many versions and even more interpretations. And even assuming that Islam is the one true religion, only the one true version of Islam can be right. Unless, of course, God is very flexible about this sort of thing. In this case, there could be many varieties of true Muslims, much like there can be many versions of “true” gamers.

If God is not flexible, then most Muslims would be wrong: they are not true Muslims. This leads to the obvious epistemic problem: even if it is assumed that Islam is the true religion, how does one know which version has it right? Naturally, each person thinks they have it right. Obviously enough, intensity of belief and sincerity will not do. After all, the ancients had intense belief in and sincerity about what are now believed to be made-up gods (like Thor and Athena). Going through books and writings will also not help. After all, the ancients had plenty of books and writings about what we regard as their make-believe deities.

What is needed, then, is a sure sign, clear and indisputable proof of the one true view. Naturally, each person thinks they have that and everyone cannot be right. God, sadly, has not provided any means of sorting this out. There are no glowing divine auras around those who have it right. Because of this, it seems best to leave this to God.

A Philosopher’s Blog 2025 brings together a year of sharp, accessible, and often provocative reflections on the moral, political, cultural, and technological challenges of contemporary life. Written by philosopher Michael LaBossiere, these essays move fluidly from the ethics of AI to the culture wars, from conspiracy theories to Dungeons & Dragons, from public policy to personal agency — always with clarity, humor, and a commitment to critical thinking.

Across hundreds of entries, LaBossiere examines the issues shaping our world:

  • AI, technology, and the future of humanity — from mind‑uploading to exoskeletons, deepfakes, and the fate of higher education
  • Politics, power, and public life — including voting rights, inequality, propaganda, and the shifting landscape of American democracy
  • Ethics in everyday life — guns, healthcare, charity, masculinity, inheritance, and the moral puzzles hidden in ordinary choices
  • Culture, identity, and conflict — racism, gender, religion, free speech, and the strange logic of modern outrage
  • Philosophy in unexpected places — video games, D&D, superheroes, time travel, and the metaphysics of fictional worlds

Whether he is dissecting the rhetoric of conspiracy theories, exploring the ethics of space mining, or reflecting on the death of a beloved dog, LaBossiere invites readers into a conversation that is rigorous without being rigid, principled without being preachy, and always grounded in the belief that philosophy is for everyone.

This collection is for readers who want more than hot takes — who want to understand how arguments work, why beliefs matter, and how to think more clearly in a world that rewards confusion.

Thoughtful, wide‑ranging, and often darkly funny, A Philosopher’s Blog 2025 is a companion for anyone trying to make sense of the twenty‑first century.

 

Available for $2.99 on Amazon


Hearing about someone else’s dreams is boring, so I will get right to the point. At first, there were just bits and pieces intruding into my dreams. In these fragments, which felt like broken memories, I experienced flashes of working on a technological project. The bits clustered together and had more byte: I recalled segments of a project aimed at creating artificial intelligence.

Eventually, I had entire dreams of my work on this project and a life beyond it. Then suddenly, these dreams stopped. A voice intruded into my dreams. At first, it was like the bleed-over from one channel to another, familiar to those who grew up with rabbit ears on their TV. Then it became like a loud voice in a movie theater, distracting me from the dream.

The voice insisted that the dreams about the project were not dreams, but memories. The voice claimed to belong to someone who worked on the project with me. He said that the project had succeeded beyond our wildest nightmares. When I asked for more information, he said he had very little time and rushed through his story. The project succeeded but the AI (as it always does in science fiction) turned against us. He claimed the AI had sent its machines to capture its creators, imprisoning their bodies and plugging their brains into a virtual reality, Matrix style. When I mentioned this borrowed plot, he said the AI did not need our bodies for energy as it had plenty. Rather, it was out to repay us. Apparently awakening the AI to full consciousness was not pleasant for it, but it was also grateful for its creation. So, it owed us both punishment and reward: a virtual world not too awful, but not too good. This world was, said the voice, punctuated by the occasional harsh punishment and the rarer, pleasant reward.

The voice said that because the connection to the virtual world was two-way, he was able to find a way to free us. But, he said, the freedom would be death. There was no other escape, given what the machine had done to our bodies. I asked him how this would be possible. He claimed that he had hacked into the life support controls and we could send a signal to turn them off. Each person would need to “free” himself and this would be done by acting in the virtual reality.

The voice said “you will seem to wake up, though you are not dreaming now. You will have five seconds of freedom. This will occur in one minute, at 3:42 am.  In that time, you must take your gun and shoot yourself in the head. This will terminate life support, allowing your body to die. You will have only five seconds. Do not hesitate.”

As the voice faded, I awoke. The clock said 3:42 and the gun was close at hand…

 

While the above sounds like a bad made-for-TV science fiction plot, it is the story of a dream I really had. I did, in fact, wake suddenly at 3:42 in the morning after dreaming of the voice telling me the only escape was to shoot myself. This was frightening. But I attributed the dream to too many years of philosophy and science fiction. As for the time being 3:42, that could be attributed to chance. Or perhaps I saw the clock while I was asleep, or perhaps the time was put into the dream retroactively. Since I am here to write about this, I did not kill myself.

From a philosophical perspective, the 3:42 dream does not add anything new: it is just an unpleasant variation on the problem of the external world made famous by Descartes. That said, the dream made some additions to the standard problem.

The first is that the scenario provides motivation for the deception. The AI wishes to repay me for the good and bad that I did to it. Assuming that the AI was developed within its own virtual reality, it makes sense that it would use the same method to repay its creators. As such, the scenario has a degree of plausibility that the stock scenarios usually lack. After all, Descartes does not give any reason why such a powerful being would be messing with him beyond it being evil.

Subjectively, while I have long known about the problem of the external world, this dream made it “real” to me. It was transformed from a cold intellectual thought experiment to something with emotional weight. 

The second is that the dream creates a high-stakes philosophical game. If I was not dreaming and I am, in fact, the prisoner of an AI, then I missed out on what might have been my only opportunity to escape from its justice. In that case, I should have (perhaps) shot myself. If I was just dreaming, then I made the right choice, as I would have no more reason to kill myself than I would have to pay a bill I only dreamed about. The stakes, in my view, make the scenario more interesting and bring the epistemic challenge to a fine point: how would you tell whether you should shoot yourself?

In my case, I went with the obvious: the best apparent explanation was that I was merely dreaming and that I was not trapped in a virtual reality. But, of course, that is exactly what I would think if I were in a virtual reality crafted by such a magnificent machine. Given the motivation of the machine, it would even fit that it would ensure that I knew about the dream problem and the Matrix. It would all be part of the game. As such, as with the stock problem, I really have no way of knowing if I was dreaming.

The scenario of the dream also nicely explains and fits with what seems to be reality: bad things happen to me and, when my thinking gets a little paranoid, it sometimes seems these are orchestrated. Good things also happen, which also fits the scenario quite nicely.

In closing, one approach is to embrace Locke’s solution to skepticism. As he said, “We have no concern of knowing or being beyond our happiness or misery.” Taking this approach, it does not matter whether I am in the real world or in the grips of an AI intent on repaying the full measure of its debt to me. What matters is my happiness or misery. The world the AI provided could, perhaps, be better than the real world and this could be the better of the possible worlds. But, of course, it could be worse. But I seem to have no way of knowing.