While I consider myself something of a movie buff, I was out-buffed by one of my old colleagues. This is a good thing—I enjoy the opportunity to hear about movies from someone who knows more than I do. Some years ago we talked about science-fiction classics and movies based on them.

Not surprisingly, the discussion turned to Blade Runner, which is based on Do Androids Dream of Electric Sheep? by Philip K. Dick. While I like the movie, some fans of the author hate it because it deviates from the book. This leads to three questions one should ask about such works.

The first question, which is the most important, is: is the movie good? The second question, which I consider less important, is: how much does the movie deviate from the book/story? For some people, the second question is important and their answer to the first question can hinge on the answer to the second. For them, the greater the deviation from the book/story, the worse the movie. This rests on the view that an important aesthetic purpose of a movie based on a book/story is to faithfully reproduce the book/story in movie format.

My view is that deviation from the original is not relevant to the quality of the movie as a movie. That is, if the only factor that allegedly makes the movie bad is that it deviates from the book/story, then the movie would seem to be good. One way to argue for this is to point out the obvious: if someone saw the movie without knowing about the book, she would regard it as a good movie. If she then found out it was based on a book/story, then nothing about the movie would have changed—as such, it should still be a good movie on the grounds that the relation to the book/story is external to the movie. To use an analogy, imagine that someone sees a painting and regards it as well done artistically. Then the person finds out it is a painting of a specific person and finds a photo of the person that shows the painting differs from the photo. To then claim that the painting is not a good work of art would be mistaken.

It might be countered that the painting would be bad because it failed to properly imitate the person. However, this would only count against the accuracy of the imitation and not the artistic merit of the work. That it does not look exactly like the person would not entail that it is lacking aesthetically. Likewise for a movie: the fact that it is not enough like the book/story does not entail that it is a bad movie. Naturally, it is fair to claim that it does not imitate well, but this is a different matter than being a well-done work.

That said, I am sympathetic to the view that a movie must imitate a book/story to a certain degree if it is to legitimately claim the same name. Take, for example, the movie The Lawnmower Man. While it is not a great film, the only thing it has in common with the Stephen King story is the name. In fact, King apparently sued over this because the film had no meaningful connection to his story. However, whether the movie has a legitimate claim to the name of a book/story or not is distinct from the quality of the movie. After all, a very bad movie might be faithful to a very bad book/story. But it would still be bad.

The third question is: is the movie so bad that it desecrates the story/book? In some cases, authors sell the film rights to books/stories or the works become public domain (and thus available to anyone). In some cases, the films made from such works are both reasonably true to the originals and reasonably good. The obvious examples here are the Lord of the Rings movies. However, there are cases in which the movie (or TV show) is so bad that the badness desecrates the original work by associating its awfulness with a good book/story.

One example of this is the desecration of A Wizard of Earthsea by the Sci-Fi Channel. This was so badly done that Ursula K. Le Guin felt obligated to write a response to it. While the book is not one of my favorites, I did like it and was initially looking forward to seeing it as a series. However, watching it was the TV equivalent of seeing a friend killed and re-animated as a shuffling zombie. Perhaps not quite that bad—but still bad. Since I also like Edgar Rice Burroughs' Mars books, I did not see the travesty that is Disney's John Carter. To answer my questions, this movie was apparently very bad, deviated from the book, and did desecrate it just a bit (I have found it harder to talk people into reading the Mars books because of the badness of that movie). I think the Hobbit films desecrated the Hobbit book and will stand by that position, despite liking most of the director's works.

From both a moral and aesthetic standpoint, I would contend that if a movie is to be made from a book or story, those involved have an obligation to make the movie at least as good as the original book/story. There is also an obligation to have at least some meaningful connection to the original work—after all, if there is no such connection then there are no legitimate grounds for having the film bear that name.

 

A Philosopher’s Blog is Now on Substack!

You can subscribe and read for free.

https://aphilosophersblog.substack.com/

The cost of higher education has increased dramatically, resulting in a corresponding increase in student debt. It is worth considering the cause and what could be done to reduce costs without reducing the quality of education.

One obvious approach is to consider whether university presidents are worth their expense. If a university president received $1 million in compensation, they would need to contribute the equivalent of 40+ adjuncts in terms of value created. It could, of course, be argued that public university presidents bring in money from other rich people, provide prestige, and engage in the politics needed to keep money flowing from the state. If so, a million-dollar president is worth 40+ adjuncts. If not, either the adjuncts should be paid more or the president paid less (or both) to ensure that money is not being wasted—and thus needlessly driving up the cost of education.
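The arithmetic behind the "40+ adjuncts" comparison can be made explicit. The sketch below is mine, not from the original text, and assumes a hypothetical adjunct cost of about $25,000 per year; at that rate, one $1 million president costs as much as 40 adjuncts.

```python
# Back-of-the-envelope arithmetic for the comparison above. The $25,000
# adjunct figure is an assumption for illustration, not a number from the text.
president_compensation = 1_000_000
adjunct_cost = 25_000  # hypothetical annual cost of one adjunct

equivalent_adjuncts = president_compensation // adjunct_cost
print(equivalent_adjuncts)  # 40
```

On this assumption, the president would need to create at least as much value as 40 full-time adjunct instructors to break even.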

One reply to criticisms of high presidential pay is that for big public universities, even a million-dollar president is a tiny part of the budget. As such, cutting the presidential salary would not yield significant savings. However, something is driving up the cost of education—and it is not faculty salary.

One major contribution to the increasing costs is the growth of the administrative sector of higher education. A study found that the public universities with the highest administrative pay spend half as much on scholarships as they do on administration. In such situations, students go into debt being taught by adjuncts while supporting the administration.

It is easy enough to demonize administrators. However, a university (like any organization) requires administration. Applications need to be processed, equipment needs to be purchased, programs need to be directed, state paperwork needs to be completed, the payroll must be handled and so on. There is a clear and legitimate need for administrators. However, this does not mean that all administrators are needed or that all high salaries are warranted. As such, one potential way to lower the cost of education is to reduce administrative positions and lower their salaries. That is, to take a standard approach used in the business model so often beloved by some administrators.

Since a public university is not a for-profit institution, the reason for the reduction should be to get the costs in line with the legitimate needs, rather than to make a profit. As such, the reductions could be more just than in the for-profit sector.

In terms of reducing the number of personnel, the focus should be on determining which positions are needed in terms of advancing the core mission of the university (which should be education). In terms of reducing salary, the focus should be on determining the value generated by the person, with the salary corresponding to that value. Since administrators seem exceptionally skilled at judging what faculty (especially adjuncts) should be paid, presumably there is a comparable skill for judging what administrators should be paid.

Interestingly, much of the administrative work that directly relates to students and education is already handled by faculty. For example, on top of my paid duties as a professor, I have always had administrative duties that are essential, yet not important enough to merit an increase in my pay proportional to an administrative salary. In this I am not unusual. Not surprisingly, faculty and students at universities often wonder what some administrators do, given that so many administrative tasks are done by faculty and staff. Presumably the extra administrative work done by faculty (often effectively for free) is already helping schools save money, although perhaps more could be offloaded to faculty for additional savings.

One obvious problem is that those who make decisions about administration positions and salaries are usually administrators. While some are noble and honest enough to report on the true value of their position, self-interest makes an objective assessment problematic. As such, it seems unlikely that an administration would want to act to reduce itself merely to reduce the cost of education. This is, of course, not impossible—and some administrators would no doubt be quite willing to fire or cut the salaries of other administrators.

Since many state governments have been willing to engage in close management of state universities, one option is for them to impose a thorough examination of administrative costs and implement solutions to the high cost of education. Unfortunately, there are sometimes strong political ties between top administrators and the state government. There is also the general worry that any cuts will be more political or ill-informed than rationally based.

Despite these challenges, the administrative costs need to be addressed and action must be taken—the alternative is ever increasing costs in return for less actual education.

It has been suggested that the interest rates of student loans be lowered and that more grants be awarded to students. These are both good ideas: those who graduate from college generally have better incomes and end up paying back what they received many times over in taxes and other contributions. However, providing students with more money from the taxpayers does not directly address the cost of education; it just shifts it.

Some states, such as my adopted state of Florida, have endeavored to keep costs lower by freezing tuition for as long as possible. While this seems reasonable, one obvious problem is that keeping tuition low without addressing the causes of increased costs does not solve the problem. What often happens is that the university must cut expenses, and these cuts tend to be in areas that serve the core mission of the university. For example, a university president's high salary, guaranteed bonuses, and perks are usually not cut—instead, faculty are not hired and class sizes are increased. While tuition does not increase, this comes at the cost of the quality of education. Unless, of course, the guaranteed bonuses of a university president are key to education quality.

As such, when trying to lower the cost of education, it should be done in a way that does not sacrifice the quality of education.

 

 

 


The philosophical problem of other minds is an epistemic challenge: while I know I have a mind, how do I know if other beings have minds as well? The practical problem of knowing whether another person’s words match what they are thinking also falls under this problem. For example, if someone says they love you, how do you know if they feel that professed love?

Descartes, in his discussion of whether animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language.

His idea is that if something talks, then it is reasonable to see it as a thinking being. Descartes was careful to distinguish between mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

This Cartesian approach was explicitly applied to machines by Alan Turing in his Turing test. The idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test.

Not surprisingly, technological advances have resulted in computers that can engage in behavior that appears to involve using language in ways that might pass the test. IBM's Watson won at Jeopardy in 2011 and then upped its game by engaging in debate regarding violence and video games. Since Watson, billions have been poured into AI and some claim that AI models can pass the Turing test.

Long ago, in response to Watson, I jokingly suggested a new test to Patrick Lin: the trolling test. In this context, a troll is someone “who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.”

While trolls are apparently awful people (a hateful blend of Machiavellianism, narcissism, sadism and psychopathy) and trolling is certainly undesirable behavior, the trolling test does seem worth considering.

In the abstract, the test would work like the Turing test but would involve a human troll and a computer attempting to troll. The challenge would be for the computer troll to successfully pass as a human troll.

Obviously enough, a computer can easily be programmed to post random provocative comments from a database. However, the real meat (or silicon) of the challenge comes from the computer being able to engage in (ironically) relevant trolling. That is, the computer would need to engage the other commentators in true trolling.

As a controlled test, the trolling computer (“mechatroll”) would “read” and analyze a selected blog post. The post would then be commented on by human participants—some engaging in normal discussion and some engaging in trolling. The mechatroll would then endeavor to troll the human participants (and, for bonus points, to troll the trolls) by analyzing the comments and creating appropriately trollish comments.

Another option is to have an actual live field test. A specific blog site would be selected that is frequented by human trolls and non-trolls. The mechatroll would then endeavor to engage in trolling on that site by analyzing the posts and comments.

In either test scenario, if the mechatroll were able to troll in a way indistinguishable from the human trolls, then it would pass the trolling test.

While "stupid mechatrolling", such as just posting random hateful and irrelevant comments, is easy, true mechatrolling would be difficult. After all, the mechatroll would need to be able to analyze the original posts and comments to determine the subjects and the direction of the discussion. The mechatroll would then need to make comments that would be trollishly relevant, which would require generating comments indistinguishable from those of a narcissistic, Machiavellian, psychopathic, and sadistic human.

Years ago, I thought that creating a mechatroll might be an interesting project because modeling such behavior could provide useful insights into human trolls and the traits that make them trolls. As far as a practical application, such a system could have been developed into a troll-filter to help control the troll population. I’m confident that the current LLMs could engage in trolling with the proper prompts, although they would lack the true soul of the troll.

 


Before the Trump regime, the United States military expressed interest in developing robots capable of moral reasoning and provided grant money to support such research. Other nations are no doubt also interested.

The notion of instilling robots with ethics is a common theme in science fiction, the most famous being Asimov's Three Laws. The classic Forbidden Planet provides an early movie example of robotic ethics: Robby the robot has an electro-mechanical seizure if he is ordered to cause harm to a human being (or an id-monster created by the mind of his creator, Dr. Morbius). In contrast, the killer machines of science fiction (like Saberhagen's Berserkers) tend to be free of moral constraints.

While there are various reasons to imbue robots with ethics (or at least pretend to do so), one is public relations. Thanks to science fiction dating at least back to Frankenstein, people worry about our creations getting out of control. As such, a promise that our killbots will be governed by ethics might reassure the public. Another reason is to make the public relations gimmick a reality—to place behavioral restraints on killbots so they will conform to the rules of war (and human morality). Presumably the military will also address the science fiction theme of the ethical killbot who refuses to kill on moral grounds. But considering the ethics of war endorsed by the Trump regime, they are probably not interested in ethical war machines.

While science fiction features ethical robots, the authors (like philosophers) are vague about how robot ethics works. In the case of intelligent robots, their ethics might work the way ours does—which is a mystery debated by philosophers and scientists to this day. While AI has improved thanks to massive processing power, it does not have human-like ethical capacity, so the current practical challenge is to develop ethics for the autonomous or semi-autonomous robots we can build now.

While creating ethics for robots might seem daunting, the limitations of current robot technology mean robot ethics is a matter of programming these machines to operate in specific ways defined by whatever ethical system is used. One way to look at programming such robots with ethics is that they are being given safety features. To use a simple example, suppose that I see shooting unarmed people as immoral. To make my killbot operate according to that ethical view, it would be programmed to recognize armed humans and have some code saying, in effect, "if unarmed_human == true, then fire_to_kill = false" or, in normal English, if the human is unarmed, do not shoot them. Sorting out how to recognize weapons would be a programming feat, likely with people dying in the process.
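To make the idea concrete, here is a minimal sketch of such a behavioral restraint. Everything here (the Target class, the decide_fire function) is hypothetical and invented for illustration; the genuinely hard part, reliably recognizing whether a human is armed, is assumed away as a boolean input.

```python
from dataclasses import dataclass

@dataclass
class Target:
    is_human: bool
    is_armed: bool  # assumed output of a (very hard) recognition system

def decide_fire(target: Target) -> bool:
    """Encode the rule: if the human is unarmed, do not shoot."""
    if target.is_human and not target.is_armed:
        return False  # the "ethical" constraint, really just a safety check
    return True

# An unarmed human is never fired upon; an armed one may be.
print(decide_fire(Target(is_human=True, is_armed=False)))  # False
```

As the text notes, nothing in this code is ethical reasoning; it is a conditional, no different in kind from the logic that opens a supermarket door.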

While a suitably programmed robot would act in a way that seemed ethical, the robot would not be engaged in ethical behavior. After all, it is merely a more complex version of an automatic door. A supermarket door, though it opens for you, is not polite. The shredder that catches your tie and chokes you is not evil. Likewise, the killbot that does not shoot you because its cameras show you are unarmed is not ethical. The killbot that chops you into chunks is not unethical. Following Kant, since the killbot's programming is imposed and the killbot lacks the freedom to choose, it is not engaged in ethical (or unethical) behavior, though the complexity of its behavior might make it seem so.

To be fair to killbots, perhaps humans are not ethical or unethical under these requirements—we could just be meat-bots operating under the illusion of ethics. Also, it is sensible to focus on the practical aspect of the matter: if you are targeted by a killbot, your concern is not whether it is an autonomous moral agent or merely a machine—your main worry is whether it will kill you. As such, the general practical problem is getting our killbots to behave in accord with our ethical values. Or, in the case of the Trump regime, a lack of ethics.

Achieving this goal involves three steps. The first is determining which ethical values we wish to impose on our killbots. Since this is a practical matter and not an exercise in philosophical inquiry, this will involve using the accepted ethics (and laws) governing warfare rather than trying to determine what is truly good (if anything). The second step is translating the ethics into behavioral terms. For example, the moral principle that makes killing civilians wrong would be translated into sets of allowed and forbidden behavior relative to civilians. This would require creating a definition of civilian that would allow recognition using the sensors of the robot. As another example, the moral principle that surrender should be accepted would require defining surrender behavior in a way the robot could recognize. The third step would be coding that behavior in whatever programming language is used for the robot in question. For example, the robot would need to be programmed to engage in surrender-accepting behavior. Naturally, the programmers or those typing the prompts into an AI program would need to worry about clever combatants trying to "deceive" the killbot to take advantage of its programming (like pretending to surrender to get close enough to destroy the killbot).
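The second and third steps can be illustrated with a toy sketch. The signal vocabulary and function names below are hypothetical; defining surrender in sensor-recognizable terms is precisely the hard part the steps above describe, and this sketch also shows why a faked signal would fool the machine.

```python
# Toy illustration of translating "surrender must be accepted" into
# recognizable behavior (step two) and then into code (step three).
# The signal names are invented for illustration only.
SURRENDER_SIGNALS = {"hands_raised", "weapon_dropped", "white_flag"}

def is_surrendering(observed: set) -> bool:
    # Step two: define surrender as exhibiting any recognized signal.
    return bool(observed & SURRENDER_SIGNALS)

def engagement_allowed(observed: set) -> bool:
    # Step three: forbid engagement once surrender is recognized.
    return not is_surrendering(observed)

print(engagement_allowed({"weapon_dropped"}))        # False
print(engagement_allowed({"advancing_with_rifle"}))  # True
```

Note that a combatant who merely displays a surrender signal would be classified as surrendering, which is exactly the "deception" worry raised above.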

Since these robots would be following programmed rules, they would seem to be controlled by deontological ethics—that is, ethics based on following rules. Thus, they would be (with due apologies to Asimov), the Robots of Deon.

A practical question is whether the "ethical" programming would allow for overrides or reprogramming. Since the robot's "ethics" would just be behavior-governing code, it could be changed, and it is easy to imagine ethics settings in which a commander could selectively (or not so selectively) turn off behavioral limitations. And, of course, killbots could simply be programmed without such ethics (or programmed to be "evil").

One impact for this research will be that some people will get to live the science-fiction dream of teaching robots to be good. That way the robots might feel a little bad when they kill us all.

 

 


When a new technology emerges, it is often claimed that it is outpacing ethics and law. Because of the nature of law in the United States, it is easy for technology to outpace it, especially given the average age of members of Congress. However, it is difficult for technology to outpace ethics.

One reason is that any minimally adequate ethical theory will have the quality of expandability. That is, the theory can be applied to what is new, be that technology, circumstances or something else. An ethical theory that lacks the capacity of expandability would become useless immediately and would not be much of a theory.

It is, however, worth considering that a new technology could “break” an ethical theory in that the theory could not expand to cover the technology. However, this would seem to show that the theory was inadequate rather than showing the technology outpaced ethics.

Another reason technology would have a hard time outpacing ethics is that an ethical argument by analogy can (probably) be applied to new technology. That is, if the technology is like something that exists and has been discussed in ethics, this ethical discussion can be applied to the new technology. This is analogous to using ethical analogies to apply ethics to different specific situations, such as an act of cheating in a relationship.

Naturally, if a new technology is absolutely unlike anything else in human experience (even fiction), then the method of analogy would fail absolutely. However, it seems unlikely that such a technology could emerge. But I like science fiction (and fantasy) and am willing to entertain the possibility of an absolutely new technology. While it would seem that existing ethics could handle such a technology, perhaps something absolutely new would break all existing ethical theories, showing that they are all inadequate.

While a single example does not provide much in the way of proof, it can be used to illustrate. As such, I will use the matter of personal drones to illustrate how ethics is not outpaced by technology.

While remote controlled and automated devices have been around a long time, the expansion of technology created something new for ethics: drones, driverless cars, AI, Facebook, and so on. However, drone ethics is easy. By this I do not mean that ethics is easy; it is just that applying ethics to new technology (such as drones) is not as hard as some might claim. Naturally, doing ethics is hard—but this applies to very old problems (the ethics of war) and very "new" problems (the ethics of killer robots in war).

Getting back to the example, a personal drone is one that tends to be much smaller, lower priced and easier to use relative to government operated drones. In many ways, these drones are slightly advanced versions of the remote-control planes that are regarded as expensive toys. Drones of this sort that most concern people are those that have cameras and can hover—perhaps outside a bedroom window.

Two areas of concern are safety and privacy. In terms of safety, the worry is that drones can collide with people (or vehicles, such as manned aircraft) and injure them. Ethically, this falls under doing harm to people, be it with a knife, gun, or drone. Though a drone flies about, the ethics that have been used to handle flying model aircraft, cars, etc. can be applied here. So, this aspect of drones did not outpace ethics.

Privacy can also be handled. Simplifying things for the sake of a brief discussion, a drone allows a person to (potentially) violate privacy in the usual two “visual” modes. One is to intrude into private property to violate a person’s privacy. In the case of the “old” way, a person can put a ladder against a person’s house and climb up to peek through a window. In the “new” way, a person can fly a drone up to the window and peek in using a camera. While the person is not physically present in the case of the drone, their “agent” is present and is trespassing. Whether a person is using a ladder or a drone to gain access to the window does not change the ethics of the situation.

A second way is to peek into private space from public space. In the case of the old way, a person could, for example, stand on the public sidewalk and look into other people's windows or yards. In the "new" way, a person can deploy his agent (the drone) in public space to do the same sort of thing.

One potential difference between the two situations is that a drone can fly and thus can get viewing angles that a person on the ground (or even with a ladder) could not. For example, a drone might be in the airspace far above a person's backyard, sending images of someone sunbathing in the nude behind her very tall fence on her very large estate. However, this is not a new situation—paparazzi have used helicopters to get shots of celebrities, and the ethics are the same. As such, ethics has not been outpaced by the drones in this regard. This is not to say that the matter is solved (people are still debating the ethics of this sort of "spying"), but that it is not a case where technology has outpaced ethics.

What is mainly different about the drones is that they are now affordable and easy to use—so whereas only certain people could afford to hire a helicopter to get photos of celebrities, camera-equipped drones are easily within reach of the hobbyist. So, it is not that the low-priced drone provides new capabilities; it is that it puts these capabilities in the hands of the many.

 

 


While science and philosophy are about determining the nature of reality, politics is about creating perceptions alleged to be reality. This is one of many reasons why it is wiser to accept claims supported by science and reason over claims “supported” by ideology and interest.

Climate change is a matter of both science and politics. Ideally, the facts of climate change would be left to science and sorting out how to address it via policy would fall, in part, to the politicians using the facts. Unfortunately, politicians and other non-scientists make claims about climate science, usually in the form of unsupported talking points.

On the conservative side, there was a gradual shift in their talking points. In the beginning of climate change denial, they simply asserted that there was no climate change and the scientists were wrong. It was alleged that scientists were motivated by ideology to lie. In contrast, those whose profits could be impacted if climate change were real were presented as trustworthy sources.

In the face of mounting evidence and shifting public opinion, there was a shift to the claim that while climate change is occurring, it is not caused by humans. This shifted to the claim that climate change is caused by humans, but there is nothing we can (or should) do now. After Trump’s return to the White House, there has been a return to ignoring and denying climate change.  Those who are willing to concede that climate change is occurring while also wanting to do nothing about it often repeat some talking points.

One talking point is that scientists are exaggerating the impact of climate change and it will not be as bad as they claim. To be fair, this can be based on a reasonable concern about the accuracy of any prediction. In the case of a scientific prediction based on data and models, a reasonable inquiry would focus on the accuracy of the data and the quality of the models.

To rationally dispute the predictions would require showing problems with either the data or the models (or both). Simply saying they are wrong would not suffice—what is needed is clear evidence that the data or models (or both) are defective in ways that would show the predictions are excessive in terms of the predicted impact.

One indirect way to do this would be to find evidence that scientists are intentionally exaggerating. However, if they are exaggerating, this could be proven by examining the data and using it in an accurate model. That is, if the scientists were exaggerating, the scientific method would show they were wrong. Shockingly enough, climate change deniers do not run better models with better data to disprove climate change.

In some cases, it is claimed climate scientists are exaggerating from nefarious motives—a liberal agenda, a hatred of oil companies, communist tendencies, a desire for fame or some other wickedness. However, even if it could be shown that scientists have wicked motives, it does not follow that their predictions are wrong. To dismiss a claim because of an alleged defect in the person making the claim is an ad hominem fallacy. Being suspicious because of a possible nefarious motive can be reasonable, and such motives can undercut a person's credibility. So, for example, the fact that fossil fuel companies have a financial stake does not prove that their claims about climate change are wrong. But the fact that they have an incentive to deny such claims makes it reasonable to be suspicious of their objectivity and credibility. Naturally, if one suspects there is a global conspiracy of scientists driven by their interests, then one should be willing to consider that fossil fuel companies might be influenced by their financial interests.

One could, of course, hold that the scientists are exaggerating from a noble motive—so people will act. To use an analogy, parents sometimes exaggerate the harms of something to persuade their children not to try it. While this is kinder than attributing nefarious motives to scientists, it is also no evidence against their claims. And even if scientists are exaggerating, there is still the question of how bad things really would be—they might still be very bad.

Naturally, if an objective and properly conducted study overturned the established science using the scientific method, I would have to accept that study. But no such study exists, for obvious reasons. If the climate change deniers had the truth on their side, they would be embracing rather than fighting science.

The second talking point is to claim that proposed solutions, such as laws, will not solve the problems. Interestingly, this talking point concedes that climate change is a problem. This point does have a reasonable foundation in that it would be unreasonable to take actions that are ineffective.

While crafting laws is politics, sorting out whether such laws would be effective falls in the domain of science. For example, if a law proposes cutting carbon emissions, there is a legitimate question as to whether it would have a meaningful impact on climate change. Showing this would require having data, models and so on—merely saying that the laws will not work is obviously not enough.

Now, if the laws and other proposals would not work, then the people who confidently make that claim should be equally confident in providing adequate evidence for their claim. It is reasonable to expect such evidence, although it is rarely forthcoming. One interesting exception is when scientists are critical of “mad science” proposals which would either not work or make things worse.

The third talking point is that the proposals to address climate change will hurt the American economy. As with the other points, this does have a rational basis, and it is sensible to consider the impact on the economy.

One approach is utilitarian: we can accept so much environmental harm (such as coastal flooding) in return for economic gain (such as jobs and profits generated by fossil fuels). Assuming that one is a utilitarian and that one accepts this value calculation, one can accept that enduring such harms could be worth the gains. As usual, the costs will fall heavily on those who are not profiting. For example, fossil fuel executives do not have to endure the harms of climate change.

Utilitarian decisions about climate change should involve openly considering the costs and benefits as well as who will be hurt and who will benefit. Vague claims about damaging the economy do not allow us to make a proper moral and practical assessment of whether an approach will be correct. It might turn out that staying the course is the better option—but this needs to be determined with an open and honest assessment. However, this is unlikely to happen—especially during the Trump regime. To be fair and balanced, the mainstream Democrats will not save us.

It is also worth considering that addressing climate change could be good for the economy. After all, preparing coastal towns and cities for the rising waters could be a huge and profitable industry creating many jobs. Developing alternative energy sources could also be profitable, as could developing new crops able to handle the new conditions. There could be a whole new economy created, perhaps one that might rival more traditional economic sectors and newer ones, such as the internet economy. If companies with well-funded armies of lobbyists got into the climate change countering business, I suspect that a different tune would be playing. I do worry that these solutions will create new problems; but that is how we operate as a species: solving problems by creating more problems until we become extinct.

 

A Philosopher’s Blog is Now on Substack!

You can subscribe and read for free.

https://aphilosophersblog.substack.com/

Way back in 2014 popular astrophysicist and Cosmos host Neil deGrasse Tyson did a Nerdist Podcast in which he seemed critical and dismissive of philosophy. There was a response from the defenders of philosophy and some critics went so far as to accuse him of being a philistine. While philosophy’s most ancient enemy is poetry (according to Plato), science is usually up for a good fight.

Tyson presents a not unreasonable view of contemporary philosophy, namely that “asking deep questions” can cause a “pointless delay in your progress” in engaging “this whole big world of unknowns out there.” To avoid such pointless delays, Tyson advised scientists to respond to such questioners by saying, “I’m moving on, I’m leaving you behind, and you can’t even cross the street because you’re distracted by deep questions you’ve asked of yourself. I don’t have time for that.”

While I wrote about this back in 2014, it is wise to revisit my views on the matter.

The idea that a scientist might see philosophy as useless (or worse) is consistent with my own experiences in academia. Since 2014, STEM has risen and the humanities have been under constant attack. As one example, as of Fall 2026 Florida A&M University will no longer have a distinct philosophy (and religion) major. I will still be teaching philosophy, but in a new combined program made up of philosophy, history, religion, and African-American studies. We are, of course, lucky that we are still permitted to even exist. To be fair and balanced, a case can be made against philosophy. The concern that the deep questioning of philosophy can cause pointless delays has merit and is well worth considering. After all, if philosophy is useless or even detrimental, then this would be worth knowing.

The main bite of this criticism is that philosophical questioning is detrimental to progress: a scientist who gets caught in these deep questions, it seems, would be like a kayaker caught in a strong eddy: they would be spinning around rather than zipping down the river. This concern also has practical merit. To use an analogy outside of science, consider a committee meeting aimed at determining the curriculum for state schools. This committee has an objective to achieve and asking questions is a reasonable way to begin. But imagine that people start raising deep questions about the meaning of terms such as “humanities” or “science” and become too interested in the semantics. This sidetracking will create a needlessly long meeting and little or no progress. After all, the goal is to determine the curriculum, and deep questions will only slow down progress towards this practical goal. Likewise, if a scientist is endeavoring to sort out the nature of the cosmos, deep questions can be a similar trap: she will be asking ever deeper questions rather than gathering data and doing math to answer her shallower questions.

Philosophy, as Socrates showed with his Socratic method, can endlessly generate deep questions: “what is the nature of the universe?”, “what is time?”, “what is space?”, “what is good?”, “what’s for lunch?”, and so on. Also, as Socrates showed, for each answer given, philosophy can generate more questions. It is often claimed that this shows philosophy has no answers, as every alleged answer can be questioned and only raises more questions. Thus, philosophy seems to be bad for scientists.

A key assumption is that science is different from philosophy in a key way—while it raises questions, proper science focuses on questions that can be answered or, at the very least, it gets down to the business of answering them and (eventually) abandons a question if it turns out to be a distracting deep question. Thus, science provides answers and makes progress. This, obviously enough, ties into another stock attack on philosophy: philosophy makes no progress and is useless.

One obvious reason philosophy is seen as not making progress and as useless is that when enough progress is made on a deep question, it often becomes a matter for science rather than philosophy. For example, ancient Greek philosophers, such as Democritus, speculated about the composition of the universe and its size.  These were considered deep philosophical questions. Even Newton considered himself a natural philosopher. He has, of course, been claimed by the scientists (many of whom conveniently overlook the role of God in his theories). These questions are now claimed by physicists, such as Tyson, who now see them as scientific rather than philosophical questions.

Thus, it is unfair to claim that philosophy does not solve problems or make progress. When philosophy makes progress in an area, that area often becomes a science and is no longer considered philosophy. However, progress is impossible without the deep questions and the work done by philosophers before the field was claimed to be a science.

At this point, some might grudgingly concede that philosophy did make some valuable contributions in the past, but philosophy is now an eddy rather than the current of progress.

Philosophy has been here before—back in the days of Socrates the Sophists contended that philosophical speculation was valueless and that people should focus on getting things done—that is, achieving success. Fortunately for contemporary science, philosophy survived and philosophers kept asking those deep questions that seemed so valueless then.

While some might see philosophy as a curious relic of the past, it is worth considering that some of the deep, distracting philosophical questions are well worth pursuing. Much as how Democritus’ deep philosophical questions led to the astrophysics that a fellow named Neil loves so much.

 

 


A fundamental question of science, philosophy and theology is why the universe is the way it is. Over the centuries, the answers have fallen into two broad camps. The first is teleology, the view that the universe is the way it is because it has a purpose, goal or end for which it aims. The second is the denial of the teleological view. Members of this camp often embrace purposeless chance as the “reason” why things are as they are.

Both camps agree on many things, such as that the universe seems finely tuned. Theorists vary in their views on what a less finely tuned universe would be like. On some views, the universe would be just slightly different, while on other views small differences would have significant results, perhaps even a lifeless universe. Because of this apparent fine tuning, a concern for philosophers and physicists is explaining why this is the case.

The dispute over this big question mirrors the dispute over a smaller question, namely why living creatures are the way they are. The division into camps follows the same pattern. On one side is teleology and the other side is its rejection. Interestingly, it might be possible to have different types of answers to these questions. For example, the universe could have been created by a deity (a teleological universe) who decides to let natural selection sort out life forms (non-teleological). That said, the smaller question does provide some ways to answer the larger question.

The teleological camp is very broad, with members including Aristotle and Joel Osteen. In the United States, the best-known form of teleology is Christian creationism. This view answers both the large and the small question with God: He created the universe and its inhabitants. There are other religious teleological views—the creation stories of various other cultures and faiths are examples of these. There are also non-religious views. Among these, probably the best known are those of Plato and Aristotle. For Plato, roughly put, the universe is the way it is because of the Forms (and ultimately the Good). Aristotle does not put a personal god in charge of the universe, but he saw reality as eminently teleological. Views that posit laws governing reality also seem, to some, to fall within the teleological camp. As such, the main division in the teleological camp tends to be between religious theories and non-religious theories.

Obviously enough, teleological accounts have fallen out of favor in the sciences—the big switch took place during the Modern era as philosophy and science transitioned away from Aristotle (and Plato) towards a more mechanistic and materialistic view of reality.

The non-teleological camp is at least as varied as the teleological camp and is as old. The pre-Socratic Greek philosophers considered what would now be called natural selection and the idea of a chance-based, purposeless universe is ancient.

One non-teleological way to answer the question of why the universe is the way it is would be to take an approach like Spinoza, only without God. Which, some might point out, would not be like Spinoza at all. This would be to claim that the universe is what it is as a matter of necessity: it could not be any different from what it is. However, this might be unsatisfactory, as one can still ask why it is necessarily the way it is.

The opposite approach is to reject necessity and embrace a random universe—it was just pure chance that the universe turned out as it did and things could have been different. So, the answer to the question of why the universe is the way it is would be blind chance. The universe plays dice with itself.

Another approach is to take the view that the universe is the way it is and finely tuned because it has “settled” down into what seems to be a fine-tuned state. Crudely put, the universe worked things out without any guidance or purpose. To use an analogy, think of sticks and debris washed by a flood to form a stable “structure.” The universe could be like that—where the flood is the big bang or whatever got it going.

One variant on this would be to claim that the universe contains distinct zones—the zone we are in happened to be “naturally selected” to be stable and hospitable to life. Other zones could be different—perhaps so different that they are beyond our epistemic abilities. Vernor Vinge explores the idea of variable physics in his novel A Fire Upon the Deep.  Or perhaps these zones “died” thus allowing an interesting possibility for fiction about the ghosts of dead zones haunting the cosmic night. Perhaps the fossils of dead universes drift around us, awaiting their discovery.

Another option is to embrace the idea of a multiverse. This allows an analogy to natural selection: in place of a multitude of species, there is a multitude of universes. Some “survive” the selection while others do not. Just as we are supposed to be a species that survived the natural selection of evolution, we live in a universe that survived cosmic selection. If the model of evolution and natural selection is intellectually satisfying in biology, it would seem reasonable to accept cosmic selection as also being intellectually satisfying—although it will be radically different from natural selection in many obvious ways.

 

 

 


Despite being seen as an academic liberal (with all associated sins), I have long had a mixed view of affirmative action in education and employment. As an individualist who believes in the value of merit, I hold that college admission and hiring should be based entirely on the merit of the individual. That is, the best qualified person should be admitted or hired. This rests on the principle that such opportunities should be earned, and that they are fairly and justly earned when the individual merits the admission or the job.

To use a sports analogy, the person who gets the first-place award for a 5K race should be the person who runs the race the fastest. This person has merited the award by winning. To deny the best runner the award and give it to someone else in the name of diversity would be absurd and unfair—even if there is a lack of diversity among the winners.

However, I am aware of the foundational institutionalized inequality in America, and addressing it can, on utilitarian grounds, justify treating some people unfairly for the greater good. There is also the matter of the fairness of the competition, which allows me to believe both in merit and in affirmative action.

In my 5K analogy, I assume the competition is fair and victory is a matter of ability. Everyone runs the same course, and no one possesses an unfair advantage, such as having a head start or using a bike. In such a fair competition, the winner earns the victory. Unfortunately, the world beyond the 5K is rigged and unjust.

Discrimination, segregation and unjust inequality remain the order of the day in the United States. So, when people are competing for admission to schools and for jobs, some people have unfair advantages while others face unfair disadvantages. For example, African-Americans are more likely to attend underfunded and lower quality public schools, and they face the specter of racism that still possesses the body of America. So, when people apply for college or for jobs, they are not meeting on the starting line of a fair race which will grant victory to the best competitor. Rather, people are scattered about (some far behind the starting line, some far ahead), and some enjoy unfair advantages while others carry unfair burdens.

Many of these advantages and burdens involve employment and education. For example, a family that has a legacy at a school will have an advantage over a family whose members have never attended college. As such, affirmative action can shift things in the direction of fairness by, to use my 5K analogy, moving people closer to the starting line for a fairer competition.

To use a problematic analogy, 5K races usually divide awards by age and gender (and some have wheelchair divisions as well). As such, an old runner like me can win an age group award, even though the young fellows have the advantage of youth in competing for the overall awards. The analogy works in that the 5K, like affirmative action done properly, recognizes factors that influence the competition and that can be justly addressed so that people can achieve success. The analogy, obviously enough, does start to break apart when pushed (as all analogies do). For example, an age group award will never make me as fast as the young runners, whereas affirmative action in college admission can allow a disadvantaged student to gain an education to match those who have enjoyed advantages. The analogy also faces the obvious risk of suggesting that such competitors are inferior and cannot compete in the open competition. However, it does show that affirmative action can be squared with fair competition.

In closing, I do believe that a person of good conscience can be concerned about the ethics of affirmative action. After all, it does seem to run contrary to the principles of fairness and equality by seeming to grant an advantage to some people based on race, gender and such. I also hold that a person of good conscience can be for affirmative action—after all, it is supposed to aim at rectifying disadvantages and creating a society in which fair competition based on merit can properly take place. Unfortunately, the most vehement foes of affirmative action are white supremacists and misogynists who do not argue in good faith. Ironically, the anti-DEI folks in positions of power, such as certain Trump regime officials, seem to have been gifted with these positions despite their utter lack of merit. That is, they exemplify the claimed horrors of affirmative action gone wild.

 

 


Since education is expensive, it is reasonable for a student to expect a return on their investment (ROI). Given that the taxpayers contribute to the education of students, it makes sense that they also receive a return on their investment.

A practical measure of the ROI for a student is often the salary of the job they get relative to the cost of their education. Roughly put, a student should be able to work their way out of their school debt and be able to live on the job that education is supposed to get them. In terms of the ROI for the taxpayer, the return is similar: students funded by the taxpayers are supposed to get jobs and repay the investment through the taxes they pay. The student becomes the taxpayer, thus enabling the next generation of students to also become taxpayers. One could also factor in the role of the worker as a consumer and the impact of the very few who become job creators.

Because the cost of education grew so high, some folks placed their hopes on the free market. The idea was that for-profit schools would provide a high-quality product (education that leads to a job) at a lower cost than the state and traditional private schools. As might be suspected, the ideal turned out very different from the real.

While state schools obviously receive state funds, the for-profit schools received massive federal support. Unfortunately, this money was ill-spent: 20% of the for-profit school students defaulted on student loans within three years of entering the repayment period. About half of all student loan defaulters went to such for-profit schools, although these schools made up only 13% of the student population. The estimate was that about half the loans funneled through students to the for-profit schools were lost to default, which is not a good investment for the taxpayer.

Students most often default on loans due to financial hardship. As might be imagined, not earning an adequate paycheck leads to hardship. While there are over 2,000 programs whose students took on loan debt but whose earnings put them below the poverty line, 90% of these programs were at for-profit schools. As such, these schools were a bad investment for both taxpayers and students. While public and traditional private schools did account for the other 10%, they have been a better investment for taxpayers and students. This is not to say that such schools do not need improvement—but it is to say that the for-profit model was not a solution and probably never will be, for all the obvious reasons one might suspect.

There were some attempts, such as in 2011, to impose regulations against the predatory exploitation of students (and taxpayers) by institutions. Not surprisingly, these were countered by the well-paid lobbyists working at the behest of the for-profits. Under the Trump regime, the stated goal is to destroy the Department of Education, so little help for students can be expected from that department.

Interestingly, some states pushed hard for performance-based funding for public institutions. For example, my adopted state of Florida has seen the Republican-dominated state legislature micro-managing education and imposing its professed ideology. In any case, we have been operating under a performance-based model in which funding is linked to achieving goals set by the state. Naturally, for-profit schools do not fall under the same rules as public schools, which could give them an advantage.

Some might suspect the performance-based funding approach is cover for reducing funding even more. This approach also shifts funding towards schools that have more political influence—which is supported by looking at where the money goes.

It might be suspected that performance-based funding was designed to harm public schools and push students towards for-profit schools. These schools often enjoy political connections and would benefit from reduced public education opportunities. Of course, the profits of such schools come largely at the expense of students and taxpayers. They are well-subsidized by the state in a new twist on the old corporate welfare system.  Shockingly enough, there has been little conservative rage at this wasteful socialism and these academic welfare queens.

 

 
