I am presenting at the University of Florida’s 5th Annual Assessment conference on 3/22/2021; I endeavored to work as much philosophy as I could into the presentation. Now that I have this done, I can resume my usual scurrilous blogging.
Introduction
This brief introduction provides the context of the discussion to follow. My involvement with assessment at Florida A&M University began in 2004. I was assigned to the newly created General Education Assessment Committee (GEAC) and participated, as a philosophy professor, in the assessment process for the Philosophy and Religion Unit. When I became the facilitator for the unit, I assumed the responsibility for completing its assessment tasks. Some years ago, I was appointed as a co-chair of GEAC and am now the chair. The Simplify, Automate, and Motivate (SAM) method was developed in the context of both roles: a professor who must wrangle information from unit colleagues to complete assessment forms and a committee chair who must wrangle information from university colleagues to complete assessment forms. While my position is hardly unique, being both a professor and something of an assessment administrator provides a useful perspective (or two) on earning faculty engagement with assessment.
The Challenge
One fundamental challenge of assessment is earning faculty buy-in for the process. Failure to achieve this can have a range of negative consequences. One area of negative consequences is in the realm of data. If faculty buy-in is not earned, they are more likely to provide incomplete assessment data or even no data at all. They are also more likely to provide low-quality data and might even provide fabricated data to simply get the process over with. De-motivated faculty will tend to provide garbage data and, as the old saying goes, garbage in, garbage out.
A second area of negative consequences is in closing the assessment loop. Even if faculty provide adequate data, without buy-in they are more likely to neglect the other parts of the process, such as writing improvement narratives, reflecting on the results, and (most importantly) applying those results to their classes. Because of this, earning quality faculty buy-in is part of the foundation of good assessment. Fortunately, there are ways to help earn the participation of faculty in the process, and these include the SAM method. This involves Simplifying the assessment process, Automating the assessment process, and Motivating faculty. I will begin with Simplification.
Simplification
A complicated assessment process is analogous to the tax code or the Windows Registry. This is to say that it is problematic, convoluted, torturous, difficult, and inconsistent. Dealing with such a process often requires special knowledge of all its difficult ways. Even with such knowledge, errors are likely and there can be punitive aspects to such processes that have the potential to create adversarial relationships between faculty and assessment leadership. Complicated processes often have a random element as well—one can never be quite sure how the process will work this time around.
As a rule, people find complicated systems a deterrent to participation. They can be challenging to understand and typically impose an unnecessary cost in time and resources on those involved. As such, people generally try to minimize their involvement with such systems. Simplifying and streamlining the faculty aspects of assessment makes these aspects easier to understand and lowers the cost of participation. Doing this increases the likelihood that faculty will buy into and participate more willingly in the process. Effective simplification can also improve the quality of assessment by focusing faculty effort on key areas of assessment so that their time and resources are used more effectively.
While merely reducing the size of something need not make it simpler, the process of simplification often has the virtue of reduction—which can also be beneficial. As an example, in 2018 Florida A&M University’s General Education Assessment Committee’s faculty data contribution guide was 13 pages long and the data collection forms were four pages long. The forms were also somewhat complicated, with numerous check boxes and many text boxes to be filled out—one faculty member said they reminded them of tax forms. That was a clear red flag—if you are being compared to the IRS, then you know the faculty are not happy with the process. While the detailed guide was retained after some modification (for those faculty who wanted such a guide), a short and simple video focused on the essentials was created to guide faculty through the process. While it will never go viral, it did prove more popular than the PDF guide, which, admittedly, set a bar low enough to challenge the greatest master of limbo.
More importantly, the forms were simplified down to the essentials. This was done through a review process in which the committee carefully considered the distinction between essential and non-essential information. The online version of the forms allowed for even greater simplification since irrelevant questions would not appear to respondents, making the basic form simple—not like a tax form at all. This simplification has also helped improve faculty participation in the data collection process. Again, the original low levels of participation set a low bar.
While a systematic guide to simplification is beyond the scope of this work, philosophy does provide some excellent general principles for this process. Our good dead friend Aristotle provides a fine starting point for simplification: ask whether a specific part of something contributes in a positive way to that thing. If not, remove it, “For a thing whose presence or absence makes no visible difference, is not an organic part of the whole.” In the case of parts that have a clear negative impact on the process, those should be the first to be removed. As such, the practical test of the importance of some aspect of a thing is to consider the consequences of its removal. When erring, it is better to err on the side of simplification—at least in terms of improving participation. Because of the nature of bureaucracy, it can almost always be improved by removing parts; so that is also a good general principle to follow. Faculty feedback can be helpful here—it is a good idea to ask faculty about the process and to take their suggestions about simplification seriously. While simpler is often better, simplification is not without its hazards.
There is always the risk of oversimplifying the assessment system or some aspect of it, and this can have the negative consequence of making the assessment less useful. Consider, as an example, the above-mentioned forms used to collect General Education data. While a simplified form makes it quicker and easier for faculty to provide data, this comes at the cost of not gathering as much data as the original form. In the case of these forms, every item removed meant data that was no longer gathered. For example, the earlier version of the form required faculty to report how many students earned each score on the rubric rather than providing general competent/not competent data. As such, the simplified form does not collect information about the degrees of student performance. As one might suspect, faculty never request the detailed forms—though that is an option.
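To make the trade-off concrete, here is a toy illustration (with invented numbers and a hypothetical cut-off, not GEAC’s actual rubric) of how a detailed rubric distribution collapses into the simplified competent/not-competent count:

```python
# Toy illustration: collapsing a rubric distribution into a competent/not-competent count.
# The rubric levels, counts, and cut-off are invented for this example.
rubric_counts = {1: 4, 2: 6, 3: 11, 4: 9}  # students scoring at each rubric level
COMPETENT_LEVEL = 3                        # hypothetical cut-off: level 3 or above counts

competent = sum(n for level, n in rubric_counts.items() if level >= COMPETENT_LEVEL)
assessed = sum(rubric_counts.values())
print(f"{competent} of {assessed} students competent")  # the distribution itself is lost
```

The simplified count is easier to report, but the shape of the distribution (how many students were merely competent versus excellent) cannot be recovered from it.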
In terms of another general guide to what to keep and remove, Aristotle’s and Confucius’ advice should be taken when simplifying: one must find the mean between the two extremes. That is the mark of virtue. Using the form example, a well-designed data collection form is a balancing act between having a form robust enough to get the data that is needed and simple enough to not invite comparisons to tax forms. Ideally, the form should have the right questions, at the right time, for the right reasons and presented in the right way—to borrow and modify Aristotle’s notion of determining virtue.
A consequentialist approach also serves well here: each addition to the system of assessment should be assessed in terms of its costs and benefits. This evaluation should occur both at the individual level and in total. To illustrate, while each individual part taken in isolation might create more benefit than cost, the cost of the entire system could exceed its benefits. As an example, consider data collection forms once more. In general, having a relevant piece of information about student performance is more beneficial for assessment than not having that information. On this approach, almost any sensible data collection entry on a form would thus have more benefit than harm when assessed in isolation. But adding all these questions to a form would have an overall negative consequence: this mega-form would impose an absurd burden on faculty. While this form would, if completed properly, yield a bounty of information, faculty would be less inclined to complete it and would resent wasting so much time on such a needlessly long form. Fellow philosophers will recognize this as the classic rule vs. act utilitarianism issue.
It should be noted that something can be sophisticated without being complicated. Such a thing is like the original iPod Nano. It is consistent, straightforward and ‘user friendly’ while also being sophisticated and useful. Unlike the complicated system, interacting with such a system would not require special knowledge of its complicated ways. As such, a simplified system need not be simplistic—it could be quite sophisticated. One way to operate a sophisticated system with greater simplicity and ease is with automation.
Automation
As a matter of psychology, people are more likely to stick with a default when opting out requires effort. An excellent example of this is retirement savings: when employees are automatically enrolled in a retirement plan and must opt out, they enroll in the plan at a significantly higher rate than when employees must opt in to the plan. This generalizes to most human behavior and can be used to increase faculty participation in assessment. While it might appear that I have forgotten about automation and taken up a new topic, the connection between defaults and automation will, I hope, be made clear shortly.
While making faculty participation in assessment the default and requiring them to opt out might result in more participation, the obvious problem is that it is generally much easier to opt out of assessment than participate. As such, merely making participation the default is likely to have no positive impact on participation and might cause some resentment on the part of faculty—they might dislike the assumption that they will do extra work. The fix is, of course, to make opting out require more effort than participating.
As a faculty member, I would never suggest making the method of opting out more burdensome than participating as a means of coercing participation. This would merely serve to annoy faculty and lower the quality of participation. As such, the better solution is to develop means of participation that come with minimal cost to the faculty—ideally so low that even an easy opt-out would be more work than participating.
One way of doing this is to have a default in which faculty agree to allow others to gather and assess data from their classes. But this still puts a burden on those who do the gathering and assessment, and these are often other faculty. As with almost any task, one obvious way to make it easier is to automate it as much as possible. As such, combining default participation with automated assessment can improve faculty participation. The default participation, properly handled, decreases the chances of faculty opting out and the automation lowers the cost of participation. In cases where no effort is required on the part of faculty, participation would probably be quite good.
While it might be tempting to make such effortless participation unavoidable, faculty must always be given a choice to opt out as a matter of ethics and practicality. In terms of ethics, professors are the moral custodians of their courses and the data those courses generate, and forcing them to share that data would be morally problematic—with the obvious exception of final student grades being entered into the grading system. There is also the practical concern that faculty could be put off by mandatory participation, and this could impact the quality of their participation.
While some faculty will choose to opt out of participation, effective automation can reduce this number. One can, of course, have certain aspects of assessment that are default and others that require opting in—as a rule, the default participation should be for aspects of assessment that take no or minimal effort on the part of faculty. As an illustration, automated data gathering from classes could be set with participation as default while manually providing student papers as assessment artifacts should require opting in. Regardless of whether the default is participation or not, faculty should always be informed of an automated (or manual) retrieval of data from their classes. This is both a matter of ethics and professional courtesy.
In the process of discussing automating some aspects of assessment at Florida A&M University, faculty expressed reasonable concerns about people (or software) poking around inside their classes on the LMS. This was not because faculty had anything to hide (one hopes) but because of reasonable concerns about academic freedom, intrusions into privacy, and worries that such “poking about” might cause glitches or issues. As such, faculty should be informed about such matters and, obviously, such automation should be designed to avoid causing such problems. Addressing these concerns can go a long way towards earning buy-in.
Since effective automation of assessment reduces the effort required of faculty, increasing automation will tend to increase faculty participation even in areas of assessment where participation is not the default. Fortunately, there are low-cost ways to automate assessment using resources that are already available.
Many faculty already use Canvas (or another Learning Management System), and these systems support the creation of certain assignments and their automatic scoring. Since such assignments can be easily imported, this allows for the creation and deployment of automated assessment instruments using the LMS. As an example, the Philosophy and Religion Unit at Florida A&M University developed an Argument Basics Assessment Instrument (ABAI) for conducting an automated pre- and post-assessment of student competence in key components of critical thinking. The data from this instrument is used both in the unit assessment and the General Education assessment of the Critical Thinking competency area. Collecting such pre- and post-assessment data is essential to quality assessment, and automation can make this easier.
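To make the data-gathering side concrete, here is a minimal sketch of how scores could be pulled from such an instrument once it is deployed as an auto-scored Canvas quiz. It uses the Canvas REST API’s quiz submissions endpoint; the base URL, API token, course and quiz IDs, and the competence cut-off are placeholders, not details of the actual ABAI or of any particular Canvas instance.

```python
"""Sketch: pull pre/post quiz scores from Canvas and summarize competence.

Assumes the pre- and post-instruments are deployed as auto-scored Canvas
quizzes; all IDs, the token, and the cut-off below are placeholders.
"""
import requests

CANVAS_BASE = "https://example.instructure.com"  # your institution's Canvas URL
API_TOKEN = "REPLACE_WITH_TOKEN"                 # a token with read access to the courses
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}


def quiz_scores(course_id: int, quiz_id: int) -> list[float]:
    """Return each student's kept score for one auto-scored quiz."""
    url = f"{CANVAS_BASE}/api/v1/courses/{course_id}/quizzes/{quiz_id}/submissions"
    params = {"per_page": 100}
    scores: list[float] = []
    while url:
        resp = requests.get(url, headers=HEADERS, params=params)
        resp.raise_for_status()
        scores += [s["kept_score"] for s in resp.json().get("quiz_submissions", [])
                   if s.get("kept_score") is not None]
        url = resp.links.get("next", {}).get("url")  # follow Canvas pagination
        params = None  # the "next" URL already carries its own query string
    return scores


def percent_competent(scores: list[float], cutoff: float) -> float:
    """Share of submissions at or above the (hypothetical) competence cut-off."""
    return 100 * sum(s >= cutoff for s in scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    # Hypothetical IDs for the pre- and post-versions of an instrument.
    pre = quiz_scores(course_id=12345, quiz_id=111)
    post = quiz_scores(course_id=12345, quiz_id=222)
    cutoff = 7.0  # e.g., 7 of 10 points counts as competent (placeholder)
    print(f"Pre:  n={len(pre)}, {percent_competent(pre, cutoff):.1f}% competent")
    print(f"Post: n={len(post)}, {percent_competent(post, cutoff):.1f}% competent")
```

A script along these lines can be run by assessment staff rather than by individual faculty, which is the point: the instrument does the scoring, and collecting the data requires no faculty effort beyond importing the quiz.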
Similar automated assessment instruments (AAI) can be created and deployed for a variety of purposes and these require minimal effort on the part of faculty. At most, they would need to import the instruments and collect the data from them. In some cases, the instruments could be pre-loaded into classes and the data collected without requiring faculty involvement.
Faculty participation can also be improved by creating quality automated instruments that are relevant to the classes they will be used in. That is, one can offer faculty pre-written tests they might find appealing to use. Faculty can also create their own instruments, perhaps assisted by the assessment folks—such assistance can also earn faculty goodwill and increase buy-in.
An example of minor automation is using an online form for data collection, as opposed to such methods as submitting data via files. Using a well-designed online form is generally easier than, for example, completing a PDF form and emailing it to those collecting the assessment data. The form can also reduce the workload of those collecting the data: they do not need to deal with files (or, worse, paper forms), and the form can be designed to do some of the work, such as validating entries and compiling the responses automatically.
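As a rough illustration, and assuming the form tool can export its responses as a spreadsheet (most can), a short script can compile the submissions into a summary table. The filename and column names here are hypothetical, not those of GEAC’s actual form:

```python
"""Sketch: compile an online form's exported responses into a summary table.

Assumes the form exports one CSV row per reporting course section; the
filename and column names are hypothetical examples.
"""
import pandas as pd

# Placeholder export file with hypothetical columns:
# competency_area, students_assessed, students_competent
responses = pd.read_csv("gened_form_responses.csv")

# Total the reported counts by competency area, then compute the percentage competent.
summary = responses.groupby("competency_area")[["students_assessed", "students_competent"]].sum()
summary["percent_competent"] = (
    100 * summary["students_competent"] / summary["students_assessed"]
).round(1)

print(summary)
```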
As an ideal, an automated system could extract assessment data from classes on the LMS and perform various relevant functions to create useful information. While obviously well within current technology, there is the obvious problem of securing the resources to create such a system. For most schools, a realistic option is establishing some degree of integration between the school’s LMS and whatever software it might be using for assessment purposes, such as Nuventive, to make data collection and analysis easier. Florida A&M University is currently conducting a test of such an integration and it will expand after the pilot study. I, of course, will be among the first GENED guinea pigs.
While Simplification and Automation lower the cost of participation (and Automation can yield some positive benefits for participation), there remains the challenge of Motivation.
Motivation
Faculty often have a negative view of assessment and this impacts their motivation. Even if the system of assessment is simplified and effectively automated, de-motivated faculty will tend to perform the bare minimum to avoid negative consequences. This will result in lower quality assessment and can lead to a spiral of ever-increasing demotivation. As such, the issue of motivation needs to be addressed.
While the specific causes can vary, there are some general factors that tend to negatively impact faculty perception of assessment. One is that assessment is often perceived as something imposed externally by the administration, perhaps with little consideration given to legitimate faculty concerns and workloads. In many cases this can be the result of the difference between the roles of faculty and administrators. Faculty see themselves as educators and researchers who engage in administrative tasks out of necessity (or compulsion). They did not attend graduate school to learn to be administrators and correctly see their primary roles as educating, researching, and mentoring. As such, administrative tasks will be seen as necessary evils at the very best, and most often as unnecessary evils that take time away from their primary roles.
While administrators often come from the ranks of faculty, the role of an administrator is to administrate. Even though they might have once been educators, they generally no longer teach and hence generally do not have the same view of teaching, research, and mentoring as current faculty. For them, the administrative tasks are their main concerns—they are not seen as necessary evils, but as key parts of their job. Administrators are also often isolated from faculty. To illustrate, I have been on numerous committees where I had to explain to well-meaning administrators that faculty would generally not be available to do administrative work in the summer because most faculty are on nine-month contracts rather than being year-round employees. I have also had to explain, on many occasions, that faculty generally do not get any extra compensation for doing extra administrative work—many tasks that are paid work for administrators are unpaid labor for faculty. As such, many issues can arise simply because faculty and administrators fail to understand each other’s situations. With a good-faith effort to understand the challenges and needs of both faculty and administrators, many of these issues can be resolved, thus improving the quality of assessment.
Another factor is that assessment is often seen as a waste of resources because it is perceived as existing to create administrative positions or to generate data to appease the whims of bureaucrats or legislators. There is also the common perception that even if assessment is well-intentioned, it is useless. In discussing assessment with a fellow philosophy professor at another university, they made the impassioned point that professors have assessed their teaching performance since the time of Plato using the knowledge they gain from their profession. They added, with equal passion, that providing numbers to non-educators for cataloging and review has no value for improving the quality of their classes. One could, of course, go on cataloging problems; but it is more profitable to offer some solutions.
As a faculty member tasked with motivating other faculty to participate in assessment, I have found that a pragmatic appeal can be quite useful, if a bit blunt. Florida A&M University, like all state colleges and universities in Florida, is subject to performance-based funding. This process requires the collection of data and schools are rewarded and punished based on their assessed performance. As would be suspected, this allows for a pragmatic appeal to faculty. While it is somewhat hyperbolic, my stock line is that providing data (and using it to improve student performance) impacts their continued employment. This can be quite effective.
All accredited colleges and universities must, obviously enough, undergo the accreditation process. Assessment is, of course, a key part of this process. For example, GEAC is responsible for the relevant GENED assessment for this review, and representatives from the committee are called in to answer questions during it. Because of the importance of assessment in this process, I can honestly tell faculty that their continued employment requires participation. These facts are presented in a pragmatic and factual manner—being a faculty member, I know that threatening faculty would be counterproductive and unethical. But clearly and honestly presenting the stakes in such a factual manner can be a great motivator. Fortunately, there are also positive options for motivation.
One way to provide positive motivation is to address the faculty concern that assessment is useless to them. This can be done by showing faculty the value of the assessment process in the context of what faculty value: the quality of the education they provide to their students. Naturally, the assessment process must be useful to faculty—something that is not always the case. A key part of this is for those involved in assessment to have a clear role in making the process useful to faculty. If this is something that faculty could do on their own and the role of the assessment personnel is merely to collect, catalog, and report, then there would seem to be little meaningful value in having dedicated assessment: one could replace it with a database.
This is not to say that everyone in assessment must have this role—there are many roles in assessment. Most schools have committees involved with assessment that have faculty members, such as Florida A&M University’s GEAC and ILAC (Institutional Level Assessment Committee) and these committees can assist faculty in closing the loop. If faculty see the value for them in the assessment process, then they are more likely to engage in quality participation.
One might also note that, traditionally, faculty could be motivated to do extra work without compensation by appeals to what is good for students or the institution. However, the ascendancy of the business model has weakened these traditional motivating factors and, as a practical matter, motivation needs to be considered within the context of modern education being ever more shaped by the conception of the university as a business. Universities have consciously embraced this model, and often profess to see themselves more as brands and less as institutes of learning and research. Assessment itself was, of course, originally an intruder from the world of business.
In the context of the university as business model, the most obvious motivating factor is money (or release time from other tasks—which is also essentially money). If faculty were adequately compensated for assessment work, they would be more inclined to participate. In fact, with appealing compensation linked to performance, the quantity and quality of participation in assessment would improve dramatically. If the resources are available, this would almost certainly be the best solution to the motivation problem: compensate those working on assessment adequately and assessment will tend to improve.
Unfortunately, most universities lack either the resources or the desire to use resources in this manner. That is, while assessment is presented to faculty as being important, the opposite message is sent by the lack of resources provided for this allegedly important task. After all, institutions spend resources on what those in charge regard as valuable. Fortunately, there are motivators that are low or no cost. While these are also often used in business contexts, they are also commonly used in other cases where someone wants to motivate people but does not want to provide adequate compensation in actual resources. Or, less cynically, when people want to show their gratitude in non-financial ways.
During a recent GEAC meeting, the committee members discussed this matter and came up with some reasonable low- to no-cost solutions. The most obvious free motivator is a sincere expression of gratitude, perhaps at a meeting or event where such thanks would be appropriate. Faculty, like most people, often express the view that sometimes all they really want is to be appreciated for their efforts. This is, quite literally, the least that should be done for faculty.
One extremely low-cost motivator is the certificate of appreciation—this puts the expression of gratitude into a tangible and visible form. There is, however, the risk that overdoing it with certificates can make them meaningless or even something of a joke.
Another low-cost motivator is a letter evidencing service. These letters are much in demand by faculty going up for tenure and promotion and provide a form of compensation that faculty value. In the case of GEAC, explicitly offering such letters for service has proven to be an effective recruiting tool. On the downside, they only work for faculty seeking tenure and promotion and, as one would suspect, some faculty depart the committee after earning tenure and promotion.
Digital badges have become quite popular; they seem to be modeled on achievements in video games and rely on a similar psychological mechanism. Some do, of course, represent skills and accomplishments and could be seen as icon or emoji versions of resume entries. Given the current popularity of badges, they can be worth considering as means of motivating faculty to participate in assessment. These could be created and distributed within the university—essentially digital icons performing the same role as certificates. There are also services that offer badge systems—although these often involve a subscription cost. Badges might, however, be something of a passing fad—or they might be like Pokémon—something that will endure, and people will want to catch them all. Yes, I have suggested creating Assessémons to incentivize faculty.
Conclusion
While it can be challenging to earn faculty buy-in, it is worth the effort in order to improve the quantity and quality of participation in assessment. While SAM is not a magic bullet, effectively Simplifying and Automating the assessment process can increase faculty participation by lowering its cost. Motivating faculty is even more critical since demotivated faculty will not use even a simplified and automated system willingly or well.