I am doing another presentation at an assessment conference in October and I thought I would share the text of the presentation. I included some philosophy stuff, but it also gives some insight into what academics do when we are not destroying America with Woke Post-Toasted Communism.
I am the chair of the General Education Assessment Committee and the philosophy professor completing our reports. The Simplify, Automate, Motivate, and Educate (SAME) method was developed in both roles: a professor wrangling department colleagues and a committee chair wrangling university colleagues. Yes, I sometimes must wrangle myself and I can be a lot of trouble.
A fundamental challenge is earning faculty buy-in, and failure has a range of negative consequences. Without buy-in, faculty are more likely to provide incomplete, low-quality, or no data. They might even provide fabricated data. De-motivated faculty tend to provide garbage data and “garbage in, garbage out.”
A second area is in closing the loop. Even with adequate data, de-motivated faculty are likely to neglect other parts of the process. There are ways to help earn buy-in and these include the SAME method. I will begin with Simplification.
A complicated assessment process is analogous to the tax code: problematic, convoluted, torturous, difficult, and inconsistent. Complicated systems deter participation because they are challenging to understand and impose an unnecessary cost. Effective simplification makes it easier to understand and lowers the cost of participation. This increases the likelihood of faculty buy-in. Proper simplification also improves assessment quality by focusing faculty effort and using resources more effectively.
The process of simplification often has the virtue of reduction. As an example, in 2018 Florida A&M University’s General Education Assessment Committee’s faculty data contribution guide was 13 pages and our data collection form was four pages. The form was so complicated that one faculty member compared it to a tax form. That was a red flag—if you are being compared to the IRS, you know the faculty are not happy. While a rewritten version of the guide was retained, a short and focused video guide was also created. The form was simplified to the essentials. The online version allowed for greater simplification: irrelevant questions would not appear. This improved faculty participation.
While a systematic guide is beyond the scope of this presentation, philosophy provides some general simplification principles. Our good dead friend Aristotle provides a fine starting point: ask whether a part of something contributes positively. If not, remove it, “For a thing whose presence or absence makes no visible difference, is not an organic part of the whole.” Parts that have a negative impact should be excised immediately. When erring, it is usually better to err on the side of simplification—but simplification is not without its hazards.
Simplification can sometimes make assessment less useful. For example, while the simplified form mentioned earlier is easier to complete, it collects less data. As a guide to what to keep, Aristotle’s and Confucius’ mark of virtue serves well: one must find the mean between the two extremes. For example, a well-designed form balances between the extremes of demanding excessive data and being uselessly simplistic. The form should have the right questions, at the right time, for the right reasons and presented in the right way.
A consequentialist approach also serves here: each addition should be assessed in terms of costs and benefits individually and in total. While a part taken in isolation might create more benefit than cost, the cost of the entire system could end up exceeding its benefits. In general, having information about student performance is more beneficial than not having it. So, any sensible data collection entry on a form would appear good when assessed in isolation. But adding all these questions to a form would have an overall negative consequence: this mega-form would impose an absurd burden on faculty.
A simplified system need not be simplistic—it could be quite sophisticated. One way to operate a sophisticated system with greater simplicity and ease is with automation.
As a matter of psychology, people are more likely to stick to a default inclusion when opting out requires effort. For example, when employees are automatically enrolled in a retirement plan and must opt out, they enroll at a significantly higher rate than when they must opt in. This can be used to increase faculty participation.
While making faculty participation in assessment the default might increase participation, it is less likely to do so if it is much easier to opt out than participate. I would never suggest making opting out more burdensome as a means of coercing participation. This would only antagonize faculty. A better solution is developing participation that has a minimal cost. Effective automation provides a way to do this. Default participation, properly handled, decreases chances of faculty opting out and the automation lowers the cost of participation. In cases where no effort is required, participation would probably be quite good.
While it might be tempting to make effortless participation unavoidable, faculty must always be given a choice to opt out as a matter of ethics and practicality. In terms of ethics, professors are the moral custodians of their courses and the data those courses generate. There is also the practical concern that faculty could be put off by mandatory participation. What, then, are some ways to automate at low cost?
Most faculty already use a Learning Management System (LMS), and these systems support the creation of automatically scored tests. Since such tests can be easily imported, this allows for the deployment of automated assessment instruments (AAI). As an example, the Philosophy and Religion Unit at Florida A&M University developed an Argument Basics Assessment Instrument (ABAI), which provides an automated pre- and post-assessment of student competence in critical thinking.
Such premade AAI would require minimal effort on the part of faculty. At most, they would need to import the instruments and collect the data. In some cases, the instruments could be pre-loaded into classes and the data collected without requiring faculty involvement. Faculty might also find such premade tests appealing—less work for them to do.
As an ideal, an automated system could extract assessment data from classes on the LMS and transform it into useful information. While technologically feasible, there is the problem of cost. For most schools, a more realistic option is establishing some degree of integration between the school’s LMS and whatever software it might be using for assessment purposes.
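To make the idea concrete, here is a minimal Python sketch of the kind of processing such a system might do. The record format, section names, and scores are hypothetical (not drawn from any particular LMS export); the point is only that turning raw pre/post AAI scores into per-section information can be a few lines of code once the data is accessible.

```python
from statistics import mean

# Hypothetical LMS export: one record per student, with scores from an
# automated pre/post assessment instrument (e.g., something like the ABAI).
records = [
    {"section": "PHI2010-01", "pre": 55.0, "post": 72.0},
    {"section": "PHI2010-01", "pre": 60.0, "post": 68.0},
    {"section": "PHI2010-02", "pre": 48.0, "post": 70.0},
]

def average_gain_by_section(records):
    """Group student records by section and report the mean pre-to-post gain."""
    gains_by_section = {}
    for r in records:
        gains_by_section.setdefault(r["section"], []).append(r["post"] - r["pre"])
    return {section: round(mean(gains), 1)
            for section, gains in gains_by_section.items()}

print(average_gain_by_section(records))
```

A committee could run this sort of aggregation centrally, so faculty never see the raw spreadsheet at all: the automation both collects and interprets, and the faculty member’s only task is to read the result.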
While Simplification and Automation lower the cost of participation (and Automation can yield some positive benefits for participation), there remains the challenge of Motivation.
Faculty’s negative view of assessment often de-motivates them. Even if the system is simplified and automated, de-motivated faculty will often perform the bare minimum. This will lower assessment quality and can lead to a de-motivation spiral.
While specific causes vary, there are general demotivating factors. One is the perception of assessment as imposed with little consideration of faculty concerns and workloads. This can arise from role differences between faculty and administrators. Faculty see themselves as educators and researchers who do administrative tasks from necessity (or compulsion). At best, such tasks are necessary evils. For administrators, such tasks are not necessary evils, but their job.
Administrators can also be unaware of key facts about faculty. For example, I sometimes must explain that tasks that are paid work for administrators are often unpaid labor for faculty. So, issues can arise simply because of the failure of faculty and administrators to understand each other’s situation. With good faith effort to understand the challenges and needs of faculty and administrators, many of these issues can be resolved and thus improve the quality of assessment.
Assessment is often seen as existing to create administrative positions or to generate data to appease the whims of bureaucrats or legislators. There is also the perception that even well-intentioned assessment is useless. I could go on cataloging problems, but it is more profitable to offer solutions.
As a faculty member, I have found a pragmatic appeal can be useful, if blunt. Florida A&M University is subject to performance-based funding. Schools are rewarded and punished based on their assessed performance. While somewhat hyperbolic, my stock line is that participating in assessment impacts the continued employment of faculty. This can be effective.
All accredited schools must undergo the accreditation process. Assessment is now a key part of this process. So, I can honestly tell faculty that their continued employment depends on their participation. Threatening faculty would be counterproductive and unethical. But clearly and honestly presenting the stakes can be a great motivator. Fortunately, there are also positive options.
One way to provide positive motivation is to address the view that assessment is useless. This can be done by showing faculty the value of assessment in the context of what they value: the quality of education. Naturally, the assessment process must be useful to faculty. Those involved in assessment must have a clear role in this. If faculty could just do this on their own, then there would be no value in having dedicated assessment: it could be replaced with a database.
Most schools have assessment committees with faculty members, such as our GEAC and ILAC (Institutional Level Assessment Committee) and they can assist faculty in closing the loop. If faculty see the value for them in assessment, then they are more likely to engage in quality participation.
Traditionally, faculty could be motivated by appeals to unpaid extra work for the good of the students. However, the ascendancy of the business model has weakened these traditional motivators. As a practical matter, motivation must be considered within the expanding conception of the university as a business. Universities have consciously embraced this model and profess to see themselves more as brands and less as institutes of learning and research. Assessment itself was originally an intruder from the business world.
In this context, money (or release time) is the most obvious motivator. If faculty were adequately compensated for assessment, they would be more motivated, and linking compensation to performance would improve the quantity and quality of participation in assessment. If resources are available, this would be the best solution to the motivation problem: compensate those working on assessment adequately and assessment will (probably) improve.
Unfortunately, most universities lack either the needed resources or desire. While assessment is presented as important, the opposite message is sent by the lack of resources. Fortunately, there are low or no cost motivators. These are used in business and other contexts when someone wants to motivate but is unwilling to provide adequate compensation. Or, less cynically, when people want to show their gratitude in non-financial ways.
The GEAC members discussed this matter and proposed some options. An obvious free motivator is a sincere expression of gratitude. Faculty sometimes just want to be appreciated. This is, quite literally, the least that should be done.
One low-cost motivator is the certificate of appreciation—this puts the expression of gratitude into a tangible and visible form. There is a risk that overdoing it can make them meaningless or even a joke.
Another low-cost motivator is a letter evidencing service. These are in demand by faculty going up for tenure and promotion and provide compensation that faculty value. In the case of GEAC, explicitly offering such letters for service has proven to be an effective recruiting tool. On the downside, they only work for faculty seeking tenure and promotion, and some faculty drop out afterwards.
Digital badges have become quite popular; they are modeled on achievements in video games and rely on a similar psychological mechanism. Some represent skills and accomplishments and could be seen as icon or emoji versions of resume entries. Given the current popularity of badges, they are worth considering. These could be created and distributed within the university—essentially digital icons performing the same role as certificates. There are also services that offer badge systems—although these often involve a subscription cost. Badges might be a passing fad—or they might be like Pokémon—something that will endure, and people will want to catch them all. Yes, I have suggested creating Assessémons to incentivize faculty. The final factor is educating the educators.
As the discussion of motivation implies, faculty are often unaware of the process, value, and purpose of assessment. This lack of knowledge can have a negative impact on faculty buy-in. If the faculty do not know how the process works, they can find participation confusing, frustrating, and difficult. As noted above, this can be addressed by simplification—but even a simplified system can be a problem if faculty do not know how to use it. Therefore, educating faculty on the relevant parts of the process is critical to earning buy-in.
If the faculty do not know the value of assessment, they will not be inclined to participate. After all, people are rarely enthusiastic about things that appear to lack worth. While providing incentives can address this (making assessment valuable to do), faculty also need to believe (correctly) that their participation has worth. This requires ensuring that it does have value and making this value clear to faculty.
If the faculty do not know the purpose of assessment, they will be disinclined to participate—why do something when the purpose is unknown (or lacking)? Addressing this requires that assessment have a meaningful purpose and informing the faculty of this purpose.
Informing faculty does come with challenges. For example, special, mandatory meetings to learn about assessment can serve to de-motivate faculty. Faculty generally dislike meetings and having more meetings is not a good way to win us over. As such, it is preferable to minimize the cost to faculty: the information can be provided, in a concise form, at existing meetings and short, informative videos can also be useful. Somewhat ironically, while everyone is Zoomed out, concise and focused Zoom training can be more appealing than in-person training: clicking a button is easier than travelling to a meeting on campus.
In sum, if assessment is made worth doing and knowing about, educating faculty about it will help earn their participation.
While it can be challenging to earn faculty buy-in, it will improve the quantity and quality of participation. While SAME is not a magic bullet, effectively Simplifying and Automating the assessment process can increase faculty participation by lowering its cost. Motivating faculty is even more critical since demotivated faculty will not use even a simplified and automated system willingly or well. And faculty must be Educated to understand not only the how but the why of assessment.