As noted in the previous post, I need to write a presentation for an upcoming conference. Here is the second part, on automating assessment.
Automation
As a matter of psychology, people are more likely to stick with a default when opting out requires effort. An excellent example of this is retirement savings: when employees are automatically enrolled in a retirement plan and must opt out, they enroll at a significantly higher rate than when they must opt in. This generalizes to much human behavior and can be used to increase faculty participation in assessment. While it might appear that I have forgotten about automation and taken up a new topic, the connection between defaults and automation will, I hope, become clear shortly.
While making faculty participation in assessment the default and requiring them to opt out might result in more participation, the obvious problem is that it is generally much easier to opt out of assessment than to participate in it. As such, making participation the default is likely to have no positive impact on participation and might cause some resentment on the part of faculty—they might dislike the assumption that they will do extra work. The fix is, of course, to make opting out require more effort than participating.
As a faculty member, I would never suggest making the method of opting out more burdensome than participating as a means of coercing participation. This would merely serve to annoy faculty and lower the quality of participation. As such, the better solution is to develop means of participation that come with minimal cost to the faculty—ideally so low that even an easy opt-out would be more work than participating.
One way of doing this is to have a default in which faculty agree to allow others to gather and assess data from their classes. But this still puts a burden on those who do the gathering and assessment, and these are often other faculty. As with almost any task, one obvious way to make it easier is to automate it as much as possible. As such, combining default participation with automated assessment can improve faculty participation. The default participation, properly handled, decreases the chances of faculty opting out and the automation lowers the cost of participation. In cases where no effort is required on the part of faculty, participation would probably be quite good.
While it might be tempting to make such effortless participation unavoidable, faculty must always be given the choice to opt out, as a matter of both ethics and practicality. In terms of ethics, professors are the moral custodians of their courses and the data those courses generate, and forcing them to share that data would be morally problematic—with the obvious exception of final student grades, which must be entered into the grading system. There is also the practical concern that faculty could be put off by mandatory participation, and this could lower the quality of their participation.
While some faculty will choose to opt out of participation, effective automation can reduce their number. One can, of course, make certain aspects of assessment the default while others require opting in—as a rule, default participation should be reserved for aspects of assessment that require minimal or no effort on the part of faculty. As an illustration, automated data gathering from classes could be set with participation as the default, while providing student papers as assessment artifacts could require opting in. Regardless of whether participation is the default, faculty should always be informed of any automated (or manual) retrieval of data from their classes.
In the process of discussing automating some aspects of assessment at Florida A&M University, faculty expressed reasonable concerns about people (or software) poking around inside their classes on the LMS. This was not because faculty had anything to hide (one hopes) but because of concerns about academic freedom, intrusions into privacy, and worries that such “poking about” might cause glitches or other technical issues. As such, faculty should be informed about these matters and, obviously, the automation should be designed to avoid causing such problems. Addressing these concerns can go a long way towards earning buy-in.
Since effective automation of assessment reduces the effort required of faculty, increasing automation will tend to increase faculty participation even in areas of assessment where participation is not the default. Fortunately, there are low-cost ways to automate assessment using resources that are already available.
Many faculty already use Canvas (or other Learning Management Systems), and these support the creation of certain assignment types and their automatic scoring. Since such assignments can be easily imported, this allows for the creation and deployment of automated assessment instruments within the LMS. As an example, the Philosophy and Religion Unit at Florida A&M University developed an Argument Basics Assessment Instrument (ABAI) for conducting an automated pre- and post-assessment of student competence in key components of critical thinking. The data from this instrument are used both in the unit assessment and in the General Education assessment of the Critical Thinking competency area. Collecting such pre- and post-data is essential to quality assessment, and automation can make this easier.
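To give a concrete sense of how little effort such collection can demand, here is a minimal sketch that pulls pre- and post-test scores through the Canvas REST API and computes the average gain. The domain, token, course, and quiz IDs are all placeholders; the quiz-submissions endpoint is from Canvas’s public API. This is an illustration of the approach, not the actual ABAI tooling.

    # Sketch: pull pre/post quiz scores from the Canvas REST API and
    # compute the average gain. All IDs and the token are placeholders.
    import requests

    CANVAS = "https://example.instructure.com/api/v1"     # placeholder domain
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token
    COURSE_ID = 12345                # placeholder course
    PRE_QUIZ, POST_QUIZ = 111, 222   # placeholder pre/post quiz IDs

    def quiz_scores(quiz_id):
        """Return each student's kept score for a quiz, keyed by user id."""
        url = f"{CANVAS}/courses/{COURSE_ID}/quizzes/{quiz_id}/submissions"
        resp = requests.get(url, headers=HEADERS, params={"per_page": 100})
        resp.raise_for_status()
        subs = resp.json()["quiz_submissions"]
        return {s["user_id"]: s["kept_score"]
                for s in subs if s["kept_score"] is not None}

    pre = quiz_scores(PRE_QUIZ)
    post = quiz_scores(POST_QUIZ)
    # Only compare students who completed both instruments.
    both = pre.keys() & post.keys()
    if both:
        gain = sum(post[u] - pre[u] for u in both) / len(both)
        print(f"{len(both)} students took both; mean gain: {gain:.2f}")

For a large class one would also follow Canvas’s paginated Link headers, but the shape of the task is clear: no faculty effort beyond leaving the quizzes in place.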
Similar automated assessment instruments can be created and deployed for a variety of purposes, and these require minimal effort on the part of faculty. At most, faculty would need to import the instruments and collect the data from them. In some cases, the instruments could even be pre-loaded into classes and the data collected without requiring any faculty involvement.
Faculty participation can also be improved by creating quality automated instruments that are relevant to the classes they will be used in. In effect, this lets one offer faculty pre-written tests they might actually find appealing to use. Faculty can also create their own instruments, perhaps assisted by the assessment folks—such assistance can also earn faculty good will and increase buy-in.
An example of minor automation is using an online form for data collection, as opposed to such methods as submitting data via files. A well-designed online form is generally easier to use than, for example, completing a PDF form and emailing it to those collecting the assessment data. The form can also reduce the workload of those collecting the data: they do not need to deal with files (or, worse, paper forms), and the form itself can be designed to do some of the work, such as validating entries and doing the arithmetic.
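As a rough illustration, here is a minimal sketch of such a form built with Flask (my choice of tool; the post does not name one). The fields and the CSV output are hypothetical; the point is that the form validates the data and does the arithmetic before anyone on the assessment side has to touch it.

    # Hypothetical sketch of an online assessment-data form using Flask.
    # Validated submissions are appended to a CSV, so the assessment
    # staff never handle emailed files by hand.
    import csv
    from flask import Flask, request

    app = Flask(__name__)

    FORM = """
    <form method="post">
      Course: <input name="course" required>
      Students assessed: <input name="n" type="number" min="0" required>
      Students meeting target: <input name="met" type="number" min="0" required>
      <button>Submit</button>
    </form>
    """

    @app.route("/", methods=["GET", "POST"])
    def collect():
        if request.method == "POST":
            n = int(request.form["n"])
            met = int(request.form["met"])
            if met > n:
                return "Error: 'met' cannot exceed students assessed.", 400
            # Append the validated row; the form does the arithmetic.
            with open("assessment.csv", "a", newline="") as f:
                csv.writer(f).writerow(
                    [request.form["course"], n, met,
                     round(met / n, 2) if n else ""])
            return "Thanks, your data was recorded."
        return FORM

    if __name__ == "__main__":
        app.run(debug=True)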
As an ideal, an automated system could extract assessment data from classes on the LMS and perform the various relevant functions needed to turn it into useful information. While this is well within current technology, there is the obvious problem of securing the resources to create such a system. For most schools, a more realistic option is establishing some degree of integration between the school’s LMS and whatever software it uses for assessment purposes, such as Nuventive, to make data collection and analysis easier. Florida A&M University is currently conducting a test of such an integration, and it will expand after the pilot study. I, of course, will be among the first GENED guinea pigs.
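Short of full integration, even a small export script can bridge the gap. The sketch below pulls learning-outcome results from Canvas’s public outcome-results endpoint and writes them to a CSV for upload into the assessment software. The IDs and token are again placeholders, and the CSV layout is my own guess, since import formats vary by product; it is not Nuventive’s actual format.

    # Sketch: export Canvas learning-outcome results to a CSV for upload
    # into assessment software. IDs, token, and CSV layout are assumptions.
    import csv
    import requests

    CANVAS = "https://example.instructure.com/api/v1"     # placeholder domain
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token
    COURSE_ID = 12345                                     # placeholder course

    # The outcome-results endpoint returns one row per student per outcome.
    url = f"{CANVAS}/courses/{COURSE_ID}/outcome_results"
    resp = requests.get(url, headers=HEADERS, params={"per_page": 100})
    resp.raise_for_status()
    results = resp.json()["outcome_results"]

    with open("outcome_results.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["user_id", "outcome_id", "score", "mastery"])
        for r in results:
            writer.writerow([r["links"]["user"],
                             r["links"]["learning_outcome"],
                             r["score"], r["mastery"]])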
While Simplification and Automation lower the cost of participation (and Automation can itself encourage participation), there remains the challenge of Motivation.