Recognizing the difficulty of creating outcomes and their corresponding measures, IR&E is providing the following suggested outcomes and measurement strategies. These are broad, general outcomes that we feel apply to all departments. This list of outcomes was compiled using current AIER improvement plans and items currently represented in the AIER process under Quality, Demand, and Cost.
By explicitly stating them as “outcomes,” we hope to accomplish several things: to improve the flow of the report, to help departments link assessment results in these areas to the planning process, and to allow some flexibility in measurement strategies. We feel that measures should be program-specific, and we hope to decrease our reliance on national, university-wide surveys at the department level. To that end, we have provided some alternate ideas for measurement strategies.
We will continue the practice of using at least two measures for every outcome, but those measures do not have to be, and in most cases should not be, collected every year. This is particularly true of surveys, unless you are targeting a specific group each year. In general, surveying graduating seniors each year is reasonable, but surveying all students each year is not.
Each measure should include a benchmark value used to interpret its results. A benchmark describes what is acceptable or desired, not some unattainable future ideal. For example, "The average departmental score for the course evaluation question 'Professor showed respect for students' will be at least 5.25, with no course score in the department below 3.0." Note that the targets are specific, constant values, not a mandated increase each year. In this case, they are below the EMU average. If comparisons to another group are used, that group should be very similar to the department or to its students. For example, it might be appropriate to compare course evaluation scores within clusters but not across clusters (if we had clusters!).
Outcomes, measures, and benchmark values will typically remain unchanged for many years. Each time data are collected for a particular measure, a quick comparison to the benchmark can be made and noted. Every few years, a more complete analysis of the results should look for trends over time. The report should include a note about when the next full analysis is planned. For the example above, data can be recorded every semester, but an analysis might be performed only every three or four years, as long as new values remain above the benchmark targets and not substantially below previous values.
For example: theories, vocabulary, methodologies, significant events, studio/lab techniques, problem solving, critical reflection, professional ethics, modes of inquiry
These vary by discipline and are generally well represented in the current reports. Keep up the good work.
For example: academic advising, career/graduate advising, student satisfaction, self-confidence & self-efficacy
Advising (academic, career, etc.) will be available and effective.
Students and department faculty will interact in ways to promote learning and personal growth.
For example: inter- and intra-office communication/cooperation, faculty development, workload, research
Faculty, staff, and student workers will support each other in their work.
For example: Website, public relations, recruitment, fundraising
The department will promote its programs and EMU as a whole in its interaction with the public.
For example: managing space, staff, equipment resources; curriculum planning and accreditation, etc.
The department will exercise good stewardship in its use of resources.
The department will offer quality programs and services and be responsive to external factors.