7400.685-080 - Research Methods in FCS
School of Family and Consumer Sciences
Spring Semesters - Tuesday Evenings 5:20-7:55pm in 209 Schrank Hall South
Instructor: David D. Witt, Ph.D.
Chapter 12 - Evaluation Research
Applied Research: Social Impact Assessment
Evaluation research, also known generally as applied research, comes in at least two broad types. Social Impact Assessments are designed to predict the outcome of a specific change in the status quo. Topics of interest range widely here, from the impact of changing the way citizens contribute to the social security system, to changing the way school districts are funded, to the addition of a new program in city or county government. The idea is to predict when, where, and how much change will occur, and who will benefit and/or suffer damage from a program's implementation. Impact studies can also be designed to predict the consequences of doing nothing, as in the case of global warming, homeland security measures, or allowing medical care and insurance costs to go unchecked. Family Impact Analysis is a conceptual "spin-off" of social impact analysis and has, as its focus, the impact of governmental policy on families.
Applied Research: Evaluation Studies
Evaluation studies are specifically designed to measure whether or not an agency, program, or other social entity is living up to its stated goals (i.e., its mission or stated purpose). As FCS graduates, you are likely to encounter or participate in evaluation studies, because at some point in your career you will probably work for an agency or business involving the public, public funding, or other factors necessitating a demonstration of accountability. Entire academic graduate programs are devoted to this topic, often in schools of public policy and research.
We've stated repeatedly in this class that hypotheses or research questions derive from theory introduced in the review of literature. This is a clue to identifying what is to be evaluated in evaluation research. The ideas, concepts, or actions that create a social program usually arrive in the form of funding of some type, and with the funding comes a charge or mission to solve a social problem, ease social concerns, or maintain a social institution. It is that charge or mission that becomes the topic of evaluation. This focus is not without logic problems, however, and evaluation studies that are poorly implemented almost always create additional problems.
For example, public schools are charged with the responsibility of producing students who possess specific skills. How might the funding agent, in this case state and local government, be assured that schools are "doing their job"? Statewide mandatory competency examinations (also known as proficiency examinations) were chosen some time ago. Here in Ohio, a change in the Ohio Revised Code (3313.61) mandated that students perform at some satisfactory level in an array of subjects, including math, reading, writing, science, and citizenship, so local school systems currently test students at various points during each student's education. The consequence of failure for the student is that he or she will not graduate from high school. Further, because competency examination results are aggregated at the individual school level, specific schools are "graded" on their aggregate performance. Thus, the consequence of failure for the school is to receive less funding or face possible elimination. This was the logic of the current educational policy known as "No Child Left Behind," a national policy that concentrates almost exclusively on accountability. When evaluation measures are implemented, they often do not include measures of important extra-school variables that probably contribute to failure - poverty, family disruption, unemployment, neighborhood safety issues, and so on.
Facing dire fiscal futures without high performance measures, some teachers and researchers have charged that this arrangement creates a tendency for many schools to "teach to the test" by issuing practice exams and concentrating on preparation for testing rather than on actual learning. Whether or not this is true, there is little argument that the focus on accountability has resulted in a highly politicized learning environment. A simple Google search using the term "leave no child behind" reveals a plethora of opinions, both official and unofficial. In the meantime, new national agendas are created and students are often left behind anyway.
Evaluation research is only as trustworthy as the methodology used to guide it. Like all social research, evaluation requires a firm grasp on the definition of that which is to be evaluated. Anyone likely to be involved in the evaluation of an agency should take the time to review the agency's mission, specifically note all the performance expectations, and then very carefully consider the measures that will be used to evaluate performance. Thus, the key to reliable and valid evaluation research lies in the methodology - measurement and observation - used to assess the institutional or agency goals as they are stated. Without agency-wide understanding of the institutional mission, evaluation will be confusing and unreliable.
By Way of Example: How might one evaluate the University of Akron, given its stated mission: "We seek to differentiate ourselves as the public research university for northern Ohio, the University dedicated to the education and success of its students and to the production, integration, and dissemination of knowledge for the public good."
As a student, what would your priorities be? Would the priorities of a professor differ? Or those of an administrator? Or, for that matter, a UA police officer or a department secretary? It is a daunting task to account for our performance on some global measure, particularly if the global measures are not clearly defined. Despite measurement difficulties, each department in the university is required to maintain its own mission statement, which is evaluated in strategic planning. Each department is promised, if performance measures indicate it, either a bigger budget allocation or a smaller budget cut, depending on decisions made by the legislature. The political and economic environment in which federally and state supported institutions exist is rapidly becoming more competitive. As the economy fails to perform, state budgets begin to shrink, which leads to smaller appropriations for things like education, prisons, and health care. Within this changing environment, and absent reliable performance measures, solutions to our funding problems may be a radical departure from the status quo. Accountability and evidence of efficient management thus become key factors in maintaining funding. Virtually all public service and "social action" programs face the ultimatum: perform or die away.
Institutions like UA are feeling enormous pressure to "measure up" to performance and accountability "benchmarks". We want to be able to say with documented evidence that we 1) do as well as or better than other institutions on a variety of measures, 2) are providing the services we say we are providing, and 3) are planning to improve our performance in the future. Our administration has initiated an Academic Plan, which includes our mission and vision for the future, and which I encourage every UA student to read. Research methods students should read the plan in the context of evaluation research to get some idea of how difficult it is to justify a large institution's existence.
Evaluation research can gobble up enormous institutional resources. Keeping track of an array of performance measures means putting "company time" into the effort. People have to fill out forms, someone has to collect and code those forms, others are required to interpret and double-check the numbers, and still others must write reports, submit them, and serve on committees charged with interpreting results. All this work must be done while the real work of the institution continues. If ongoing evaluation is left undone until funding is threatened, the institution will have to hasten the evaluation process, and may leave important measures out of performance equations while relying on easily measured but unreliable information.
For example, how might a drug rehabilitation clinic show its "cure rate" if the client population is homeless or itinerant in some way? The clinic can show the number of hours spent on various tasks, but documenting actual effectiveness is more difficult. Further, the mission/purpose of the clinic must be valued by the funding agency. You are all aware of social programs initiated by one political administration, only to be scuttled by the next set of elected officials. That is to say, evaluation is only one aspect of a program's total survivability. Research has shown for many years that high quality Head Start programs result in better overall learning outcomes for children at risk, yet only about one third of our country's children who qualify for Head Start actually receive it. Similarly, the Ohio legislature has been presented with evidence that a college educated population will earn more money and pay higher taxes for longer periods of life, yet Ohio ranks near the bottom in tax-dollar support for higher education.
Conversely, when implemented with exacting measures on social enterprises that are well defined, evaluation research can actually benefit the organization being evaluated. Evaluation research can come in the form of Needs Assessment, Monitoring, Outcome Assessment, and Efficiency Analysis (Touliatos et al., 1988). In fact, all four types of evaluation can, and probably should, be implemented simultaneously using the same set of measures. While all these aspects of evaluation research are driven to a great extent by the need for continuing funding, especially in a time of declining resources and support, there remains an ethical reason for the research effort. Everyone agrees that a program that is not needed should be discontinued. One that cannot demonstrate its worth is likely to be discontinued.
Obviously there ought to be an actual need for the services provided by a social program. Documenting the initial and ongoing need is the trick. "Needs assessment refers to the diagnostic procedure that identifies the nature and scope of a problem and the size and location of a target population that requires special services. The population may be individuals, families, subcultural groups, organizations, communities, regions, and so on." (Touliatos et al., 1998, p. 323).
Existing data may include government documents and statistics and institutional statistical reports, such as those issued by the FBI (in its Uniform Crime Reports), the Ohio Board of Regents, or the National Institute of Mental Health. More likely, a researcher would have to implement localized surveys of the population being served to assess community need. The basic methods of survey research apply here. The idea is to document a need with data and to show how the proposed program or effort would provide solutions by design. The local survey supplies the conceptualization of needs, but the program must then be designed to meet the needs that emerge from the research.
Once need is established and a program is instituted, continuous monitoring of the population being served should provide data to illustrate 1) the effect of the program in meeting population needs, and 2) any changes in the actual needs of the population. Course evaluations are tools for program monitoring. More open-ended, qualitative data collection accomplishes the goal of providing information about change. Quantitative monitoring accomplishes the goal of assessing the success of delivery of the program and becomes more important as the size of the population increases. The idea with monitoring is to purposefully engage in constant program development.
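As a sketch of the quantitative side of monitoring, the snippet below tracks one delivery measure (mean course-evaluation ratings) across terms so that changes in program performance become visible. The terms and ratings here are invented for illustration, not actual UA data.

```python
# Hypothetical monitoring data: mean course-evaluation ratings (1-5 scale)
# collected each term. All terms and values below are invented.
ratings_by_term = {
    "Fall 2002":   [4.1, 3.8, 4.4, 3.9],
    "Spring 2003": [4.0, 4.2, 4.5, 4.1],
    "Fall 2003":   [4.3, 4.4, 4.6, 4.2],
}

def term_means(data):
    """Return the mean rating for each term, rounded to two decimals."""
    return {term: round(sum(vals) / len(vals), 2)
            for term, vals in data.items()}

# Comparing term-to-term means is one simple way to watch for drift
# in program delivery over time.
for term, mean in term_means(ratings_by_term).items():
    print(term, mean)
```

A real monitoring effort would of course track many such measures at once, and would pair them with the open-ended, qualitative data described above.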
Usually done at the end of a program's funding period, but actually important to the life of the program on a more standardized timetable, outcome assessment asks the simple question "Has the program accomplished its goals?" The answer to the question lies in the number of successes the program can count and any known and planned positive consequences as a result of the program. The tendency to claim every possible success, whether related to the program or not, is strong here and should be resisted to avoid unnecessary conflict with other agencies and sheer obfuscation (almost a requirement for public office).
Efficiency analysis takes the form of cost-benefit analysis or cost-effectiveness analysis. While a program might realistically be shown to meet its objectives by delivering services to its intended population, it could be wasting money in the process.
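A minimal cost-effectiveness sketch, assuming hypothetical budgets and outcome counts: divide each program's annual cost by its number of successful outcomes, and the program with the lower cost per outcome is the more efficient use of funds, even if both meet their objectives.

```python
# Hypothetical programs: the names, budgets, and outcome counts below
# are invented for illustration only.
programs = {
    "Program A": {"annual_cost": 250_000, "successful_outcomes": 500},
    "Program B": {"annual_cost": 400_000, "successful_outcomes": 640},
}

def cost_per_outcome(annual_cost, successful_outcomes):
    """Dollars spent per successful outcome (lower = more efficient)."""
    return annual_cost / successful_outcomes

for name, p in programs.items():
    print(name, cost_per_outcome(p["annual_cost"], p["successful_outcomes"]))
```

Here Program B serves more clients overall but spends more per success ($625 vs. $500), which is exactly the kind of finding that outcome counts alone would hide.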
An example of performance:
While keeping in mind that a little data can be misleading, consider the following, according to the UA Factbook 2003:
One might ask whether an academic unit is pulling its weight within the University, the College of Fine & Applied Arts, and even within the School itself.
Within the University: This chart, modified with added information to the right in bold letters, indicates FCS as the 11th largest tuition generating unit at UA. Among the 61 revenue generating units, FCS is outperformed by only 10 departments, and 8 of those 10 are units with large General Education courses counted in their revenues. If one were to remove those 8 schools from the analysis, the School of Family and Consumer Sciences becomes the 3rd largest producer of tuition revenues on the campus.
Within the College of FAA: This table shows that FCS is the 2nd largest producer among the college's 7 schools.
While the School of Communication produces over $10 million a year (including the mandatory Gen. Ed. speech courses) with 20 full time faculty (a Herculean feat), FCS produces over $8 million a year with 18 full time faculty. In terms of full time faculty members, the school ranks 4th in the college.
Within the School itself: This table shows that the Child and Family Development (CD/FD) Division produces 54% of the total revenues brought in by the school. Of the over $8 million generated annually, the CD/FD Division generates over $5 million in revenues with one third (6) of the school's 18 full time faculty. Further, this table shows that CD/FD ultimately produces almost seven times what it costs to employ these 6 faculty for a year. This figure (ROI w/Subv) is calculated by dividing total revenues by the total cost of the faculty (salary + benefits).
The UA administration has previously used an ROI benchmark of 1.8 without state subsidy as the lower limit of acceptable efficiency. According to this table, CD/FD exceeds that benchmark with an ROI of 3.81 - which means the division earns back almost four times the cost of its faculty's employment to the university.
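The ROI arithmetic above can be sketched in a few lines. Only the formula (total revenues divided by total faculty cost), the 1.8 benchmark, and the reported 3.81 ROI come from the text; the dollar figures below are hypothetical, chosen simply so the result matches the reported value.

```python
# UA's stated lower limit for ROI without state subsidy (from the text).
BENCHMARK = 1.8

def roi(total_revenue, total_salary, total_benefits):
    """Return revenues earned per dollar of faculty cost (salary + benefits)."""
    return total_revenue / (total_salary + total_benefits)

# Hypothetical inputs chosen to reproduce the reported 3.81 ROI:
# $1,905,000 in revenue against $500,000 of total faculty cost.
cd_fd_roi = roi(total_revenue=1_905_000,
                total_salary=400_000,
                total_benefits=100_000)

print(round(cd_fd_roi, 2), cd_fd_roi >= BENCHMARK)  # 3.81, above benchmark
```

Any division's standing against the benchmark can be checked the same way: compute the ratio, then compare it to 1.8.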