What are the best ways to evaluate the effectiveness of a middle school program?

The importance of program evaluation at the school site lies in its value as a basis for change to improve learning. Effective schools continually evaluate themselves and make improvements. Evaluation begins by involving stakeholders in planning, establishing goals, and specifying measurable outcomes. Rather than relying on a single data collection technique, evaluators benefit from using a variety of methods, such as survey questionnaires, focus groups, interviews, case studies, and existing school records.

Preliminary Decisions

The purpose of each evaluation determines which methods are necessary. If the purpose is to judge a program’s worth, a comprehensive statistical analysis of large amounts of data might be required. If, however, the purpose is to estimate a program’s quality and scope, an informal process of interviews and group discussions might be appropriate. In either case, collecting baseline data for comparison allows for the study of trends, both positive and negative. Program evaluation often requires attention to validity, reliability, sampling, and other factors that can affect the quality of the evaluation.

Six Steps of Program Evaluation

Determine best practices, identify program goals, and specify outcomes, processes, and inputs. Recommendations of the Carnegie Council on Adolescent Development (1989), the NMSA position paper, This We Believe (1995), and other national publications can provide the structure for school goals and expected outcomes. Desired outcomes should be measurable, such as achievement, behavior, and attitudes. Outcomes used by the National Association of Secondary School Principals (NASSP) in a comprehensive study of middle level leaders and schools (Keefe, Valentine, Clark, & Irvin, 1994) are typical of outcomes used to measure program effectiveness. They are:

  • student outcomes (total student achievement, student satisfaction, productivity, self-efficacy, proportion of students completing the school year, percent of students receiving disciplinary referrals, percent of students passing all courses)
  • teacher outcomes (satisfaction, perceptions of autonomy and participation, teacher climate)
  • climate variables (difficulty related to change, administration teams, school goals)

Establish a baseline by designing assessment measures to capture current practice. Two types of evaluation are often combined at the school level: (a) self-evaluation, which may use dialogues, journals, and informal performance measures to assess strengths and weaknesses, identify improvement strategies, and monitor progress; and (b) external evaluation, which uses outside evaluators and their assessment instruments. In the interests of time, feasibility, and purpose, an external evaluation may be the most appropriate choice, particularly at the district level. For sustained school improvement, however, 20 years of research point to the importance of school and community involvement in evaluating goals, sharing visions, and making decisions throughout the evaluation process. Many schools find that a combination of external and internal evaluation yields the most accurate results. Schurr’s (1992) book contains a variety of instruments that can be used as is or adapted by a school.

Each type of evaluation requires performance indicators, identified in the goals and representative of the areas where data will be collected. Indicators come from needs assessments, program objectives, and answers to such questions as, “What do you want to know, and what will you accept as evidence for any claims that are made?” Possible indicators are achievement gains; attitude improvement; satisfaction scales; students’ sense of belonging to peers, to teachers, and to the school; drop-out rates; and the numbers and kinds of referrals.
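To make this concrete, the short Python sketch below shows one way a school team might record its indicators alongside the evidence it agrees to accept for each. This is only an illustration: the indicator names and evidence sources are hypothetical and are not drawn from the NASSP or NMSA recommendations cited above.

```python
# Hypothetical mapping of performance indicators to the evidence a school team
# has agreed to accept for each; the entries are illustrative, not prescribed.
performance_indicators = {
    "achievement gains": ["standardized test scores", "course grades"],
    "attitude improvement": ["student satisfaction survey", "teacher focus groups"],
    "sense of belonging": ["advisory survey items", "counselor interviews"],
    "drop-out rate": ["enrollment and withdrawal records"],
    "discipline referrals": ["office referral log"],
}

for indicator, evidence in performance_indicators.items():
    print(f"{indicator}: evidence = {', '.join(evidence)}")
```

Listing more than one evidence source per indicator anticipates the point made next: no single measure satisfies every stakeholder.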

In general, no single assessment measure is appropriate for every outcome, process, or input, because each school site is unique. Most researchers agree that a variety of measures assures an evaluation that reflects the viewpoints of most stakeholders (Baugher, 1981). Because teachers, principals, parents, and students may hold differing opinions of what constitutes the success of a program, good evaluations involve many people with different perspectives and use a variety of methods to gather information.

Collect data on current practice. Evaluators must determine who will be sampled, how the data collection process will be designed, and how each piece of information will be analyzed.

School evaluation procedures include tests, surveys and interviews, and other processes in addition to using committees and outside evaluators (McEwin, Dickinson, & Jenkins, 1996). Other systematic assessment procedures include observations of classroom and school-wide activities, interviews, portfolios, tests and test-like measures, demonstrations, student projects, student exhibitions, and team meeting agendas. Less formal and indirect measures include reading about students in the newspaper; seeing student enthusiasm for learning and their willingness to participate; getting feedback from graduates; reviewing data regarding student attendance, behavior, and self-discipline; speaking with parents; and reviewing reports from high school teachers. These are known as anecdotal records.
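As a minimal sketch of the sampling decision mentioned above, the Python fragment below draws a simple random sample of students to receive a survey. The roster file name, column name, and sample size are assumptions made for illustration, not details from the studies cited.

```python
import csv
import random

def draw_survey_sample(roster_path, sample_size, seed=1999):
    """Draw a simple random sample of student IDs from a roster CSV.

    Assumes the (hypothetical) roster file has a 'student_id' column.
    """
    with open(roster_path, newline="") as f:
        student_ids = [row["student_id"] for row in csv.DictReader(f)]
    random.seed(seed)  # fixed seed so the sample can be reproduced and audited
    return random.sample(student_ids, min(sample_size, len(student_ids)))

# Example: survey 50 randomly chosen students from a hypothetical roster file.
# sampled_students = draw_survey_sample("roster.csv", 50)
```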

Compare data on current practices with best practices and baseline data. Effectiveness is determined by the degree of match between goals and outcomes. Such comparisons require organizing and summarizing the data. The process may draw on scores from various tests and scales; opinions, reactions, and attitudinal responses grouped and summarized in frequency distributions; averages; and comparisons to baseline data.

For example, an evaluation of an advisory program may track the number of discipline referrals as a learner outcome, the frequency of advisory periods as implementation information, and the attitudes of teachers and students toward the advisory program.
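A minimal sketch of how such figures might be organized and summarized is shown below. The referral counts and teacher ratings are invented for illustration; a real evaluation would substitute the school’s own records and survey data.

```python
from collections import Counter

# Hypothetical monthly discipline-referral counts for a baseline year and the
# current year; a real evaluation would use the school's own records.
baseline_referrals = [42, 38, 51, 47, 40, 45, 39, 44, 48]
current_referrals = [35, 30, 41, 38, 33, 36, 31, 37, 40]

def mean(values):
    return sum(values) / len(values)

baseline_avg = mean(baseline_referrals)
current_avg = mean(current_referrals)
change = (current_avg - baseline_avg) / baseline_avg * 100
print(f"Average monthly referrals: baseline {baseline_avg:.1f}, "
      f"current {current_avg:.1f} ({change:+.1f}%)")

# Hypothetical teacher ratings of the advisory program (1 = poor ... 5 = excellent),
# summarized as a frequency distribution.
teacher_ratings = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5, 4, 4]
for rating, count in sorted(Counter(teacher_ratings).items()):
    print(f"Rating {rating}: {count} teachers")
```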

Once the information is organized and summarized, the narrative summaries are compared to program goals. The findings provide either the basis for judgments of the program’s adequacy and worth or guidelines for program improvement. These conclusions and judgments should be supported with evidence, that is, data.

Prioritize and develop plans to address discrepancies. Program evaluations guide the development of annual objectives and the action plans to accomplish them.

Report and maintain assessment data for continuous program improvement and for compiling trend data. When program assessment is generated and conducted by administrators, teachers, and other stakeholders, the result can be continuous improvement based on standards and learner outcomes. As Marshak (1995) concluded about his rural district’s assessment of its own programs and its goal of helping schools become effective learning organizations, “Our experience suggests that focused, rigorous program assessment … can help to develop school cultures where continuous improvement is the watchword not only for students but also for educators.”

 

References
  • Baugher, D. (Ed.). (1981). New directions for program evaluation: Measuring effectiveness. San Francisco, CA: Jossey-Bass.
  • Burnham, B. R. (1995). Evaluating human resources, programs, and organizations. Malabar, FL: Krieger.
  • Carnegie Council on Adolescent Development. (1989). Turning points: Preparing American youth for the 21st century. New York: Carnegie Corporation.
  • Clark, S. N., & Clark, D. C. (1987). Middle level programs: More than academics. Middle School Journal, 19(1), 24-26.
  • Clark, S. N., & Clark, D. C. (1993). Middle level school reform: The rhetoric and the reality. The Elementary School Journal, 93(5), 447-460.
  • Keefe, J. W., Valentine, J., Clark, D. C., & Irvin, J. L. (1994). Leadership in middle level education, Volume II: Leadership in successfully restructuring middle level schools. Reston, VA: National Association of Secondary School Principals.
  • Lounsbury, J. H., & Clark, D. C. (1990). Inside grade eight: From apathy to excitement. Reston, VA: National Association of Secondary School Principals.
  • McEwin, C. K., Dickinson, T. S., & Jenkins, D. M. (1996). America’s middle schools: Practices and progress: A 25-year perspective. Columbus, OH: National Middle School Association.
  • Mac Iver, D. J. (1990). Meeting the needs of young adolescents: Advisory groups, interdisciplinary teaching teams, and school transition programs. Phi Delta Kappan, 71, 458-464.
  • Marshak, D. (1995). District-based program assessment: One way to create “Schools that learn.” (ERIC Document Reproduction Service No. ED 392 828).
  • National Middle School Association. (1995). This we believe: Developmentally responsive middle level schools. Columbus, OH: Author.
  • Rutman, L., & Mowbray, G. (1983). Understanding program evaluation. Newbury Park, CA: Sage.
  • Schurr, S. L. (1992). How to evaluate your middle school: A practitioner’s guide for an informal program evaluation. Columbus, OH: National Middle School Association.
  • Wolf, R. M. (1990). Evaluation in education: Foundations of competency assessment and program review. New York: Praeger.

Copyright 1999 National Middle School Association. Used on NCMLE web site with permission of NMSA.
