The Philosophical Battle of Creating Assessments

Who should be in charge of creating assessments for Student Learning Objectives (SLOs)?  Assessments are at the heart of the SLO process and are the main gauge of student performance evaluated by educators (RSN, 2014).  Therefore, this question is of the utmost importance. There are two main schools of thought on the answer.  One holds that teachers and districts should use pre-approved, standardized assessments to ensure validity.  The other, more flexible approach holds that an assessment can be “...any measure that allows students to effectively demonstrate what they know and can do…” (RSN, 2014, p. 14) and can be created by teacher teams or drawn from standardized assessments.

Let’s look at both of these options within the framework of three important criteria for assessments laid out by the Ohio Department of Education (ODOE, 2016).  These criteria are (a) Alignment to Standards, (b) Stretch, and (c) Validity and Reliability.

First, how well does each school of thought meet the need for assessments to align to standards?  Alignment means that items on the assessment cover all the standards for the grade or subject, do not cover standards outside the scope of the course or grade, and distribute questions in proportion to the time spent on each standard (ODOE, 2016).  Pre-approved or commercially constructed assessments may have the advantage here, as they are built directly from standards, whether Common Core or state-level.  However, these pre-created tests may violate the criteria noted above.  Because these tests are produced at a national or state level, they may include questions on standards not covered in a particular course or grade, or omit questions on standards that were, thereby detracting from the validity of the test itself.

Teacher-team or educator-created assessments may lack the streamlined connection to subject or grade provided by pre-approved tests.  In addition, if teacher teams from different districts and schools all create their own assessments, measuring student growth at the macro scale of a district or state becomes more complex, since there is no standardized format or common list of questions.  This could also be seen as an advantage, however.  Especially in the current landscape of pandemic reconstruction, teachers know which standards have been prioritized at the district and school levels and may be able to create assessments that better represent the standards actually addressed.  This would be true even without a pandemic.

How do these approaches align with the idea of stretch?  Stretch refers to the ability of an assessment to show the growth of the lowest and highest achieving students (ODOE, 2016).  Although many pre-approved tests have stretch in mind, they may be less flexible in this area.  Because they are by definition standardized, they may lack questions that reveal the achievement of both the highest and lowest learners.  Another consideration is that content is often not changed for students with different needs; rather, accommodations are created so that those students can take the same test.

In contrast, a teacher-team-built assessment may have input from the teachers of those high- and low-achieving students, creating the opportunity to write sections and questions that rigorously examine the growth of both groups.  The pitfall to avoid here is creating a test geared specifically toward low or high learners, as that skews the validity and reliability of the data collected.  Teachers’ input can also produce an assessment that is not simply loaded with accommodations but designed with students in mind from the start.

Finally, how do these models stand up to the criteria of reliability and validity?  In other words, assessments should produce consistent results and measure what they are intended to measure (ODOE, 2016).  Assessments can be evaluated for validity and reliability against four important guideposts.

  • The assessment should not use overly complicated vocabulary or language unless it is testing reading skills.

  • Test items should be written clearly and concisely.  Performance assessments must have clear steps.

  • Clear rubrics and scoring guides should be provided, especially for performance assessments.

  • Testing conditions should be consistent across classes (ODOE, 2016).

Pre-approved assessments may or may not follow these guideposts.  Often these assessments are created by teams that deliberate endlessly over the wording of questions.  Even so, there have been many examples of questions that students could not understand because of the wording or even the choice of activity in a story problem.  Standardized assessments are often written with majority groups in mind, which can leave some minority groups confused by wording or topic (Kim & Zabelina, 2015).  These tests can also have confusing scoring guides, or may lack rubrics altogether, since the use of standardized performance assessment is still relatively young.  They do have the distinct advantage of consistency in conditions and instructions, since those are often strictly laid out.

Teacher-created assessments may solve some of these problems.  Because teachers know their students, they may be able to create assessments that are manageable for students while avoiding culturally biased topics.  Teacher teams can create rubrics for specific performance assessments as those assessments are written.  However, poorly constructed tests, rubrics, and scoring guides can undermine validity and reliability.  Consistency of testing conditions would fall on teachers, schools, and districts to keep as standardized as possible, and avoiding large incongruities would mean more work for the educators on the ground.

Overall, there are pros and cons to each system.  A basic philosophical difference lies between the two schools of thought on assessing for SLOs.  In the corner of pre-approved assessments, the philosophy is one of consistency and uniformity in gathering data; its logic and practicality are obvious.  In the other corner, the philosophy is one of shifting power to the teachers and educators who work with students every day, trading uniformity for the hope of deeper understanding.  John R. Troutman McCrann (2018) sums up this philosophy in one pithy comment: “...it starts and ends with a very simple idea: I, the students' teacher, have expert knowledge about my students and...content standards, so I ought to have the power to assess those students' growth on those standards” (para. 5).  The issue is a complex one, but one that demands debate to ensure that the students who represent the “S” in SLO achieve the most that they can.

References

Kim, K. H., & Zabelina, D. (2015). Cultural bias in assessment: Can creativity assessment help? International Journal of Critical Pedagogy, 6, 129–147. Retrieved from http://libjournal.uncg.edu/ijcp/article/viewFile/301/856

Ohio Department of Education (ODOE). (2016). A guide to using SLOs as a locally-determined measure of student growth [Guidebook]. Retrieved from https://education.ohio.gov/getattachment/Topics/Teaching/Educator-Evaluation-System/Ohio-s-Teacher-Evaluation-System/Student-Growth-Measures/Student-Learning-Objective-Examples/SLO-Guidebook-041516.pdf.aspx

Reform Support Network (RSN). (2014). A toolkit for implementing high-quality student learning objectives 2.0. Retrieved from https://www2.ed.gov/about/inits/ed/implementation-support-unit/tech-assist/toolkit-implementing-learning-objectives-2-0.pdf

Troutman McCrann, J. R. (2018). Putting assessment back in the hands of teachers. Educational Leadership, 75(5), 41–45. Retrieved from http://www.ascd.org/publications/educational-leadership/feb18/vol75/num05/Putting-Assessment-Back-in-the-Hands-of-Teachers.aspx