Midway through my third year at the Brooklyn Arts Academy, not long after our disappointing school quality review, the principal asked all teachers to read a long article on the limitations of “traditional” methods of assessment for underperforming students. The paper, written by a public school teacher from Chicago, detailed how one school increased student achievement by doing away with grades and report cards in favor of records assessing student performance in very specific areas. Our principal invited a guest speaker from this school to promote this form of assessment, and teachers were asked to join a pilot group on a voluntary basis.
I opted out of the pilot group, preferring instead to remain consistent with the system I’d had in place since the beginning of the year. I tallied grades according to student performance on exams, projects, daily class work and homework. This approach to assessment acknowledged many factors of student performance beyond particular learning goals. A student who worked hard and always turned in homework had a chance of receiving the same or a better grade than a student who skipped classes but still demonstrated understanding of class material. By contrast, with an outcomes-based assessment system, students need only demonstrate “mastery” or “proficiency” of specifically articulated “learning outcomes.” Students have multiple pathways to demonstrate such outcomes (essay writing, oral exams, etc.) and can do so at any time in a given marking period. Class attendance, homework, and effort are not necessarily assessed.
At the end of that year at the Brooklyn Arts Academy, we were informed that all teachers would be required to use outcomes-based assessments the following year. I even heard that our school had paid $10,000 to a consultant to help set up an online computer program that would organize and track our students’ performance on each outcome (this program was decidedly not user-friendly, as I had to click from student to student when entering data, instead of being able to press tab as I could using “Gradekeeper”). The administration asked us to write 12-15 outcomes for our classes for each semester and encouraged us to align them with “key cognitive strategies” as outlined in David Conley’s book “College Knowledge.”
This task proved quite difficult. At least within social studies, our curriculum, as standardized by the state of New York, is organized by content. Thus, we had to think about how to write outcomes that spoke both to content and skills. As a result, I wound up with the following outcomes for my first unit about political systems:
- Students can demonstrate an understanding of multiple points of view by comparing and evaluating their own cultural and societal values (about parenting, gender, government, etc.) to those of Confucian China
- Students can explain the basic philosophies of both Thomas Hobbes and John Locke, and can use evidence to support an argument about which philosopher’s ideas more accurately describe human nature
- Students can evaluate the strengths and weaknesses of both democratic and totalitarian societies
- Students can apply knowledge about governmental systems by designing a school based on the principles of democracy, monarchy or totalitarianism.
For each outcome, I then had to write a rubric describing what it meant for a student to be “highly proficient,” “proficient,” or “not yet proficient.” These rubrics were difficult to conceptualize because students could submit many different forms of evidence for each learning outcome. For example, for my outcome about Hobbes and Locke, would a detailed Venn diagram be considered evidence of proficiency, even if it led to a poorly written essay? If a student filled in the Venn diagram accurately but could not articulate his or her ideas orally, would he or she have shown adequate understanding? It was challenging to provide clear criteria to my students for what they needed to show me to be considered proficient or highly proficient in a given outcome.
Though returning teachers had been told to write outcomes over the summer, the new teachers only found out about the requirement near the beginning of the school year and had to scramble to compose them. There was little coordination among the outcomes written by teachers in a given department. For example, there was no clear coherence between the ninth-grade global teacher’s outcomes and my own for 10th-grade global, since we wrote them independently of one another.
Our department was instructed to create a scope and sequence of outcomes that would scaffold the development of specific skills as students moved from ninth grade to 12th grade. We worked on this during twice-weekly department meetings throughout the year, though without ever having a model of what the end product should look like. Meanwhile, we taught our current students using the learning outcomes we had come up with on our own.
I still taught historical content knowledge that I hoped would cultivate a deep understanding of certain places and times and the ability to make connections between them. I simply adapted my own way of teaching to a new method of assessment. Similarly, there was no paradigm shift in the way my students approached their schoolwork. The outcomes-based assessment system may have been meant to break the institutionalized norm of doing schoolwork in order to gain points and make a grade, but many of my students just treated HP (“highly proficient”) as the new A and NY (“not yet proficient”) as the new F.
But, at least for the sake of its upcoming SQR, the school could now point to a system of data collection that charted the specific academic strengths and weaknesses of the students in each teacher’s classes. After the previous year’s “underdeveloped with proficient features” SQR rating, the administration was determined not to be rated so poorly again, and outcomes-based assessment was only the beginning.