We know that content knowledge alone cannot determine students’ success. This is true in school—where a student’s likelihood of graduating can depend on mastering broad content requirements while also displaying persistence and other positive personal attributes. It is also true in college—where success has been linked to such qualities as conscientiousness, creativity, and academic self-efficacy. And it is true in the workplace—where employers repeatedly say they value skills such as teamwork, higher-order thinking, and the ability to adapt to the demands of the job.
The non-academic competencies associated with success go by many names, including employability skills, interpersonal and intrapersonal skills, and social and emotional learning.
Educators and policymakers are increasingly focusing on these competencies (and on aspects of school culture and climate that influence students’ motivation to learn). Yet these skills rarely appear in course lists. Rather, they cut across the curriculum and are frequently incorporated into existing lessons. Social and emotional learning is a function not just of what is taught but of how it is taught. For example, leadership and teamwork skills may develop when students are given opportunities to work on problems in groups, taking on different roles and relationships while being asked to reflect on their own performance and that of their peers.
While organizations are working to develop resources and guides for teaching these skills, there remains a key gap in our knowledge: how to assess whether students have mastered them.
If teachers are expected to support growth in social and emotional competencies, they need assessments that can help them understand students’ grasp of these skills and dispositions, how students have grown over time, and which instructional approaches work best.
But there are obstacles to making this vision a reality: few tools designed to measure these skills support instruction in K-12 classroom settings, and too few educators know about the tools that do exist.
Although researchers have developed numerous personality measures, most are designed for use by trained psychologists and are not intended to inform the practices of educators. In recent years, researchers have sought to introduce assessments of non-academic competencies to support instruction.
The CORE consortium of school districts in California, large urban school systems that together educate about a million students, now uses surveys of teachers, students, and parents to measure such things as students’ learning “mindsets” and self-management skills. (The districts also measure school culture and climate and other non-academic contributors to student success that can’t be taught to students.) One survey, for instance, asks students to rate their agreement with statements such as “There are some things I am not capable of learning” and “Challenging myself won’t make me any smarter.” Another asks students to respond to the statement, “I came to class prepared.”
Results from pilot tests suggest that the measures are consistent and correlate as one might expect with other indicators, such as grades, class participation, and graduation rates. Still, more research is needed to determine how well they relate to student achievement in the long term.
Whether we can trust the data generated by surveys and self-reports becomes more important as accountability raises the stakes. The CORE districts address this concern by reporting results at the school level rather than the student level, and by having the survey results count for only a small portion of schools’ performance ratings. And the accountability stakes in the CORE network are very low; schools that struggle are supported rather than sanctioned. Still, it remains to be seen how the indicators would fare under tougher conditions.
Concern about the dependability of non-academic measures of student success is one reason that no state has yet adopted such a measure as an indicator of school quality in its Every Student Succeeds Act (ESSA) accountability plan. Although many educators have argued that states should focus their “fifth indicator” on one or more of these competencies, researchers—including many of the measures’ developers—have argued that the available evidence does not yet support high-stakes uses.
The measures’ potential corruptibility stems in large part from their heavy reliance on the perceptions of students, teachers, and parents. Because the measures are subjective, they can be manipulated through coaching if school staff feel undue pressure to raise performance. Assessments that do not depend on self-reporting, such as teacher checklists, may be less susceptible to manipulation. But few such measures have been widely tested in school settings.
These challenges point to the need for a way to make information about assessments widely available to potential users. With support from the Funders Collaborative for Innovative Measurement, RAND is currently developing an online repository of measures of interpersonal, intrapersonal, and higher-order cognitive competencies.
When it launches in the spring, the repository will provide information on more than 200 measures in a compact format that can be searched and reviewed. Entries will include such information as what each measure assesses, which grade levels it is best suited for, how it is scored, and whether there is a fee for using it. They will also include any evidence of how well the measure works in assessing social and emotional learning.
The repository will offer an easily searchable archive of key information about measures that have been used in K-12 schools, ranging from surveys of student engagement and motivation to checklists teachers fill out about students’ performance and organizational skills. Further, by bringing this information together in one place, it will allow researchers and developers to identify non-academic competencies that lack good measures and to target development of new assessments.
But providing more information on assessments is just the first step. Teachers will need time and training to incorporate these competencies into their lessons and to use the new assessments effectively. Still, for the work to be successful, we must be able to measure its impact.
Brian Stecher is a senior social scientist at the nonprofit, nonpartisan RAND Corporation and a professor at the Pardee RAND Graduate School. He is a FutureEd research advisor. Laura Hamilton is associate director of RAND Education, a senior behavioral scientist, and an adjunct faculty member in the University of Pittsburgh’s Learning Sciences and Policy program.