Measuring learning: Where do we stand and where do we go?

Elizabeth Buckner, Research Associate, Education Policy and Data Center

With a new set of global development goals on the post-2015 horizon, the education community has been working to shift the focus of, and investment in, education from universal access to access plus learning.

Through a broadly inclusive process of stakeholder consultations, The Brookings Institution and the UNESCO Institute for Statistics (UIS) convened the Learning Metrics Task Force (LMTF), which outlined an agenda for developing concrete and globally comparable indicators of learning.

First asking what learning is important for all children, the task force identified seven learning domains it considers essential to preparing "children and youth for their future lives and livelihoods": literacy, numeracy, science and technology, culture and the arts, social and emotional learning, physical well-being, and learning approaches and cognition.

We know, however, that the competencies young people need in their future lives may not be what is taught and tested in school. Therefore, among the small set of indicators for tracking learning at the global level that the LMTF identified as both feasible and important was one to track "exposure to learning opportunities across all seven domains of learning." Tracking students' exposure to learning remains an ambitious goal, however, in part due to a lack of publicly available, cross-national data on the content of national education systems.

To bring empirical data to bear on students' exposure to the various domains, FHI360's Education Policy and Data Center (EPDC) undertook Phase 1 of the National Learning Assessments Mapping Project. The project asked: to what extent are the seven domains identified by the LMTF reflected in current national assessments and exams? It then classified, categorized, and mapped publicly accessible subject-specific information for a randomly selected group of 53 countries drawn from a list of high-priority low- and middle-income nations.

In this sense, EPDC's National Learning Assessments Mapping Project should be understood as a yardstick for gauging the feasibility of a globally coordinated effort to track students' exposure to the seven learning domains.

What have we learned from this process?

First, there are many obstacles to data collection. National assessment data are not always publicly available: of the 53 nations initially selected for the analysis, 42 had information on national learning assessments, and of these, only 35 had information on which subjects are tested and at what levels.

Additionally, it was not always easy to map country-specific assessment information onto the LMTF’s seven learning domains, largely because school learning is typically structured around subject matter, which is quite different from the LMTF domains.

In some cases, students may be exposed to numerous learning domains within one curricular subject. For example, they may learn important skills from the learning approaches and cognition domain in the context of their reading courses. Information on national assessments will not necessarily reflect this integration. For the purposes of the National Learning Assessments Mapping Project, we only knew that a country assessed learning approaches and cognition if it was specifically stated as an examination topic, such as "critical thinking," "problem-solving skills," or "quantitative and qualitative reasoning." In fact, the best way to assess the critical thinking that students will need in their futures may actually be in an applied context, integrated into math or other subjects. Moving forward, we must think carefully about how the LMTF's seven domains map onto school subjects, and what to do when school subjects cover many learning domains.

Second, Phase 1 of the National Learning Assessments Mapping Project shows that even as children are exposed to a greater number of learning domains as they grow, they are seldom tested on more than four domains. The vast majority of national exit exams and standardized tests focus substantially on literacy and numeracy, and of all the exams reviewed in N-LAMP, only 15% assess student performance on five or more domains. Literacy and numeracy are tested in 140 of 142 identified assessments, while social and emotional learning is assessed in 25 assessments, physical well-being in eight, and learning approaches and cognition in seven.

The two least-tested domains are, unsurprisingly, also the hardest to measure. This focus on basic skills and knowledge makes a lot of sense: literacy, numeracy, and content knowledge of science and social science are much easier to assess in a formal school setting than are other important skills endorsed by the LMTF, including resilience, leadership, and moral and ethical values.

This raises important questions for moving forward: how can we encourage countries to develop the broad range of skills all children need when we do not clearly understand how these meta-cognitive and non-cognitive skills can best be incorporated into the structure of formal schooling? These debates are just as important for developed countries as they are for developing nations, given the growing concern over issues such as the lack of free play and physical movement among elementary school children even here in the United States.

It is also possible that not all domains need to be incorporated into the formal school setting, and even more likely that not all skills need to be tested. However, we know from countless studies around the world that teachers tend to emphasize subjects tested on national exams; in today's outcomes-oriented era, teachers are often held accountable for student performance on tested subjects, while untested subjects receive less time and emphasis.

What does this mean for LMTF and the Post-2015 Agenda?

The National Learning Assessments Mapping Project has pointed out the need for additional data collection and has also raised some important questions for stakeholders to grapple with. To better gauge exposure to all seven domains, future data collection efforts must focus on curricular and textbook content as well as assessment data, in order to provide better data on students' exposure to learning opportunities above and beyond their inclusion in national assessments.

It has also raised some important questions for the LMTF. First, although the LMTF aims to track exposure to all seven domains, does the LMTF consider all seven equally important? Are students expected to be exposed to all seven in their formal school settings, or are some better developed in informal settings? This is an area for ongoing discussion.

Another area for ongoing discussion is the relationship between curricular content and assessment policy and practice. Does the LMTF's recommendation to track exposure to all seven domains imply assessment of some kind? If so, how? This question is particularly difficult to answer for the domains that are hard to measure in standardized formats and the two least likely to already be incorporated into assessments: social and emotional learning, and learning approaches and cognition. Although difficult, it may be worth asking what competency in these domains looks like apart from standardized exams, and how they can be incorporated into the post-2015 educational agenda as well.
