Assessment beyond the assessments: Using data visualization to promote ILSA literacy

Abstract

Presenter(s): Mariusz Gałczyński

Since the mid-1990s, over half of the world’s countries have taken part in international large-scale assessments (ILSAs) carried out by the OECD and the IEA. Moreover, 35 of 39 “advanced economies” (IMF, 2017) have participated in at least three of the four major ILSAs: PISA, TIMSS, PIRLS, and ICCS. Such widespread participation across perpetual testing cycles not only signifies a tremendous investment of fiscal and human resources over more than two decades but also represents fidelity to a global testing culture that has permeated all areas of education (Smith, 2016). In the search for best practices among “top performers,” ILSA league tables have influenced stakeholders’ perceptions of educational quality and have globalized education governance via policy reactions that assume deficits in the systems that fail to rank at the very top (Ingersoll, 2003; Rutkowski et al., 2010).

In order to interpret ILSAs more validly and meaningfully, the concept of ILSA literacy demands that we frame results in a much broader context (Gałczyński, 2014). This is because the notion of literacy only gains meaning when “situated within a theory of cultural production and viewed as an integral part of the way in which people produce, transform, and reproduce meaning” (Freire & Macedo, 1987). ILSA literacy thus applies insights from comparative education and globalization studies to the framework of assessment literacy, which involves the acquisition of a varied range of skills needed to develop a critical understanding of the functions and roles of assessment within education and society (O’Loughlin, 2013).

This poster showcases data visualization as a research method (Chen, 2006; Keim et al., 2013; Kirk, 2016) by reorganizing student achievement scores into comparison matrices. Each depicted matrix offers a meta-analysis of TIMSS, PIRLS, ICCS, and PISA, illustrating longitudinal trends and generating baseline references. While the term “meta-interpretation” (Weed, 2005) more precisely describes the “interpretive synthesis” of comparing analogous student populations within participating countries, data visualization via comparison matrices enables quick visual identification of specific kinds of trends and incongruities in student achievement. With scores replaced by symbols denoting achievement relative to international scale averages, results are juxtaposed across multiple age/grade levels and in relation to multiple literacies. Rows group together relatively coincident ILSA administrations in order to depict achievement results across roughly contemporary student cohorts.
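As a minimal sketch of how such a symbol-based comparison matrix might be assembled, consider the Python fragment below. The country names, scores, and the tolerance band for “near the average” are illustrative assumptions, not actual ILSA results; the matrices on the poster are built from the published international reports.

    import pandas as pd

    # Fabricated mean scale scores for illustration only; real values come from
    # the TIMSS, PIRLS, ICCS, and PISA international reports. These ILSAs report
    # on scales whose international average (centerpoint) is 500.
    scores = pd.DataFrame(
        {
            "TIMSS Math (G4)": {"Country A": 541, "Country B": 498, "Country C": 462},
            "PIRLS Reading": {"Country A": 555, "Country B": 512, "Country C": 489},
            "PISA Science": {"Country A": 528, "Country B": 493, "Country C": 470},
        }
    )

    INTL_AVG = 500   # international scale centerpoint
    TOLERANCE = 10   # assumed band treated as "at the average" for display

    def to_symbol(score: float) -> str:
        """Replace a raw score with a symbol relative to the international average."""
        if score > INTL_AVG + TOLERANCE:
            return "▲"  # above the international scale average
        if score < INTL_AVG - TOLERANCE:
            return "▼"  # below the international scale average
        return "●"      # near the international scale average

    # The comparison matrix: rows are educational systems, columns are assessments,
    # and each cell shows standing relative to the international scale average.
    matrix = scores.apply(lambda column: column.map(to_symbol))
    print(matrix)

Replacing raw scores with symbols is the design choice that makes incongruities pop out visually: a row of ▲ with a single ▼ is immediately legible in a way that a row of four-digit scale scores is not.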

Three representative comparison matrices are depicted on the poster: The first features the eight educational systems that fully participated in all four major ILSA cycles between 2015 and 2016; rather than drawing conclusions about the quality of a country’s educational system from the results of any single ILSA, we become able to compare student achievement across all content areas targeted in TIMSS, PIRLS, ICCS, and PISA. The second matrix is also constructed with ILSA data from 2015-2016, but it narrows our focus to participating countries from the Arab States and Africa; by removing countries of the “Global North,” we are prompted to consider how the developing world participates in ILSAs and what is represented by the “international” scale average in league tables. The third matrix charts the history of ILSA participation for select European countries, revealing trends of improving or declining achievement over time.
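To illustrate the subsetting and longitudinal charting behind the second and third matrices, the fragment below continues the sketch above (reusing its pd, to_symbol, and matrix). The regional membership list, cycle years, and scores are placeholders, not actual participation data.

    # Second matrix: restrict the rows to one regional grouping. The membership
    # list here is a placeholder, not the actual set of participating systems.
    arab_states_and_africa = ["Country B", "Country C"]
    regional_matrix = matrix.loc[matrix.index.intersection(arab_states_and_africa)]

    # Third matrix: from a long-format table of (country, cycle, score), a pivot
    # puts testing cycles in columns so that improving or declining achievement
    # reads left to right. Scores are again fabricated for illustration.
    history = pd.DataFrame(
        {
            "country": ["Country A", "Country A", "Country B", "Country B"],
            "cycle": [2011, 2015, 2011, 2015],
            "score": [531, 541, 505, 498],
        }
    )
    trend_matrix = (
        history.pivot(index="country", columns="cycle", values="score")
        .apply(lambda column: column.map(to_symbol))
    )
    print(regional_matrix)
    print(trend_matrix)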

As illustrations of ILSA literacy, these comparison matrices offer baseline reference points that elicit broad questions about the purpose, implementation, and consequences of ILSAs: What kinds of data do we need to evaluate educational quality and justify policy transfer between systems? How should league tables be constructed to inspire tangible goals from “global” comparisons? And with even more data available from sources like ICILS or PISA 2018’s global competence framework, which ILSA testing cycles should continue into the future and for how long?

Mariusz Gałczyński is an independent education researcher and specialist; this poster represents his work as a doctoral candidate (ABD) at McGill University. From 2016 to 2019, he served as Managing Director of CIES. He currently works as a secondary English Language Arts teacher at International Studies Charter School in Miami’s Little Havana neighborhood, as well as a Miami-based instructor for the Learning Across America program, a partnership between Borough of Manhattan Community College and Cultural Hi-ways. His research interests and publications span interdisciplinary multicultural education, social justice and equity, teacher education and professionalization, and assessment literacy.

Contact Mariusz Gałczyński on Twitter or Instagram @MariuszEDU.

Responses

  1. Jorge Delgado

    Interesting study, Mariusz. One of my main concerns about education is when we limit what students should learn from their educational experiences: limited curriculum for limited expected learning. Using standardized testing simplifies it even more and ignores the role of context. I like that you recognize the limitations and concerns with the widespread use of international tests and how literacy is necessary. That’s even more complicated if you add quality and policy transfer. Nice job!

