Bottom-up education quality improvement with a mobile app!


Presenter(s): Mark Hoeksma, Edukans; Sila Ünal, University of Amsterdam; Amber van Heinsbergen, University of Amsterdam; Annemijn van Zijverden, Utrecht University; Anouk Kopp, Utrecht University; Isabel de Haan, Utrecht University

In many low- and middle-income countries, the focus in education improvement has shifted from access to quality. The need for simple and psychometrically sound instruments to assess education quality is well recognised. ICT applications could increase efficiency and stakeholder participation and unlock process improvement through data aggregation, but this potential is scarcely utilised (Solar, Sabattin & Parada, 2013). Against this background, an international education NGO from the Netherlands has developed an online mobile app for school assessment and tested it in Ethiopia and Suriname. The tool is both standardized, because it defines school quality through a set of fixed indicators, and participatory, as it promotes a bottom-up approach (Cheng & Moses, 2016). It engages all stakeholders in a joint school assessment procedure resulting in action points for school development. It focuses on quality in five domains: learning environment (LE), teaching (T), learning (L), school leadership (SL), and parent and community involvement (PC).

This poster provides a theoretical framework and presents the findings of two empirical studies carried out in 2019. Both studies compared the implementation of the application in Ethiopia and Suriname. The first study examined the extent to which working with the app supported a bottom-up approach. The second study focused on the reliability, construct validity and consequential validity of the definition of quality in the five domains mentioned above.

Construct validity was defined as the extent to which an assessment instrument elicits what it intends to measure (Messick, 1995). Consequential validity was understood as the meaningfulness, fairness, and transparency of the assessment procedure (Cizek et al., 2008; Frederiksen & Collins, 1989; Linn et al., 1991). For reliability, a quantitative analysis of the internal consistency of the scales was carried out. Qualitative data collection methods were used to gain insight into the extent to which participatory approaches were applied and into the validity of the instrument.

The results of the first study showed that the app supported a bottom-up approach in both country contexts. However, in many cases parents and community members were reluctant to interact; particularly in Suriname, many felt restrained in expressing themselves. The reliability of the assessment instrument proved sufficient in both countries. The results also indicated that construct validity was reasonably strong in both contexts, although in Ethiopia users missed several indicators that in their perception were crucial (e.g. government education policy). The findings on consequential validity were mixed: limited time for preparation and the novelty of working with a tablet were found to be obstacles to fairness, meaningfulness and transparency.

The results highlight the importance of understanding the often complex local contexts in which the app is used, and lead to recommendations for training strategies and contextualised use.

2 Responses

  1. Connor

    This is a fascinating piece of research, thank you for sharing. What compelled you to sample schools in Ethiopia and Suriname? Did you conduct a needs assessment before testing the application in these locations? And how did you react to the stakeholder reticence?
