The power of the ‘evidence mindset’ in education technology
Photo by Thomas Park on Unsplash.
June 28, 2024
By fostering the ‘evidence mindset’ in technology organizations targeting education, funders can positively impact millions of learners worldwide, writes Natalia Kucirkova.
As a multi-billion-dollar industry with global reach, Educational Technology (EdTech for short) has the unique potential to transform the education system by providing access to educational materials at scale through personalized approaches. However, post-pandemic reports such as the UNESCO GEM Report have made clear that over the past 10 years, EdTech has been introduced into classrooms more on business terms than on terms beneficial for learners. The industry has not moved the needle on educational inequity or on improving fundamental educational outcomes. As we look ahead to Education 2050, how can EdTech realign with its core mission of scaling positive educational impact?
When data drives sales instead of learning
Data is crucial for gauging impact, but not all data collected by EdTech is used to support all children equitably. Like other businesses in the technology sector, some EdTech companies have been found misusing data for advertising, resale, and prolonging children's screen time. When U.S. researchers investigated the most popular educational apps, they found manipulative features designed to keep children using the apps for as long as possible rather than to advance their learning.
Not surprisingly, data concerns, coupled with the implications of generative AI, have dominated the discussion around EdTech’s quality. In 2024, that discussion has become entangled with debates on data safety, the addictive nature of social media, and the umbrella concern of governments seeking a comprehensive solution to children’s “screen time.” Yet, while it shares parallels with social media in its data use, EdTech is, or should be, a unique category, given that it is fundamentally designed for education.
Evaluating EdTech: current solutions
In an ideal world, EdTech data would be leveraged to assess functionality, enabling researchers to objectively compare different technologies and provide quality recommendations. In reality, however, EdTech companies typically do not share data with scientists or governments, and when they do, they do so selectively, hindering unbiased evaluations. Over the years, parents and teachers have sought a list of recommended apps, but the academic community, traditionally cautious about endorsing commercial entities, has refrained from validating specific tools or products.
International research organizations, frequently operating as private consultancy firms, offer various badges and seals of approval to EdTech companies. While these provide some indication of quality, they do not offer a scientific assessment of what works for which child and in which context, and thus cannot be used for national implementation. The absence of a government entity overseeing the educational value of EdTech, coupled with academics’ reluctance to endorse specific tools, is further complicated by an oversaturated EdTech market driven more by revenue considerations than by learning impact.
There is an ongoing discussion about governments creating national catalogues of recommended EdTech resources that would help teachers and parents navigate the many digital options, distinguishing solutions driven by learning from those driven by business motives.
Together with certifications, national catalogues and recommended lists serve as practical incentives for companies seeking visibility among buyers, such as schools and homes. Some certifications, particularly those created by researchers and built on research-based criteria, can be a good signal for policy-makers and schools interested in tools that have been empirically tested for their added value to learning and pedagogy.
Impact evaluations in education have always relied on diverse measures employed by diverse authorities; it is widely acknowledged that major clearinghouses apply disparate standards, resulting in divergent recommendations on what works. Nevertheless, we do have rubrics and frameworks for assessing impact in specific domains, such as personalized learning. These frameworks can serve as criteria for comparing tools against one another and as the basis for independently verified recommendations in national catalogues and certifications.
Expert consensus on recommended resources, such as the Smart Buys by the World Bank and UNESCO, highlights why objective measurement based on transparent and continuous data is crucial to keep pace with the evolving landscape of technologies. The International Certification of Evidence of Impact in Education has consolidated the various measurements, frameworks, and criteria to arrive at common benchmarks for EdTech evaluations.
The complexity of the EdTech ecosystem necessitates such a multi-faceted approach to evaluating and supporting “what works.” But a holistic approach to evidence evaluations can only be sustained if it is underpinned by a strong “evidence mindset.”
The ‘evidence mindset’
The evidence mindset is an emerging term in the education, philanthropy, and business sectors. Its adherents seek to instil the view that evidence is not an external, quick stamp of approval but something embedded and intrinsic to an individual or an organization. Organizations with an evidence mindset recognize that research is a process, not a product to be purchased for validation. The research process can uncover counter-evidence, and organizations with an evidence mindset transparently report both positive and negative results.
Research, in essence, can identify patterns and features for improvement. Companies embracing this understanding evaluate their solutions with measures characterized by the “R principles”: regular, reliable, relevant, and responsible. This approach feeds into a continuous cycle of improvement tailored to different technology solutions and pedagogical questions, something that researchers describe as the EVER routine.
Nurturing the evidence mindset in the EdTech ecosystem is something all stakeholders can do: EdTech venture capitalists, by running workshops and mentorship sessions within their portfolio companies; EdTech philanthropists, by subsidizing research studies and investing in academia-industry partnerships; and governments, by creating financial incentives that clearly define the metrics they expect from funded initiatives. Adopting the “evidence of impact” mindset is crucial for paving the way for the next generation of EdTech, one built on proven effectiveness rather than unfounded hype.