Online programs used for coronavirus-era schooling promise results. The claims are misleading

This story about education software was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

Coronavirus school closures in all 50 states sent educators and parents scrambling to find online learning resources to keep kids busy and productive at home. Traffic to the homepage of IXL, a popular tool that lets students practice skills across five subjects through online quizzes, spiked in March. So did traffic to Matific, which gives students math practice tailored to their skill level, and to Edgenuity, which develops online courses.

All three of these companies try to hook prospective users with claims on their websites about their products’ effectiveness. Matific boasts that its game-based activities are “proven to help increase results by 34 percent.” IXL says its program is “proven effective” and research “has shown over and over that IXL produces real results.” Edgenuity boasts that the first case study in its long list of “success stories” shows how 10th grade students using its program “demonstrated more than an eightfold increase in pass rates on state math tests.”

These descriptions of education technology research may comfort educators and parents looking for ways to mitigate the effects of lost learning time because of the coronavirus. But they are all misleading.

None of the studies behind IXL’s or Matific’s research claims was designed well enough to offer reliable evidence of their products’ effectiveness, according to a team of researchers at Johns Hopkins University who catalog effective educational programs. Edgenuity’s boast takes credit for substantial test score gains that preceded the use of its online classes.

Companies that provide online learning technology try to hook prospective users with claims about their products’ effectiveness. But, as in the case of IXL, which claims its program is “proven effective,” the studies behind the claims weren’t designed well enough to offer reliable evidence, university researchers said.

Misleading research claims are increasingly common in the world of ed tech. In 2002, federal education law began requiring schools to spend federal money only on research-based products. As more schools went online and demand for education software grew, more companies began designing and commissioning their own studies about their products. There is little accountability to make sure companies conduct quality research and describe it accurately, so they’ve been free to push the limits as they try to hook principals and administrators.

This problem has been exacerbated by the coronavirus as widespread school closures forced districts to turn to online learning. Many educators have been making quick decisions about what products to lean on as they try to provide remote learning options for students.


A Hechinger Report review found dozens of companies promote their products’ effectiveness on their websites, in email pitches and in vendor brochures with little evidence or shoddy backing to support their claims. Some companies are trying to gain a foothold in a crowded market. Others sell some of the most widely used education software in schools.

Many companies claim that their products have “dramatic” and “proven” results. In some cases, they tout student growth that their own studies admit is not statistically significant. Others claim their studies found effects that independent evaluators say didn’t exist. Sometimes companies make hyperbolic claims of effectiveness based on a kernel of truth from one study, even though the results haven’t been reproduced consistently.

The Matific study that found a 34% increase in student achievement, for instance, includes a major caveat: “It is not possible to claim whether or how much the use of Matific influenced this outcome as students would be expected to show some growth when exposed to teaching, regardless of what resources are used.”

IXL’s research simply compares state test scores in schools where more than 70% of students use its program with state test scores in other schools. This analysis ignores other initiatives happening in those schools and characteristics of the teachers and students, such as teacher experience or family income, that might influence performance.


Edgenuity boasts of contributing to an eightfold increase in the rate of 10th graders’ passing state math tests at Altamont High School in Utah. The claim is based on measuring growth starting two years before the school introduced Edgenuity, rather than just one year before. Over two years of actually using Edgenuity, 11th grade pass rates dropped in the featured school, ninth grade pass rates fell, then recovered, and 10th grade pass rates doubled – a significantly less impressive achievement than the one the company highlights.

Matific did not respond to repeated requests for comment. IXL declined to comment on critiques that its studies weren’t adequately designed to make conclusions about the impact of its program on student test scores. Edgenuity agreed it shouldn’t have calculated student growth the way it did and said it would edit its case study, though at the time of publication, the misleading data still topped its list of “success stories.”

More than $12 billion spent on education programs

When shoddy ed tech research leads educators to believe programs might really help their students, there are consequences for schools as well as taxpayers. Districts spent more than $12 billion on ed tech in 2019.

In some places, principals and administrators consider themselves well-equipped to assess research claims, ignore the bunk and choose promising products. But many people making the decisions are not trained in statistics or rigorous study design. They don’t have the skills to assess whether promising findings in one group of students may realistically translate to their own buildings. And, perhaps most importantly, they often don’t have the time or resources to conduct follow-up studies in their own classrooms to assess whether the products paid for with public money actually worked.

“We’re spending a ton of money,” said Kathryn Stack, who spent 27 years at the White House Office of Management and Budget and helped design grant programs that award money based on evidence of effectiveness. “There is a private-sector motive to market and falsely advertise benefits of technology, and it’s really critical that we have better information to make decisions on what our technology investments are.”

In 2006, Jefferson County Public Schools, a large Kentucky district that includes the city of Louisville, began using SuccessMaker. The Pearson product is designed to supplement reading and math instruction in kindergarten through eighth grade. Records provided by the district show it spent about $4.7 million on new licenses and maintenance for SuccessMaker from 2009 to 2014. (It was unable to provide records about its earlier purchases.)

Typically within the district, school principals get to pick the curriculum materials used in their buildings, but sometimes purchases happen at the district level if administrators find a promising product for the entire system.

SuccessMaker, which boasted on its webpage that it was “proven to work” and designed on “strong bases of both underlying and effectiveness research,” never lived up to that promise. In 2014, the district’s program evaluation department published a study on the reading program’s impact on student learning, as measured by standardized tests. The results were stark.

“When examining the data, there is a clear indication that SuccessMaker Reading is not improving student growth scores for students,” the evaluation said. “In fact, in most cases, there is a statistically significant negative impact when SuccessMaker Reading students are compared to the control group.”

The district stopped buying new licenses for the education software – but only after it had spent millions on a product that didn’t help student learning.


Companies can grade their own software

That same school year, Pearson paid for a study of SuccessMaker Reading in kindergarten and first grade. The company said the study found positive effects that were statistically significant, a claim it continues to make on its website, along with this summary of the program: “SuccessMaker has over 50 years of measurable, statistically significant results. No other digital intervention program compares.”

An independent evaluator disagrees.

Robert Slavin, a Johns Hopkins professor, wanted a watchdog for the companies selling schools software purchased with federal money. The 2015 federal education law, the Every Student Succeeds Act, sets guidelines for three levels of evidence that qualify products for purchase: strong, moderate and promising. Companies get to decide for themselves which label best describes their studies. A year after the law passed, Slavin started Evidence for ESSA.

“Obviously, companies are very eager to have their products be recognized as meeting ‘ESSA Strong,’ ” Slavin said, adding that his group is trying to fill a role he hopes the government will take on. “We’re doing this because if we weren’t, nobody would be doing it.”

Slavin’s organization tries to fill the gap by giving schools an independent assessment of the research that companies cite for their products.

When Slavin’s team reviewed SuccessMaker research, it found that the well-designed studies of the education software showed no significant positive outcomes.

Slavin said Pearson contested the Evidence for ESSA determination, but a follow-up review by his team returned the same result. “We’ve been back and forth and back and forth with this, but there really was no question,” Slavin said.

Pearson stands behind its findings. “Our conclusion that these intervention programs meet the strong evidence criteria … is based on gold-standard research studies – conducted by an independent third-party evaluator – that found both SuccessMaker Reading and Math produced statistically significant and positive effects on student outcomes,” the company said in a statement.

Though the law requires schools to spend federal money only on research-based products, there’s little accountability to ensure that companies such as Matific conduct quality research and describe it accurately in their marketing materials.

SuccessMaker didn’t fare well when judged by another evaluator, the federally funded and operated What Works Clearinghouse.

Launched in 2002, the What Works Clearinghouse assesses the quality of research about education products and programs. It did a review of SuccessMaker Reading in 2009 and updated it in 2015. The conclusion: The only Pearson study of the program that met What Works’ threshold for research design showed the program has “no discernible effects” on fifth and seventh graders’ reading comprehension or fluency. (The Pearson study included positive findings for third graders that What Works did not evaluate.)

The Hechinger review of dozens of companies identified seven instances of companies giving themselves a better ESSA rating than Slavin’s site and four examples of companies claiming to have research-based evidence of their effectiveness when What Works said they did not. Two other companies tied their products to What Works’ high standards without noting that the organization had not endorsed their research.


‘There isn’t a ton of great evidence’

Despite almost 20 years of government attempts to focus on education research, “we’re still in a place where there isn’t a ton of great evidence about what works in education technology,” said Stack, the former Office of Management and Budget employee. “There’s more evidence of what doesn’t work.”

In fact, out of 10,654 studies included in the What Works Clearinghouse in mid-April, only 188 – less than 2% – concluded that a product had strong or moderate evidence of effectiveness.

Part of the problem is that good ed tech research is difficult to do and takes a lot of time in a quickly moving landscape. Companies need to convince districts to participate. Then they have to provide them with enough support to make sure a product is used correctly, but not so much that they unduly influence the final results. They must also find (and often pay) good researchers to conduct a study.

When results make a company’s product look good, there’s little incentive to question them, said Ryan Baker, an associate professor at the University of Pennsylvania. “A lot of these companies, it’s a matter of life or death if they get some evidence up on their page,” he said. “No one is trying to be deceitful. (They’re) all kind of out of their depth and all trying to do it cheaply and quickly.”

Many educators have begun to consider it their responsibility to dig deeper than the research claims companies make. In Kentucky’s Jefferson County, administrators changed their approach to picking education software, in part because of pressure from state and federal agencies.

Felicia Cumings Smith, the assistant superintendent for academic services in Jefferson, joined the district two years ago after working in the state Department of Education and at the Bill and Melinda Gates Foundation. (The Gates Foundation is one of the many funders of The Hechinger Report and is a partial funder of USA TODAY’s education team.) Throughout her career, she has pushed school and district officials to be smart consumers in the market for education software and technologies. At Jefferson, she said things have changed since the district stopped using SuccessMaker. The current practice is to find products that have a proven track record of success in similar student populations.

“People were just selecting programs that didn’t match the children that were sitting in front of them or have any evidence that it would work for the children sitting in front of them,” Cumings Smith said.

Jefferson County, one of the 35 largest districts in the country, has an internal evaluation department to monitor the effectiveness of products it adopts. Many districts can’t afford that. Even in Jefferson, the programs that individual principals choose to bring into their schools get little follow-up evaluation.

A handful of organizations have begun to help schools conduct their own research. Project Evident, Results for America and the Proving Ground all support efforts by schools and districts to study the impact of a given product on their own students’ performance. The ASSISTments E-TRIALS project lets teachers perform independent studies in their classrooms. This practice helps educators better understand whether products that seem to work elsewhere are working in their own schools. Still, these efforts reach relatively few schools nationwide.

Sudden school closures have complicated the problem as educators rushed to find online options for students at home. “This is such a crisis that people are, quite understandably, throwing into the gap whatever they have available and feel comfortable using,” Slavin said.


Slavin sees an opportunity in the chaos. Educators can use the next several months to examine ed tech research and become savvier consumers, ready to use proven strategies, whether they involve education software or not, when schools do reopen.

“Students will catch up, and schools will have a taste of what proven programs can do for them,” he said. “At least, that is my hope.”

This article originally appeared on USA TODAY: Coronavirus online school: Claims by games, programs are misleading
