A Scientist Assaults The Science Of Testing

Bruce Alberts wants students to interpret data, not parrot it

Bruce Alberts' quest to reform science testing began more than a decade ago. Teaching biochemistry to medical students at the University of California at San Francisco, he was appalled to find they "were not really learning anything." The future doctors easily parroted back biochemical terms but failed to grasp the concepts. The culprit? Multiple-choice tests, Alberts charges, similar to those that form the backbone of America's vast standardized testing industry. Only when he and UCSF colleague Diane S. Colby turned to essay questions were they able to boost students' understanding of biochemistry--and their interest in it. "It showed the power of tests to shape what students learn and how they study," says Alberts.

Now, as president of the National Academy of Sciences, the 59-year-old Alberts is using his prestigious post to try to end reliance on standardized multiple-choice tests. He's especially taking aim at the biology SAT II test, taken by 50,000 high schoolers each year. By emphasizing memorization and word association over conceptual knowledge, these tests are poor judges of students' abilities, he argues. Worse, the relentless obsession with scores has had a pernicious effect on education: Teachers prepare students to be good test-takers but not necessarily good thinkers. And universities contribute to the vicious cycle by depending on test scores for admissions decisions.

CRITICAL THINKING. The toll is immense, Alberts believes. Our current science education system has turned off countless youths to the thrill of science--and left too many people without the analytical skills needed to, say, interpret claims of global warming, or the kind of critical thinking that is valuable in many aspects of life, such as analyzing financial data. "We're wasting everyone's time," Alberts warns. "We're saying science is some boring chore and the nation really doesn't need it."

Alberts is one of a growing number of critics of widespread standardized testing. For years, opponents have argued that the exams don't actually measure anything important to students' eventual success, either in academia or later life. Now, there's increasing evidence to back them up. A study published in June by Wendy M. Williams, professor of human development at Cornell University, and Yale University psychologist Robert J. Sternberg compared students' scores on the psychology Graduate Record Examination (GRE) with the students' later accomplishments. With one small exception, they found no link between scores and any measure of performance other than first-year grades. And even that correlation was weak. "It's obvious that the tests are not corresponding to real performance," concludes Williams. (The exception was a weak link, seen in male students only, between a part of the GRE designed to test analytical reasoning and the quality of dissertations.)

Meanwhile, physicists have been raising similar concerns. "The GRE physics subject test may do more harm than good," says Harvard University physicist Howard Georgi. Many top graduate programs rarely accept anyone whose score falls below a certain level. That slams the door on students who might be better scientists than some who ace the tests. For instance, among his own graduate students, "I've observed that women did surprisingly poorly on the GRE considering what good physicists they are," Georgi says. When he asked one outstanding student why her score was so low, "she told me that the physics GRE was simply too 'nerdy' to be taken seriously by an intelligent woman," he recalls.

Promising physicists and psychologists aren't the only ones hurt by the standardized gatekeepers. Barred by its board of regents from considering race or gender in admissions decisions, the University of California system now faces dramatic declines in black and Hispanic students. Of 196 African-American students who applied to the University of California at San Diego medical school, for example, not one was admitted. The reason: Test scores were used as a key criterion. That's a serious misuse of the exams, critics say. "If the tests don't predict anything important, then why are we using them?" asks Cornell's Williams.

What's ironic is that standardized testing once played a very different role. The field sprang out of a need during World War I to quickly evaluate huge numbers of potential soldiers. "For rapid, crude selection, these tests are the best you can do," explains Paul Black, professor emeritus of science education at King's College London. According to the critics, that's precisely the problem: the tests aren't good for much beyond crude screening. Later, the tests helped students from public high schools open the doors to snobbish private universities. "Originally, people looked on the tests as an opportunity rather than as a gatekeeper," explains Hessy Taft, senior examiner for Educational Testing Service Inc. (ETS), which develops the SAT and other tests.

OPPORTUNITIES? But now, the whole educational establishment has come to rely on the tests. Scores even affect property values, as affluent parents choose homes in school districts with high test scores. "It's almost a crime the way our society is fixated on scores," says Yale's Sternberg. People increasingly look at the tests as gatekeepers, not opportunities--a fact that no one but admissions officers really likes. "We use the tests in the wrong way," says Harvard's Georgi. Instead, he envisions a fairly easy test on which every competent person would do well, while those who really couldn't handle graduate school would stand out. In other words, a return to the idea of very crude selection. Even ETS cautions that scores predict little more than first-year grades. "No rational person would tell you that you should base admission only on test scores," says ETS' Taft. And yet the reality is that many admissions committees at elite universities--facing many more qualified candidates than they can handle--do just that.

That's why Bruce Alberts has been pushing for change on two fronts. One is pressuring universities to place less reliance on the tests. Alberts' dream is for top universities to tell high school students not to take the biology SAT exam at all. The universities would rely instead on criteria such as hands-on experience doing science.

Some university presidents, including Stanford University's Gerhard Casper, have been supportive of the idea. But change has been slow. Alberts hopes to force the issue during the annual meeting of the Association of American Universities in mid-October. His effort may get a boost from a recent University of California task force. Worried about the huge drop in minority admissions, the task force recommended that the SAT and other standardized tests be dropped as requirements.

At the same time, Alberts has been toiling on another front--trying to make changes in the actual biology SAT. Here, with the help of like-minded educators at ETS and the College Board, which oversees the exam, he has helped bring about modest progress. His chief ally is Indiana University biology professor J. Jose Bonner. Like Alberts, Bonner decries the way science teaching--and testing--has come to focus on facts and conclusions rather than the process of science.

ROOTING OUT ROTE. With Alberts pushing from the outside and Bonner and ETS's Taft from the inside, ETS has made two changes to the biology SAT that will take effect this fall. One is fewer questions (80 instead of 95) with a higher percentage that ask students to interpret data rather than to regurgitate facts (30% instead of 20% to 25%). "By increasing the part that's reasoning-oriented, we'll give teachers the freedom to back off requiring so much memorization," Bonner explains. The other change is offering students a choice of two versions of the exam, one concentrating on ecology and one focusing on molecular biology. That way, students won't have to learn so many facts.

For the testing industry, these changes "were so dramatic they almost didn't go through," says Bonner. The major hurdle: ETS statisticians who worried that they wouldn't be able to ensure that, say, a 600 score meant the same thing year after year.

But to Alberts, the changes are frustratingly small and slow in coming. Why care if a 600 score always means the same thing if it doesn't measure anything important, he asks. It's far better to substitute essay tests, which tap students' creative juices--and encourage teachers to teach real problem-solving skills. Such tests already exist. Britain, for instance, firmly resisted jumping on the multiple-choice bandwagon. Here at home, ETS has added an essay section to the Advanced Placement tests. At Cornell, Williams is producing a new psychology test she hopes could replace the psychology GRE. And in Alberts' old biochemistry course at UCSF, the tests he and Colby developed have even had a remarkable effect: Some students report that the exams' problem-solving puzzles stimulate them to want to learn more biochemistry--instead of viewing the course as yet another dull hurdle on their way to their M.D.s. That's the kind of testing, says Alberts, that can lift education out of its doldrums and "bring both science and business what they need: people who think."
