
The recent domination of U.S. spelling bees by home-schooled students isn't hard to explain: many of their parents see rote memorization as a valuable learning technique, while this old-fashioned practice is frowned upon by most teachers who learned their craft in schools of education.  John Derbyshire explores how indispensable memorization is to advanced learning in a National Review article that's loaded with clever mnemonic devices (e.g. "One idle damn Sunday, Dad killed cheating thief and lied to cover it," as a way to remember the Ten Commandments).

"Thanks for the memories," by John Derbyshire, National Review, June 25, 2001. Article not available at www.nationalreview.com.

In Washington, DC, where the high school graduation rate is only 57 percent, what happens to the other 43 percent?  Many later try to earn their diplomas by passing the GED, but only 34 percent of those taking the GED exam in DC pass it (compared to 70 percent nationwide).  In this week's Washington City Paper, Garance Franke-Ruta puts a human face on these glum statistics in a long cover story on dropouts trying to turn their lives around. 

"Filling in the blanks," by Garance Franke-Ruta, Washington City Paper, June 8, 2001. http://www.washingtoncitypaper.com/cover/cover.html

National Center for Education Statistics

The federal government's National Center for Education Statistics publishes two indispensable volumes each year, without which essential education data would be hard to come by. One is The Digest of Education Statistics, consisting of hundreds of pages of numbers sans interpretation or commentary. The other is the more selective and subjective Condition of Education. The latest edition is just out. Weighing in at almost 300 pages, it is largely free of the creeping politicization that beset some of these volumes in the late Clinton administration. And it's full of telling factoids and trend lines. Here are a few from the section on teachers:

  • Among (1992-93) college graduates who became teachers (within 5 years of graduation), 55 percent majored in education. (So much for subject mastery.)
  • Those who didn't prepare for teaching careers while in college but became teachers anyway were more likely (35%) to have scored in the top quartile of their entering college class than those (14%) who both prepared to teach and became teachers. (Sounds to me like praise for alternative certification.)
  • Those who taught in private schools were more likely (33%) than those who taught in public schools (15%) to have ranked in the top quartile of their entering college class. (Private school teachers, of course, need not be certified.)
  • Those who had taught but were no longer teaching in 1997 had higher scores than those who remained in teaching. (The ablest leave the classroom.)
  • In 1999, 41% of U.S. eighth
  • ...

The Colorado Commission on Higher Education (CCHE) wanted to determine whether the state's ed schools were tailoring their teacher training programs to the state's academic standards for students as well as to new performance standards (set by the state Board of Education) for schools of education. The CCHE asked the National Association of Scholars (NAS) to examine four teacher ed programs. In turn, the NAS commissioned a study by David Warren Saxe, a professor of social studies education at Penn State (and a member of the Pennsylvania state board of education). His report was submitted to Colorado authorities last year but has only now become public.  After examining reams of documents describing the teacher training programs (which were provided by the ed schools themselves) and visiting all four schools, Saxe has penned a devastating critique.   His report paints a vivid picture of ed schools bowing to the gods of progressivism and national accreditation rather than to traditional academic content and effective instruction, notwithstanding state policies requiring the latter. Saxe finds many of the training programs to be overly politicized, with courses and training sessions emphasizing social justice, cultural relativism, racism and homophobia.  Much of the coursework is at best irrelevant and at worst destructive to training teachers who will be expected to help their students master the state's content standards. We wonder how many others among the nation's hundreds of teacher preparation programs could be described in similar terms.  You can obtain a copy of the report by sending a...

The charter schools of the Lone Star State have been much in the news of late, particularly as the legislature grappled with a possible moratorium on their creation. That didn't happen, but people are understandably interested in how these schools are faring. After all, by 1999-2000, there were 142 of them enrolling nearly 25,000 youngsters. The Texas Public Policy Foundation (TPPF) commissioned two Texas A&M economists to examine four years' worth of charter data. The most encouraging result: Texas's so-called at-risk charter schools (about half of all the state's charters, each serving more than 75 percent at-risk kids) showed stronger pupil achievement gains than regular public schools; the other charters did not. The authors also note that many youngsters entering a charter for the first time, especially if it's a brand new school, experience a one-year drop in scores on the Texas statewide tests. They comment that, at a time when many new charters are opening and many children are newly enrolled in them, "A one-year look at average changes in test scores for charter students will mainly capture the decline in performance of the new entrants." They further observe that charters are serving "a disproportionately large share of at-risk students, minority students and economically disadvantaged students." And, being economists, they note that the charter schools are cost-efficient. If you'd like to see for yourself, the fastest way to contact TPPF is by surfing to http://www.tppf.org.

In a Hoover Institution commentary that appeared in assorted magazines this week, Harvard economist Caroline Hoxby explains how she overcame her skepticism about standardized testing when she realized how cost-effective it is as a tool for fostering desirable education change. For the same money a typical district spends annually on student testing, she estimates, it could instead reduce class size by two-thousandths of a student, raise teacher salaries by one quarter of one percent, or offer two hours of after-school activities per year. Testing, she concludes, is no cure-all, but it's more powerful than other uses of the same money.

"Conversion of a Standardized Test Skeptic," by Caroline Hoxby, Hoover Institution, May 31, 2000

Bill Sanders's system of value-added analysis, which sorts through mountains of student achievement data to identify the effect that teachers and schools are having on student performance, is one of the important analytic breakthroughs of the past decade in education. But it's complicated and a lot of people don't yet understand it. For a wonderfully simple explanation, see a recent piece in the Rocky Mountain News by Linda Seebach.

"New Tools Measure School Performance," by Linda Seebach, Rocky Mountain News, May 19, 2001

I'd immediately drop my membership in Phi Delta Kappa, an educators' honor society of sorts, except then I'd lose my subscription to its eponymous monthly magazine, and that would mean losing touch with the conventional wisdom that I sometimes need to orient myself. With rare exceptions, you can count on this for education geo-positioning: you want to be pointed approximately 180 degrees from where the Kappan is headed. This is especially true of Anne Lewis's monthly "commentary" from Washington and Gerald Bracey's absurd "research" column. These are entirely predictable and utterly tendentious (though at least Ms. Lewis doesn't have the gall to also name an annual "report" after herself!). So are the editor's letter and the appalling monthly report from Canada by a left-wing teachers' union activist. But even after ignoring the regular chaff, one must contend with the articles. Occasionally there's something worth reading. (As former Senator Russell Long remarked, even a blind hog finds an acorn now and then.) In May, for example, we find a decent piece by Michael Kirst and colleagues on some unintended consequences of California's sweeping class-size reduction program. But then we also find a dismaying essay by the eminent (and usually sensible) education telejournalist John Merrow on the inherent fallacy of high-stakes testing; an anti-standards tantrum by Donald Thomas and William Bainbridge (of SchoolMatch); a loving interview with Linda Darling-Hammond; and an essay by Perry Glanzer on character education that may represent a new low in moral equivalency. He argues that, because the former...

Here's another worthy product of the Wisconsin Policy Research Institute, setting forth the issues that the Badger State would have to grapple with if it wanted to institute some form of performance- or merit-based compensation system for its public school teachers. This analysis focuses specifically on school-wide performance pay systems, i.e. those tied to gains made by an entire school (with rewards meted out to the whole staff of that school) rather than the children in individual classrooms and their individual teachers. (Wisconsin's current testing system wouldn't lend itself to that kind of system anyway.) Nor does it address differential compensation for teachers in scarce specialties or hard-to-staff schools. But it's a thoughtful, thorough look at the issues associated with school-wide performance pay and worthy of attention by those interested in this reform. Contact the Wisconsin Policy Research Institute, Inc. at P.O. Box 487, Thiensville, WI 53092. Phone (262) 241-0514. Fax (262) 241-0774. E-mail wpri@execpc.com. Or surf to http://www.wpri.org.

The Los Angeles Times last month published a parent's sordid tale of gaming the magnet school system in LA Unified School District to help get her child into her school of choice.  In the article, Gale Holland described how a system designed to help minority kids escape from overcrowded, substandard schools has morphed into a form of education poker.  Students are admitted to magnet schools under a complex set of rules that take into account their race, the racial balance of the school to which they are applying, and many other factors, including how often they have been rejected by a magnet school in the past.  This has led many to apply to schools where they expect to be turned down as a way of accumulating priority points that can be used the next time around. Parents play a particularly fiendish variation of the game to get their kids into magnet schools for gifted children, the author writes.  She concludes, "The real problem is that the magnet system is too small."  Parents, we know, will go to great lengths to find good schools for their children; the only limiting factor seems to be the availability of options.  The great irony is that magnet schools are viewed as bona fide public schools despite the fact that they accept children based on test scores and race, while other schools of choice--such as charter and voucher schools--which generally accept all comers, are faulted by critics as not being true public schools simply because...

As the big education bill limps through Congress, much debate centers on how to determine whether states are making real achievement gains, how to track those gains (or losses), and how best to compare states with each other - and with the country. In the New York Times of June 6, columnist Richard Rothstein contends that Congress should forget about state-specific tests and instead rely exclusively on mandatory state participation in the National Assessment of Educational Progress (NAEP) which, he says, is a better test that yields better data based on sampling just 2500 or so kids per state while dampening "teach to the test" temptations. As you may recall, the original Bush proposal - and the pending Senate bill - use NAEP as an external "audit" of a state's results on its own test; the House bill would let states use NAEP or some other instrument of their own choosing, but again only for audit purposes. All versions assume, indeed require, that each state will also give its own test, at least in reading and math, to every child in grades 3-8.

I'm a long-time NAEP partisan, indeed one of the (many) parents of state-level NAEP, and I strongly favor its use by states for external audit purposes. But we mustn't expect it to bear too much weight or be too precise. The most notable feature of NAEP trend data, after all, is the flat line. Scores just don't vary much from year to year. NAEP is really...
