The 2014 RHSU Edu-Scholar Public Influence Rankings
Rankings were restricted to university-based researchers. They exclude think tankers and advocates (e.g., Checker Finn or Kati Haycock) whose job description is to influence the public discourse. After all, the point is to nudge what is rewarded and recognized at universities. (The term "university-based" provides some useful flexibility. For instance, Tony Bryk currently hangs his hat at Carnegie. However, he is an established academic with a university affiliation and campus digs, so he's included. The line is admittedly blurry, but it seems a reasonable compromise.)
No exercise of this kind is without complexities and limitations. The bottom line: this is a serious but inevitably imperfect attempt to nudge universities, foundations, and professional associations to do more to cultivate, encourage, and recognize serious contributions to the public debate.
The top scorers? All are familiar edu-names, with long careers featuring influential scholarship, track records of comment on public developments, and outsized public and professional roles. In order, the top five were Linda Darling-Hammond, Diane Ravitch, Howard Gardner, Eric Hanushek, and Tony Wagner. Rounding out the top ten were Larry Cuban, Paul E. Peterson, Robert Slavin, Yong Zhao, and Joseph Murphy. Notable, if not too surprising, is that the top ten are all veteran, accomplished scholars who have each authored a number of (frequently influential) books, accumulated bodies of heavily cited scholarly work, and who are often seen in the public square and working with state and district leaders. That reflects the intent of the scoring rubric, which weights the broader public influence of a scholar's work as much as their more recent visibility.
Stanford University and Harvard University both fared exceptionally well, with Stanford placing six scholars in the top 20 and Harvard placing four. New York University, the University of Oregon, and the University of Virginia were the other institutions to place multiple scholars in the top 20.
In terms of the most scholars ranked, Stanford topped all others with 21. Harvard came a close second with 19, and Columbia and Vanderbilt tied for third with a dozen ranked scholars. Overall, more than five dozen universities claimed a spot.
A number of top scorers penned influential books of recent vintage. For instance, among the top ten, just in the past year, Diane Ravitch came out with Reign of Error, Eric Hanushek and Paul Peterson with Endangering Prosperity, Linda Darling-Hammond with Getting Teacher Evaluation Right, Howard Gardner with The App Generation, and Larry Cuban with Inside the Black Box of Classroom Practice.
As with any such ranking, this exercise ought to be interpreted with appropriate caveats and caution. Given that the ratings are a snapshot of where things stand as we start 2014, the results obviously favor scholars who published a successful book or influential study in 2013. But that's how the world works. And that's why we do this every year.
A few scholars tended to lead the field in any given category. For those keeping score at home, here's a quick review of the category-killers:
- More than a score of veteran scholars maxed out on Google Scholar: Darling-Hammond, Howard Gardner, Hanushek, Robert Slavin, Joseph Murphy, Richard Elmore, Martin Carnoy, Robert Pianta, Helen Neville, Henry Levin, Deborah Ball, Camilla Benbow, Anthony Bryk, David Berliner, John Bransford, Lynn Fuchs, Helen Ladd, Marilyn Cochran-Smith, Kurt Fischer, Kenneth Zeichner, and Steve Raudenbush.
- When it came to book points, Gardner, Carnoy, Nel Noddings, Cuban, Peterson, Carol Tomlinson, and Ravitch each maxed out. Ravitch scored the highest Amazon ranking at 19.9, as well as the highest Klout score at 8.2.
- With regard to mentions in the education press, only Ravitch hit the cap, while Ravitch, Darling-Hammond, Gardner, Hanushek, and Wagner each maxed out on blog mentions. When it came to newspaper mentions, only Ravitch and Darling-Hammond maxed out.
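The notion of "maxing out" a category, as described above, amounts to capping each category's raw score before summing. The sketch below is a hypothetical simplification for illustration only: the category names, cap values, and raw scores are invented and are not the actual RHSU rubric or its weights.

```python
# Hypothetical illustration of capped-category scoring. The categories,
# cap values, and the sample scholar's raw scores are invented; they are
# NOT the actual RHSU rubric.

CAPS = {
    "google_scholar": 50,
    "book_points": 20,
    "ed_press": 30,
    "blog_mentions": 30,
    "newspaper_mentions": 30,
}

def composite(raw_scores):
    """Sum each category's raw score after clipping it at that category's cap."""
    return sum(min(raw_scores.get(cat, 0), cap) for cat, cap in CAPS.items())

# A (made-up) scholar who "maxes out" Google Scholar and blog mentions:
scholar = {"google_scholar": 73, "book_points": 12, "blog_mentions": 44}
print(composite(scholar))  # 50 + 12 + 0 + 30 + 0 = 92
```

Capping means that, past a threshold, additional citations or mentions in one category no longer raise a scholar's total, which keeps any single category from dominating the composite.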
If readers want to argue the relevance, construction, reliability, or validity of the metrics, I'll be happy as a clam. I'm not sure that I've got the measures right or even how much these results can or should tell us. That said, I think the same can be said about U.S. News college rankings, NFL quarterback ratings, or international scorecards of human rights. For all their imperfections, I think such efforts convey real information--and help spark useful discussion.
That's what I've sought to do here. Meanwhile, I'd welcome suggestions for possible improvements and am eager to hear your critiques, concerns, questions, and suggestions. So, take a look, and have at it.