Friday, June 08, 2007

Summertime Advice for Law Students

This post concerns a couple of topics I've meant to write about for some time now but just haven't gotten around to. Today's as good a day as any to do it, I suppose.

It's summertime, and many law students may be thinking about enjoying their summer clerking positions and taking a break from thinking about how to be more successful in the classroom. Nonetheless, a couple of ideas about how law students can be more successful have been circulating recently (in a very relative sense). Because I have many friends involved in legal education, I thought I'd pull those ideas and the related posts together here, so they can be shared with students who might have some time this summer (or later -- this post won't be going anywhere) to work on being more successful next year.

1. More Effective Reading:

Raymond Ward, over at the (new) legal writer blog, had a post back in March on the correlation between "how" law students (and other legal readers) read and how "successful" they are, whether in terms of better grades or efficiency in legal practice. The post referred to work done by Professor Leah Christensen of St. Thomas Law School in Minneapolis/St. Paul.

Professor Christensen's papers would be good summer reads for law students who want to gain an appreciation of this correlation and to consider ways to make their own reading more effective. In the short term, that could translate into improved grades; in the long term, into more success in practice. Both papers are available for free on the Social Science Research Network (SSRN):

Legal Reading and Success in Law School (August 2006).
The Paradox of Legal Expertise: A Study of Experts and Novices Reading the Law (2007).

2. Secrets of Successful Legal Writing Students:

Also back in March, Ward posted about a research paper by Anne Enquist of Seattle U. School of Law studying the secrets of successful legal writing students. In the paper, Professor Enquist discusses a study done at Seattle U. comparing several legal writing students, their individual habits and practices, and their respective levels of success in the legal writing class. As Ward noted:

The study suggests (not surprisingly) that a systematic, organized approach is the key:

[T]he study reveals not only the results of working harder but the specifics of working smarter. The secrets to working smarter included note-taking and note-reviewing strategies; how to divide one’s time between researching, drafting, revising, editing, and proofreading; how to research and read cases efficiently; strategies for efficient time management; techniques for organizing one’s research and staying organized while writing; and accessing the professor as a primary resource. Pitfalls to avoid included procrastination, poor management of distractions, and scapegoating.

Professor Enquist's paper is also available on SSRN:

Unlocking the Secrets of Highly Successful Legal Writing Students (March 2007).

Tuesday, June 05, 2007

Update: Fordham Symposium on Judicial Selection Articles Published

This updates a past post here about a symposium on judicial selection held in April 2006 at Fordham Law School in New York.

The articles concerning that symposium have now been published in the Fordham Urban Law Journal, Volume 34, dated January 2007. Thanks to Norman Greene for leaving a comment to let us know.

See: ENotes listing of contents for 34 Fordham Urban Law Journal.

Brian Leiter's New "Rankings" of Law School Faculty Citations.

Professor Brian Leiter, of the University of Texas at Austin School of Law, is in the midst of a study to determine the top law schools based on citations to their faculty's scholarship. He has done this sort of ranking before (see the 2005 Rankings, for example). In the 2005 results, Leiter indicated:

This is a ranking of the top 30 law faculties based on a standard “objective” measure of scholarly impact: per capita citations to faculty scholarship. We looked only at the top quarter of each faculty, largely for logistical reasons--it made the study more manageable--but partly because the scholarly standing of a school depends more on its best faculty, than its average.

On June 3, Leiter posted the following (bolding in the original) on his Law School Reports Blog:

Over the summer, I plan to carry out a new citation study, and in the fall, we may (finally) undertake a new on-line reputational survey. To that end, we've compiled draft lists of faculty for 49 law schools that appear likely to rank in the top 35-40 by these different measures. We may add a few more faculties to the mix, especially for purposes of a reputational survey. The faculty lists include only academic faculty (an effort has been made to exclude clinical, adjunct, and legal writing faculty, since these studies will focus on scholarly output). The ranking study will be per capita across the whole faculty, since many have worried that by looking only at the top quarter of the faculty in the past produced distorted results because of one or two very highly cited faculty at certain schools (e.g., Chemerinsky at Duke, or Delgado at Pittsburgh). . . .

Leiter's call has generated some criticism, which he has not handled particularly well. Specifically, the decision to exclude ALL clinical and legal writing faculty would seem to leave a gaping hole in the credibility of a study about which law school faculty is most often cited. When Richard Neumann of Hofstra Law, for example, pointed out the flawed methodology of completely ignoring clinical and legal writing faculty, Leiter responded by referring to Professor Neumann as "someone named Richard Neumann," suggesting that Professor Neumann has a "chip on the shoulder," and calling Professor Neumann's comments "irrational and self-serving." Never mind that Professor Neumann is nationally known, has authored significant textbooks, teaches both clinical and non-clinical courses, and is likely more recognizable in the community of law school faculty than Leiter.

It's worth noting that Leiter, per his own request for input, is also starting with a list of schools he "expects" to be the top schools. So not only is he ignoring a potentially very large body of highly cited legal scholarship by clinical and legal writing faculty, he is also beginning with a list of schools he expects to rank at the top -- based in part on past studies that likewise ignored significant legal scholarship. And then he seems shocked when others point out the flaws in his basic premises. Note, too, that Leiter *does* include part-time and adjunct faculty in his evaluations (such as Judges Posner and Easterbrook at The University of Chicago) -- while, again, leaving out full-time, tenured or tenure-track faculty in clinical and legal writing.

Leiter seems to ignore completely the changing landscape of legal education. Unlike as recently as ten years ago, clinical and legal writing faculty are now increasingly tenured or tenure-track, and they produce scholarship on a variety of legal topics that is every bit as meaningful as that of non-clinical faculty. Most clinical and legal writing faculty are now eligible for summer research grants, and studies by the Association of Legal Writing Directors (ALWD) indicate a significant trend toward requiring legal scholarship of such faculty. It is not difficult to find information about the growing impact of clinical and legal writing faculty in the world of scholarship -- peruse the sources in this post at The Legal Writing Professors Blog for some examples.

As a final matter, there are some very basic reasons why Leiter's study is flawed, beyond the seemingly obvious fact that ignoring a significant number of law schools entirely, and then ignoring a significant portion of the faculty at the schools that are considered, casts doubt on the credibility of the results. In the comments on Leiter's blog, Dean Stephen Ellman of New York Law School commented:

In sum, it is probably true that a citation survey of only non-clinical faculty would measure the impact of the work of those who are, in general, the most likely to be the most active authors -- though not without some striking omissions and no doubt with the inclusion of a number of very unproductive classroom teachers. But such a survey would miss not only the work of a sizable number of people who are genuinely and productively committed to scholarship, by reason of professional obligation and personal inclination, but also the impact of the institutional decisions that have concentrated these people at some schools rather than others.

It seems fair to add that the task of separating clinical and nonclinical faculty is itself so difficult that undertaking it is bound to generate errors along the way. Many clinicians, of course, have titles that are identical to those of their nonclinical colleagues, and so they cannot be distinguished by title. Many also cannot be distinguished by contractual status, since they hold full, regular tenure. In addition, many clinicians do not teach only clinically. Suppose a clinical professor also teaches Civil Procedure, and so his or her teaching time is 1/4 clinical, 3/4 nonclinical. Would this professor count as a clinician? What if his or her teaching load was half clinical and half nonclinical? Or 3/4 nonclinical and 1/4 clinical? Or suppose that the professor in question is, initially, a nonclinical faculty member, but over time comes to spend a portion of his or her time teaching in a clinic? Which of these people (and I think there are a lot of people who fit one or another of these models) would count as clinicians, and why? I think that figuring out the right time-share definition of "clinician" won't be simple, but it’s important also to keep in mind that formulating the definition may well be easier than collecting the data with which to apply it.

One other anomaly bears mentioning: a failure to count citations to scholarship by clinicians will mean that citations to otherwise comparable works of scholarship, perhaps appearing in the pages of the same law review issue, will be counted, or omitted, depending solely on the identity of their author. An issue of the Journal of Legal Education, for example, might include articles on law school pedagogy by both clinical and nonclinical faculty – but only citations to those written by the latter group would be counted. Clinicians, it should be noted, write on a great many topics; a symposium on torts, for instance, might feature articles by clinicians and nonclinicians, but again only those written by nonclinicians would be accounted for. Or to take one more example, there may be instances where a clinician from one school and a nonclinician from another school co-author a piece (I’m not speaking hypothetically, since I know of a book that fits this description precisely); citations to the book, or article, would be credited to the nonclinician and his or her school, but not to the clinician and the school he or she taught at.

Like most random law school rankings, Leiter's must be taken with a grain of salt. His failure to recognize the significant contributions that clinical and legal writing faculty make to legal scholarship, at least at a good number of schools -- Mary Beth Beazley at Ohio State is one example of a clinical/legal writing scholar who is both highly published and very highly cited -- is reason enough to view his results with a highly skeptical eye, no matter how highly his own school might end up ranking in his study (top 10 in 2005, by the way).