Reports that A- is the median grade in Harvard College have reopened the debate about grade inflation. Many of the arguments offered in response to the news are familiar. The venerable grade hawk Harvey “C-” Mansfield, who brought the figures to public attention, describes the situation as an “indefensible” relaxation of standards.

More provocative are defenses of grade inflation as the natural result of increased competition for admission to selective colleges and universities. A new breed of grade doves points out that standards have actually been tightened in recent years, but the tightening has occurred in admissions rather than in expectations for achievement in class.

According to the editorial board of the Harvard Crimson, “high grades could be an indicator of the rising quality of undergraduate work in the last few decades, due in part to the rising quality of the undergraduates themselves and a greater access to the tools and resources of academic work as a result of technological advances, rather than unwarranted grade inflation.” Matt Yglesias, ’03, agrees, arguing that “it is entirely plausible that the median Harvard student today is as smart as an A-minus Harvard student from a generation ago. After all, the C-minus student of a generation ago would have very little chance of being admitted today.”

There’s a certain amount of self-congratulation here. It’s not surprising that Harvard students, past and present, think they’re smarter than their predecessors—or anyone else. But they also make an important point. The students who earned the proverbial gentleman’s Cs are rarely found at Harvard or its peers. Dimwitted aristocrats are no longer admitted. And even the brighter scions of prominent families can’t take their future success for granted. Even with plenty of money and strong connections, they still need good grades to win places in graduate school, prestigious internships, and so on.

The result is a situation in which the majority of students really are very smart and very ambitious. Coursework is not always their first priority. But they are usually willing to do what’s necessary to meet their professors’ expectations. The decline of core curricula has also made it easier for students to pick courses that play to their strengths while avoiding subjects that are tough for them. It’s less common to find chemistry students struggling through Shakespeare than it was in the old days.

According to the Harvard College Handbook for Students, an A- reflects “full mastery of the subject” without “extraordinary distinction.” In several classes I taught as an instructor and teaching fellow at Harvard and Princeton, particularly electives, I found that around half the students produced work on this level. As a result, I gave a lot of A-range grades.

Perhaps my understanding of “mastery” reflects historically lower demands. For example, I don’t expect students writing about Aristotle to understand Greek. Yet it’s not my impression that standards in my own field of political theory have changed a lot in the last fifty years or so. In the absence of specific evidence of lowered standards, then, there’s reason to think that grade inflation at first-tier universities has some objective basis.

But that doesn’t mean grade inflation isn’t a problem. It is: just not quite the way some critics think. At least at Harvard and similar institutions, grades are a reasonably accurate reflection of what students know or can do. But they are a poor reflection of how they compare to other students in the same course. In particular, grade inflation makes it difficult to distinguish truly excellent students, who are by definition few, from the potentially much larger number who are merely very good.

Here’s my proposal for resolving that problem. In place of the traditional system, students should receive two grades. One would reflect their mastery of specific content or skills. The other would compare their performance to the rest of the class.

For example, a student might receive an A- for content, but a B for comparison. That wouldn’t necessarily indicate low standards. It might just as easily mean that the class was unusually strong. Contrary to the assumption that they hand out praise like candy, professors make this kind of distinction all the time. It’s just that they do so privately in letters of recommendation, which are now more necessary than ever to make sense of inflated transcripts.

In general, you’d expect to see greater divergence between the two grades in upper-division courses for which students self-select by interest and ability than you would in general education requirements or introductory surveys that attract a more mixed group. In those broader courses, a high content grade would be likely to correspond to a high comparative grade.

Mark Bauerlein has suggested a similar policy, in which a student’s grade is listed alongside the average grade in the course. That’s not a bad idea, but it does little to prevent the confusion between objective and comparative measures that encourages grade inflation. After all, noting that a student received a B+ in a course with an average grade of B doesn’t provide much information about what the instructor thinks a B means.

My proposal is also different from Mansfield’s practice of giving students an inflated formal grade and a “real” grade given in private. Evaluations of content or skill mastery and comparisons to the rest of the class are equally real. They just measure different things. Moreover, secret grades, like sealed letters of recommendation, offer students little help in determining where they stand relative to their classmates.

The approach would increase the burden on professors to think about how they grade. In particular, we would have to be much more specific about our standards for the content grade than we often are under the current arrangement.

That’s a good thing in an educational system characterized by increasing diversity of preparation and expectations. No grading policy can restore the consensus about the meaning of an A that may (or may not) have existed half a century ago. But we can be reflective and transparent about what we’re looking for—and about the information that our evaluations are intended to convey.