I was drawn to this description, in the introduction to Talking About Troubles in Conversation, of how Gail Jefferson reviewed academic papers.

Text by Drew, Heritage, Lerner and Pomerantz; paragraph breaks mine:

As a rule, she did not recommend rejecting papers; she would recommend “revise and resubmit”, always supported by incisive comments and suggestions that were often longer than the submitted paper and that managed to find some kernel or some gem of a phenomenon among an author’s possibly inchoate efforts.

She was selfless in her commitment to finding what was valuable and perceptive in a draft, what was unnecessary or misguided or just plain wrong, what was well grounded and what was unsupported by empirical evidence, what needed to be expressed more precisely, and how other evidence needed to be considered, and how a paper might be better shaped to the author’s project.

Her efforts would be considered “collegial” nowadays, but that term is too managerial to capture adequately the warmth and conviction she brought to feedback and reviewing, for the sake of the work. Thus did Erving Goffman [write] about the reviews of a paper he had submitted [that, while not entirely critical of Jefferson’s line of work, did point out an apparent gap]: “[…] her eleven pages of specific suggestions, however, were really quite remarkable, a product of a closer and more loving reading than anyone deserves.”

Some venues (like CHI) prompt you to name ideal reviewers when you submit your paper. Can I name dead people? you might wonder. (I’d picked up the “Troubles” book after randomly stumbling upon this rather exceptional obituary of her.)


Of course, there are many ways in which this “selfless commitment” is complicated in practice. Writing such reviews requires time and effort. Across all the venues I’ve reviewed for, that sort of effort comes with very little reward, save an occasional shout-out in a conference handbook (and sometimes they misspell your name). The public discourse surrounding reviewing these days, at least for large and ever-growing computer-science-adjacent venues, is basically that it’s a shitshow: good reviews are occasionally accomplished as heroic efforts in spite of the constraints. (And to be clear, I’m not dunking on reviewers. I’ve gotten some fabulously constructive reviews for work I’ve submitted, mostly, as it turns out, to CSCW.)

Writing generous reviews has gotten harder and will continue to do so, and this difficulty doesn’t simply scale with the growing size of publication venues; in more interdisciplinary spaces, there’s a tricky combinatorial element, too. It’s easier to be generous with work that ultimately feels like part of a shared project. Would Dr. Jefferson, a sociologist, have been able to do very much if asked to review some big computational deep-learning-based chatbot effort? “Solving conversations with computers” doesn’t seem like a cause she would get behind, beyond the keyword match that all parties are interested in conversations.

“Gail Jefferson reviewing a paper that gets State of the Art Results on Conversational Understanding with a Neural Model” sounds like a contrived scenario, but publication venues are quite diverse these days in the ideas and aspirations that different authors draw on: think CHI, CSCW, FAccT, even many of the tracks at ACL. To a lesser extent, I’ve been in that boat many times, attempting to assess papers whose closeness to my work is only a matter of text similarity.

Another complicating factor: our discourse seems to be shifting, rightly so, towards more extensively interrogating potential harms. And it’s hard, maybe fatally problematic, to try to find a “gem of a phenomenon” in a paper whose content could do a great deal of harm. Some work may simply be undeserving of a close and loving reading.


These caveats shouldn’t sink the underlying point: good reviews are good things that we need more of. If you’re in a position to do something about this at a structural level, you’re hopefully thinking hard about what, structurally, impedes or facilitates good reviewing. I personally have more in common, at least at the moment, with the reviewer opening up the 4-6 PDFs they’ve been assigned to read and cursing themselves for leaving it until the last minute yet again.

In those shoes, here’s what I take as a necessary if insufficient maxim: above all else, identify the authors’ project. That’s hard to do, especially if the author is fuzzy on what their project is (another boat I’ve been in, as an author). But at the very least, you could form hypotheses: maybe the work is describing a new phenomenon, maybe it’s proposing an approach to computationally model that phenomenon, maybe it’s taking some approach as given and using it to characterize a particular setting by way of this phenomenon. Even enumerating these possibilities is helpful to the authors.

All of these hypothesized projects have foundational implications for how the paper is to be read. I recall a venue (probably CSCW?) that, at least in a past cycle, required reviewers to explicitly state the authors’ intended contribution and, consequently, to name the criteria by which they’d evaluate that paper. That’s exactly it: the project, above the fanciness of a dataset or model, above the articulateness of the prose, determines what’s valuable, well-grounded and well-expressed. Often, the project points to the gem of a phenomenon that motivated the authors’ presently inchoate efforts, that drove them to actually try for a paper, that will be the foundation to build on in a resubmission.

Or maybe the project itself is fatally misguided. Calling that out is generous in its own way, since fancier experiments or better-written prose aren’t going to help. (Or maybe the paper’s already really good, in which case I think this maxim provides a framework for more concretely appreciating why it’s good.)

My intuition, by the way, is that this approach, of rooting your assessment of a paper in the authors’ project, is a particularly good way to engage with interdisciplinary work (this will take way more text to unpack). But to return to the above description of Dr. Jefferson, I like “warmth and conviction”. Beyond collegiality or civility, those are better, or at least more precisely stated, qualities to bring to reviewing and, come to think of it, to academia in general.