Saturday, January 9, 2016

2016 Editors (Short Form)

To help SFF fans make nominations for the Best Editor (Short Form) category in the 2016 Hugo Awards, we've analyzed the recommendations of eight prolific reviewers across 560 original works of short fiction from the top publications of 2015.

This post presents that analysis as a baseline for Hugo Award voters to customize and produce their own list of nominees.

Update February 9, 2016: George R.R. Martin asked to not be considered for this award.

To Rate the Editors, Rate Their Publications

We rated editors based on the recommendations their stories received. The idea is that good editors publish lots of stories that get good reviews, and they don't publish a lot of stories that get bad reviews. Many publications got few or no reviews of any sort, and that (plus the Hugo eligibility requirements) ruled out all but six professional magazines and six anthologies for our data-driven analysis. (See the details.)

Magazines
  • Analog
  • Asimov's
  • Clarkesworld
  • F&SF
  • Lightspeed
  • Tor.com

Anthologies
  • Meeting Infinity
  • Mission: Tomorrow
  • Old Venus
  • Operation Arcana
  • The End Has Come
  • Twelve Tomorrows 2016

Of the six professional magazines, five have an editor-in-chief who makes the call on what to print, and that's clearly the person in contention for this award. Tor.com, however, assigns different editors to different stories, and two of those editors met the eligibility bar.

Picking nominees for Best Editor (Short Form) amounts to evaluating those two Tor.com editors plus the ten individuals who edited the other eleven publications. 

To get started, let's look at the data.

Quantity: The Magazines Dominate

For each publication, this chart divides the total number of words of original fiction published into four categories:
  1. Recommended means stories with at least one recommendation by a prolific reviewer.
  2. Not Recommended means stories that RSR thought had problems that reflected poorly on the editor.
  3. Not SFF means stories that weren't science fiction or fantasy.
  4. Ordinary is everything else.
It treats the Tor.com editors Ellen Datlow and Patrick Nielsen Hayden as if each had edited an anthology consisting of the stories they handled for Tor.com in 2015.


Here's how we're going to rate the editors: We're going to look at a variety of measures (e.g. "most original fiction published") and see which publications are in the top 5 (again, treating Datlow and Nielsen Hayden as "publications"). We'll give every publication one "point" every time it appears in the top-5 list according to a particular measure. Then we'll see which publications earned the most points, and finally we'll rank their editors accordingly.
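To make that procedure concrete, here's a minimal Python sketch of the point system. The publications and word counts in the example are placeholders, not our actual data:

```python
# A minimal sketch of the top-5 point system described above; the
# measure names and word counts are placeholders, not RSR's actual data.
from collections import Counter

def score_publications(measures, list_size=5):
    """Award each publication one point per appearance in a measure's top list.

    `measures` maps a measure name to {publication: value}, where higher
    is always better (invert measures like percent Not Recommended first).
    """
    points = Counter()
    for values in measures.values():
        top = sorted(values, key=values.get, reverse=True)[:list_size]
        for publication in top:
            points[publication] += 1
    return points

# Hypothetical two-measure example:
measures = {
    "total words":       {"Magazine A": 500_000, "Magazine B": 300_000, "Anthology C": 90_000},
    "recommended words": {"Magazine A": 60_000,  "Magazine B": 80_000,  "Anthology C": 40_000},
}
print(score_publications(measures, list_size=2).most_common())
# [('Magazine A', 2), ('Magazine B', 2)]
```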

The first measure we'll use is total quantity of original fiction (regardless of quality). Using this measure, we give one point each to Analog, Asimov's, Clarkesworld, F&SF, and Lightspeed for being the five most prolific SFF publishers in 2015.

For the second measure, we'll look only at Recommended word counts (just the blue part of the bars). Old Venus edges out Clarkesworld, but otherwise it's the same list as the first. So we give one additional point to Analog, Asimov's, F&SF, and Lightspeed, and we give one point to Old Venus as well for being the top-5 publishers of Recommended content.

Quality: A Different Story

This chart shows the categories as percentages of the total word count for each publication. It lets us judge quality regardless of quantity.


In the third measure, Ellen Datlow had the highest percentage of stories Recommended, followed by Old Venus, F&SF, Meeting Infinity, and Asimov's. These are the five publications with the highest percentage of Recommended fiction, and we give each one point.

In the fourth measure, Datlow was the only editor with zero stories Not Recommended. The next best in that regard were Operation Arcana, Old Venus, F&SF, and Patrick Nielsen Hayden. Again, each gets one point for being among the best five (i.e. having the lowest percentage of Not Recommended fiction).

For our final measure, accepting stories that are Not SFF wasn't a big problem for most publications, but F&SF deserves credit for being the only magazine that published none at all. We give it and Patrick Nielsen Hayden one point each. (We see this as less challenging for anthologies, which have a theme, so we don't credit them for it.)

These are the final point totals:
  • F&SF (5)
  • Asimov's (3)
  • Old Venus (3)
  • Analog (2)
  • Lightspeed (2)
  • Ellen Datlow (2)
  • Patrick Nielsen Hayden (2)
  • Clarkesworld (1)
  • Meeting Infinity (1)
  • Operation Arcana (1)
Notice that F&SF tops the final list, even though it was not number one by any of the five measures. This is a good thing because we wanted a measure of overall excellence. F&SF earned the most points because, although it's never at the top, it's the only publication that's always near the top, no matter which measure we use.

This gives a ranking for publications. Now we need to turn that into a ranking for editors.

Putting It Together

The Naïve List

John Joseph Adams edited both Lightspeed and Operation Arcana. Since the two never appeared in the same top-five list, we simply add his points together. George R.R. Martin and Gardner Dozois co-edited Old Venus, but Martin has asked people to give Dozois all the credit for the purposes of this award.

Editor                   Publication                       Points
C.C. Finlay              F&SF                                   5
John Joseph Adams        Lightspeed and Operation Arcana        3
Gardner Dozois           Old Venus                              3
Sheila Williams          Asimov's                               3
Ellen Datlow             Tor.com                                2
Patrick Nielsen Hayden   Tor.com                                2
Trevor Quachri           Analog                                 2
Neil Clarke              Clarkesworld                           1
Jonathan Strahan         Meeting Infinity                       1

This gives us a list of four names: John Joseph Adams, C.C. Finlay, Gardner Dozois, and Sheila Williams. It's the same result as a subjective analysis that said "Asimov's and F&SF are the best magazines, Old Venus was the best anthology, and Adams deserves credit for having both a top magazine (Lightspeed) and a top anthology (Operation Arcana)."

The only problem is that it doesn't take into account any of the other things editors do (and we still need to find a fifth name somehow).

Other Factors

Here are examples of things that arguably should count towards an editor's rating but which the charts above do not include. You should take a look and see if any of these is important to you.
  • Development: Did the editor develop new writers? How many writers eligible for the Campbell award did he/she publish? (We'll try to track this in the future.)
  • Pushing the Envelope: Did the editor publish anything special? (E.g. Clarkesworld's translated stories or Lightspeed's Women/Queers Destroy SF issues.)
  • Editorial Stance: Did the editor use his/her position to try to effect positive change in the SF community, either through editorials, interviews, or speeches?
  • Reprints: Reviewers only recommend original fiction, but many readers probably enjoy the reprints just as much. Clarkesworld and Lightspeed publish reprint stories every month, and some of these editors did extensive work on reprint anthologies, but the naïve list doesn't give them any credit for that.
  • Art: Some magazines have far better cover art and illustrations than others.
  • Nonfiction: All the magazines have content ranging from science articles to book/movie reviews to author interviews to editorials. They take time and effort to produce, and they further separate magazines from anthologies. We didn't count them here, but you might want to reward an editor for quality nonfiction articles.
  • Author Service: Things like how long it takes for a magazine to accept or reject a manuscript, how promptly they pay authors, whether their contracts are reasonable, etc.
  • Publication Quality: Are there formatting errors in the print or electronic formats? Do they produce podcasts? Do they include word counts or some other way to identify the short fiction category? Can you buy back issues easily? Is the web site easy to navigate?
  • Blurbs: Do the blurbs before the stories give too much away?
  • Advancement: Did the editor elevate his/her publication from semi-prozine to professional this year?
  • Management: Did the editor attract well-known film critics, book critics, science writers, etc. for nonfiction columns?
You can also analyze the data differently (one variant is sketched after this list). Here are a few suggestions:
  • Drop one or more of our categories. (E.g. perhaps you don't agree with our ratings for Not Recommended stories. Or perhaps you don't mind the occasional story that is Not SFF.) 
  • Change the per-category list size from 5. 
  • Give different scores based on the position in each list.
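As one illustration, here's how the earlier sketch might change to score list positions differently; the weights below are arbitrary examples, and the list size is implied by the number of weights:

```python
# Variant of the earlier sketch: each position in a measure's top list
# earns a different score instead of a flat one point.
from collections import Counter

def weighted_score(measures, weights=(5, 4, 3, 2, 1)):
    points = Counter()
    for values in measures.values():
        ranked = sorted(values, key=values.get, reverse=True)
        for position, publication in enumerate(ranked[:len(weights)]):
            points[publication] += weights[position]
    return points
```

Dropping a category just means deleting that measure from `measures` before scoring.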

Examples

We'll offer three examples, based on different ideas a person might have.

Example 1: "Giving Dozois three points for Old Venus is unfair. It should just be 1.5 points because Martin did half the work. And Neil Clarke deserves at least one extra point for all the innovative work he did bringing foreign-language translations to Clarkesworld." In that case, you'll end up with John Joseph Adams, C.C. Finlay, and Sheila Williams, plus a four-way tie between Datlow, Nielsen Hayden, Quachri, and Clarke. You might drop Nielsen Hayden (because he had zero Recommended stories) and Quachri (because you're mad that Analog can't fix the formatting of its Kindle stories), and so you'd nominate John Joseph Adams, Neil Clarke, Ellen Datlow, C.C. Finlay, and Sheila Williams.

Example 2: "The Tor.com ones shouldn't count because they didn't have to run a whole magazine. I don't agree with RSR's notion of 'bad' stories, and I'm fine with an occasional story that's Not SFF." In that case, you'd drop Ellen Datlow and Patrick Nielsen Hayden entirely, subtract one point from Operation Arcana and Old Venus, and two points from F&SF. That would give you John Joseph Adams, Gardner Dozois, C.C. Finlay, Trevor Quachri, and Sheila Williams.

Example 3: "I'm going to nominate Niall Harrison (editor of Strange Horizons), even though he's not on the list, because he accepted my manuscript and was so helpful to me. Otherwise, I don't think anything matters but total quantity of stories that weren't Not Recommended." In that case, you'll have Neil Clarke, C.C. Finlay, Niall Harrison, Trevor Quachri, and Sheila Williams.
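If you'd rather compute these variations than redo them by hand, they're just arithmetic on the naïve point totals. Here's Example 1 expressed that way:

```python
# Example 1 as arithmetic on the naive totals from the table above.
from collections import Counter

points = Counter({
    "C.C. Finlay": 5, "John Joseph Adams": 3, "Gardner Dozois": 3,
    "Sheila Williams": 3, "Ellen Datlow": 2, "Patrick Nielsen Hayden": 2,
    "Trevor Quachri": 2, "Neil Clarke": 1, "Jonathan Strahan": 1,
})
points["Gardner Dozois"] -= 1.5  # split the Old Venus credit with Martin
points["Neil Clarke"] += 1       # extra credit for translated fiction
print(points.most_common(7))
# Finlay 5; Adams and Williams 3; Datlow, Nielsen Hayden, Quachri,
# and Clarke tied at 2.
```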

We've made a partial list of extra accomplishments by the editors to help with this.

Good luck!


Where Did This Data Come From?

So where did we come up with all these numbers, and what do they really mean? From here down, we'll discuss the decisions we made and we'll offer some of the raw data.

What does an editor do?

The Hugo Award for Best Editor (Short Form) is for the people who select the stories that go into magazines and/or anthologies. These people generally have the title "editor-in-chief," as a little time spent reading their editorials and blog posts makes clear. (They constantly say things like "I accepted this story" or "I saved you from stories like that.") The award has nothing to do with copy editors, who correct spelling and grammar.

The exception is Tor.com, which assigns (and credits) different editors to different stories.

Given that, judging them by the quality of the stories that were printed in their magazines and anthologies makes good sense.

Who is eligible?

You can read the official definitions of the Hugo categories, but the short answer is anyone who edited short speculative fiction published in 2015, provided they produced the equivalent of at least one magazine issue or anthology during 2015 and have a lifetime total of at least four times that much. It doesn't matter whether that work was for a fanzine, a professional magazine, or a big anthology. By averaging twelve months of Analog, Asimov's, and F&SF, we decided that "the equivalent of one magazine issue" amounts to 45,000 words of original fiction.
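In code form, the test we applied looks like this; the 45,000-word issue-equivalent is our own estimate, not an official number:

```python
ISSUE_EQUIVALENT = 45_000  # words of original fiction; RSR's estimate

def hugo_eligible(words_2015: int, words_lifetime: int) -> bool:
    """One issue-equivalent in 2015, plus four issue-equivalents lifetime."""
    return (words_2015 >= ISSUE_EQUIVALENT
            and words_lifetime >= 4 * ISSUE_EQUIVALENT)
```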

Only three of the Tor.com editors edited that much fiction in 2015, and only two meet the lifetime requirement. Of the editors of the nine anthologies we reviewed, only six met the lifetime requirement. Here are the three anthologies that were not eligible.

Why exclude the smaller publications?

Since we didn't read much beyond the professional publications, we didn't really have a choice, but we can offer evidence that none of the editors of semiprozines or fanzines would have made the list anyway.

Even though RSR didn't read the semiprozines, many of the seven other reviewers we follow did, so let's look at how many stories they recommended across all publications.

[Chart: recommended stories by publication, broken down by reviewer.]

The orange bar on Strange Horizons represents reviews by K. Tempest Bradford, who seems to have an unusual fondness for the publication. We're treating that as an outlier.

This shows that 82% of all recommended stories were in sources RSR reviewed, and 57% of recommended stories were in Asimov's, F&SF, Analog, Lightspeed, or Clarkesworld alone. Apex, Beneath Ceaseless Skies, and Strange Horizons each published about the same volume of fiction in 2015 as Clarkesworld did, while Interzone published about half that much, yet between them they had only a quarter as many recommended stories. Therefore they wouldn't be competitive either in quantity or in quality, at least not the way we've chosen to measure them.

What this tells us is that the professionals really are different--very different--from the semiprofessionals, and therefore it really does make sense to consider only the editors of professional publications for the Best Editor (short form) award. That's consistent with the oft-heard claim that Best Editor (short form) is a proxy for "best prozine."

How do you evaluate an editor?

A. Story-based Metrics

Ultimately editors are gatekeepers responsible for bringing us good stories. We should credit them for good stories and penalize them for bad ones. To do this, we defined four quality levels.

1. Not SFF

These are all stories that RSR rated as not being science fiction or fantasy, regardless of whether anyone else liked them (see list).

It's worth discussing why we chose to ignore the other reviewers on this one. The truth is, almost all of the non-SFF stories were excellent, well-written stories. The trouble is, it isn't fair to include them with SFF. Once you drop the requirement that a story have a speculative element, it becomes far easier to write. You eliminate the need for infodumps. You greatly reduce the amount of suspension of disbelief you require from the reader. You make it easier for the reader to identify with your characters, and you can use the time and words you saved to focus on making the reader care about the characters.
A good editor should understand this and not get so excited about the quality of a story that he/she fails to notice that it's not an SFF story at all.

2. Not Recommended

A slightly different measure of an editor is how many bad stories they print. Bad stories range from things that never should have left the slushpile (1-star stories) to stories that make it hard for readers to suspend disbelief or which don't seem to have plots (2-star stories).

Currently, RSR is the only reviewer that tries to identify bad stories. In cases where we gave a story 1 or 2 stars but any of the other reviewers recommended it, we counted that story as ordinary: neither recommended nor not recommended.

3. Recommended

These are stories that were recommended by any of the eight reviewers we surveyed (see list). They include all RSR 4- and 5-star stories. As mentioned above, when RSR recommended against a story that anyone else recommended, we treated it as ordinary.

4. Ordinary

These are all the rest of the stories. Either RSR gave them 3 stars and no one else recommended them (see list), or else someone did recommend them but RSR gave them 1 or 2 stars (see list).

Every story falls into one and only one of these four categories; there is no overlap.
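Expressed as code, the assignment rules look like this. The field names are ours, and `other_recommended` means any of the seven reviewers besides RSR recommended the story:

```python
def categorize(is_sff: bool, rsr_stars: int, other_recommended: bool) -> str:
    """Assign a story to exactly one of the four categories defined above."""
    if not is_sff:
        return "Not SFF"  # regardless of how well anyone liked it
    if rsr_stars <= 2:
        # RSR rated it bad; a recommendation elsewhere downgrades that
        # verdict to Ordinary rather than Not Recommended.
        return "Ordinary" if other_recommended else "Not Recommended"
    if rsr_stars >= 4 or other_recommended:
        return "Recommended"
    return "Ordinary"  # 3 stars and no recommendations
```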

B. Other Accomplishments

From reading editorials and doing a few Google searches, we know of a few special things that the editors did personally that ought to figure into any ranking. In no way is this complete. We'll happily amend this page to add new information that people bring to us.

1. John Joseph Adams (Lightspeed, Operation Arcana, and The End Has Come)

In 2015, he turned Lightspeed from a semi-prozine into a professional magazine. In addition to editing Lightspeed, he also edited the anthologies Operation Arcana and The End Has Come as well as one magazine we didn't review (Nightmare) and four more anthologies. He also produced the "Queers Destroy Science Fiction" issue of Lightspeed this year, which was a first.

2. Neil Clarke (Clarkesworld)

In addition to editing Clarkesworld, he also edited Forever Magazine and the anthology "Clarkesworld: Year Seven." A major accomplishment this year was the large number of translated science fiction stories he published. Another was that, in addition to four original short stories per issue, he now publishes an original novelette.

3. Gardner Dozois (Old Venus)

Gardner edits the reprints for Clarkesworld, and he edited "The Year's Best Science Fiction: Thirty-Second Annual Collection."

4. C.C. Finlay (The Magazine of Fantasy & Science Fiction)

As editor of F&SF, he reversed the magazine's policy against accepting electronic submissions and apparently set up the submission system himself (it's hosted on his personal domain).

5. Jonathan Strahan (Meeting Infinity)

In 2015 he edited "Grand Crusades: The Early Jack Vance, Volume 5" and "The Best Science Fiction and Fantasy of the Year: Volume 9."

6. Sheila Williams (Asimov's Science Fiction Magazine)

She supervised a redesign of the Asimovs.com website.

Other Opinions

Richard Horton, in his article "More on the Best Editor, Short Form, Hugo," makes strong arguments for nominating Jonathan Strahan, John Joseph Adams, and Sheila Williams.

George R.R. Martin, in A Rocket For The Editor, Part Two, makes arguments for a variety of editors and discusses the category in broad terms (as well as eliminating himself from the list).

What Do You Think? What's the Best Way to Help People Make Best Editor (short form) Nominations?

Best Editor is a notoriously difficult category to nominate for, and while a number of people like our approach, it has also drawn some criticism. (See the comments relating to item #10 on File770's Pixel Scroll 1/14/16 I’m Not A Pixel, I’m A Free Scroll.) We're definitely interested in other ways to attack the problem, so please share your thoughts in the comments below.

6 comments (may contain spoilers):

  1. Wow. I realize there is a lot of work in this, and it's easy for me to sit here in my armchair and suggest more, so if you don't want to do this, I completely understand; you have produced something that I think a lot of people would consider a very helpful resource already.

    But something I think would be helpful would be a list of *which* stories a given editor edited. For example, as I recall you folks didn't rate "Penric's Demon" all that highly, whereas I loved it. Fannish tastes being what they are, stories Hugo nominators loved are likely to be all over the map, and people voting on the "edited stories I loved" metric would find such a correlation helpful.

    Another metric would be "found new authors that I loved." Maybe a star by those stories that were a first or second publication by a new author, if there is some way to discover that information. Because part of an editor's job is finding the gems in the slushpile.

    At any rate thank you for the work you have been doing. I realize these suggestions on "customizability" would be a lot of work / a pain in the neck and I don't want to give the impression I don't value what you are doing. I just think that things fans can easily adapt to express their personal tastes are likely to be more helpful to me personally, and perhaps there are more readers like me out there.

    Replies
    1. Thanks for the suggestions.

      For "found new authors that I loved," we can indicate if a story is by a first or second-year author and provide a link to their website once I start tracking them for the Campbell Award (coming soon).

      For short fiction, the list of stories by editor is basically the list of stories by magazine (use the editor-in-chief), except for Tor.com, which has a pool of editors each handling different stories. The post has links to stories by magazine.

      For long fiction, I don't know if the list of novels by editor equates to the list of novels by publisher (editor-in-chief). While I can see a magazine editor-in-chief looking at a thousand stories and reading a few hundred to decide what to print, I have difficulty imagining a publisher editor-in-chief doing the same for hundreds of books. :-) RSR might look into this further in a year or two to see if we can do any analysis that adds value to readers.

    2. BTW, RSR loved Penric's Demon and rated it 5-stars. It didn't get reviewed by the seven prolific reviewers we followed because it was a standalone novella and not part of a magazine issue, so it didn't stand out in the novella list with just RSR's recommendation.

    3. Have a look: http://www.rocketstackrank.com/2016/01/2015-short-fiction-editor-list.html

      Are you sure that's useful, though? I suppose it really comes down to asking "how should fans recognize a good editor?" Basing it only on the stories you read (unless you read hundreds) seems rather narrow. If the goal is to genuinely try to recognize the best editors, I don't think there's any way for fans to do it without making use of outside information. (E.g. ratings of reviewers who really did read hundreds of stories.)

  2. Greg,

    Thanks for doing all this. I'm sorry that you got some heat from the usual suspects on File770. I don't always agree with your reviews (for example, I really rated Han Song's Security Check), but I've always found them intelligent and thought-provoking. It's a wonderful thing you're doing here. Thank you.

    Replies
    1. Thanks. We appreciate the support.

      I think a lot of people forget that "the slates" was a group of about 160 people who were willing to vote for a list of works regardless of quality to make a political point. The list itself wasn't really to blame.

      The fans beat the slates at Sasquan, showing what a tiny minority they were. We believe the fans can beat them at nominations too, provided we make it easier for the fans to make nominations. That means supplying as much information as possible and organizing it as best we can, but it also means trusting the fans to be fair and to use the information wisely.

      Discouraging people from making recommendation lists doesn't hurt the slates at all; it only hurts the fans.
