Database systems research: dead or alive? The answer may be backpropagation!

Much has been said recently about the future of database systems research. Mike Stonebraker has loudly complained about how core systems papers are disappearing at an alarming rate from SIGMOD and VLDB; he called it Fear #1 in a recent talk. Another excellent, and more optimistic, view of database research was published by Mike Cafarella and Chris Ré.

So what is going on: is there or is there not an issue with database systems research? In my mind there is no doubt that database systems research has a rich future, but it is also clear that publishing systems papers has gotten much harder. There is an urgent problem to solve with how we review papers.

One key root cause is that there is a lot of bad reviewing in our conferences, and no long-term mechanism to deal with it. With the gigantic PCs that we now have and a bandwagon tendency to follow the latest trend (Mike makes this point in his talk too), the way to game the system is to write a lot of papers. Systems folks are inherently at a disadvantage in this game, as most systems papers take much longer to write. For example, the Quickstep paper (VLDB'18) took 5+ years and 7+ students to write.

So, a key issue that we need to deal with is measuring and reporting the quality of reviewers. Here is a simple suggestion for PC chairs and the powers-that-be on the SIGMOD and PVLDB executive committees:
  1. Reduce the size of the PC so that each PC member has a significant number of papers to review. Current SIGMOD and VLDB committees are way too large. Cutting the size in half is a good start (fewer random variables). For PVLDB, the monthly cycle adds even more randomness; reduce that too.
  2. Incorporate transparency into the review process so that the distribution of the overall recommendations made by each PC member is made public. There are reviewers who consistently hate everything they read, and reviewers who love nearly everything they read. This really messes up the system, and I think it is a big factor in why our system is broken. Note that reducing the PC size (suggestion #1) yields more data per PC member, which you need in order to make sound comparisons. Also, randomly assign papers to reviewers based on the interests that reviewers express, to reduce the chances of someone saying "I had a bad batch of reviews" (see the sketch after this list).
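
To make the randomized, interest-aware assignment concrete, here is a minimal sketch in Python. The data layout (topic sets per paper and per reviewer) and the per_paper parameter are my own assumptions for illustration, not how SIGMOD or PVLDB actually assign papers:

```python
import random
from collections import defaultdict

def assign_papers(papers, reviewers, per_paper=3):
    """Randomly assign each paper to `per_paper` reviewers whose declared
    interests overlap the paper's topics, balancing load as we go.

    papers    : dict paper_id -> set of topic strings
    reviewers : dict reviewer_id -> set of topic strings (declared interests)
    """
    load = defaultdict(int)            # papers assigned to each reviewer so far
    assignment = {}
    for pid, topics in papers.items():
        eligible = [r for r, interests in reviewers.items() if interests & topics]
        if len(eligible) < per_paper:  # fall back to the whole pool if needed
            eligible = list(reviewers)
        # Shuffle first so ties in load are broken at random,
        # then prefer the least-loaded eligible reviewers.
        random.shuffle(eligible)
        eligible.sort(key=lambda r: load[r])
        chosen = eligible[:per_paper]
        for r in chosen:
            load[r] += 1
        assignment[pid] = chosen
    return assignment
```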
In other words, a crucial problem we need to solve is finding a fairer way of accepting or rejecting papers. A key component of that fairness is exposing overly optimistic or overly pessimistic reviewers. Let's do just that! Let's create a website (add this to DBLP?) with a scorecard for each reviewer that shows the distribution of their overall accept/reject scores. PC chairs can then use this information to pick a good PC. Most importantly, it may lead reviewers to self-correct so that they are more balanced in their evaluations.
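
Here is a minimal sketch of what building those scorecards could look like, again in Python. The input format (a list of (reviewer_id, score) pairs) and the thresholds are assumptions for illustration; nothing here corresponds to an existing DBLP or submission-system feature:

```python
from collections import Counter, defaultdict

def reviewer_scorecards(reviews):
    """Build a per-reviewer distribution of overall recommendations.

    reviews : iterable of (reviewer_id, score) pairs, where score is the
              overall recommendation on whatever scale the venue uses.
    Returns reviewer_id -> dict with review count, mean score, and the
    full distribution of scores.
    """
    by_reviewer = defaultdict(list)
    for reviewer, score in reviews:
        by_reviewer[reviewer].append(score)

    return {
        reviewer: {
            "n": len(scores),
            "mean": sum(scores) / len(scores),
            "distribution": Counter(scores),
        }
        for reviewer, scores in by_reviewer.items()
    }

def outliers(cards, low, high, min_reviews=10):
    """Flag reviewers whose mean score falls outside [low, high]: the
    consistent haters and the consistent lovers. Reviewers with fewer than
    `min_reviews` reviews are skipped -- which is exactly why a smaller PC
    (more reviews per member) makes these comparisons sounder."""
    return {r: c for r, c in cards.items()
            if c["n"] >= min_reviews and not (low <= c["mean"] <= high)}
```

Publishing the output of something like reviewer_scorecards is the "transparency" half of the proposal; outliers is one possible way a PC chair could act on it.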

This backpropagation, if you will, can help correct those reviewers who haven't calibrated themselves (but wish to), and starkly expose those who likely shouldn't be on PCs.

There are many other good ideas floating around, including having more training for junior members of the community (I hope all advisors train their students in this), in-person PC meetings at least for the area chairs, and so on. I see all those as complementary to the proposal above.

What do you think?
