So what is going on: Is there or is there not an issue with database systems research? While in my mind there is no doubt that there is a rich future for database systems research, it is definitely the case that publishing systems papers has gotten much harder. There is an urgent problem to solve in how we review papers.
One key root cause is that there is a lot of bad reviewing in our conferences, and there is no long-term mechanism to deal with it. With the gigantic PCs that we now have and a bandwagon approach of chasing the latest trend (Mike makes a point about this too in his talk), the way to game the system is to write a lot of papers. Almost by definition, systems folks are at a disadvantage in this game, as most systems papers take much longer to write. For example, the Quickstep paper (VLDB'18) took 5+ years with 7+ students to write.
So, a key issue that we need to deal with is measuring and reporting the quality of reviewers. Here I have a simple suggestion for PC chairs and the powers-that-be on the SIGMOD and PVLDB executive committees:
- Reduce the size of the PC so that each PC member has a significant number of papers to review. Current SIGMOD and VLDB committees are way too large. Cutting them in half is a good start (fewer random variables). For PVLDB, the monthly cycle adds even more randomness; reduce that too.
- Incorporate transparency into the review process by making public the distribution of the overall recommendations each PC member gives. There are reviewers who consistently hate everything they read, and reviewers who love nearly everything they read. This really messes up the system, and I think it is a big factor in why our system is broken. Note that reducing the PC size (suggestion #1) yields more data per PC member, which is needed to make sound comparisons. Also, randomly assign papers to reviewers within the interest areas they declare, to reduce the chances of someone saying "I had a bad batch of reviews."
This backpropagation, if you will, can help correct those reviewers who haven't calibrated themselves (but wish to), and starkly expose those who likely shouldn't be on PCs.
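To make the transparency idea concrete, here is a minimal sketch of what such a report could look like. Everything in it is illustrative: the reviewer names, the toy scores, the 1-to-5 recommendation scale, and the outlier threshold are all assumptions, not real conference data or an actual SIGMOD/PVLDB mechanism.

```python
# Illustrative sketch only: reviewer IDs, scores, and the threshold are
# made-up assumptions, not real reviewing data or an official process.
from collections import defaultdict
from statistics import mean

# (reviewer, overall recommendation) pairs, on a 1 (reject) to 5 (accept)
# scale, as one might export from a reviewing system.
reviews = [
    ("R1", 1), ("R1", 2), ("R1", 1), ("R1", 1),  # rejects nearly everything
    ("R2", 3), ("R2", 4), ("R2", 2), ("R2", 3),  # roughly calibrated
    ("R3", 5), ("R3", 4), ("R3", 5), ("R3", 5),  # loves nearly everything
]

def reviewer_report(reviews, threshold=1.0):
    """Return each reviewer's mean recommendation, plus a flag for those
    whose mean sits far from the mean of the whole reviewer pool."""
    by_reviewer = defaultdict(list)
    for reviewer, score in reviews:
        by_reviewer[reviewer].append(score)
    pool_mean = mean(score for _, score in reviews)
    return {
        reviewer: (mean(scores), abs(mean(scores) - pool_mean) >= threshold)
        for reviewer, scores in by_reviewer.items()
    }

for reviewer, (avg, flagged) in sorted(reviewer_report(reviews).items()):
    print(reviewer, round(avg, 2), "outlier" if flagged else "ok")
```

On this toy data the pool mean is 3.0, so R1 (mean 1.25) and R3 (mean 4.75) are flagged while R2 is not. A real deployment would need the larger per-reviewer sample sizes that suggestion #1 provides before such comparisons are statistically sound.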
Of course, there are lots of other good ideas floating around, including more training for junior members of the community (I hope all advisors train their students in this), in-person PC meetings at least for the area chairs, and so on. I see all of those as complementary to the proposal above.
What do you think?