Utility is in the Eye of the User: A Critique of NLP Leaderboards

Author

Kawin Ethayarajh, Dan Jurafsky

Year
2020

Kawin Ethayarajh, Dan Jurafsky. 2020.

Benchmarks such as GLUE have helped drive advances in NLP by incentivising the creation of more accurate models. While this leaderboard paradigm has been remarkably successful, a historical focus on performance-based evaluation has been at the expense of other qualities that the NLP community values in models, such as compactness, fairness, and energy efficiency. In this opinion paper, we study the divergence between what is incentivised by leaderboards and what is useful in practice through the lens of microeconomic theory. We frame both the leaderboard and NLP practitioners as consumers and the benefit they get from a model as its utility to them. With this framing, we formalise how leaderboards – in their current form – can be poor proxies for the NLP community at large. For example, a highly inefficient model would provide less utility to practitioners but not to a leaderboard, since it is a cost that only the former must bear. To allow practitioners to better estimate a model’s utility to them, we advocate for more transparency on leaderboards, such as the reporting of statistics that are of practical concern (e.g., model size, energy efficiency, and inference latency).
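
To make the core divergence concrete, here is a minimal Python sketch (my own illustration, not the paper's formalization): both "consumers" rank models by utility, but only the practitioner's utility function is charged for costs like model size and inference latency, so the two rankings can diverge. All model names, numbers, and cost weights below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    accuracy: float    # benchmark score in [0, 1]
    size_gb: float     # model size on disk
    latency_ms: float  # per-example inference time

def leaderboard_utility(m: Model) -> float:
    # The leaderboard "consumer" sees only accuracy; costs are invisible to it.
    return m.accuracy

def practitioner_utility(m: Model, w_size: float = 0.01, w_latency: float = 0.001) -> float:
    # Hypothetical cost weights; a real practitioner would set these
    # according to their own deployment constraints.
    return m.accuracy - w_size * m.size_gb - w_latency * m.latency_ms

models = [
    Model("big-accurate", accuracy=0.92, size_gb=11.0, latency_ms=300.0),
    Model("small-efficient", accuracy=0.89, size_gb=0.5, latency_ms=15.0),
]

# The two rankings diverge: the leaderboard prefers the big model,
# the practitioner prefers the small one once costs are charged.
print(max(models, key=leaderboard_utility).name)    # big-accurate
print(max(models, key=practitioner_utility).name)   # small-efficient
```

The point is not the particular weights but that the leaderboard's ranking is invariant to costs (size, latency, energy) that only the practitioner must bear, which is exactly why the paper calls leaderboards poor proxies for practitioner utility.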

DeepMind and other companies are moving away from leaderboards. What gets measured gets managed, and leaderboards create strange incentives. They work well for a ground-truth problem like protein folding (CASP), but they aren't nuanced enough to capture non-goals and important ethical considerations.