🌸 BigCodeBench Leaderboard 🌸

BigCodeBench evaluates LLMs with practical and challenging programming tasks.


πŸ“ Notes

  1. Evaluated using BigCodeBench version 0.1.0.
  2. Models are ranked according to (calibrated) Pass@1 using greedy decoding (see the estimator sketch after these notes). Setup details can be found here.
  3. Complete vs Instruct:
    Complete: Code Completion based on the (verbose) structured docstring. This variant tests whether models are good at coding.
    Instruct (🔥Vibe Check🔥): Code Generation based on the (less verbose) NL-oriented instructions. This variant tests whether models can truly understand human intent and turn it into code.
  4. Wondering about the relative performance among models, or the current progress on task solve rates? Check out the 🤗 Hugging Face Leaderboard!
  5. 💀 indicates models with at least a 1% difference between the calibrated Pass@1 and the original one. What does this imply? Instruction-tuned models can be lazy, omitting essential code parts and thus failing some tasks. Therefore, we add the missing parts back during evaluation and report the calibrated Pass@1 as the default score (see the calibration sketch after these notes).
  6. ✨ marks models evaluated in a chat setting, while the others perform direct code completion. We note that some instruction-tuned models are missing the chat template in their tokenizer configuration.
  7. Model providers have the responsibility to avoid data contamination. Models trained on closed data can be affected by contamination.
  8. 💚 means open weights and open data. 💙 means open weights and open SFT data, but the base model is not data-open. What does this imply? 💚💙 models open-source their data, so one can concretely reason about contamination.
  9. "Size" here is the amount of activated model weight during inference.

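The calibration step from note 5 can be pictured as follows: when an instruction-tuned model returns only a function body and omits the imports or signature it was shown, the evaluator restores the missing code prefix before running the tests. This is a minimal sketch under that assumption; `calibrate`, `scaffold`, and `task_func` are hypothetical names, not BigCodeBench internals.

```python
def calibrate(completion: str, scaffold: str) -> str:
    """Restore the task's code prefix (imports + signature) if the model
    omitted it; otherwise leave the completion untouched. Illustrative only."""
    if "def task_func" not in completion:  # hypothetical entry-point name
        return scaffold + completion
    return completion

# Example: the model answered with only the (indented) function body.
scaffold = "import re\n\ndef task_func(text):\n"     # verbatim prompt prefix
body = "    return re.findall(r'\\w+', text)\n"      # model output
runnable = calibrate(body, scaffold)                 # this is what gets executed
```

Scores computed with and without this step yield the calibrated and original Pass@1; a gap of at least 1% earns the 💀 marker above.
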
🤗 More Leaderboards

In addition to the BigCodeBench leaderboard, we recommend building a comprehensive picture of LLM coding ability from a diverse set of benchmarks and leaderboards.

πŸ™ Acknowledgements

  • We thank the EvalPlus team for providing the leaderboard template.
  • We are grateful for the significant contributions from the BigCode community.