
Context Collection Competition

Co-organized by JetBrains and Mistral AI

In AI-enabled IDEs, code completion quality heavily depends on how well the IDE understands the surrounding code – the context.

That context is everything, and we have asked the community to help us find the best way to collect it.

The participants have a chance to present their strategy at the dedicated workshop at ASE 2025.

Results

We are very grateful to all participants for their involvement, enthusiasm, and creativity.

The top teams are listed below.

More details on Discord

Goal of the competition

Our experiments at JetBrains Research show that context plays an important role in the quality of code completion.

The objective of the competition is to create a context collection strategy that supplements the provided completion points with useful information from across the whole repository.
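As an illustration of what such a strategy might look like, here is a hypothetical baseline sketch: rank repository files by identifier overlap with the completion prefix and concatenate the best matches as context. The function name `collect_context` and all parameters are our own illustrative choices, not part of the competition's starter kit or API.

```python
import re
from pathlib import Path


def collect_context(repo_root: str, prefix: str, top_k: int = 3,
                    suffixes: tuple = (".py",)) -> str:
    """Hypothetical baseline: rank repository files by how many identifiers
    they share with the completion prefix, and return the top matches
    concatenated as a single context string."""
    def tokenize(text: str) -> set:
        # Extract identifier-like tokens (letters/underscore, 2+ chars).
        return set(re.findall(r"[A-Za-z_]\w+", text))

    query = tokenize(prefix)
    scored = []
    for path in Path(repo_root).rglob("*"):
        if path.suffix not in suffixes or not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        overlap = len(query & tokenize(text))
        if overlap:
            scored.append((overlap, str(path), text))

    # Highest overlap first; keep only the top_k files.
    scored.sort(key=lambda item: item[0], reverse=True)
    return "\n\n".join(f"# file: {p}\n{t}" for _, p, t in scored[:top_k])
```

Real submissions would likely go well beyond lexical overlap (dependency graphs, embeddings, recency), but the shape is the same: given a completion point, select and order repository content to prepend to the model's prompt.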

The strategy should maximise the chrF score averaged across three strong code models.
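For intuition, chrF is a character n-gram F-score (with recall weighted more heavily than precision). The dependency-free sketch below captures the core idea; the competition's official scoring may use a library implementation such as sacrebleu, whose defaults (e.g. whitespace handling) can differ from this simplification.

```python
from collections import Counter


def chrf(reference: str, hypothesis: str, max_n: int = 6,
         beta: float = 2.0) -> float:
    """Simplified chrF: average character n-gram precision/recall for
    n = 1..max_n, combined into an F-beta score (beta=2 favours recall)."""
    def ngrams(text: str, n: int) -> Counter:
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        ref, hyp = ngrams(reference, n), ngrams(hypothesis, n)
        matched = sum((ref & hyp).values())  # clipped n-gram overlap
        if hyp:
            precisions.append(matched / sum(hyp.values()))
        if ref:
            recalls.append(matched / sum(ref.values()))

    if not precisions or not recalls:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

An exact match scores 1.0, completely disjoint strings score 0.0, and partial overlaps fall in between, which is why the metric rewards completions that are close to the ground truth even when they are not identical.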

Competition tracks

The competition includes two tracks with the same problem, but in different programming languages.

  • Python: A popular target for many novel AI-based programming assistance techniques due to its very wide user base.
  • Kotlin: A modern statically typed language with historically good support in JetBrains products, but with less attention from the research community.

We are especially excited about universal solutions that work across both dynamic (Python) and static (Kotlin) typing systems.

Prizes

The top teams will get a share of the 12,000 USD prize pool.

Monetary prizes

Each track awards prizes to the top three teams.

  • 🥇 1st place: USD 3,000
  • 🥈 2nd place: USD 2,000
  • 🥉 3rd place: USD 1,000

Extras

  • 🎁 A one-year JetBrains All Products Pack license for every member of the top teams.
  • 🔑 Credits granted to your account on La Plateforme, for you to use however you like.
  • We will also cover the registration fee at the dedicated ASE 2025 workshop for a representative from each top team.

ASE 2025 workshop

We are hosting a workshop dedicated to the competition and the broader topic of context collection at ASE 2025 in Seoul in November 2025.

The workshop will serve as a venue for the winners to present their solutions to the competition problem, share their experience, and build connections.

All solutions outperforming a reasonable baseline provided by us will qualify for a workshop submission.

Important dates

  • June 2, 2025: competition opens and practice phase begins
  • June 9, 2025: public phase begins
  • July 25, 2025: public phase ends, private phase begins
  • July 25, 2025: solution paper submission opens
  • (new) August 18, 2025: preliminary results announced
  • (updated) August 21, 2025: private phase submission deadline
  • (updated) August 25, 2025: final results announced
  • (updated) September 1, 2025: solution paper submission closes
  • November 2025: solutions presented at the workshop

Getting started

Go to the competition page on EvalAI to see the results of the public phase.

You are welcome to fork our starter kit to develop your solutions.

By participating in the competition, you agree to its terms and conditions.