Context Collection Competition by JetBrains and Mistral AI
Build smarter code completions and compete for a share of USD 12,000!

In AI-enabled IDEs, code completion quality heavily depends on how well the IDE understands the surrounding code, i.e. the context. That context is everything, and we want your help to find the best way to collect it.

Join JetBrains and Mistral AI at the Context Collection Competition. Show us your best strategy for gathering code context, and compete for your share of USD 12,000 in prizes and a chance to present it at the workshop at ASE 2025.

Why context matters

Code completion predicts what a developer will write next based on the current code. Our experiments at JetBrains Research show that context plays an important role in the quality of code completion. This is a hot topic in software engineering research, and we believe it's a great time to push the boundaries even further.

Goal and tracks

The goal of our competition is to create a context collection strategy that supplements the given completion points with useful information from across the whole repository.
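To make the task concrete, here is a deliberately naive baseline for such a strategy: rank repository files by lexical overlap with the code around the completion point and concatenate the best matches as extra context. The function name, scoring, and file layout here are illustrative, not part of the competition's reference implementation.

```python
from pathlib import Path


def collect_context(repo_root: str, prefix: str, top_k: int = 3, max_chars: int = 2000) -> str:
    """Toy context collector: score each .py file in the repository by
    token overlap with the completion prefix, then concatenate the
    top_k best-matching files (truncated) as supplementary context."""
    query = set(prefix.split())
    scored = []
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        # Lexical overlap between the prefix tokens and the file tokens.
        score = len(query & set(text.split()))
        scored.append((score, path.name, text))
    scored.sort(reverse=True)  # highest-overlap files first
    parts = [f"# file: {name}\n{text[:max_chars]}" for _, name, text in scored[:top_k]]
    return "\n\n".join(parts)
```

Real submissions would of course go far beyond token overlap (dependency graphs, embeddings, type information), but even a ranking this simple illustrates the input/output shape of the task: completion point in, repository-wide context string out.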
The strategy should maximize the chrF score averaged across three strong code models: Mellum by JetBrains, Codestral by Mistral AI, and Qwen2.5-Coder by Alibaba Cloud.

The competition includes two tracks with the same problem in different programming languages:

- Python: A popular target for many novel AI-based programming assistance techniques due to its very wide user base.
- Kotlin: A modern statically typed language with historically good support in JetBrains products, but less attention from the research community.

We're especially excited about universal solutions that work across both dynamic (Python) and static (Kotlin) typing systems.

Prizes

Each track awards prizes to the top three teams:

- 1st place: USD 3,000
- 2nd place: USD 2,000
- 3rd place: USD 1,000

That's a USD 12,000 prize pool, plus free ASE 2025 workshop registration for a representative from each top team.

Top teams will also receive:

- A one-year JetBrains All Products Pack license for every team member (12 IDEs, 3 extensions, 2 profilers; worth USD 289 for individual use).
- USD 2,000 granted on La Plateforme, for you to use however you like.

Join the competition

The competition is hosted on Eval.AI. Get started here: https://jb.gg/co4. We have also released a starter kit to help you hit the ground running: https://github.com/JetBrains-Research/ase2025-starter-kit.

Key dates:

- June 2, 2025: competition opens
- June 9, 2025: public phase begins
- July 25, 2025: public phase ends
- July 25, 2025: private phase begins
- July 25, 2025: solution paper submission opens
- August 18, 2025: private phase ends
- August 18, 2025: final results announced
- August 26, 2025: solution paper submission closes
- November 2025: solutions presented at the workshop

By participating in the competition, you indicate your agreement to its terms and conditions.
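For local experimentation, the chrF metric used for scoring can be sketched in a few lines: it is a character n-gram F-score (F-beta with beta = 2, averaged over n-gram orders 1 to 6). This is a simplified sketch; the whitespace handling and defaults only approximate standard implementations such as sacrebleu, which is what you should use to reproduce official numbers.

```python
from collections import Counter


def char_ngrams(text: str, n: int) -> Counter:
    # Character n-grams; like common chrF implementations,
    # we strip spaces before counting (a simplification here).
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))


def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: F-beta over character n-gram precision/recall,
    averaged across n-gram orders 1..max_n."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        hyp_total, ref_total = sum(hyp.values()), sum(ref.values())
        precisions.append(overlap / hyp_total if hyp_total else 0.0)
        recalls.append(overlap / ref_total if ref_total else 0.0)
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)


# The competition score averages chrF over several models' completions
# (the model names and outputs below are purely illustrative):
model_outputs = {
    "model_a": "def add(a, b): return a + b",
    "model_b": "def add(a, b):\n    return a + b",
}
reference = "def add(a, b):\n    return a + b"
avg_score = sum(chrf(out, reference) for out in model_outputs.values()) / len(model_outputs)
```

In the competition itself, the three models generate completions given your collected context, and the averaged chrF against the ground-truth code determines your ranking.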