
Applied Research Division

We transform research into product value in a changing development landscape.

This page presents an overview of our teams in the Applied Research (AR) Division and our current projects.

Context engineering and MCP tools

Context and MCP tools engineering

The aim of this research area is to make LLMs better at understanding code through context engineering and MCP tool usage.
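
For illustration, the snippet below sketches what exposing an IDE capability as an MCP tool can look like. It uses the official MCP Python SDK; the server name and the find_usages tool are hypothetical stand-ins, not anything shipped in IntelliJ-based IDEs.

    # Toy MCP server exposing one hypothetical IDE-style tool.
    # Requires the official MCP Python SDK: pip install mcp
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("toy-ide-tools")  # hypothetical server name

    @mcp.tool()
    def find_usages(symbol: str, project_root: str) -> list[str]:
        """Return source locations mentioning `symbol` (plain text search,
        standing in for a real IDE index lookup)."""
        hits = []
        for path in Path(project_root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if symbol in line:
                    hits.append(f"{path}:{lineno}: {line.strip()}")
        return hits

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio so an agent can call it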

Ongoing projects:

  • IntelliJ MCP tools selection: We strive to enable agentic frameworks to use IDE tools effectively.
  • Context-aware code retrieval: We boost code generation by using context-aware embeddings.

Runtime traces for SWE agents

This research avenue explores approaches to searching through runtime information to find relevant fragments and provide them to a given agent. We also extend tools for collecting runtime information, taking performance and memory requirements into account.
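
As a toy illustration of the "find relevant fragments" step (not the team's actual implementation), already-collected trace lines can be scored against an agent's question and only the best matches passed along:

    # Toy relevance filter over collected runtime trace lines.
    # Real traces and scoring are far richer; token overlap is a stand-in.
    def top_trace_fragments(trace_lines: list[str], query: str, k: int = 5) -> list[str]:
        query_tokens = set(query.lower().split())

        def score(line: str) -> int:
            return len(query_tokens & set(line.lower().split()))

        ranked = sorted(trace_lines, key=score, reverse=True)
        return [line for line in ranked[:k] if score(line) > 0]

    # Example: pick the trace lines worth adding to an agent's context window.
    trace = [
        "CALL parse_config(path='app.yaml')",
        "RETURN parse_config -> {'retries': 3}",
        "CALL send_request(url='https://example.com', retries=3)",
        "EXCEPTION TimeoutError in send_request",
    ]
    print(top_trace_fragments(trace, "why did send_request raise TimeoutError"))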

Ongoing project:

  • Execution traces for Junie: We optimize Junie with runtime traces.

Test generation and maintenance

This area of our research involves exploring different approaches to increasing the quality of test generation using AI. We also extend this work to other stages of the software development process, such as keeping tests up to date after changes in the production code.
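
To make the test-maintenance idea concrete, here is a heavily simplified repair loop; propose_test_fix and apply_patch are hypothetical stand-ins for an LLM-backed agent, not real APIs, and the loop assumes pytest is installed.

    # Toy test-repair loop: run the suite, hand failures to a model, re-run.
    import subprocess

    def run_tests() -> tuple[bool, str]:
        result = subprocess.run(["pytest", "-x", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def propose_test_fix(failure_log: str) -> str | None:
        # In the real setting, an agent would turn the failure log into a patch.
        return None

    def apply_patch(patch: str) -> None:
        # Hypothetical helper that would edit the affected test files.
        pass

    def repair(max_attempts: int = 3) -> bool:
        for _ in range(max_attempts):
            ok, log = run_tests()
            if ok:
                return True
            patch = propose_test_fix(log)
            if patch is None:
                return False
            apply_patch(patch)
        return run_tests()[0]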

Ongoing project:

  • Test repair: We implement test-repair agents and compare them with general-purpose SWE agents.

Find out more on the team page.

LLMs and AI agents as building blocks

Analysis and optimization of agents

This research track looks into new methods for analyzing AI agent traces and for optimizing prompts, agent topologies, and tools, improving on current methods that require sifting through extensive agent traces and evaluation results.
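
As a toy illustration of the prompt-optimization direction (see the Auto-optimizing agents project below), an evolutionary loop keeps the best-scoring prompt variants and mutates them. Here evaluate() is a random stand-in for a real benchmark run, and mutation is a trivial phrase swap where a real system would use an LLM.

    # Toy evolutionary loop over prompt variants.
    import random

    PHRASES = ["Think step by step.", "Cite the relevant code.", "Answer concisely."]

    def evaluate(prompt: str) -> float:
        return random.random()  # stand-in for scoring the agent on an eval set

    def mutate(prompt: str) -> str:
        return prompt + " " + random.choice(PHRASES)

    def evolve(seed_prompt: str, generations: int = 5, population_size: int = 8) -> str:
        population = [seed_prompt] + [mutate(seed_prompt) for _ in range(population_size - 1)]
        for _ in range(generations):
            scored = sorted(population, key=evaluate, reverse=True)
            survivors = scored[: population_size // 2]   # selection
            children = [mutate(p) for p in survivors]    # variation
            population = survivors + children
        return max(population, key=evaluate)

    print(evolve("You are a coding assistant."))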

Ongoing projects:

  • Analysis of AI agents: We apply pattern mining to AI agent traces.
  • Auto-optimizing agents: We use evolutionary algorithms to optimize prompts and agents.

AI evaluation and benchmarks

Our research in this area involves exploring various techniques and developing tools that make evaluation simpler, more accessible, and more robust for everyone working with AI-powered functionality.

Ongoing projects:

  • Benchmark collection agent: We use AI agents to mine SWE benchmarks from GitHub.
  • Anonymization for SWE benchmarks: We apply metamorphic testing to reduce data leakage in benchmarks (a toy sketch follows this list).
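
One simple metamorphic transformation is behavior-preserving identifier renaming, which makes it harder for a model to rely on memorized benchmark code. The sketch below only renames function definitions with Python's ast module; a real pipeline would also rename call sites, classes, and variables.

    # Toy metamorphic transform: rename function definitions.
    import ast

    class RenameFunctions(ast.NodeTransformer):
        def __init__(self) -> None:
            self.counter = 0
            self.mapping: dict[str, str] = {}

        def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.FunctionDef:
            self.counter += 1
            self.mapping[node.name] = f"func_{self.counter}"
            node.name = self.mapping[node.name]
            self.generic_visit(node)
            return node

    source = "def fetch_user(uid):\n    return db[uid]\n"
    renamer = RenameFunctions()
    print(ast.unparse(renamer.visit(ast.parse(source))))  # def func_1(uid): ...
    print(renamer.mapping)  # keep the mapping so results can be traced back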

New generation IDE features

Intent-driven debugging

In this research area, we create a high-level debugging approach so that developers and coding agents can assess program behavior without relying on low-level stepping controls.
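
A bare-bones way to picture the tracepoint idea (purely illustrative, not the plugin's implementation): record a snapshot whenever execution passes a registered source line, then browse the recording instead of stepping.

    # Toy tracepoint recorder built on sys.settrace.
    import sys

    TRACEPOINTS: set[tuple[str, int]] = set()   # (file basename, line number) pairs to watch
    RECORDING: list[dict] = []                  # snapshots captured at those points

    def _local_trace(frame, event, arg):
        if event == "line":
            key = (frame.f_code.co_filename.rsplit("/", 1)[-1], frame.f_lineno)
            if key in TRACEPOINTS:
                RECORDING.append({"where": key, "locals": dict(frame.f_locals)})
        return _local_trace

    def _global_trace(frame, event, arg):
        # Hand line-level tracing to each newly entered frame.
        return _local_trace if event == "call" else None

    def record(fn, *args):
        """Run fn under tracing; afterwards RECORDING can be browsed like a timeline."""
        sys.settrace(_global_trace)
        try:
            return fn(*args)
        finally:
            sys.settrace(None)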

Ongoing project:

  • Trace recorder plugin for IntelliJ IDEA: We record and navigate through tracepoints.

Collaborate with us

We are open to collaborating with other researchers from both academia and industry.

If you’re interested in working with us on any of the above projects, please reach out!