We fast-track research to product value in a changing development landscape.
This page presents an overview of our teams in the Applied Research (AR) Division and our current projects.
Making LLMs better at understanding code through context engineering and Model Context Protocol (MCP) tool usage.
Ongoing projects:
Explore approaches to searching through runtime information to find relevant fragments and provide them to an agent. Extend tools for collecting runtime information, taking performance and memory requirements into account.
Ongoing project:
Explore different approaches to increasing the quality of test generation with AI. Extend this work to other stages of software development, such as keeping tests up to date after changes in the production code.
Ongoing project:
Find out more on the team page.
Researching methods for analyzing AI agent traces and optimizing prompts, agent topologies, and tools, a process that currently requires sifting through extensive agent traces and evaluation results.
Ongoing projects:
Explore various techniques and develop tools to make evaluation simpler, more accessible, and more robust for everyone working with AI-powered functionality.
Ongoing projects:
Creating a high-level debugging approach so that developers and coding agents can easily assess program behavior without relying on low-level stepping controls.
Ongoing project:
We are open to collaborating with other researchers from both academia and industry.
If you are interested in collaborating on any of the projects above, please reach out!