The Developer Ecosystem Report 2025 is a public report. Its contents may be used only for non-commercial purposes, as described here.
We included incomplete responses only when the question about the use of programming languages was answered. We also used a set of 34 criteria to identify and exclude suspicious responses, such as:
This year's survey consisted of 585 questions.
Our goal was to cover a variety of research areas, so each respondent was shown certain sections but not others, based on their previous answers. For example, questions about Go were shown only to programmers who use Go. In addition, we randomized which questions and sections each respondent received to further reduce the load on any one person.
On average, participants spent 30 minutes completing the survey. While we have made efforts to streamline the process, we aim to make it even more efficient next year.
We invited potential respondents using Google Ads, X ads, Facebook ads, Instagram, Reddit, Quora, Bilibili, Maimai, Zhihu, dev.to, TLDR, IT Media, and JetBrains’ own communication channels. We also posted links in user groups and tech community channels and asked respondents to share the survey link with their peers.
We collected sufficiently large samples from 19 geographical regions. The 11 countries with the most developers – Brazil, Canada, China, France, Germany, India, Japan, South Korea, Spain, the United Kingdom, and the United States – formed their own individual regions. The remaining countries were grouped into eight additional regions as follows:
For each region, we collected at least 300 responses from external sources, such as ads or respondents’ referrals.
We weighted responses by their source. As our baseline dataset, we used the responses collected from external channels that are less biased toward JetBrains users, such as paid ads on X, Facebook, Instagram, and Quora, as well as respondents’ referrals. Then, for each respondent, we applied a three-stage weighting procedure to produce a more balanced view of the global developer population.
In the first stage, we took the responses of professional developers and working students who reached the survey via ads targeted at the 19 regions, together with the responses that came in through peer referrals. We then weighted these responses according to our estimates of the professional developer population in each of the 19 regions, so that the distribution of responses corresponded to the size of the professional developer population in each country.
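As a rough illustration, the sketch below shows how this kind of region weighting could be implemented in Python with pandas. The column names and population figures are placeholders for illustration only, not our actual data.

```python
import pandas as pd

# Hypothetical stage-1 sketch: weight external responses so that each region's
# share of the weighted sample matches its estimated share of professional developers.
responses = pd.DataFrame({
    "respondent_id": range(6),
    "region": ["US", "US", "India", "India", "Germany", "Germany"],
})

# Estimated professional developer populations per region (made-up numbers).
dev_population = pd.Series({"US": 4_400_000, "India": 5_800_000, "Germany": 1_100_000})

target_share = dev_population / dev_population.sum()        # population share per region
sample_share = responses["region"].value_counts(normalize=True)

# Weight = population share / observed share, applied to every respondent in the region.
region_weight = target_share / sample_share
responses["weight"] = responses["region"].map(region_weight)

# After weighting, each region's weighted share equals its population share.
print(responses.groupby("region")["weight"].sum() / responses["weight"].sum())
```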
In the second stage, we fixed the proportion of students and unemployed respondents at 17% in every country. We did this to stay consistent with the previous year’s methodology, as that is the only estimate of their population we have available.
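A minimal sketch of this adjustment, assuming the same illustrative weight and employment columns as above:

```python
import pandas as pd

STUDENT_SHARE = 0.17

def fix_student_share(df: pd.DataFrame) -> pd.DataFrame:
    """Within one region, rescale weights so students/unemployed carry 17% of the total.

    Assumes both groups are present in the region; column names are illustrative.
    """
    is_student = df["employment"].isin(["student", "unemployed"])
    total = df["weight"].sum()
    student_total = df.loc[is_student, "weight"].sum()

    df = df.copy()
    # Scale each group so its weighted share hits the target while the regional total is preserved.
    df.loc[is_student, "weight"] *= STUDENT_SHARE * total / student_total
    df.loc[~is_student, "weight"] *= (1 - STUDENT_SHARE) * total / (total - student_total)
    return df

# responses = responses.groupby("region", group_keys=False).apply(fix_student_share)
```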
By this point, we had a distribution of responses from external sources weighted both by region and employment status.
The third stage was more involved, as it required solving a system of equations. From the weighted responses, we calculated, for the developers in each region, the share of each employment status, the share of users of each of the 30+ programming languages, and the shares of those who answered “I currently use JetBrains products” and “I have never heard of JetBrains or its products”. Those shares became the constants in our equations.
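For illustration, such constants can be computed as weighted shares of the baseline dataset, along the lines of the hypothetical helper below (column names are again placeholders):

```python
def weighted_share(df, mask):
    """Weighted proportion of respondents matching a boolean mask."""
    return df.loc[mask, "weight"].sum() / df["weight"].sum()

# For one region's slice of the baseline data, e.g.:
# go_share      = weighted_share(region_df, region_df["languages"].apply(lambda langs: "Go" in langs))
# jb_user_share = weighted_share(region_df, region_df["jetbrains"] == "I currently use JetBrains products")
# never_heard   = weighted_share(region_df, region_df["jetbrains"] == "I have never heard of JetBrains or its products")
```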
The next step was to add two more groups of responses from other sources: responses from JetBrains’ internal communication channels (such as JetBrains social media accounts and our research panel) and responses from social network ad campaigns targeted at users of specific programming languages.
We composed a system of 30+ linear equations and inequalities that described:
To solve this system while keeping the variance of the weighting coefficients to a minimum (which is essential!), we used the dual method of Goldfarb and Idnani (1982, 1983), which allowed us to compute optimal individual weighting coefficients for all 23,262 respondents.
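As an illustration only, the sketch below sets up a comparable minimum-variance weighting problem and solves it with the Python quadprog package, which implements the dual method of Goldfarb and Idnani. The indicator matrix, target shares, and sample size are synthetic stand-ins, not our actual constraints.

```python
import numpy as np
import quadprog

rng = np.random.default_rng(42)
n = 200                                   # respondents (illustrative)

# A[k, i] = 1 if respondent i belongs to constrained group k
# (e.g. "uses Go", "is a student", "currently uses JetBrains products").
A = (rng.random((5, n)) < 0.3).astype(float)

# Target shares taken from the weighted baseline dataset (here: perturbed sample shares).
targets = A.mean(axis=1) * rng.uniform(0.9, 1.1, size=A.shape[0])

# Fixing the total weight to n turns "weighted share = target" into a linear equality:
#     sum_i A[k, i] * w_i = targets[k] * n
# Minimizing (1/2) * w^T w under a fixed total weight then minimizes the variance of the weights.
G = np.eye(n)                             # quadratic term: (1/2) w^T G w
a = np.zeros(n)                           # no linear term

eq_vectors = np.vstack([np.ones((1, n)), A])          # total weight + share constraints
eq_targets = np.concatenate([[float(n)], targets * n])
ineq_vectors = np.eye(n)                               # w_i >= 0
ineq_targets = np.zeros(n)

# quadprog solves: min 1/2 w^T G w - a^T w  s.t.  C^T w >= b, first `meq` rows as equalities.
C = np.vstack([eq_vectors, ineq_vectors]).T
b = np.concatenate([eq_targets, ineq_targets])
weights, *_ = quadprog.solve_qp(G, a, C, b, meq=len(eq_targets))

print("weight variance:", weights.var())
print("max share error:", np.abs(A @ weights / n - targets).max())
```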
Despite these measures, some bias is likely present, as JetBrains users might have been more willing, on average, to complete the survey. This year, we additionally corrected for that by reducing their representation in the dataset by 10%, i.e., multiplying their share of responses by 0.9.
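Expressed as code, this correction looks roughly like the following, again assuming the illustrative column names used above:

```python
import pandas as pd

def downweight_jetbrains_users(responses: pd.DataFrame, factor: float = 0.9) -> pd.DataFrame:
    """Multiply JetBrains users' weighted share by `factor`, keeping the total weight fixed."""
    responses = responses.copy()
    is_jb = responses["jetbrains"] == "I currently use JetBrains products"
    total = responses["weight"].sum()
    jb_share = responses.loc[is_jb, "weight"].sum() / total

    responses.loc[is_jb, "weight"] *= factor
    # Rescale everyone else so the total weight stays the same.
    responses.loc[~is_jb, "weight"] *= (1 - factor * jb_share) / (1 - jb_share)
    return responses
```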
As much as we try to control the survey distribution and apply smart weighting, the communities and the developer ecosystem are constantly evolving, and the possibility of some unexpected data fluctuations cannot be completely eliminated.