Dropbox has shared insights into how it uses the Cursor code editor to manage its extensive codebase: the company indexes over 550,000 files through Cursor and approves more than a million lines of code written by AI agents each month. This isn't an experiment; it's an operational workflow.
What's Actually Happening
Dropbox has integrated Cursor into its development cycle at the infrastructure level. The editor connects to the company's internal systems, providing it with access to the codebase, documentation, and change history. Essentially, the AI gains the context that a human working on the project for several months would typically acquire.
Cursor functions not just as an assistant suggesting the next line of code, but as a comprehensive agent: it can analyze tasks, propose changes across multiple files simultaneously, and generate tests and documentation. Engineers review the output and accept it if everything meets their standards.
A Million Lines – Is That a Lot or a Little?
A million lines a month is roughly a third of what a team of several dozen engineers writes by hand over the same period. However, the significance lies not in the quantity, but in the type of code being generated.
Typically, AI agents handle routine tasks such as API migrations, dependency updates, refactoring repetitive sections, and writing tests. These are tasks where the logic is clear, but the volume of work is substantial. An engineer might spend a day on such a task, whereas an agent might complete it in an hour. While these tasks were often postponed due to a lack of urgency, they are now addressed immediately.
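A migration of this shape is essentially a mechanical rewrite with clear rules, which is why it suits an agent. As a toy illustration (the API names `old_client.fetch` and `new_client.get` are invented here, not Dropbox's, and a real agent would rewrite the syntax tree rather than use a regex), the transformation applied across many files might look like:

```python
import re

# Hypothetical migration rule: replace calls to a deprecated
# old_client.fetch(...) API with the newer new_client.get(...).
MIGRATION = (re.compile(r"\bold_client\.fetch\("), "new_client.get(")

def migrate_source(source: str) -> str:
    """Apply the rewrite rule to one file's contents."""
    pattern, replacement = MIGRATION
    return pattern.sub(replacement, source)

before = "data = old_client.fetch('/users')\nother = helper(old_client.fetch(url))\n"
after = migrate_source(before)
print(after)
```

The rule itself is trivial; the value comes from applying it consistently across thousands of call sites, which is exactly the kind of high-volume, low-ambiguity work described above.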
Dropbox notes that since implementing Cursor, the merge speed of pull requests has increased, and the development cycle time has decreased. Put simply, code reaches production faster.
How It Works Technically
Cursor indexes the codebase, meaning it constructs an internal map of the project: identifying file locations, module connections, and design patterns. For small projects, this isn't an issue, but Dropbox deals with hundreds of thousands of files. Indexing such a vast volume requires robust infrastructure.
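Indexing at this scale amounts to building a searchable map from files to the symbols they define. A minimal sketch of the idea (this is an illustration of the concept, not Cursor's actual implementation, which also embeds code semantically):

```python
import ast
import tempfile
from pathlib import Path

def index_codebase(root: str) -> dict:
    """Map each Python file under root to the functions and
    classes it defines -- a crude stand-in for a project index."""
    index = {}
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files that do not parse
        index[str(path)] = [
            node.name
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        ]
    return index

# Tiny demo on a throwaway directory.
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "mod.py").write_text("def foo():\n    pass\n\nclass Bar:\n    pass\n")
    demo_index = index_codebase(tmp)
    symbols = next(iter(demo_index.values()))
    print(symbols)
```

At hundreds of thousands of files, the hard part is not this loop but keeping the index fresh as the codebase changes, which is why robust infrastructure is required.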
The company integrated the editor with its internal tools, including version control systems, CI/CD (continuous integration/continuous delivery), and knowledge bases. Cursor therefore understands not only the code but also its broader context: who authored a module, why, what bugs existed, and how it relates to the overall architecture. This enables it to generate code that aligns with the team's style and accounts for the project's history.
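One way to picture that integration is as a context object assembled for each file before the AI generates anything. The sketch below is hypothetical (the class, field names, and data sources are invented for illustration), but it captures the idea of combining code with version-control and knowledge-base metadata:

```python
from dataclasses import dataclass, field

@dataclass
class FileContext:
    """Context an AI assistant might assemble for one module --
    a hypothetical composite of code, history, and docs."""
    path: str
    source: str
    last_authors: list = field(default_factory=list)   # from version control
    linked_issues: list = field(default_factory=list)  # from the bug tracker
    design_notes: str = ""                             # from the knowledge base

    def to_prompt(self) -> str:
        """Flatten everything into a prompt-ready text block."""
        parts = [
            f"# File: {self.path}",
            f"Recent authors: {', '.join(self.last_authors) or 'unknown'}",
            f"Related issues: {', '.join(self.linked_issues) or 'none'}",
        ]
        if self.design_notes:
            parts.append(f"Design notes: {self.design_notes}")
        parts.append(self.source)
        return "\n".join(parts)

ctx = FileContext(
    path="sync/engine.py",
    source="def sync(): ...",
    last_authors=["alice", "bob"],
    linked_issues=["BUG-1423"],
)
print(ctx.to_prompt())
```

Feeding history and issue links alongside the code is what lets generated changes match the team's style and account for the project's past, rather than being written in a vacuum.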
What About Quality?
Dropbox doesn't indiscriminately accept all generated code; reviews, tests, and checks remain in place. Still, the fact that the company accepts a million lines per month suggests the quality is consistently acceptable. Otherwise, engineers would spend more time correcting errors than AI generation saves them.
A crucial point is that AI agents work on tasks with clear rules and existing examples. If an old API needs to be rewritten to a new one, and dozens of such examples already exist in the codebase, the agent will perform well. If a task requires architectural decisions or an understanding of business logic not present in the code, that remains the human's responsibility.
What This Changes for Development
Dropbox refers to this approach as "AI-native SDLC" – a software development life cycle inherently oriented towards AI – where AI is not just a tool but an integral part of the process. This does not mean that developers are becoming obsolete. Rather, their work is evolving: they spend less time on mechanical code editing and more on design, verification, and strategic decision-making.
For the industry, this suggests that AI code editors are transitioning from being a "handy tool" to a "systemic development component". Companies are beginning to build processes around them, rather than merely plugging them in as an add-on.
Open Questions
Dropbox has not disclosed all the specifics: for instance, what percentage of generated code is rejected during review, how much time is spent on verification, or which tasks agents are currently unable to solve. It's also unclear how this affects technical debt: if AI rapidly writes a large volume of code, are more problems accumulating that will surface later?
Another consideration is the dependence on infrastructure. Cursor requires indexing and integration with internal systems. This is viable for a large company with sufficient resources, but for smaller teams, such an approach might be excessive.
Regardless, Dropbox's experience demonstrates that AI agents can already handle a significant portion of routine work in real-world projects. And this isn't a future possibility – it's happening now.