Author: Denis Avetisyan
A new analysis reveals that professional software developers aren’t passively collaborating with AI agents, but actively directing them to improve both productivity and code quality.
Experienced developers leverage planning and supervision to control AI agents, rather than relying on free-form ‘vibe coding’, in order to maximize software development efficiency and maintain high standards.
While the promise of AI agents automating software development suggests a shift towards collaborative ‘vibe coding,’ experienced developers instead prioritize control and oversight. This paper, ‘Professional Software Developers Don’t Vibe, They Control: AI Agent Use for Coding in 2025’, investigates how professionals integrate these agents into their workflows, revealing a pragmatic approach focused on productivity gains without sacrificing software quality. Our findings demonstrate that developers leverage their expertise to strategically manage agent behavior, complementing, rather than replacing, established best practices. Ultimately, this raises the question of how agentic interfaces can best support, rather than disrupt, effective software development processes.
The Illusion of Autonomy: Agentic Coding and the Shifting Landscape
The landscape of software development is experiencing a fundamental shift driven by the emergence of agentic coding tools. These innovative systems, fueled by the capabilities of Large Language Models, represent a move beyond traditional code completion and automated assistance. Rather than simply suggesting lines of code, these agents can autonomously generate, test, and even debug entire functionalities based on high-level instructions. This transition signifies a potential paradigm shift, promising to augment developer productivity and tackle increasingly complex software challenges. The core principle involves empowering these agents with a degree of autonomy, allowing them to navigate the coding process with minimal direct oversight – a capability previously confined to human developers. While still in its nascent stages, the rise of agentic coding signals a future where artificial intelligence plays a more proactive and independent role in the creation of software.
The potential for agentic coding tools to automate substantial portions of the software development lifecycle is generating considerable excitement, yet translating this promise into practical workflow integration presents a significant challenge. While these tools, driven by large language models, demonstrate an ability to generate code, suggest improvements, and even debug, their seamless incorporation into established professional routines isn’t guaranteed. Developers currently grapple with issues of trust – verifying the accuracy and security of AI-generated code – and adaptability, needing to reshape existing processes to effectively leverage these new capabilities. The true impact of agentic coding hinges not just on what these tools can do, but on how developers learn to reliably and efficiently integrate them into complex, real-world projects, a process that demands careful study and iterative refinement.
Despite the increasing buzz surrounding AI-powered coding assistants, practical integration within professional software development remains limited, with current adoption rates hovering around 25% for weekly use. A comprehensive understanding of how developers are actually utilizing these agentic tools – beyond simple task completion – is therefore paramount. Investigating workflows, identifying common challenges, and discerning patterns of successful implementation will be key to unlocking the full potential of these systems. Without such insights, the risk of these tools becoming underutilized or even counterproductive looms large, hindering rather than accelerating the software development lifecycle. Focused research into real-world usage will enable refinement of these agents, addressing developer needs and maximizing return on investment.
Mapping the Developer’s Domain: Our Research Approach
This research utilized a mixed-methods approach, combining field observations with a qualitative survey to comprehensively analyze developer workflows. Field observations were conducted remotely via Zoom with thirteen experienced developers, allowing for contextual understanding of tool interaction within their typical work environments. Complementing these observations, a qualitative survey was distributed to ninety-nine experienced developers, providing broader insights and enabling the identification of patterns and themes across a larger participant pool. The combination of these methods facilitated a nuanced and detailed examination of developer practices, leveraging both the depth of observational data and the breadth of survey responses.
Data collection included remote observation sessions conducted via Zoom with 13 experienced developers. These sessions focused on observing participants interacting with agentic coding tools within their typical work environments, allowing for the capture of ecologically valid data regarding real-world workflows. The remote format enabled observation of developers irrespective of geographical location, while maintaining a view of their screen and, with consent, audio interaction to document tool usage and associated verbalizations. Observations were systematically recorded and documented for subsequent qualitative analysis.
The research methodology adhered to strict ethical guidelines, with the study protocol undergoing comprehensive review and receiving approval from the Institutional Review Board prior to the commencement of data collection. This ensured the protection of participant rights and welfare throughout the study. The survey component, completed by 99 experienced developers, provided a substantial sample for analysis and contributed to the robustness of the findings.
The Pragmatic Developer: Strategies for Agentic Coding
Experienced developers predominantly utilize a ‘Planning & Supervision’ strategy when working with AI-generated code, characterized by thorough review and validation before integration. This approach was observed to be the most common among participants in our study, indicating a preference for maintaining code quality and functional correctness. Data suggests developers prioritize productivity gains achievable through AI assistance, but not at the expense of reliability; they actively verify the AI’s output rather than accepting it uncritically. This behavior reinforces the finding that developers currently value a balance between development speed and software quality when incorporating AI tools into their workflow.
The observed preference for ‘Planning & Supervision’ in agentic coding workflows indicates a strong emphasis on maintaining software quality standards. Developers utilizing this strategy prioritize thorough review and validation of AI-generated code, even if it introduces a slightly slower development pace. This pragmatic approach to automation suggests a calculated trade-off: accepting a potential reduction in immediate velocity to ensure the resulting codebase meets established quality benchmarks and reduces the risk of bugs or technical debt. Data indicates this prioritization is common among experienced developers, highlighting a focus on long-term maintainability and reliability over simply maximizing lines of code produced per unit time.
A less frequently observed development style, termed ‘Vibe Coding,’ is characterized by a high degree of trust in AI-generated code with minimal subsequent review. While potentially increasing initial development velocity, this approach carries an inherent risk of reduced software quality due to the acceptance of potentially flawed or suboptimal code. Observations indicate that developers employing Vibe Coding prioritize speed of implementation over thorough validation, potentially leading to increased technical debt or functional errors that require later remediation.
The Limits of Automation: When AI Hits a Wall
Agentic coding tools demonstrate a clear aptitude for streamlining straightforward tasks within software development, offering substantial productivity gains for routine activities. These tools excel at automating repetitive coding patterns, such as generating boilerplate code, implementing simple functions, or refactoring well-defined code blocks. By handling these predictable operations, developers are freed to concentrate on more complex problem-solving, architectural design, and innovative features. This automation isn’t about replacing developers, but rather about augmenting their capabilities, allowing them to achieve more with their time and effort – effectively shifting the focus from tedious implementation to higher-level strategic thinking and creative design.
While agentic coding tools demonstrate notable efficiency gains on simpler programming tasks, their effectiveness drops sharply when confronted with complexity, necessitating considerable human oversight. A recent study examining experienced open source maintainers revealed a surprising outcome: utilizing these AI-powered tools actually slowed their overall workflow by 19%. This reduction in speed stems from the substantial time required to review, correct, and adapt the AI’s output, particularly when dealing with nuanced problems or intricate codebases. The findings suggest that, currently, these tools aren’t capable of autonomously handling sophisticated development challenges and, instead, function best when integrated into a workflow where a skilled developer retains significant control and provides critical validation.
A recent investigation into the practical application of agentic coding tools revealed a surprisingly low success rate; only 8% of system invocations culminated in a merged pull request, indicating limited autonomous functionality. This finding underscores a crucial point about the current capabilities of these technologies – they are more effectively positioned as developer assistants rather than complete replacements for human expertise. While agentic systems can automate portions of the coding process, substantial developer oversight remains necessary to ensure accuracy, maintain code quality, and navigate the complexities inherent in software development projects. The study suggests that focusing on augmenting human capabilities, rather than attempting full automation, represents a more realistic and productive path forward for integrating these tools into professional workflows.
The pursuit of seamless integration between developers and AI agents, the so-called ‘vibe coding’, feels predictably optimistic. The study highlights that experienced developers aren’t relinquishing control – they’re managing the chaos. It’s a pragmatic approach, acknowledging that even the most sophisticated large language models aren’t immune to producing… let’s call them ‘unexpected artifacts’. As Andrey Kolmogorov observed, “The most important discoveries are often the simplest.” This isn’t about finding a magical synergy; it’s about applying rigorous control to a probabilistic system. One suspects that future archaeologists will unearth layers of meticulously crafted prompts, not elegant code, proving that even in the age of AI, someone still has to tell the machine exactly what to do. The idea that production always finds a way to break things remains stubbornly true; at least it’s predictably unpredictable.
The Road Ahead
The observation that experienced developers control agentic coding, rather than ‘vibe’ with it, suggests a predictable arc. Each wave of automation promises liberation, but consistently delivers a new form of management. The tooling will undoubtedly improve – agents will become more adept at translating intent into executable code. Yet, the core challenge isn’t about what the agent can do, but about what a human must still verify. The current focus on prompting and orchestration will inevitably give way to more sophisticated planning layers – and, consequently, more complex failure modes.
Future research should abandon the pursuit of ‘seamless’ integration. A truly useful agent isn’t invisible; it’s transparent in its limitations. The field needs rigorous investigation into the cognitive load imposed by constant supervision, and the long-term effects of shifting from creation to curation. It’s unlikely anyone will solve the ‘software quality’ problem, but perhaps they can accurately measure the rate at which entropy increases in agent-assisted codebases.
Ultimately, the interesting question isn’t whether AI will replace developers, but what new forms of technical debt it will create. The legacy of tomorrow isn’t a lack of code, it’s a surplus of good intentions, carefully managed by a weary engineer. That, at least, is a constant worth predicting.
Original article: https://arxiv.org/pdf/2512.14012.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/
2025-12-17 12:01