Author: Denis Avetisyan
A new wave of artificial intelligence systems is transforming visual analytics, automating complex workflows and shifting the analyst’s role towards strategic oversight.

This review examines the emerging field of agentic visual analytics and its co-evolutionary framework of roles and workflows driven by large language models.
While visual analytics increasingly relies on automation, a comprehensive understanding of how this shift reshapes human roles remains elusive. This paper, ‘Exploring Agentic Visual Analytics: A Co-Evolutionary Framework of Roles and Workflows’, surveys 55 recent systems leveraging large language models to autonomously navigate the full analytical pipeline. Our analysis reveals a co-evolution between increasing agent autonomy and the necessary transition of human involvement from manual operation to strategic supervision, formalized through a role-workflow taxonomy aligning four key agentic roles with established VA stages. How can we best design these agentic systems to maximize analytical insight and ensure effective human-AI collaboration?
The Inevitable Shift: From Manual Sifting to Automated Insight
Historically, visual analytics has depended heavily on human interaction – analysts manually sifting through data representations to identify trends and anomalies. This process, while offering nuanced understanding, becomes a significant bottleneck when confronted with the sheer scale and intricacy of modern datasets. The cognitive limitations of human perception and working memory restrict the speed and thoroughness of exploration, potentially obscuring critical insights buried within complex visualizations. Consequently, reliance on manual exploration hinders the ability to respond rapidly to evolving data landscapes and capitalize on time-sensitive opportunities, demanding a shift towards more automated and intelligent analytical approaches.
The sheer scale of modern datasets, coupled with the speed at which they are generated, presents a significant challenge to traditional visual analytics methods. As data volume and velocity continue to increase, manual exploration becomes increasingly inefficient and prone to overlooking critical insights. Consequently, research is now heavily focused on developing automated assistance tools capable of filtering extraneous noise and proactively highlighting meaningful patterns. These systems employ algorithms – ranging from machine learning to statistical modeling – to identify anomalies, correlations, and trends that might otherwise remain hidden within the data’s complexity. This automation not only accelerates the discovery process but also allows analysts to focus on interpreting results and formulating strategic decisions, rather than being overwhelmed by the task of initial data sifting.
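The statistical end of this spectrum can be made concrete with a deliberately simple sketch: flagging points whose z-score exceeds a threshold. This is a stand-in for the richer anomaly-detection models such systems actually use; the function name and data are illustrative only.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Return indices of points whose z-score exceeds the threshold.

    A toy stand-in for the statistical-modeling approaches described
    above; real systems use far richer models.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# A flat series with one obvious spike at index 5.
series = [10, 11, 10, 12, 11, 95, 10, 11, 12, 10]
print(flag_anomalies(series, threshold=2.0))  # → [5]
```

Even this crude filter shows the division of labor the paragraph describes: the machine surfaces candidates, and the analyst's attention is spent on interpreting them.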
![Recent advances in agentic visual analytics (VA) systems demonstrate paradigm innovations including programming-based data representations, multi-modal knowledge integration, generation of complex visual structures, dynamic UI editing for intent-driven manipulation, and perception-driven optimization using multi-modal large language models [wu2024AutomatedDataVisualization,khanal2024FathomGPTNaturalLanguage,yang2024MatPlotAgentMethodEvaluation,chen2025codaagenticsystemscollaborative,vaithilingam2024DynaVisDynamicallySynthesized,ava2024].](https://arxiv.org/html/2604.15813v1/x4.png)
The Rise of the Agent: Automating Analytical Pipelines
Agentic Visual Analytics leverages Large Language Models (LLMs) to automate the traditional Visual Analytics Pipeline, encompassing stages from initial data processing through to final presentation of findings. This automation is achieved by deploying LLM-driven agents capable of executing tasks previously requiring manual intervention. These agents handle data ingestion, cleaning, transformation, analysis, and visualization generation without explicit, step-by-step programming. The intent is to reduce the cognitive load on analysts and accelerate the analytical workflow by providing a system that can autonomously navigate the entire analytical process, adapting to data characteristics and user goals.
Agentic Visual Analytics systems frequently employ the ReAct framework to facilitate a cyclical process of reasoning and action. ReAct allows agents to generate both natural language thoughts – representing intermediate reasoning steps – and actions that interact with an environment, such as querying data or generating visualizations. This iterative loop enables agents to refine their analytical approach based on observations from their actions; for example, an initial visualization may prompt the agent to re-evaluate its data query or apply a different analytical technique. By interleaving reasoning and action, these systems move beyond static analysis and towards dynamic, self-correcting exploration of data, ultimately improving the quality and relevance of insights generated.
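The thought → action → observation cycle can be sketched as a small loop. Everything here is hypothetical scaffolding, not a real agent API: `llm` is any callable that returns the model's next step as text, and `tools` maps action names to functions such as data queries or chart renderers.

```python
def run_react(llm, tools, question, max_steps=5):
    """Minimal ReAct-style loop: interleave model steps with tool calls.

    `llm` and `tools` are hypothetical placeholders, shown only to make
    the thought -> action -> observation cycle concrete.
    """
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)           # model emits a thought, action, or answer
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            name, _, arg = step.removeprefix("Action:").strip().partition(" ")
            observation = tools[name](arg)   # e.g. query data, render a chart
            transcript += f"Observation: {observation}\n"
    return None

# Scripted stand-in for the LLM so the loop runs offline.
_steps = iter(["Action: query sales_q2",
               "Final Answer: Q2 sales rose 12%"])
answer = run_react(lambda transcript: next(_steps),
                   {"query": lambda arg: f"42 rows for {arg}"},
                   "How did Q2 sales trend?")
print(answer)  # → Q2 sales rose 12%
```

The key property is that each observation is appended to the transcript, so the next model step can revise its approach based on what the previous action actually returned.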
This survey encompasses a systematic analysis of 55 agentic visual analytics systems developed and documented since 2023. These systems represent the current state of LLM-driven architectures in the field, with inclusion criteria focused on implementations that actively integrate large language models to automate aspects of the visual analytics pipeline. The selection process prioritized systems with publicly available details regarding their architecture, functionality, and demonstrated capabilities, allowing for comparative assessment of emerging trends and common design patterns within agentic visual analytics.
Agentic visual analytics systems are structured around four key functional abstractions: the Planner, Creator, Reviewer, and Context Manager. The Planner agent is responsible for interpreting user requests and breaking complex analytical goals down into discrete, executable tasks. Following the plan, the Creator agent generates the necessary visualizations and associated code to perform these tasks. The Reviewer agent assesses the outputs, identifying potential errors or areas for improvement and providing feedback to refine the analytical process. Finally, the Context Manager maintains the state of the analysis, storing intermediate results and ensuring consistency across all stages of the pipeline, enabling iterative refinement and adaptation.
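The four roles can be sketched as a pipeline of plain callables. In a real system each role would be backed by an LLM agent; the function signatures below are assumptions made for illustration, not an API from any surveyed system.

```python
def run_pipeline(planner, creator, reviewer, context, request, max_rounds=3):
    """Hypothetical sketch of the four-role loop described above."""
    tasks = planner(request)                  # Planner: goal -> discrete tasks
    for task in tasks:
        artifact = creator(task, context)     # Creator: task -> visualization/code
        for _ in range(max_rounds):
            feedback = reviewer(artifact)     # Reviewer: validate, suggest fixes
            if feedback is None:              # None means "accepted"
                break
            artifact = creator(feedback, context)
        context[task] = artifact              # Context Manager: persist state
    return context

# Demo with trivial stand-ins: the reviewer rejects the first draft of t1 once.
_rejected = {"viz(t1)"}
def _review(artifact):
    if artifact in _rejected:
        _rejected.discard(artifact)
        return "revise t1"
    return None

result = run_pipeline(planner=lambda req: ["t1", "t2"],
                      creator=lambda task, ctx: f"viz({task})",
                      reviewer=_review,
                      context={},
                      request="profile the dataset")
print(result)  # → {'t1': 'viz(revise t1)', 't2': 'viz(t2)'}
```

Note how the Reviewer's feedback is routed back into the Creator, and the Context Manager (here just a dict) is the only state shared across tasks.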
![The co-evolutionary framework demonstrates a reciprocal relationship where increasing AI agency, progressing from basic assistance to strategic orchestration, corresponds with a shift in human involvement from direct control to high-level oversight.](https://arxiv.org/html/2604.15813v1/x2.png)
Quality Through Reflection: The Loop of Continuous Improvement
The Reviewer Agent implements quality assurance within the Visual Analytics Pipeline by systematically validating the outputs generated at each stage. This validation process extends beyond simple error detection to include an assessment of analytical relevance and accuracy. Following validation, the agent delivers constructive feedback detailing identified issues and potential improvements. This feedback is then fed back into the pipeline, allowing for iterative refinement of analytical processes and outputs – establishing a reflective loop that continuously enhances the quality and reliability of visualizations and insights.
The implementation of an LLM-as-a-Judge component introduces an objective evaluation metric for generated visualizations, moving beyond subjective human assessment. This LLM analyzes visualizations based on pre-defined criteria including accuracy of data representation, clarity of visual encoding, and relevance to the originating analytical query. The LLM assigns a quality score and provides specific feedback regarding areas for improvement, such as inappropriate chart types or misleading visual scales. This automated assessment process allows for continuous refinement of the visualization generation pipeline, ensuring outputs consistently meet established quality standards and effectively communicate intended insights.
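The scoring half of such a judge can be sketched as rubric aggregation. In practice the per-criterion scores would come from an LLM prompted with the visualization and the rubric; here they are passed in directly, and the criteria names, weights, and threshold are illustrative assumptions.

```python
# Hypothetical rubric aggregation for an LLM-as-a-Judge step.
CRITERIA = ("accuracy", "clarity", "relevance")

def judge(scores, weights=None, threshold=3.5):
    """Combine 1-5 per-criterion scores into a verdict and feedback list."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    total = sum(scores[c] * weights[c] for c in CRITERIA) / sum(weights.values())
    feedback = [f"improve {c}" for c in CRITERIA if scores[c] < 3]
    return {"score": round(total, 2), "pass": total >= threshold, "feedback": feedback}

verdict = judge({"accuracy": 5, "clarity": 4, "relevance": 2})
print(verdict)  # → {'score': 3.67, 'pass': True, 'feedback': ['improve relevance']}
```

The structured verdict, rather than free-form prose, is what lets the feedback be fed mechanically back into the generation pipeline.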
The Context Manager Agent utilizes persistent memory to retain information across multiple analytical sessions, facilitating continuity in data exploration and analysis. This memory stores user interactions, previous analytical steps, and data characteristics, allowing the agent to maintain contextual awareness. By referencing this stored information, the agent can refine its understanding of user intent, anticipate future needs, and provide more relevant and accurate analytical outputs. This capability extends beyond single sessions, enabling the agent to learn from past interactions and improve its performance over time, ultimately enhancing the overall analytical workflow.
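Persistence across sessions can be illustrated with a toy key-value memory backed by a JSON file; the class name and storage format are assumptions for the sketch, where a real system would use richer stores (vector databases, interaction logs).

```python
import json
from pathlib import Path

class ContextMemory:
    """Toy persistent memory for a context-manager agent: facts survive
    across sessions by being written to a JSON file. Illustrative only."""

    def __init__(self, path):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key, value):
        self.facts[key] = value
        self.path.write_text(json.dumps(self.facts))

    def recall(self, key, default=None):
        return self.facts.get(key, default)

# Demo: a second "session" created from the same file recalls the fact.
import tempfile, os
demo_path = os.path.join(tempfile.mkdtemp(), "memory.json")
ContextMemory(demo_path).remember("preferred_chart", "bar")
print(ContextMemory(demo_path).recall("preferred_chart"))  # → bar
```

The second `ContextMemory` instance stands in for a fresh analytical session: nothing in process memory carries over, yet the stored preference is recovered.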
The integrated suite of agents – Reviewer, LLM-as-a-Judge, and Context Manager – collectively supports each stage of the Visual Analytics Pipeline. Beginning with Visual Mapping, the agents ensure data is appropriately translated into visual elements. During View Transformation, the LLM-as-a-Judge objectively assesses the quality and relevance of generated visualizations, while the Reviewer Agent provides constructive feedback. The Context Manager Agent maintains persistent memory throughout these stages, enabling the agents to refine their understanding of user intent and maintain continuity across analytical sessions, ultimately enhancing the entire pipeline from initial data representation to final visualization refinement.
Beyond Automation: Toward a Symbiotic Analytical Future
The future of data analysis lies in dissolving the boundaries between asking questions and exploring data directly. Multi-modal interaction achieves this by allowing users to combine the precision of natural language – posing questions like “Show me sales trends for Q2” – with the immediacy of visual manipulation. Instead of solely relying on pre-defined charts, a user can refine a visualization by simply dragging data points, highlighting specific regions, or applying filters directly on the graph, all while continuing the conversation with the analytical agent. This synergistic approach, combining spoken or typed requests with intuitive visual actions, creates a fluid and remarkably intuitive analytical experience, enabling faster insights and a more comprehensive understanding of complex datasets. It moves beyond simply receiving answers to actively discovering them through a dynamic interplay between language and vision.
The agentic system’s capacity for complex analysis hinges on its ability to understand data, not merely process it – and this understanding is significantly bolstered by Semantic Web technologies. These technologies, including Resource Description Framework (RDF) and Web Ontology Language (OWL), provide a standardized framework for representing data meaning, enabling the system to move beyond simple keyword matching to grasp the relationships between concepts. By explicitly defining data semantics, the system can perform sophisticated reasoning, infer new knowledge, and answer queries that require understanding context and nuance. This allows for more accurate insights, proactive problem-solving, and ultimately, a more intelligent and adaptable analytical experience, effectively transforming raw data into actionable intelligence through knowledge representation and automated reasoning.
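The flavor of this reasoning can be shown with a toy triple store standing in for the RDF/OWL machinery: it holds (subject, predicate, object) facts and computes the transitive closure of an `is_a` relation, inferring a fact that was never explicitly stated. The predicate name and facts are illustrative.

```python
def transitive_closure(triples, predicate="is_a"):
    """Infer the transitive closure of one predicate over (s, p, o) facts.

    A toy stand-in for ontology reasoning; real systems delegate this to
    RDF/OWL reasoners rather than hand-rolled loops.
    """
    closed = {(s, o) for s, p, o in triples if p == predicate}
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

facts = [
    ("quarterly_revenue", "is_a", "revenue"),
    ("revenue", "is_a", "financial_metric"),
]
# Inferred: quarterly_revenue is_a financial_metric, though never stated.
print(("quarterly_revenue", "financial_metric") in transitive_closure(facts))  # → True
```

This is the difference between keyword matching and understanding: a query about "financial metrics" can now retrieve quarterly revenue without the connection ever being written down.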
The culmination of complex data analysis often lies not in the numbers themselves, but in the ability to communicate their significance effectively. This presentation stage leverages the principles of data storytelling to translate raw analytical results into engaging and accessible narratives. By strategically combining visualizations, concise explanations, and a clear narrative arc, data storytelling moves beyond simply presenting information; it fosters understanding and drives action. This approach recognizes that audiences respond more readily to compelling stories than to abstract data points, enabling insights to resonate with diverse groups regardless of their technical expertise. Ultimately, the power of data storytelling lies in its capacity to transform analytical outputs into persuasive communication, facilitating informed decision-making and broader impact.
The agentic system’s analytical capabilities extend beyond pre-programmed responses through the implementation of reinforcement learning. This allows the system to iteratively refine its behavior based on user interactions, effectively learning which analytical approaches and presentation styles yield the most insightful and satisfying results for each individual. By treating user feedback as a reward signal, the system dynamically optimizes its decision-making process, personalizing the analytical experience over time. Consequently, the agent doesn’t simply present data; it adapts to a user’s preferences and analytical style, progressively enhancing the efficiency and impact of each interaction and ultimately fostering a more intuitive and productive relationship between the user and the data.
The pursuit of agentic visual analytics, as detailed in the study, reveals a predictable arc. Systems designed for automation aren’t simply constructed; they evolve, mirroring natural processes. This echoes Carl Friedrich Gauss’s observation: “I do not know what I appear to the world, but to myself I seem to be a person who has spent his life in contemplation.” The ‘contemplation’ lies in understanding that analytical workflows, once rigidly defined, now demand a shift towards strategic supervision: a recognition that true insight isn’t about commanding data, but observing the system’s self-correction as it navigates complexity. Every dependency built into these systems, every automated step, is a promise made to the past, and the framework’s strength lies in its capacity to adapt, to ‘fix itself’ over time.
What’s Next?
The pursuit of agentic visual analytics, as this survey demonstrates, isn’t about building better tools. It’s about cultivating a more complex dependency. The systems are growing outward, accumulating contextual memory and autonomy, but the fundamental challenges of trust and error propagation remain stubbornly unresolved. Each automated workflow, each delegated reasoning step, represents a narrowing of human oversight, a further entanglement in the machine’s logic. The celebrated shift toward ‘strategic supervision’ feels less like liberation and more like a redefinition of the point of failure.
The focus on large language models, while yielding impressive short-term gains, risks obscuring the underlying brittleness. These models are, at their core, pattern-completion engines, susceptible to subtle shifts in data distribution and prone to confabulation. The promise of truly agentic behavior – genuine adaptability and robust reasoning – demands more than scale. It necessitates a fundamental rethinking of how knowledge is represented, validated, and integrated within these systems.
It is tempting to envision a future where these agents seamlessly augment human intellect. But every connection, every automated step, creates a single point of systemic risk. The system doesn’t merely have a fate; it is its fate. The question, then, isn’t whether these agents will fail, but how spectacularly, and what dependencies will fall with them.
Original article: https://arxiv.org/pdf/2604.15813.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/