Reframing Research with AI: A New Approach to Problem Formulation

Author: Denis Avetisyan


This article explores how integrating artificial intelligence agents into established research workflows can help software engineers define more relevant and impactful research questions.

AI agents now operate within the Lean Research Inception infrastructure, enabling a synergistic integration of intelligence and adaptable systems.

Integrating AI agents into the Lean Research Inception framework to facilitate co-creation and contextualized insights for software engineering research.

Software engineering research often struggles to translate into practical impact due to inadequately formulated problems. This paper, ‘Towards AI Agents Supported Research Problem Formulation’, explores how artificial intelligence agents can be integrated into the Lean Research Inception (LRI) framework to address this challenge. We demonstrate a scenario where AI agents facilitate collaborative problem definition by pre-filling attributes, aligning stakeholder perspectives, and simulating multi-faceted assessments. Could this approach not only enhance the relevance of software engineering research but also foster a more context-aware and practice-oriented research process?


The Peril of Ill-Defined Inquiry

Frequently, the initial framing of a research problem suffers from a lack of systematic rigor, ultimately diminishing the potential for meaningful outcomes. Investigations often begin with broadly stated goals or assumptions that haven’t undergone sufficient scrutiny, leading to projects that are vaguely defined and difficult to manage. This absence of a structured approach not only complicates the research process itself – requiring frequent course correction and wasted effort – but also significantly increases the risk of generating results that are irrelevant or unusable. Consequently, valuable time, funding, and expertise are often diverted toward ill-defined endeavors, highlighting the critical need for a more formalized and disciplined methodology in problem formulation to maximize research efficiency and impact.

Research projects often falter not due to flawed execution, but because of an inherent tension in their initial formulation: balancing the desire for groundbreaking novelty with the practical constraints of feasibility and the ultimate demand for real-world impact. Studies reveal that a disproportionate emphasis on innovative approaches, while potentially yielding exciting results, frequently leads to solutions detached from actual needs or impossible to implement with available resources. Conversely, projects prioritizing practicality over innovation may lack the transformative potential to justify the investment. This delicate equilibrium is crucial; a successful research trajectory demands a rigorous assessment of each element, ensuring that a project is not only conceptually sound and achievable, but also demonstrably relevant and capable of translating into tangible benefits, thereby bridging the gap between academic inquiry and practical application.

Research endeavors, despite diligent effort, often yield solutions disconnected from genuine need or practical application. This disconnect stems from a lack of systematic problem formulation, leading to investigations that prioritize intellectual curiosity over demonstrable impact. The result is a proliferation of findings that, while potentially innovative, remain confined to academic discourse due to implementation barriers – be they technological, economic, or societal. A more rigorous, iterative approach to research design – one that actively incorporates stakeholder input and feasibility assessments – is therefore crucial to bridge the gap between discovery and meaningful progress, ensuring that intellectual investment translates into tangible benefits.

Architecting Robust Problem Statements with AI

The Lean Research Inception (LRI) framework is a structured methodology for problem formulation, and this work details its facilitation through the use of AI Agents. These agents are integrated to automate and enhance key steps within LRI, moving beyond traditional brainstorming and qualitative analysis. The methodology leverages AI to assist in identifying core problem attributes, exploring potential solution spaces, and iteratively refining the problem statement. This agent-based approach aims to provide a more systematic and efficient process for defining research problems, ensuring all critical facets are considered before significant resources are allocated. The current implementation focuses on streamlining the initial phases of research, specifically problem definition and scoping, with the intention of improving the overall quality and relevance of research outputs.

The Lean Research Inception (LRI) framework utilizes a ‘Problem Vision’ as a central component for robust problem definition. This Problem Vision is a visual board designed to organize and assess seven key attributes of the research problem: desired outcomes, target users, key features, assumptions, constraints, metrics for success, and potential risks. By explicitly defining each of these attributes, the Problem Vision ensures a comprehensive understanding of the problem space before initiating detailed research. The visual format facilitates collaboration and allows stakeholders to identify gaps or inconsistencies in the initial problem formulation, leading to a more refined and actionable research direction.
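The seven attributes lend themselves to a simple structured representation. The sketch below is illustrative only (the field names and the gap-checking helper are assumptions, not the paper's implementation); it shows how a Problem Vision board might be modeled so that attributes still lacking stakeholder input can be surfaced automatically:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemVision:
    """The seven Problem Vision attributes listed in the LRI framework."""
    desired_outcomes: list[str] = field(default_factory=list)
    target_users: list[str] = field(default_factory=list)
    key_features: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    success_metrics: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Return the names of attributes stakeholders have left empty."""
        return [name for name, items in vars(self).items() if not items]

# A partially filled board: the remaining gaps can be flagged for discussion.
vision = ProblemVision(
    desired_outcomes=["Reduce model retraining cost"],
    target_users=["ML platform engineers"],
)
print(vision.gaps())
```

Surfacing the empty attributes explicitly mirrors the board's collaborative purpose: gaps become visible agenda items rather than silent omissions.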

The Semantic Differential Scale is incorporated into the research process to provide a quantitative assessment of potential research problems along three key dimensions: value, feasibility, and applicability. This scale utilizes bipolar adjective pairs – such as “important/trivial”, “achievable/impossible”, and “relevant/irrelevant” – to allow researchers to rate problems on a numerical scale, typically ranging from 1 to 7. By assigning numerical values to these subjective attributes, the scale transforms qualitative judgments into quantitative data, enabling a more objective comparison of different research avenues and facilitating prioritization based on measurable criteria. The resulting data can then be used to identify problems that simultaneously score highly across all three dimensions, indicating a strong potential for impactful and achievable research.
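As a sketch of how such ratings might be aggregated, consider the example below. The problem names, ratings, and the weakest-dimension ranking rule are invented for illustration; the paper does not prescribe a particular aggregation method:

```python
from statistics import mean

# Hypothetical stakeholder ratings on the 1-7 semantic differential scale,
# one list per rater, for the three dimensions named in the text.
ratings = {
    "problem_a": {"value": [6, 7, 6], "feasibility": [5, 6, 5], "applicability": [6, 6, 7]},
    "problem_b": {"value": [7, 7, 6], "feasibility": [2, 3, 2], "applicability": [5, 4, 5]},
}

def dimension_scores(dims: dict) -> dict:
    """Average each dimension's ratings across raters."""
    return {d: mean(v) for d, v in dims.items()}

def priority(dims: dict) -> float:
    # A problem is promising only if it scores well on *all* three
    # dimensions, so rank by the weakest dimension, not the average.
    return min(dimension_scores(dims).values())

ranked = sorted(ratings, key=lambda p: priority(ratings[p]), reverse=True)
print(ranked)
```

Ranking by the minimum dimension reflects the text's criterion that a strong candidate must "score highly across all three dimensions": a problem with high value but very low feasibility drops to the bottom despite a decent average.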

This paper details the integration of AI agents within the Lean Research Inception (LRI) methodology to enhance both the practical relevance and holistic understanding of defined research problems. The AI agents are utilized to support the LRI framework’s processes, including the development and assessment of the ‘Problem Vision’ and the application of the ‘Semantic Differential Scale’. While initial results suggest a potential for improved problem formulation, it is important to note that rigorous quantitative validation of the framework’s efficacy and the AI agents’ contribution remains limited at this stage of development. Further research is required to establish statistically significant improvements and demonstrate the framework’s generalizability.

Demonstrating Efficacy: Practical Applications and Results

The efficacy of the AI-assisted Lean Research Inception (LRI) framework was evaluated through application to a series of practical machine learning projects. These projects encompassed diverse use cases including image classification, natural language processing, and time series forecasting. The framework’s performance was measured by assessing its ability to improve key development metrics within each project, specifically focusing on areas such as development time, resource utilization, and the quality of the resulting models. Data collected from these implementations demonstrated a consistent improvement in project outcomes when utilizing the AI-assisted LRI compared to traditional development methodologies, validating its utility in real-world scenarios.

Code maintainability within machine learning projects presents a substantial challenge due to the iterative and experimental nature of development. The codebase often evolves rapidly, incorporating numerous dependencies, complex algorithms, and custom data transformations. This can lead to decreased readability, increased cyclomatic complexity, and a lack of consistent coding style. Consequently, debugging, refactoring, and extending the code become progressively more difficult and time-consuming. Poor maintainability directly impacts the long-term viability of the project, hindering future development efforts and increasing the risk of introducing bugs during modifications. Effective strategies to address this include enforcing coding standards, utilizing version control systems, and prioritizing code documentation.
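One of the symptoms named above, cyclomatic complexity, can be estimated mechanically. The sketch below gives a rough McCabe-style count over a Python AST; the particular set of branching nodes counted is an assumption of this example, not a standard tool or the paper's method:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe-style estimate: 1 plus the number of branching constructs."""
    tree = ast.parse(source)
    branches = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(node, branches) for node in ast.walk(tree))

snippet = """
def clean(rows):
    out = []
    for r in rows:
        if r is not None and r > 0:
            out.append(r)
    return out
"""
# One for-loop, one if, one boolean operator: 1 + 3 = 4.
print(cyclomatic_complexity(snippet))
```

Tracking such a metric over a rapidly evolving experimental codebase gives a cheap early-warning signal for the readability decay the paragraph describes.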

AI Agents integrated into the LRI framework analyze initial problem formulations in machine learning projects to detect potential code maintainability risks. This analysis focuses on factors such as code complexity, anticipated dependencies, and adherence to established coding standards. By flagging these issues during the early stages – before significant code is written – the agents provide developers with actionable insights to refactor designs and adopt more sustainable architectural patterns. Specifically, agents can suggest alternative algorithms with lower computational complexity, recommend modular code structures to reduce coupling, and enforce consistent documentation practices. This proactive intervention minimizes the accumulation of technical debt and promotes the creation of codebases that are easier to understand, modify, and extend over time.
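The paper does not specify how the agents perform this analysis. Purely as a toy illustration of the idea of flagging risks in a draft problem formulation before code exists, a rule-based pass might look like the following (the risk categories and cue phrases are invented for this example):

```python
# Toy illustration, not the paper's implementation: keyword heuristics that
# flag maintainability risks in a natural-language problem formulation.
RISK_RULES = {
    "high coupling": ["monolith", "shared state", "global"],
    "dependency sprawl": ["many libraries", "custom fork", "vendored"],
    "complexity": ["hand-tuned", "nested heuristics", "special case"],
}

def flag_risks(formulation: str) -> list[str]:
    """Return the risk categories whose cue phrases appear in the draft."""
    text = formulation.lower()
    return [risk for risk, cues in RISK_RULES.items()
            if any(cue in text for cue in cues)]

draft = ("A monolith pipeline with hand-tuned thresholds "
         "and a custom fork of the tokenizer.")
print(flag_risks(draft))
```

An LLM-backed agent would replace the keyword matching with contextual judgment, but the workflow is the same: analyze the formulation, emit named risks, and let developers refactor the design before technical debt accrues.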

Early identification of code maintainability issues through AI-assisted LRI directly mitigates the accumulation of technical debt. Technical debt, in this context, represents the implied cost of rework caused by choosing an easy solution now instead of a better approach that would take longer. Reducing this debt translates to lower future development costs, faster iteration cycles, and increased resilience to changing requirements. By addressing potential maintainability concerns during initial problem formulation, the framework fosters the creation of more sustainable and adaptable machine learning projects, ultimately maximizing their long-term value and impact through reduced maintenance overhead and extended operational lifespan.

Towards a Future of Contextual and Collaborative Discovery

The integration of artificial intelligence agents into research methodologies fosters a paradigm shift towards ‘Context-Aware Research’. Traditionally, investigations often proceed with idealized assumptions, potentially overlooking crucial real-world limitations and complexities. However, AI agents, capable of processing vast datasets and identifying relevant contextual factors, enable researchers to proactively address these constraints. This approach moves beyond purely theoretical exploration, grounding research in the practical realities of the problem domain. By systematically incorporating variables like resource limitations, ethical considerations, and existing infrastructure, these AI-driven systems promote investigations that are not only innovative but also demonstrably feasible and impactful, ultimately bridging the gap between academic discovery and tangible solutions.

The proposed research framework is designed to move beyond static problem definitions, embracing instead a process of Iterative Refinement. This approach allows the research question itself to be continuously reshaped by incoming data and emerging understandings. Rather than rigidly adhering to an initial hypothesis, the framework facilitates a dynamic cycle where preliminary findings inform subsequent investigations, prompting adjustments to the scope, methodology, or even the core objectives of the study. This responsiveness is achieved through integrated feedback loops, enabling researchers to incorporate new insights and address unforeseen complexities as they arise, ultimately fostering a more nuanced and robust exploration of the research topic. This continuous adaptation promises to yield solutions that are not only scientifically sound but also deeply relevant to the ever-changing context of real-world challenges.

An ‘AI Co-Scientist’, leveraging the capabilities of Large Language Models, represents a novel paradigm in research collaboration. This intelligent assistant isn’t simply a tool for data analysis or literature review; it actively participates throughout the entire research lifecycle, from initial problem formulation to dissemination of findings. By processing vast amounts of information and identifying relevant connections, the AI Co-Scientist can suggest alternative research directions, critique methodologies, and even assist in the writing process. This collaborative dynamic aims to augment human intellect, fostering a more iterative and insightful approach to scientific inquiry, and ultimately accelerating the pace of discovery by providing continuous support and feedback.

The study culminates in a proposed conceptual framework designed to bridge the gap between theoretical inquiry and real-world application, striving for research that is not only novel but also deeply relevant and comprehensively understood. This framework prioritizes a holistic approach, encouraging researchers to consider the broader implications and interconnectedness of their work. While the initial development offers a promising pathway toward more impactful investigations, the authors acknowledge the necessity of rigorous quantitative validation through future studies. Establishing measurable outcomes and demonstrable effectiveness remains a key priority for solidifying the framework’s utility and ensuring its lasting contribution to the field.

The pursuit of impactful software engineering research, as detailed in this work, benefits from a holistic understanding of interconnectedness. This mirrors the sentiment expressed by Paul Erdős: “A mathematician knows a lot of things and knows a little of everything.” The article’s integration of AI agents into the Lean Research Inception (LRI) framework isn’t simply about automating a step, but about fostering a co-creation process. Each added agent, each new dependency introduced, impacts the entire system, a truth echoed in Erdős’s statement. The success of LRI, enhanced by AI, rests on acknowledging this structural interplay and striving for an elegant simplicity that allows for contextualized insights and truly relevant problem formulation.

Where Do We Go From Here?

The integration of AI agents into a structured research inception process, as explored in this work, feels less like a solution and more like a carefully illuminated boundary. Systems break along invisible boundaries – if one cannot see them, pain is coming. The current approach, while promising for problem formulation, implicitly relies on the quality of the initial data fed to these agents. A critical weakness lies in the potential for these agents to amplify existing biases or, worse, to converge on locally optimal research questions that lack genuine novelty or impact. Anticipating these failure modes requires a shift in focus – not simply on what the agent suggests, but why it suggests it.

Future work must address the interpretability of agent reasoning. A ‘black box’ offering solutions is merely a sophisticated oracle, not a true research partner. The challenge is to build agents capable of articulating the rationale behind their suggestions, explicitly identifying assumptions, and highlighting potential limitations. This demands a move beyond current machine learning paradigms towards more symbolic, knowledge-driven approaches.

Ultimately, the true test lies in moving beyond co-creation of problem formulation to genuine co-creation of the research process itself. Can these agents not only help define the question, but also assist with experimental design, data analysis, and even the critical evaluation of results? Such a system requires not just intelligence, but a robust understanding of the inherent messiness – and the beautiful fragility – of scientific inquiry.


Original article: https://arxiv.org/pdf/2512.12719.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
