Reasoning Capacity in Multi-Agent Systems: Limitations, Challenges and Human-Centered Solutions
CoRR (2024)
Abstract
The remarkable performance of large language models (LLMs) across a variety of tasks
brings forth many opportunities, as well as challenges, for utilizing them in
production settings. Towards practical adoption of LLMs, multi-agent systems
hold great promise to augment, integrate, and orchestrate LLMs in the larger
context of enterprise platforms that use existing proprietary data and models
to tackle complex real-world tasks. Despite the tremendous success of these
systems, current approaches rely on narrow, single-focus objectives for
optimization and evaluation, often overlooking potential constraints in
real-world scenarios, including restricted budgets, resources and time.
Furthermore, interpreting, analyzing, and debugging these systems requires
different components to be evaluated in relation to one another, which is not
feasible with existing methodologies. In this position paper, we
introduce the concept of reasoning capacity as a unifying criterion to enable
integration of constraints during optimization and establish connections among
different components within the system, which also enable a more holistic and
comprehensive approach to evaluation. We present a formal definition of
reasoning capacity and illustrate its utility in identifying limitations within
each component of the system. We then argue that these limitations can be
addressed with a self-reflective process wherein human feedback is used to
alleviate shortcomings in reasoning and enhance the overall consistency of the
system.