
prompt, code, data of paper "Getting LLM to think and act like a human being: Logical path reasoning and Replanning"


Getting LLM to think and act like a human being: Logical path reasoning and Replanning

Demo videos:

  • agent.in.Ctrip.com.mp4
  • agent.in.booking.com.mp4

We introduce a Replanning mechanism for LLM-based agents that dynamically integrates feedback from actions and implicit information that is not available in the initial thinking and reasoning framework, forming a bridge between the thinking and the acting of LLMs. Experiments on real-world ticket-booking websites such as Ctrip and Booking show that, compared with other agent methods, our method is more robust in executing clear instructions, successfully completes more steps, and achieves a higher success rate on practical tasks such as ticket booking, especially on challenging tasks that require interactive thinking and action from the LLM.

(Figure: Comparison)

Innovations

We propose three innovative mechanisms: implicit reasoning, Replanning, and Reaction. They make the following contributions (a minimal sketch of how the three mechanisms fit together is given after the figure below):

  1. Implicit reasoning based on logical path propagation, which integrates explicit and implicit information for reasoning, achieving a cognitive process closer to human-like thinking.

  2. Replanning, a mechanism based on action feedback that allows planning to dynamically integrate experience from successful and failed actions, linking these isolated, locally optimal decisions into globally optimal Replanning.

  3. Reaction and inductive deduction based on Replanning, which correct erroneous actions and explore new ones while reassessing and revising action strategies against the global information set, refining and preserving more general rules and strategies.

(Figure: Implicit reasoning)
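To make the interplay of the three mechanisms concrete, here is a minimal, self-contained sketch of a think-act-Replan loop. All names (implicit_reasoning, plan, act, replan, Step, Plan) and the booking example are illustrative assumptions for exposition only, not the repository's actual API.

```python
# Hedged sketch: one way implicit reasoning, Replanning, and Reaction could
# fit together in an agent loop. All names are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str
    succeeded: bool
    feedback: str

@dataclass
class Plan:
    steps: list = field(default_factory=list)

def implicit_reasoning(instruction: str, user_profile: dict) -> list[str]:
    """Expand the explicit instruction with implicit facts from the user's profile."""
    return [instruction] + [f"{k}={v}" for k, v in user_profile.items()]

def plan(facts: list[str]) -> Plan:
    """Initial plan built from explicit and implicit facts (stub)."""
    return Plan(steps=[f"resolve: {fact}" for fact in facts])

def act(step: str) -> Step:
    """Execute one step in the environment (stub: always succeeds)."""
    return Step(action=step, succeeded=True, feedback="ok")

def replan(current: Plan, history: list[Step]) -> Plan:
    """Fold action feedback into the plan: drop finished steps, retry failed ones."""
    done = {h.action for h in history if h.succeeded}
    retries = [f"{h.action} [retry: {h.feedback}]" for h in history if not h.succeeded]
    remaining = [s for s in current.steps if s not in done]
    return Plan(steps=retries + remaining)

def run(instruction: str, user_profile: dict, max_rounds: int = 5) -> list[Step]:
    facts = implicit_reasoning(instruction, user_profile)   # implicit reasoning
    current, history = plan(facts), []
    for _ in range(max_rounds):
        if not current.steps:
            break
        history.append(act(current.steps[0]))                # act on the next step
        current = replan(current, history)                   # Replanning after feedback
    return history

if __name__ == "__main__":
    for step in run("book a flight to Shanghai", {"seat": "window", "class": "economy"}):
        print(step)
```

In the paper's setting, acting drives a real website and Replanning is performed by the LLM; the stubs above only show the control flow.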

Our method

The framework is based on three innovative components: logic path reasoning based on Knowledge Graphs (KG), a single-action process incorporating environmental feedback and skill management, and task chain management that implements Replanning and Reaction.

Algorithm

1. The logic path reasoning component is based on a KG that captures implicit information from the user's identity, background, and preferences. The reasoning process involves named entity recognition and KG retrieval, path propagation, and mindmap generation. This approach transforms sparse, document-based knowledge repositories into a more effective knowledge-graph-based reasoning and rapid-response mechanism.

(Figure: Logic path reasoning)
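A rough illustration of this pipeline (named entity recognition, KG retrieval, path propagation, mindmap generation) is sketched below. The toy triples, the token-matching stand-in for NER, and the mindmap layout are assumptions made for this example, not the paper's actual data structures.

```python
# Hedged sketch of logic-path reasoning over a small in-memory knowledge graph.
from collections import defaultdict

# Toy KG: (head, relation, tail) triples about the user and their context.
TRIPLES = [
    ("user", "prefers", "window_seat"),
    ("user", "lives_in", "Beijing"),
    ("Beijing", "has_airport", "PEK"),
    ("window_seat", "is_a", "seat_preference"),
]

def build_graph(triples):
    graph = defaultdict(list)
    for head, rel, tail in triples:
        graph[head].append((rel, tail))
    return graph

def recognize_entities(query: str, graph) -> list[str]:
    """Trivial NER stand-in: keep query tokens that are KG nodes."""
    return [tok for tok in query.replace(",", " ").split() if tok in graph]

def propagate(graph, start: str, depth: int = 2) -> list[list[str]]:
    """Collect logical paths up to `depth` hops from a seed entity."""
    paths, frontier = [], [[start]]
    for _ in range(depth):
        next_frontier = []
        for path in frontier:
            for rel, tail in graph.get(path[-1], []):
                new_path = path + [rel, tail]
                paths.append(new_path)
                next_frontier.append(new_path)
        frontier = next_frontier
    return paths

def mindmap(query: str, graph) -> dict:
    """Group propagated paths by seed entity; this is the 'mindmap' handed to the LLM."""
    return {entity: propagate(graph, entity) for entity in recognize_entities(query, graph)}

if __name__ == "__main__":
    g = build_graph(TRIPLES)
    print(mindmap("book a flight for user from Beijing", g))
```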

2. The single-action process component is based on environmental feedback and skill management. It enables the interplay of Replanning and Reaction in the LLM's actions by treating each single action of the LLM as a step in a dynamic skill-management system driven by environmental feedback. The process involves environmental awareness, skill library management, and an error-handling mechanism based on environmental feedback.

(Figure: Single-action process)
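The following sketch shows one possible shape of such a single-action step: observe the environment, execute a skill from the library, and return structured feedback instead of raising on failure, so that Replanning can insert a corrective action. The Environment stub, the skill names, and the feedback format are assumptions for illustration only.

```python
# Hedged sketch of the single-action loop with a skill library and error handling.
from typing import Callable

class Environment:
    """Stand-in for a web page: exposes an observation and accepts actions."""
    def __init__(self):
        self.state = {"page": "search", "date_field_visible": False}

    def observe(self) -> dict:
        return dict(self.state)

    def apply(self, skill_name: str) -> tuple[bool, str]:
        if skill_name == "fill_date" and not self.state["date_field_visible"]:
            return False, "date field not present on this page"
        if skill_name == "open_date_picker":
            self.state["date_field_visible"] = True
        return True, f"{skill_name} executed"

SKILLS: dict[str, Callable[[], str]] = {}   # skill library keyed by name

def skill(fn):
    SKILLS[fn.__name__] = fn
    return fn

@skill
def open_date_picker() -> str:
    return "open_date_picker"

@skill
def fill_date() -> str:
    return "fill_date"

def single_action(env: Environment, skill_name: str) -> dict:
    """One action = observe -> execute skill -> return feedback for the planner."""
    observation = env.observe()
    if skill_name not in SKILLS:
        return {"ok": False, "feedback": f"unknown skill {skill_name}", "obs": observation}
    ok, feedback = env.apply(SKILLS[skill_name]())
    # Error handling: failures are reported rather than raised, so Replanning
    # can insert a corrective action (e.g. open the date picker first).
    return {"ok": ok, "feedback": feedback, "obs": observation}

if __name__ == "__main__":
    env = Environment()
    print(single_action(env, "fill_date"))         # fails: field not visible yet
    print(single_action(env, "open_date_picker"))  # corrective action
    print(single_action(env, "fill_date"))         # now succeeds
```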

3. The task chain management component is responsible for implementing Replanning and Reaction. It uses a vector database to store different task chains and merges them according to the vector similarity of the tasks, creating a comprehensive mindmap that encompasses multiple task chains. This mindmap dynamically integrates and manages past actions, following the logical relationships of the planned tasks.

(Figure: Task chain management)
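Below is a minimal sketch of how task chains might be stored and merged by vector similarity. The bag-of-words embedding and the in-memory TaskChainStore stand in for a real embedding model and vector database, and the merge threshold is an arbitrary choice for the example.

```python
# Hedged sketch of task-chain storage and similarity-based merging into a mindmap.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (a real system would use an embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TaskChainStore:
    """Minimal vector store: each entry is (embedding, list of task chains)."""
    def __init__(self, merge_threshold: float = 0.6):
        self.entries: list[tuple[Counter, list[list[str]]]] = []
        self.merge_threshold = merge_threshold

    def add(self, description: str, chain: list[str]) -> None:
        vec = embed(description)
        for stored_vec, chains in self.entries:
            if cosine(vec, stored_vec) >= self.merge_threshold:
                chains.append(chain)          # merge into an existing mindmap
                stored_vec.update(vec)        # fold the new description in
                return
        self.entries.append((vec, [chain]))   # otherwise start a new mindmap

    def mindmap(self) -> list[list[list[str]]]:
        """All task chains, grouped by similarity cluster."""
        return [chains for _, chains in self.entries]

if __name__ == "__main__":
    store = TaskChainStore()
    store.add("book flight Beijing to Shanghai", ["search flight", "pick date", "pay"])
    store.add("book a flight from Beijing to Shanghai tomorrow", ["search flight", "select seat", "pay"])
    store.add("book hotel in Shanghai", ["search hotel", "choose room", "pay"])
    print(store.mindmap())
```

Merging by similarity rather than exact match is what allows separately executed task chains to be combined into one mindmap, which is the structure Replanning consults.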

References

Several key papers contribute to the arguments and the research field of our work. We briefly discuss the significance of these references below:

  • Abend and Rappoport (2017) - provide an overview of semantic representation, which is crucial for understanding how LLMs process and interpret language. This foundational knowledge is essential for developing the Replanning mechanism. [PDF]
  • Achiam et al. (2023) - present the GPT-4 technical report, which is a significant LLM that serves as a basis for the research. Understanding the capabilities and limitations of GPT-4 is vital for developing mechanisms to improve LLM-based agents. [PDF]
  • Ahn et al. (2022) - discuss grounding language in robotic affordances, which is relevant to the authors' focus on integrating thinking and action in LLMs. This paper provides insights into how language models can be applied to real-world tasks and the challenges they face. [PDF]
  • Brown et al. (2020) - explore the few-shot learning capabilities of language models, which is an important aspect of the authors' research on Replanning. This paper highlights the potential of LLMs to learn from limited examples and adapt to new tasks. [PDF]
  • Bubeck et al. (2023) - discuss early experiments with GPT-4, providing further insights into the capabilities of this LLM. This reference helps to contextualize the authors' research within the broader field of LLM development. [PDF]
  • Celikyilmaz et al. (2020) - survey the evaluation of text generation, which is a key aspect of the authors' focus on improving LLM performance in complex tasks. This paper offers a comprehensive overview of the challenges and best practices in text generation. [PDF]
  • Chang et al. (2023) - provide a survey on the evaluation of large language models, which is directly relevant to the authors' research on enhancing LLM-based agents. This reference offers a critical analysis of the current state of LLMs and their performance in various tasks. [PDF]
  • Chen et al. (2020) - review knowledge reasoning over knowledge graphs, which is an important aspect of the authors' focus on integrating explicit and implicit information for reasoning. This paper provides insights into how LLMs can leverage knowledge graphs to improve their reasoning capabilities. [PDF]
  • Chen et al. (2023) - discuss the robustness of GPT-3.5, a predecessor of the GPT-4 model used in the authors' research, in text generation. This reference helps establish a baseline for understanding the improvements made by the proposed Replanning mechanism. [PDF]
