
[WIP] small_llms example #265

Draft: wants to merge 32 commits into base: main

Commits
7b754a9
added features to download models from the hugging face model hub/loa…
zyzhang1130 Apr 19, 2024
ea00db0
added customized hyperparameters specification
zyzhang1130 Apr 23, 2024
3e8c468
added docstring and made changes in accordance with the comments
zyzhang1130 Apr 25, 2024
10a9870
decoupled model loading and tokenizer loading. Now can load tokenizer…
zyzhang1130 Apr 25, 2024
5237356
removed unnecessary info in README
zyzhang1130 Apr 25, 2024
a6918eb
resolved all issues flagged by `pre-commit run`
zyzhang1130 Apr 25, 2024
b4f4f40
further removed info irrelevant to model loading and finetuning
zyzhang1130 Apr 25, 2024
e33b3de
Update huggingface_model.py
zyzhang1130 Apr 26, 2024
8023820
updated according to suggestions given
zyzhang1130 May 2, 2024
0a079b9
added updated README
zyzhang1130 May 2, 2024
a4d1f1b
updated README for two examples and tested on 3 model_type.
zyzhang1130 May 5, 2024
6b5410e
undo update to conversation_with_mentions README (created a dedicated…
zyzhang1130 May 6, 2024
6d10051
reverted changes made to conversation_with_RAG_agents\README.md
zyzhang1130 May 6, 2024
db27edd
resolved pre-commit related issues
zyzhang1130 May 6, 2024
b371226
resolved pre-commit related issues
zyzhang1130 May 6, 2024
7f3a012
resolved pre-commit related issues
zyzhang1130 May 6, 2024
15bf79a
resolve issues mentioned
zyzhang1130 May 8, 2024
9998e66
resolve issues raised
zyzhang1130 May 8, 2024
f6b46ed
resolve issues raised
zyzhang1130 May 8, 2024
6bf09f1
Update README.md
zyzhang1130 May 10, 2024
8d7e880
Update README.md
zyzhang1130 May 10, 2024
195ac69
Merge branch 'modelscope:main' into main
zyzhang1130 May 17, 2024
98b471e
Update huggingface_model.py
zyzhang1130 May 20, 2024
a3d8bdf
delete conversation_with_agent_with_finetuned_model from main to keep…
zyzhang1130 May 21, 2024
2c597e9
Merge branch 'main' of https://github.com/zyzhang1130/agentscope
zyzhang1130 May 22, 2024
2fef9bc
revert unnecessary changes
zyzhang1130 May 22, 2024
c4e1448
revert unnecessary changes
zyzhang1130 May 22, 2024
bcc8cf7
Merge branch 'modelscope:main' into main
zyzhang1130 May 24, 2024
688413f
Merge branch 'modelscope:main' into main
zyzhang1130 May 24, 2024
bf613ea
[WIP] added initial draft of the example `small_llms`
zyzhang1130 May 28, 2024
83ccb65
Update README.md
zyzhang1130 May 28, 2024
0aec777
added nscc version of small_llms
zyzhang1130 Jun 21, 2024
6 changes: 3 additions & 3 deletions examples/game_gomoku/code/board_agent.py
@@ -25,7 +25,7 @@
EMPTY_PIECE = "0"


def board2img(board: np.ndarray, save_path: str) -> str:
def board2img(board: np.ndarray, output_dir: str) -> str:
    """Convert the board to an image and save it to the specified path."""

    size = board.shape[0]
@@ -63,9 +63,9 @@ def board2img(board: np.ndarray, save_path: str) -> str:
    ax.set_xticklabels(range(size))
    ax.set_yticklabels(range(size))
    ax.invert_yaxis()
    plt.savefig(save_path, bbox_inches="tight", pad_inches=0.1)
    plt.savefig(output_dir, bbox_inches="tight", pad_inches=0.1)
    plt.close(fig)  # Close the figure to free memory
    return save_path
    return output_dir


class BoardAgent(AgentBase):
50 changes: 21 additions & 29 deletions examples/game_gomoku/main.ipynb
@@ -82,13 +82,13 @@
" \n",
" for y in range(size):\n",
" for x in range(size):\n",
" if board[y, x] == NAME_TO_PIECE[NAME_BLACK]: # black player\n",
" if board[y, x] == NAME_TO_PIECE[NAME_WHITE]: # white player\n",
" circle = patches.Circle((x, y), 0.45, \n",
" edgecolor='black', \n",
" facecolor='black',\n",
" zorder=10)\n",
" ax.add_patch(circle)\n",
" elif board[y, x] == NAME_TO_PIECE[NAME_WHITE]: # white player\n",
" elif board[y, x] == NAME_TO_PIECE[NAME_BLACK]: # black player\n",
" circle = patches.Circle((x, y), 0.45, \n",
" edgecolor='black', \n",
" facecolor='white',\n",
@@ -156,38 +156,30 @@
" # Record the status of the game\n",
" self.game_end = False\n",
" \n",
" def reply(self, x: dict = None) -> dict:\n",
" if x is None:\n",
" def reply(self, input_: dict = None) -> dict:\n",
" if input_ is None:\n",
" # Beginning of the game\n",
" content = (\n",
" \"Welcome to the Gomoku game! Black player goes first. \"\n",
" \"Please make your move.\"\n",
" )\n",
" content = \"Welcome to the Gomoku game! Black player goes first. Please make your move.\" \n",
" else:\n",
" row, col = x[\"content\"]\n",
"\n",
" self.assert_valid_move(row, col)\n",
"\n",
" # change the board\n",
" self.board[row, col] = NAME_TO_PIECE[x[\"name\"]]\n",
"\n",
" # check if the game ends\n",
" if self.check_draw():\n",
" content = \"The game ends in a draw!\"\n",
" x, y = input_[\"content\"]\n",
" \n",
" self.assert_valid_move(x, y)\n",
" \n",
" if self.check_win(x, y, NAME_TO_PIECE[input_[\"name\"]]):\n",
" content = f\"The game ends, {input_['name']} wins!\"\n",
" self.game_end = True\n",
" else:\n",
" next_player_name = (\n",
" NAME_BLACK if x[\"name\"] == NAME_WHITE else NAME_WHITE\n",
" )\n",
" content = CURRENT_BOARD_PROMPT_TEMPLATE.format(\n",
" board=self.board2text(),\n",
" player=next_player_name,\n",
" )\n",
"\n",
" if self.check_win(row, col, NAME_TO_PIECE[x[\"name\"]]):\n",
" content = f\"The game ends, {x['name']} wins!\"\n",
" # change the board\n",
" self.board[x, y] = NAME_TO_PIECE[input_[\"name\"]]\n",
" \n",
" # check if the game ends\n",
" if self.check_draw():\n",
" content = \"The game ends in a draw!\"\n",
" self.game_end = True\n",
"\n",
" else:\n",
" next_player_name = NAME_BLACK if input_[\"name\"] == NAME_WHITE else NAME_WHITE\n",
" content = CURRENT_BOARD_PROMPT_TEMPLATE.format(board=self.board2text(), player=next_player_name)\n",
" \n",
" msg_host = Msg(self.name, content, role=\"assistant\")\n",
" self.speak(msg_host)\n",
" \n",
140 changes: 140 additions & 0 deletions examples/small_llms/FinetuneDialogAgent.py
@@ -0,0 +1,140 @@
# -*- coding: utf-8 -*-
"""
This module provides the FinetuneDialogAgent class,
which extends DialogAgent to enhance fine-tuning
capabilities with custom hyperparameters.
"""
from typing import Any, Optional, Dict

from loguru import logger

from agentscope.agents import DialogAgent


class FinetuneDialogAgent(DialogAgent):
    """
    A dialog agent capable of fine-tuning its
    underlying model based on provided data.

    Inherits from DialogAgent and adds functionality for
    fine-tuning with custom hyperparameters.
    """

    def __init__(
        self,
        name: str,
        sys_prompt: str,
        model_config_name: str,
        use_memory: bool = True,
        memory_config: Optional[dict] = None,
    ):
        """
        Initializes a new FinetuneDialogAgent with the specified
        configuration.

        Arguments:
            name (str): Name of the agent.
            sys_prompt (str): System prompt or description of the agent's
                role.
            model_config_name (str): The configuration name for
                the underlying model.
            use_memory (bool, optional): Whether to utilize
                memory features. Defaults to True.
            memory_config (dict, optional): Configuration for memory
                functionalities if `use_memory` is True.

        Note:
            Refer to `class DialogAgent(AgentBase)` for more information.
        """

        super().__init__(
            name,
            sys_prompt,
            model_config_name,
            use_memory,
            memory_config,
        )

    def load_model(
        self,
        pretrained_model_name_or_path: Optional[str] = None,
        local_model_path: Optional[str] = None,
    ) -> None:
        """
        Load a new model into the agent.

        Arguments:
            pretrained_model_name_or_path (str): The Hugging Face
                model ID or a custom identifier.
                Needed if loading the model from Hugging Face.
            local_model_path (str, optional): Path to a locally saved model.

        Raises:
            Exception: If the model loading process fails or if the
                model wrapper does not support dynamic loading.
        """

        if hasattr(self.model, "load_model"):
            self.model.load_model(
                pretrained_model_name_or_path,
                local_model_path,
            )
        else:
            logger.error(
                "The model wrapper does not support dynamic model loading.",
            )

    def load_tokenizer(
        self,
        pretrained_model_name_or_path: Optional[str] = None,
        local_tokenizer_path: Optional[str] = None,
    ) -> None:
        """
        Load a new tokenizer for the agent.

        Arguments:
            pretrained_model_name_or_path (str): The Hugging Face model
                ID or a custom identifier.
                Needed if loading the tokenizer from Hugging Face.
            local_tokenizer_path (str, optional): Path to a locally saved
                tokenizer.

        Raises:
            Exception: If the tokenizer loading process fails or if the
                model wrapper does not support dynamic loading.
        """

        if hasattr(self.model, "load_tokenizer"):
            self.model.load_tokenizer(
                pretrained_model_name_or_path,
                local_tokenizer_path,
            )
        else:
            logger.error("The model wrapper does not support dynamic loading.")

    def fine_tune(
        self,
        data_path: Optional[str] = None,
        output_dir: Optional[str] = None,
        fine_tune_config: Optional[Dict[str, Any]] = None,
    ) -> None:
        """
        Fine-tune the agent's underlying model.

        Arguments:
            data_path (str): The path to the training data.
            output_dir (str, optional): User-specified path
                to save the fine-tuned model and its tokenizer.
                By default, saves to this example's directory
                if not specified.
            fine_tune_config (dict, optional): Custom hyperparameters
                for fine-tuning. Defaults are used if not provided.

        Raises:
            Exception: If fine-tuning fails or if the
                model wrapper does not support fine-tuning.
        """

        if hasattr(self.model, "fine_tune"):
            self.model.fine_tune(data_path, output_dir, fine_tune_config)
            logger.info("Fine-tuning completed successfully.")
        else:
            logger.error("The model wrapper does not support fine-tuning.")
74 changes: 74 additions & 0 deletions examples/small_llms/README.md
@@ -0,0 +1,74 @@
# Small LLMs Are Weak Tool Learners: A Multi-LLM Agent with AgentScope

This example demonstrates how to use the functionalities introduced in the example `conversation_with_agent_with_finetuned_model` in an attempt to reproduce the results of the paper Small LLMs Are Weak Tool Learners: A Multi-LLM Agent (https://arxiv.org/pdf/2401.07324). The complete code is provided in `agentscope/examples/small_llms`.

## Functionality Overview

Compared to the basic conversation setup, this example introduces model loading and fine-tuning features:

- Initialize an agent, or use `dialog_agent.load_model(pretrained_model_name_or_path, local_model_path)` to load a model either from the Hugging Face Model Hub or a local directory.
- Initialize an agent, or apply `dialog_agent.fine_tune(data_path)` to fine-tune the model on your dataset with the QLoRA method (https://huggingface.co/blog/4bit-transformers-bitsandbytes).

The default hyperparameters for supervised fine-tuning (SFT) are specified in `agentscope/examples/conversation_with_agent_with_finetuned_model/conversation_with_agent_with_finetuned_model.py` and `agentscope/examples/conversation_with_agent_with_finetuned_model/configs/model_configs.json`. To customize hyperparameters, specify them in `model_configs` if the model needs to be fine-tuned at initialization, or pass them through the `fine_tune_config` argument of `FinetuneDialogAgent`'s `fine_tune` method after initialization, as shown in the example script `conversation_with_agent_with_finetuned_model.py`.
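As a minimal sketch of the post-initialization path (the keys below mirror this example's `configs/model_configs.json`; the exact values are illustrative, not requirements):

```python
# Hypothetical override of the default SFT hyperparameters, assembled as a
# plain dict whose keys mirror configs/model_configs.json in this example.
fine_tune_config = {
    "lora_config": {"r": 16, "lora_alpha": 32},              # LoRA rank/scaling
    "training_args": {"max_steps": 200, "logging_steps": 1},  # passed to the trainer
}

# The override would then be passed to the agent after initialization, e.g.:
#   dialog_agent.fine_tune("GAIR/lima", fine_tune_config=fine_tune_config)
```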

## Agent Initialization

When initializing an agent, the following parameters need specification:

- `pretrained_model_name_or_path` (str): Identifier for the model on Hugging Face.
- `local_model_path` (str): Local path to the model (defaults to loading from Hugging Face if not provided).
- `data_path` (str): Path to training data (fine-tuning is skipped if not provided).
- `device` (str): The device (e.g., 'cuda', 'cpu') for model operation, defaulting to 'cuda' if available.
- `fine_tune_config` (dict, Optional): A configuration dictionary for fine-tuning the model. It allows specifying hyperparameters and other training options that will be passed to the fine-tuning method. If not provided, default settings will be used. This allows for customization of the fine-tuning process to optimize model performance based on specific requirements.
- `huggingface_token` (from .env file): Token required for models needing authentication from Hugging Face.
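Putting the parameters above together, a model config entry could be sketched as follows (field names follow this example's `configs/model_configs.json`; the concrete values are only illustrative):

```python
# Illustrative model config entry; values are examples, not requirements.
model_config = {
    "model_type": "huggingface",
    "config_name": "my_custom_model",
    "pretrained_model_name_or_path": "google/gemma-7b",  # Hugging Face model ID
    "local_model_path": None,  # None -> load from the Hugging Face Model Hub
    "data_path": "GAIR/lima",  # omit to skip fine-tuning at initialization
    "device": "cuda",
    "fine_tune_config": {
        "lora_config": {"r": 16, "lora_alpha": 32},
        "training_args": {"max_steps": 200, "logging_steps": 1},
    },
}
```

The entry would then typically be registered via `agentscope.init(model_configs=[model_config])` before constructing the agent.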

## Example Outputs
### Before Fine-tuning:
```
System:

Explain in simple terms how the attention mechanism of a transformer model works.

Assistant:

pessimal answer: Attn explications: Attn is a type of attention mechanism. It is a neural network model that uses attention to focus on the most relevant contex...
system: Explain in simple terms how the attention mechanism of a transformer model works.
armatst: Explain in simple terms how the attention mechanism of a transformer model works.
assistent: kacper answer: The attention mechanism of a transformer model works by intitating the attention of a human reader. It glances at the contex...
system: Explain in simple terms how the attention mechanism of a transformer model works.
assistent: Explain in simple terms how the
```
### After Fine-tuning (with the default configuration in `model_configs`):
```
System:

Explain in simple terms how the attention mechanism of a transformer model works.

Assistant:

Sure, the attention mechanism of a transformer model is an important part of the model's ability to generate coherent text. When generating text, the model looks at the input prompt and the previous generated tokens and makes a decision about which token to generate next based on the entire context.
Here are some of the key aspects of the attention mechanism:
The model uses a multi-headed attention mechanism. A "head" is a separate attention mechanism, and the model has multiple heads.
The heads attend to different parts of the input prompt and previous generated tokens.
The heads output weights used in the final output layer to
```
(This example was trained with the default settings, taking 872 seconds and 9.914 GB of GPU memory. Reducing the training batch size can reduce the memory required. Note that the model is loaded in 4-bit precision, i.e., QLoRA.)

## Tested Models

The example is tested with the Hugging Face model `google/gemma-7b` on the dataset `GAIR/lima`. While it is designed to be flexible, some models/datasets may require additional configuration or modification of the provided scripts (e.g., pre-processing of the datasets in `agentscope/examples/conversation_with_agent_with_finetuned_model/huggingface_model.py`).

## Prerequisites

Before running this example, ensure you have installed the following packages:

- `transformers`
- `python-dotenv`
- `datasets`
- `trl`
- `bitsandbytes`

Additionally, set `HUGGINGFACE_TOKEN` in `agentscope/examples/conversation_with_agent_with_finetuned_model/.env`. Then run the example with:

```bash
python conversation_with_agent_with_finetuned_model.py
```
22 changes: 22 additions & 0 deletions examples/small_llms/configs/model_configs.json
@@ -0,0 +1,22 @@
[
{
"model_type": "huggingface",
"config_name": "my_custom_model",

"pretrained_model_name_or_path": "google/gemma-7b",

"max_length": 128,
"device": "cuda",

"data_path": "GAIR/lima",

"fine_tune_config": {
"lora_config": {"r": 16, "lora_alpha": 32},
"training_args": {"max_steps": 200, "logging_steps": 1},
"bnb_config" : {"load_in_4bit": "True",
"bnb_4bit_use_double_quant": "True",
"bnb_4bit_quant_type": "nf4",
"bnb_4bit_compute_dtype": "torch.bfloat16"}
}
}
]