Add support for SenseTime's SenseNova series LLMs (tested OK together with the lagent project) #164

Open · wants to merge 5 commits into base: main
2 changes: 1 addition & 1 deletion README.md
@@ -53,7 +53,7 @@ pip install -r requirements.txt
Setup FastAPI Server.

```diff
-python -m mindsearch.app --lang en --model_format internlm_server --search_engine DuckDuckGoSearch
+python -m mindsearch.app --lang en --model_format sensenova --search_engine DuckDuckGoSearch
```

- `--lang`: language of the model, `en` for English and `cn` for Chinese.
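The server started above streams agent results back to the frontend over Server-Sent Events on the `/solve` endpoint (the same endpoint the vite proxy forwards). A minimal sketch of the client-side parsing, assuming each event arrives as a `data: {...}` line; the request payload field name `inputs` in the commented usage is an assumption, not MindSearch's confirmed schema:

```python
import json

def parse_sse_line(line: str):
    """Parse one Server-Sent-Events line into a JSON payload, or None.

    SSE streams interleave `data: ...` lines with comments (`: keep-alive`)
    and blank separators; only well-formed JSON data lines are returned.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None  # comment, blank keep-alive, or `event:` framing line
    body = line[len("data:"):].strip()
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        return None

# Hypothetical usage against the local server (not executed here;
# the `inputs` field name is illustrative):
# import requests
# resp = requests.post("http://127.0.0.1:8002/solve",
#                      json={"inputs": "What is MindSearch?"}, stream=True)
# for raw in resp.iter_lines(decode_unicode=True):
#     payload = parse_sse_line(raw)
#     if payload is not None:
#         print(payload)
```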
9 changes: 5 additions & 4 deletions frontend/React/vite.config.ts
@@ -53,10 +53,11 @@ export default defineConfig({
server: {
port: 8080,
proxy: {
-      // "/solve": {
-      //   target: "https://mindsearch.openxlab.org.cn",
-      //   changeOrigin: true,
-      // },
+      "/solve": {
+        target: "http://127.0.0.1:8002",
+        changeOrigin: true,
+        rewrite: (path) => path.replace(/^\/solve/, '/solve'),
+      },
},
},
});
3 changes: 3 additions & 0 deletions frontend/mindsearch_streamlit.py
@@ -317,3 +317,6 @@ def main():

if __name__ == '__main__':
main()

+# To run on port 7860:
+# streamlit run frontend/mindsearch_streamlit.py --server.port=7860
19 changes: 17 additions & 2 deletions mindsearch/agent/models.py
@@ -1,6 +1,6 @@
import os

-from lagent.llms import (GPTAPI, INTERNLM2_META, HFTransformerCasualLM,
+from lagent.llms import (SENSENOVA_API, GPTAPI, INTERNLM2_META, HFTransformerCasualLM,
LMDeployClient, LMDeployServer)

internlm_server = dict(type=LMDeployServer,
@@ -36,9 +36,24 @@
stop_words=['<|im_end|>'])
# openai_api_base needs to fill in the complete chat api address, such as: https://api.openai.com/v1/chat/completions
gpt4 = dict(type=GPTAPI,
-            model_type='gpt-4-turbo',
+            model_type='gpt-4o-mini',
key=os.environ.get('OPENAI_API_KEY', 'YOUR OPENAI API KEY'),
openai_api_base=os.environ.get('OPENAI_API_BASE', 'https://api.openai.com/v1/chat/completions'),
max_new_tokens=24576,
)

+# First, apply for SenseNova ak and sk from SenseTime staff.
+# Then, generate SENSENOVA_API_KEY with lagent.utils.gen_key.auto_gen_jwt_token(ak, sk):
+# https://github.com/InternLM/lagent/blob/ffc4ca71b4bcdbfb3a69bc0dccfa2dcc584a474d/lagent/utils/gen_key.py#L23
+
+# To switch to a locally deployed SenseNova model, add the model name and context window length here:
+# https://github.com/winer632/lagent/blob/a5284a9af4c373a3ac666c51d6cef6de1e1de509/lagent/llms/sensenova.py#L21
+# You also need to point the SENSENOVA_API_BASE environment variable to the API address of the local inference framework.
+sensenova = dict(type=SENSENOVA_API,
+                 model_type='SenseChat-5',
+                 key=os.environ.get('SENSENOVA_API_KEY', 'YOUR SENSENOVA API KEY'),
+                 sensenova_api_base=os.environ.get('SENSENOVA_API_BASE', 'https://api.sensenova.cn/v1/llm/chat-completions'),
+                 max_new_tokens=24576,
+                 )

url = 'https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation'
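The comments in the diff above defer JWT generation to lagent's `auto_gen_jwt_token(ak, sk)` helper. As a stdlib-only illustration of what such a helper produces, here is a sketch of an HS256 JWT built from an ak/sk pair; the claim names (`iss`, `exp`, `nbf`) are assumptions based on SenseNova's published auth scheme, and for real use the linked lagent helper should be preferred:

```python
import base64
import hashlib
import hmac
import json
import time

def gen_sensenova_jwt(ak: str, sk: str, ttl: int = 1800) -> str:
    """Build an HS256 JWT from an access-key / secret-key pair (sketch).

    The token is `base64url(header).base64url(claims).base64url(signature)`,
    signed with HMAC-SHA256 keyed by the secret key.
    """
    def b64url(data: bytes) -> str:
        # JWT uses unpadded base64url encoding
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    claims = b64url(json.dumps(
        {"iss": ak, "exp": now + ttl, "nbf": now - 5}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64url(hmac.new(sk.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

# Hypothetical usage (ak/sk placeholders):
# os.environ["SENSENOVA_API_KEY"] = gen_sensenova_jwt("your-ak", "your-sk")
```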
4 changes: 2 additions & 2 deletions requirements.txt
@@ -1,10 +1,10 @@
duckduckgo_search==5.3.1b1
einops
fastapi
-git+https://github.com/InternLM/lagent.git
+xlagent==0.2.1
gradio
janus
-lmdeploy
+git+https://github.com/InternLM/lmdeploy.git
pyvis
sse-starlette
termcolor
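The entries in `mindsearch/agent/models.py`, including the new `sensenova` one, are plain config dicts whose `type` key holds the backend class. A sketch of the pop-and-instantiate pattern such lagent-style configs are consumed with; `DummyAPI` and `build_llm` are illustrative stand-ins, not MindSearch's actual class or builder:

```python
import copy

class DummyAPI:
    """Stand-in for an LLM backend class such as SENSENOVA_API (illustrative)."""
    def __init__(self, model_type, key, max_new_tokens=1024, **kwargs):
        self.model_type = model_type
        self.key = key
        self.max_new_tokens = max_new_tokens

def build_llm(cfg: dict):
    """Instantiate a model config dict: pop the `type` class, then pass the
    remaining keys as constructor keyword arguments. The copy keeps the
    module-level config dict reusable."""
    cfg = copy.deepcopy(cfg)
    cls = cfg.pop("type")
    return cls(**cfg)

# Shaped like the `sensenova` dict in models.py, with placeholder values:
sensenova_like = dict(type=DummyAPI,
                      model_type="SenseChat-5",
                      key="YOUR SENSENOVA API KEY",
                      max_new_tokens=24576)
llm = build_llm(sensenova_like)
```

Popping `type` before the call is what lets one dict double as both a registry entry and a ready-made set of constructor arguments.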