feat: Support AI agent (#887)
congminh1254 authored Aug 26, 2024
1 parent abab5b9 commit 5b109ad
Showing 20 changed files with 525 additions and 30 deletions.
4 changes: 4 additions & 0 deletions codegen/codegen.tsx
@@ -188,6 +188,10 @@ import { generateManagerClasses } from './generate-manager-classes';
{
name: 'textGen',
operationId: 'post_ai_text_gen',
},
{
name: 'getAiAgentDefaultConfig',
operationId: 'get_ai_agent_default',
}
]
}
44 changes: 40 additions & 4 deletions docs/ai.md
@@ -6,16 +6,18 @@ AI allows to send an intelligence request to supported large language models and
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Send AI request](#send-ai-request)
- [Send AI text generation request](#send-ai-text-generation-request)
- [AI](#ai)
- [Send AI request](#send-ai-request)
- [Send AI text generation request](#send-ai-text-generation-request)
- [Get AI agent default configuration](#get-ai-agent-default-configuration)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

Send AI request
------------------------

To send an AI request to the supported large language models, call the
[`ai.ask(body, options?, callback?)`](http://opensource.box.com/box-node-sdk/jsdoc/AI.html#ask) method with the prompt and items. The `items` parameter is a list of items to be processed by the LLM, often files. The `prompt` provided by the client to be answered by the LLM. The prompt's length is limited to 10000 characters. The `mode` specifies if this request is for a single or multiple items. If you select `single_item_qa` the items array can have one element only. Selecting `multiple_item_qa` allows you to provide up to 25 items.
[`ai.ask(body, options?, callback?)`](http://opensource.box.com/box-node-sdk/jsdoc/AIManager.html#ask) method with the prompt and items. The `items` parameter is a list of items to be processed by the LLM, often files. The `prompt` is provided by the client and answered by the LLM. The prompt's length is limited to 10000 characters. The `mode` specifies if this request is for a single or multiple items. If you select `single_item_qa`, the items array can have one element only. Selecting `multiple_item_qa` allows you to provide up to 25 items.

<!-- sample post_ai_ask -->
```js
// Illustrative values; the collapsed diff hid the original sample body.
client.ai.ask({
	mode: 'single_item_qa',
	prompt: 'What is the capital of France?',
	items: [{
		type: 'file',
		id: '12345'
	}]
}).then(response => {
	// response.answer holds the LLM's answer to the prompt
});
```

@@ -47,7 +49,7 @@

Send AI text generation request
------------------------

To send an AI text generation request to the supported large language models, call the
[`ai.textGen(body, options?, callback?)`](http://opensource.box.com/box-node-sdk/jsdoc/AI.html#textGen) method with the prompt, items and dialogue history. The `dialogue_history` parameter is history of prompts and answers previously passed to the LLM. This provides additional context to the LLM in generating the response. The `items` parameter is a list of items to be processed by the LLM, often files. The `prompt` provided by the client to be answered by the LLM. The prompt's length is limited to 10000 characters.
[`ai.textGen(body, options?, callback?)`](http://opensource.box.com/box-node-sdk/jsdoc/AIManager.html#textGen) method with the prompt, items and dialogue history. The `dialogue_history` parameter is the history of prompts and answers previously passed to the LLM. This provides additional context to the LLM in generating the response. The `items` parameter is a list of items to be processed by the LLM, often files. The `prompt` is provided by the client and answered by the LLM. The prompt's length is limited to 10000 characters.

<!-- sample post_ai_text_gen -->
```js
client.ai.textGen(
	{
		// Illustrative values; the collapsed diff hid this part of the sample.
		prompt: 'Write an email about the importance of public APIs.',
		items: [{
			type: 'file',
			id: '12345'
		}],
		dialogue_history: [{
			prompt: 'Make my email about public APIs sound more professional.',
			answer: 'Here is the first draft of your professional email about public APIs.',
			created_at: '2012-12-12T10:53:43-08:00'
		}]
	}).then(response => {
	/* response -> {
		"answer": "Here is a draft of your email about the importance of public APIs."
} */
});
```


Get AI agent default configuration
------------------------

To get an AI agent default configuration, call the [`ai.getAiAgentDefaultConfig(options?, callback?)`](http://opensource.box.com/box-node-sdk/jsdoc/AIManager.html#getAiAgentDefaultConfig) method. The `mode` parameter filters the agent configuration to be returned; it can be either `ask` or `text_gen`. The `language` parameter specifies the ISO language code to return the agent configuration for. If the language is not supported, the default agent configuration is returned. The `model` parameter specifies the model for which the default agent configuration should be returned.

<!-- sample get_ai_agent_default -->
```js
client.ai.getAiAgentDefaultConfig({
mode: 'ask',
language: 'en',
model: 'openai__gpt_3_5_turbo'
}).then(response => {
/* response -> {
"type": "ai_agent_ask",
"basic_text": {
"llm_endpoint_params": {
"type": "openai_params",
"frequency_penalty": 1.5,
"presence_penalty": 1.5,
"stop": "<|im_end|>",
"temperature": 0,
"top_p": 1
},
"model": "openai__gpt_3_5_turbo",
"num_tokens_for_completion": 8400,
"prompt_template": "It is `{current_date}`, and I have $8000 and want to spend a week in the Azores. What should I see?",
"system_message": "You are a helpful travel assistant specialized in budget travel"
},
...
} */
});
```
44 changes: 42 additions & 2 deletions src/managers/ai.generated.ts
@@ -18,13 +18,13 @@ class AIManager {
* @param {schemas.AiAsk} body
* @param {object} [options] Options for the request
* @param {Function} [callback] Passed the result if successful, error otherwise
* @returns {Promise<schemas.AiResponse>} A promise resolving to the result or rejecting with an error
* @returns {Promise<schemas.AiResponseFull>} A promise resolving to the result or rejecting with an error
*/
ask(
body: schemas.AiAsk,
options?: {},
callback?: Function
): Promise<schemas.AiResponse> {
): Promise<schemas.AiResponseFull> {
const { ...queryParams } = options,
apiPath = urlPath('ai', 'ask'),
params = {
@@ -63,5 +63,45 @@
callback
);
}
/**
* Get AI agent default configuration
*
* Get the AI agent default config
* @param {object} options Options for the request
* @param {"ask" | "text_gen"} options.mode The mode to filter the agent config to return.
* @param {string} [options.language] The ISO language code to return the agent config for. If the language is not supported the default agent config is returned.
* @param {string} [options.model] The model to return the default agent config for.
* @param {Function} [callback] Passed the result if successful, error otherwise
* @returns {Promise<schemas.AiAgentAsk | schemas.AiAgentTextGen>} A promise resolving to the result or rejecting with an error
*/
getAiAgentDefaultConfig(
options: {
/**
* The mode to filter the agent config to return.
*/
readonly mode: 'ask' | 'text_gen';
/**
* The ISO language code to return the agent config for.
* If the language is not supported the default agent config is returned.
*/
readonly language?: string;
/**
* The model to return the default agent config for.
*/
readonly model?: string;
},
callback?: Function
): Promise<schemas.AiAgentAsk | schemas.AiAgentTextGen> {
const { ...queryParams } = options,
apiPath = urlPath('ai_agent_default'),
params = {
qs: queryParams,
};
return this.client.wrapWithDefaultHandler(this.client.get)(
apiPath,
params,
callback
);
}
}
export = AIManager;
17 changes: 17 additions & 0 deletions src/schemas/ai-agent-ask.generated.ts
@@ -0,0 +1,17 @@
import * as schemas from '.';
/**
* AI agent for question requests
*
* The AI agent used to handle queries.
*/
export interface AiAgentAsk {
/**
* The type of AI agent used to handle queries.
* Example: ai_agent_ask
*/
type: 'ai_agent_ask';
long_text?: schemas.AiAgentLongTextTool;
basic_text?: schemas.AiAgentBasicTextToolAsk;
long_text_multi?: schemas.AiAgentLongTextTool;
basic_text_multi?: schemas.AiAgentBasicTextToolAsk;
}
14 changes: 14 additions & 0 deletions src/schemas/ai-agent-basic-gen-tool.generated.ts
@@ -0,0 +1,14 @@
import * as schemas from '.';
/**
* AI agent basic text generation tool
*
* AI agent basic tool used to generate text.
*/
export interface AiAgentBasicGenTool extends schemas.AiAgentLongTextTool {
/**
* How the content should be included in a request to the LLM.
* When passing this parameter, you must include `{content}`.
* Example: ---{content}---
*/
content_template?: string;
}
34 changes: 34 additions & 0 deletions src/schemas/ai-agent-basic-text-tool-ask.generated.ts
@@ -0,0 +1,34 @@
import * as schemas from '.';
/**
* AI agent basic text tool
*
* AI agent tool used to handle basic text.
*/
export interface AiAgentBasicTextToolAsk {
/**
* The model used for the AI Agent for basic text.
* Example: openai__gpt_3_5_turbo
*/
model?: string;
/**
* System messages try to help the LLM "understand" its role and what it is supposed to do.
* Example: You are a helpful travel assistant specialized in budget travel
*/
system_message?: string;
/**
* The prompt template contains contextual information of the request and the user prompt.
*
* When passing `prompt_template` parameters, you **must include** inputs for `{current_date}`, `{user_question}`, and `{content}`.
* Example: It is `{current_date}`, and I have $8000 and want to spend a week in the Azores. What should I see?
*/
prompt_template?: string;
/**
* The number of tokens for completion.
* Example: 8400
*/
num_tokens_for_completion?: number;
/**
* The parameters for the LLM endpoint specific to OpenAI / Google models.
*/
llm_endpoint_params?: schemas.AiLlmEndpointParamsOpenAi | schemas.AiLlmEndpointParamsGoogle;
}
36 changes: 36 additions & 0 deletions src/schemas/ai-agent-basic-text-tool-text-gen.generated.ts
@@ -0,0 +1,36 @@
import * as schemas from '.';
/**
* AI agent basic text tool
*
* AI agent tool used to handle basic text.
*/
export interface AiAgentBasicTextToolTextGen {
/**
* The model to be used for the AI Agent for basic text.
* Example: openai__gpt_3_5_turbo
*/
model?: string;
/**
* System messages try to help the LLM "understand" its role and what it is supposed to do.
* This parameter requires using `{current_date}`.
* Example: You are a helpful travel assistant specialized in budget travel
*/
system_message?: string;
/**
* The prompt template contains contextual information of the request and the user prompt.
*
* When using the `prompt_template` parameter, you **must include** input for `{user_question}`.
 * Inputs for `{current_date}` and `{content}` are optional, depending on the use.
* Example: It is `{current_date}`, and I have $8000 and want to spend a week in the Azores. What should I see?
*/
prompt_template?: string;
/**
* The number of tokens for completion.
* Example: 8400
*/
num_tokens_for_completion?: number;
/**
* The parameters for the LLM endpoint specific to OpenAI / Google models.
*/
llm_endpoint_params?: schemas.AiLlmEndpointParamsOpenAi | schemas.AiLlmEndpointParamsGoogle;
}
10 changes: 10 additions & 0 deletions src/schemas/ai-agent-long-text-tool.generated.ts
@@ -0,0 +1,10 @@
import * as schemas from '.';
/**
* AI agent long text tool
*
 * AI agent tool used to handle longer text.
*/
export interface AiAgentLongTextTool
extends schemas.AiAgentBasicTextToolTextGen {
embeddings?: object;
}
14 changes: 14 additions & 0 deletions src/schemas/ai-agent-text-gen.generated.ts
@@ -0,0 +1,14 @@
import * as schemas from '.';
/**
* AI agent for text generation requests
*
* The AI agent used for generating text.
*/
export interface AiAgentTextGen {
/**
* The type of AI agent used for generating text.
* Example: ai_agent_text_gen
*/
type: 'ai_agent_text_gen';
basic_gen?: schemas.AiAgentBasicGenTool;
}
14 changes: 12 additions & 2 deletions src/schemas/ai-ask.generated.ts
@@ -1,8 +1,8 @@
import * as schemas from '.';
/**
* AI Ask Request
* AI ask request
*
* AI Ask request object
* AI ask request object
*/
export interface AiAsk {
/**
@@ -27,4 +27,14 @@ export interface AiAsk {
type?: string;
content?: string;
}[];
/**
* The history of prompts and answers previously passed to the LLM. This provides additional context to the LLM in generating the response.
*/
dialogue_history?: schemas.AiDialogueHistory[];
/**
* A flag to indicate whether citations should be returned.
* Example: true
*/
include_citations?: boolean;
ai_agent?: schemas.AiAgentAsk;
}
28 changes: 28 additions & 0 deletions src/schemas/ai-citation.generated.ts
@@ -0,0 +1,28 @@
import * as schemas from '.';
/**
* The citation of the LLM's answer reference
*
* The citation of the LLM's answer reference.
*/
export interface AiCitation {
/**
* The specific content from where the answer was referenced.
* Example: Public APIs are key drivers of innovation and growth.
*/
content?: string;
/**
* The id of the item.
* Example: 123
*/
id?: string;
/**
* The type of the item.
* Example: file
*/
type?: 'file';
/**
* The name of the item.
* Example: The importance of public APIs.pdf
*/
name?: string;
}
23 changes: 23 additions & 0 deletions src/schemas/ai-dialogue-history.generated.ts
@@ -0,0 +1,23 @@
import * as schemas from '.';
/**
* Dialogue history
*
* A context object that can hold prior prompts and answers.
*/
export interface AiDialogueHistory {
/**
* The prompt previously provided by the client and answered by the LLM.
* Example: Make my email about public APIs sound more professional.
*/
prompt?: string;
/**
* The answer previously provided by the LLM.
* Example: Here is the first draft of your professional email about public APIs.
*/
answer?: string;
/**
* The ISO date formatted timestamp of when the previous answer to the prompt was created.
* Example: 2012-12-12T10:53:43-08:00
*/
created_at?: string;
}
32 changes: 32 additions & 0 deletions src/schemas/ai-llm-endpoint-params-google.generated.ts
@@ -0,0 +1,32 @@
import * as schemas from '.';
/**
* AI LLM endpoint params Google
*
* AI LLM endpoint params Google object
*/
export interface AiLlmEndpointParamsGoogle {
/**
* The type of the AI LLM endpoint params object for Google.
* This parameter is **required**.
* Example: google_params
*/
type: 'google_params';
/**
* The temperature is used for sampling during response generation, which occurs when `top-P` and `top-K` are applied.
* Temperature controls the degree of randomness in token selection.
*/
temperature?: number;
/**
* `Top-P` changes how the model selects tokens for output. Tokens are selected from the most (see `top-K`) to least probable
* until the sum of their probabilities equals the `top-P` value.
* Example: 1
*/
top_p?: number;
/**
* `Top-K` changes how the model selects tokens for output. A `top-K` of 1 means the next selected token is
* the most probable among all tokens in the model's vocabulary (also called greedy decoding),
* while a `top-K` of 3 means that the next token is selected from among the three most probable tokens by using temperature.
* Example: 1
*/
top_k?: number;
}
