TUI - Improve how error messages are presented when rate limits (e.g. tokens per minute) or the maximum context length are exceeded. #691

Open
sangee2004 opened this issue Aug 1, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@sangee2004
Contributor

gptscript version v0.0.0-dev-7ff3fa1f-dirty

Steps to reproduce the problem:

  1. I am testing with an OpenAI account on the "Tier 1" (with $5 credit) billing tier.
  2. When I hit rate limits for tokens per minute or the maximum context length, the error messages presented in the TUI look like unhandled exceptions.
% gptscript chat_internet.gpt

  Hello! I'm here to help you with any questions or tasks you have. How can I assist you today?                                                         

> who won 2024 superbowl?

    ┌────────────────────────────────────────────────────────────────────┐
    │ Call Arguments:                                                    │
    │                                                                    │
    │ answersFromTheInternet {"question":"Who won the 2024 Super Bowl?"} │
    └────────────────────────────────────────────────────────────────────┘
                                                                          
  Running answers-from-the-internet from github.com/gptscript-ai/answers-from-the-internet                                                              

    ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
    │ ERROR: got (exit status 1) while running tool, OUTPUT:                                                                                                    
    │ > tool                                                                                                                                                    
    │ > node --no-warnings --loader ts-node/esm src/server.ts                                                                                                   
    │                                                                                                                                                           
    │ slow page: https://www.marca.com/en/nfl/winners.html                                                                                                      
    │                                                                                                                                                           
    │ node:internal/process/esm_loader:34                                                                                                                       
    │       internalBinding('errors').triggerUncaughtException(                                                                                                 
    │                                 ^                                                                                                                         
    │ error, status code: 429, message: Request too large for gpt-4o in organization org-*** on tokens per min (TPM): Limit 30000, Requeste
    │ (Use `node --trace-uncaught ...` to show where the exception was thrown)                                                                                  
    │                                                                                                                                                           
    │ Node.js v21.7.0                                                                                                                                           
    └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
                                                                                                                                                                
  It seems there was an issue retrieving the information. Let me try again.                                                                             


    ┌────────────────────────────────────────────────────────────────────┐
    │ Call Arguments:                                                    │
    │                                                                    │
    │ answersFromTheInternet {"question":"Who won the 2024 Super Bowl?"} │
    └────────────────────────────────────────────────────────────────────┘
                                                                          
  Running answers-from-the-internet from github.com/gptscript-ai/answers-from-the-internet                                                              

    ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
    │ ERROR: got (exit status 1) while running tool, OUTPUT:                                                                                                    
    │ > tool                                                                                                                                                    
    │ > node --no-warnings --loader ts-node/esm src/server.ts                                                                                                   
    │                                                                                                                                                           
    │ slow page: https://www.marca.com/en/nfl/winners.html                                                                                                      
    │                                                                                                                                                           
    │ node:internal/process/esm_loader:34                                                                                                                       
    │       internalBinding('errors').triggerUncaughtException(                                                                                                 
    │                                 ^                                                                                                                         
    │ error, status code: 429, message: Request too large for gpt-4o in organization org-*** on tokens per min (TPM): Limit 30000, Requeste
    │ (Use `node --trace-uncaught ...` to show where the exception was thrown)                                                                                  
    │                                                                                                                                                           
    │ Node.js v21.7.0                                                                                                                                           
    └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
                                                                                                                                                                
  It looks like I'm having trouble retrieving the information at the moment. You might want to check a reliable sports news website or the official NFL 
  website for the latest updates on the Super Bowl winner. If you have any other questions or need assistance with something else, feel free to ask!    


sangeethahariharan@Sangeethas-MBP scripts % gptscript --default-model gpt-3.5-turbo chat_internet.gpt
12:27:58 WARNING: Changing the default model can have unknown behavior for existing tools. Use the model field per tool instead.

  Hello! I am here to assist you. Feel free to ask me any question, and I will do my best to provide you with accurate information.                     

> who won 2024 superbowl?

    ┌──────────────────────────────────────────────────────────────┐
    │ Call Arguments:                                              │
    │                                                              │
    │ answersFromTheInternet {"question":"2024 Super Bowl winner"} │
    └──────────────────────────────────────────────────────────────┘
                                                                    
  Running answers-from-the-internet from github.com/gptscript-ai/answers-from-the-internet                                                              

    ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
    │ ERROR: got (exit status 1) while running tool, OUTPUT:                                                                                                    
    │ > tool                                                                                                                                                    
    │ > node --no-warnings --loader ts-node/esm src/server.ts                                                                                                   
    │                                                                                                                                                           
    │                                                                                                                                                           
    │ node:internal/process/esm_loader:34                                                                                                                       
    │       internalBinding('errors').triggerUncaughtException(                                                                                                 
    │                                 ^                                                                                                                         
    │ error, status code: 400, message: This model's maximum context length is 16385 tokens. However, your messages resulted in 27294 tokens. Please reduce the 
    │ (Use `node --trace-uncaught ...` to show where the exception was thrown)                                                                                  
    │                                                                                                                                                           
    │ Node.js v21.7.0                                                                                                                                           
    └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
                                                                                                                                                                
  I encountered an error while trying to retrieve the information about the 2024 Super Bowl winner. Is there anything else you would like to know or ask
  about?                                                                                                                                                

 % 

Expected Behavior:
Present an improved, user-friendly error message in these cases instead of the raw Node.js uncaught-exception output.
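As a rough illustration of the kind of handling this issue asks for, the raw tool output could be scanned for the API error line and mapped to a concise, user-facing message. This is only a sketch: `friendlyError` is a hypothetical helper, not an existing gptscript function, and the match pattern assumes the `status code: NNN, message: ...` format seen in the transcripts above.

```javascript
// Sketch: map raw tool output to a concise, user-facing message.
// `friendlyError` is a hypothetical helper, not part of gptscript today.
function friendlyError(rawOutput) {
  // The transcripts above show lines like:
  //   "error, status code: 429, message: Request too large for gpt-4o ..."
  const match = rawOutput.match(/status code: (\d+), message: ([^\n]*)/);
  if (!match) {
    return null; // not an API error we recognize; fall back to raw output
  }
  const [, code, message] = match;
  switch (code) {
    case "429":
      return `Rate limit reached: ${message.trim()} ` +
        `Wait a moment and retry, or raise your OpenAI usage tier.`;
    case "400":
      if (/maximum context length/i.test(message)) {
        return `Context too long: ${message.trim()} ` +
          `Try a shorter prompt or a model with a larger context window.`;
      }
      return `Request rejected by the model API: ${message.trim()}`;
    default:
      return `Model API error (HTTP ${code}): ${message.trim()}`;
  }
}

// Example with the 429 output from the first transcript:
const out = "error, status code: 429, message: Request too large for gpt-4o " +
  "in organization org-*** on tokens per min (TPM): Limit 30000";
console.log(friendlyError(out)); // a one-line "Rate limit reached: ..." summary
```

With something like this, the TUI error box could show the summary line and keep the full Node.js output behind a "details" toggle or a debug flag.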

@sangee2004 sangee2004 added the bug Something isn't working label Aug 1, 2024