AttributeError: 'NoneType' object has no attribute 'model_endpoint_type' when using Ollama to load local models with MemGPT #1711

Armanasq opened this issue on Sep 3, 2024

Describe the bug
I'm trying to use Ollama to load local models with MemGPT, but I'm hitting an AttributeError: a NoneType object has no attribute model_endpoint_type. The error occurs when I attempt to create a MemGPT agent with the configuration listed below.

Please describe your setup

  • How did you install memgpt?
    • I installed it using pip install pymemgpt.
  • Describe your setup
    • OS: Linux
    • Running memgpt via Jupyter Notebook

Additional context
I've tried different configurations and combinations of settings, but the error persists. The relevant line of the traceback is:

AttributeError: 'NoneType' object has no attribute 'model_endpoint_type'

The error occurs at this line in the memgpt_agent.py file:

if llm_config["model_endpoint_type"] in ["azure", "openai"] or llm_config["model_endpoint_type"] != config.default_llm_config.model_endpoint_type:
    # Additional code
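
Since llm_config is accessed with dict subscripting on that line, the None being dereferenced appears to be config.default_llm_config rather than llm_config itself. Here is a minimal sketch that reproduces the same message (the Config class is a hypothetical stand-in, not MemGPT's actual class):

# Hypothetical stand-in, only to show where the None dereference happens
class Config:
    default_llm_config = None  # e.g. no default LLM config was loaded

config = Config()
llm_config = {"model_endpoint_type": "ollama"}

# "ollama" is not in ["azure", "openai"], so Python evaluates the right-hand
# operand and dereferences config.default_llm_config, which is None
if llm_config["model_endpoint_type"] in ["azure", "openai"] or \
        llm_config["model_endpoint_type"] != config.default_llm_config.model_endpoint_type:
    pass
# AttributeError: 'NoneType' object has no attribute 'model_endpoint_type'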

I've followed the default example for using Ollama with MemGPT.

MemGPT Config
Here is the configuration I'm using:

config_list = [
    {
        "model": "NULL",
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama",
    }
]

config_list_memgpt = [
    {
        "preset": DEFAULT_PRESET,
        "model": "llama3.1",  # only required for Ollama, see: https://memgpt.readme.io/docs/ollama
        "context_window": 8192,  # the context window of your model (for Mistral 7B-based models, it's likely 8192)
        "model_wrapper": "chatml",  # chatml is the default wrapper
        "model_endpoint_type": "ollama",  # can use webui, ollama, llamacpp, etc.
        "model_endpoint": "http://localhost:11434",  # the IP address of your LLM backend
    },
]
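
For completeness, the agent is then created roughly like this, following the MemGPT + AutoGen example for Ollama (paraphrased from my notebook, so the exact keyword arguments may differ slightly from what I run):

from memgpt.autogen.memgpt_agent import create_memgpt_autogen_agent_from_config

# config_list_memgpt is the list defined above
llm_config_memgpt = {"config_list": config_list_memgpt, "seed": 42}

# The AttributeError is raised inside this call (in memgpt_agent.py)
memgpt_agent = create_memgpt_autogen_agent_from_config(
    "MemGPT_agent",
    llm_config=llm_config_memgpt,
    system_message="You are a helpful assistant.",
)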

Local LLM details

If you are trying to run MemGPT with local LLMs, please provide the following information:

  • The exact model you're trying to use: llama3.1
  • The local LLM backend you are using: Ollama
  • Your hardware for the local LLM backend: Local computer running Linux