How to upgrade to gpt-4o #219
So you have issues with your workflow. I'd recommend writing an Extended OpenAI Conversation script that covers the complete workflow. Here's a template I generated for you with ChatGPT, unmodified, so that you have an idea of how to start:
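A workflow script of that shape might look like the sketch below. This is untested: the camera and media player entity IDs, the snapshot path, the TTS engine, and the `response_text` key are assumptions, not taken from this thread; `gpt4vision.image_analyzer` is the service from the gpt4vision integration discussed later in this thread.

```yaml
# Sketch of a complete workflow script (untested): take a snapshot,
# send it to gpt4vision, then speak the answer with TTS.
# camera.garden, media_player.kitchen, tts.home_assistant_cloud, the
# snapshot path, and the `response_text` key are assumed placeholders.
camera_vision_workflow:
  alias: "Snapshot, analyze, speak"
  sequence:
    # 1. Save a fresh snapshot from the camera
    - service: camera.snapshot
      target:
        entity_id: camera.garden
      data:
        filename: /media/garden_snapshot.jpg
    # 2. Ask gpt4vision about the snapshot
    - service: gpt4vision.image_analyzer
      data:
        message: "What do you see in this image?"
        image_file: /media/garden_snapshot.jpg
        provider: OpenAI
        model: gpt-4o
        max_tokens: 300
      response_variable: vision_result
    # 3. Speak the answer on a media player
    - service: tts.speak
      target:
        entity_id: tts.home_assistant_cloud
      data:
        media_player_entity_id: media_player.kitchen
        message: "{{ vision_result.response_text }}"
```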
Uploading a picture to a vision-capable OpenAI model was added to Extended OpenAI Conversation several months ago, @jleinenbach @The-Erf. You can use a sentence trigger through the HA GUI with keywords to choose what kind of image analysis you want, or simply ask in your native language and let ChatGPT "understand" what you say.
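For reference, a sentence trigger like that can be sketched as a plain automation. The sentences, and the script name it calls, are made-up placeholders here, not something defined in this thread:

```yaml
# Hypothetical sentence-trigger automation: the spoken keywords start
# a script that handles the snapshot-and-analyze workflow.
automation:
  - alias: "Camera vision by voice"
    trigger:
      - platform: conversation
        command:
          - "what do you see on the camera"
          - "describe the camera image"
    action:
      # script.camera_vision_workflow is a placeholder name for
      # whatever script does the snapshot + analysis
      - service: script.camera_vision_workflow
```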
Thanks, can you explain more?
The developer outlines it here: #60
This spec, taken from this post, allows you to chat with Extended OpenAI Conversation about an image (or multiple images if you want):

```yaml
- spec:
    name: vision
    description: Analyze images
    parameters:
      type: object
      properties:
        request:
          type: string
          description: Analyze the images as requested by the user
      required:
        - request
    function:
      type: script
      sequence:
        - service: gpt4vision.image_analyzer
          data:
            max_tokens: 400
            message: "{{ request }}"
            image_file: |-
              /media/Allarme_Camera.jpg
              /media/Allarme_Sala1.jpg
              /media/Snapshot_Giardino1_20240425-090813.jpg
            provider: OpenAI
            model: gpt-4-vision-preview
            target_width: 1280
            temperature: 0.5
          response_variable: _function_result
```
@valentinfrlch
So I assume you have installed Extended OpenAI Conversation.
The spec posted here has also been updated; you can find the updated version in the wiki of gpt4vision. Also note that you need to install gpt4vision (a separate integration) for this spec to work. You can do so through HACS; just follow the instructions here.
Thanks very much. I had an existing spec, so I wasn't sure whether to replace or append, but it seems like append is the way to go. Thank you @valentinfrlch!
Hello,
What I have in mind is for GPT to analyze the images from the cameras connected to Home Assistant, for example:
"How many people do you see on the camera?"
"What color are their clothes?"
"Do they look suspicious?"
As a first step, I tried setting the language model to gpt-4o in the Extended OpenAI Conversation settings. The result: the response speed is somewhat better, but when I asked it to analyze the camera images, it replied that it has no access to cameras and no ability to process images.
After a little searching, I found this: https://community.home-assistant.io/t/gpt-4o-vision-capabilities-in-home-assistant/729241
I installed it, and after a day I got it working!
It works like this: when I say to Extended OpenAI Conversation, "what do you see?",
1. my automation or script is executed;
2. a photo is taken from the camera I specified;
3. that photo is sent to ha-gpt4vision;
4. the response from ha-gpt4vision is converted to speech with TTS.
To be honest, the result is good. lol :)
But it has many problems. It is very limited, and sometimes its TTS audio interferes with the OpenAI conversation response (both TTS sounds play at the same time). I also have to write a lot of scripts to drive ha-gpt4vision: if word x is said, take a picture and analyze it; if word b is said, take a picture and say what the object is used for; if word c is said, take a picture and tell whether the person in it looks suspicious. This way you end up writing a separate script for every different kind of photo analysis.
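Assuming the gpt4vision service discussed in this thread, the per-keyword scripts could in principle collapse into one automation with a wildcard sentence trigger, so the spoken request itself becomes the prompt. The sentence pattern, entity IDs, path, and `response_text` key below are placeholders; replying with `set_conversation_response` instead of a separate TTS call would also avoid the two audio streams playing at once:

```yaml
# One automation instead of a script per keyword: the {question}
# wildcard captures whatever is asked and is forwarded as the prompt.
automation:
  - alias: "Ask anything about the camera"
    trigger:
      - platform: conversation
        command:
          - "look at the camera and {question}"
    action:
      - service: camera.snapshot
        target:
          entity_id: camera.front_door   # placeholder camera
        data:
          filename: /media/last_snapshot.jpg
      - service: gpt4vision.image_analyzer
        data:
          message: "{{ trigger.slots.question }}"
          image_file: /media/last_snapshot.jpg
          provider: OpenAI
          model: gpt-4o
          max_tokens: 400
        response_variable: analysis
      # Reply through the voice pipeline instead of a separate TTS call;
      # the `response_text` key is an assumption about the service's output
      - set_conversation_response: "{{ analysis.response_text }}"
```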
I'm looking for a way to avoid writing all these scripts. For example, Extended OpenAI Conversation could access the cameras directly, so that when we ask "what do you see on the camera?", it analyzes the camera image in real time with gpt-4o.
In the end, I hope I have explained this correctly and that you understand, because I used Google Translate ♥