bitovi/temporal-ai-pipeline-example

Temporal AI workflow

The following is a simplified sample Temporal workflow that creates custom embeddings from a large list of files for use in an LLM-based application.

Installing and running dependencies

This repo contains a simple local development setup. For production use, we recommend AWS.

Running the local setup (see "Starting the application" below) starts everything you need:

  • Localstack (for storing files in local S3)
  • Postgres (where embeddings are stored)
  • A Temporal Worker (to run your Workflow/Activity code)
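A compose file for this kind of setup typically looks like the sketch below. This is illustrative only — the service names, image tags, and worker build context are assumptions, and the repo's own docker-compose.yml is the source of truth:

```yaml
# Illustrative sketch only — see the repo's docker-compose.yml for the real definitions.
services:
  localstack:
    image: localstack/localstack   # local S3 emulation
    ports:
      - "4566:4566"
  postgres:
    image: postgres                # where embeddings are stored
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
  worker:
    build: .                       # runs the Workflow/Activity code
    env_file: .env
    depends_on:
      - localstack
      - postgres
```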

Obtain an OpenAI API key

To run this project, you will need an OpenAI API key. If you already have an OpenAI account, you can set up a project and API key on the OpenAI Settings page. If you don't have an account, you can sign up at OpenAI. You'll need to perform two main steps:

  1. First create a project, then open the API Keys page from the left sidebar and create a new key.
  2. Open the Limits page for the new project and select the following models:
    • gpt-3.5-turbo
    • text-embedding-ada-002
    • gpt-4-turbo

Once you've set up your API key and models, you'll be ready to run the project. Note that it can sometimes take up to 15 minutes for the model selections to apply to your API key.

Configure Environment Variables

You will need to set the OpenAI API key as an environment variable, along with the variables for connecting to Temporal Cloud. Create a .env file from the example and fill it in:

cp .env-example .env
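The variable names below are illustrative guesses — .env-example in the repo is the source of truth — but an OpenAI key and Temporal Cloud connection details are the kinds of values you'll fill in:

```shell
# Illustrative only — copy .env-example and use the variable names it defines.
OPENAI_API_KEY=sk-...                           # from the OpenAI API Keys page
TEMPORAL_ADDRESS=<namespace>.tmprl.cloud:7233   # Temporal Cloud gRPC endpoint (assumed name)
TEMPORAL_NAMESPACE=<namespace>                  # assumed name
```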

Starting the application

docker compose up --build -d

See the instructions above if you need an OpenAI key.

Tearing everything down

Run the following command to turn everything off:

docker compose down -v

Installing Client dependencies

The npm scripts below rely on dependencies being installed. Install them using:

npm ci

Create embeddings

npm run process-documents
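Under the hood, an embeddings pipeline like this typically splits each document into overlapping chunks before calling the embeddings model. The helper below is a hypothetical sketch of that chunking step — chunkText and its parameters are not part of this repo's code:

```typescript
// Hypothetical chunking helper — not this repo's API.
// Splits text into fixed-size chunks that overlap, so context
// isn't lost at chunk boundaries.
function chunkText(text: string, size = 1000, overlap = 200): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than size");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}
```

Each chunk would then be sent to an embeddings model such as text-embedding-ada-002, and the resulting vector stored in the Postgres table.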

Generated embeddings are stored in a Postgres table.

(Screenshot: embeddings table in Postgres)

Invoke a prompt

npm run invoke-prompt <embeddings workflowID> "<query>"

Test a prompt

npm run test-prompts <embeddings workflowID>

More info

This repo was created to demonstrate the concepts outlined in the following articles.

About

Example application using Temporal for an AI Pipeline
