FastAPI endpoint interface for an AI sentiment analysis model. The model is loaded locally and leverages the transformers library to perform sentiment analysis. This project demonstrates how to set up a FastAPI application, integrate a pre-trained sentiment analysis model, and expose a RESTful API endpoint for sentiment predictions.

kayung-developer/FastApi-AI-Model-Endpoint

AI Model Inference with FastAPI

This project demonstrates how to set up a FastAPI endpoint for AI model inference, specifically using a pre-trained sentiment analysis model from the Hugging Face Transformers library.

Features

  • FastAPI Integration: Easily create API endpoints using FastAPI.
  • Local Model Loading: Load and use pre-trained models stored locally on your machine.
  • Sentiment Analysis: Analyze text sentiment using state-of-the-art transformer models.
  • Extensible and Modular: Designed to be easily extendable for other types of models and tasks.

Project Structure

  • app/: Contains the FastAPI application code.
    • main.py: Entry point for the FastAPI application.
    • models/: Directory for model-related code.
      • sentiment_model.py: Contains the sentiment analysis model class.
    • routers/: Directory for route definitions.
      • inference.py: Contains the route for sentiment analysis inference.
    • schemas/: Directory for request/response schemas.
      • text_data.py: Contains the schema for text data.
  • Dockerfile: Docker configuration file.
  • requirements.txt: List of Python dependencies.
  • .dockerignore: List of files and directories to ignore in the Docker image.
  • README.md: Project documentation.

Setup

Prerequisites

  • Docker
  • Docker Compose (optional, for more complex setups)

Running the Application

  1. Build the Docker image:

     docker build -t ai-model-inference .

  2. Run the Docker container:

     docker run -d -p 8000:8000 ai-model-inference

  3. Access the interactive API documentation at http://localhost:8000/docs.
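
A minimal Dockerfile along these lines would support the steps above (a sketch only; the repository's actual Dockerfile may pin different versions or copy the model directory explicitly):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```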

Endpoints

  • POST /predict/: Predict the sentiment of a given text.

Download Models

Visit https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english/tree/main and download the model files into a local directory so the application can load the model without contacting the Hugging Face Hub.

Example Request

curl -X POST "http://localhost:8000/predict/" -H "Content-Type: application/json" -d '{"text": "I love FastAPI!"}'
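
The transformers sentiment pipeline returns label/score pairs, so a successful response can be expected to look roughly like the following (the score value is illustrative, and the exact JSON shape depends on how the endpoint wraps the pipeline output):

```json
{"label": "POSITIVE", "score": 0.9998}
```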
