From 9a1b82b66d29a08e6682e3c6095ef76f25ee3245 Mon Sep 17 00:00:00 2001
From: Sebastian Raschka
Date: Mon, 16 Sep 2024 15:36:16 -0500
Subject: [PATCH] Add Chainlit Studio (#1728)

---
 tutorials/deploy.md | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/tutorials/deploy.md b/tutorials/deploy.md
index 1db3625c69..900d529dfe 100644
--- a/tutorials/deploy.md
+++ b/tutorials/deploy.md
@@ -2,8 +2,9 @@
 
 This document shows how you can serve a LitGPT for deployment.
 
+&nbsp;
 
-## Serve an LLM
+## Serve an LLM with LitServe
 
 This section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.
 
@@ -48,7 +49,7 @@ Example input.
 ```
 
 &nbsp;
-## Optional streaming mode
+### Optional: Use the streaming mode
 
 The 2-step procedure described above returns the complete response all at once. If you want to stream the response on a token-by-token basis, start the server with the streaming option enabled:
 
@@ -78,3 +79,15 @@ Sure, here is the corrected sentence:
 
 Example input
 ```
+
+&nbsp;
+## Serve an LLM UI with Chainlit
+
+If you are interested in developing a simple ChatGPT-like UI prototype, see the Chainlit tutorial in the following Studio:
+
+<a target="_blank" href="...">
+  <img src="..." alt="Open In Studio"/>
+</a>
+
+
+
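The tutorial text this patch modifies describes a two-step flow: start the server with `litgpt serve` and then send it a JSON request. As a rough sketch of the client side, the snippet below builds and sends such a request using only the standard library. The port (`8000`), route (`/predict`), and the `"prompt"`/`"output"` JSON field names are assumptions based on the tutorial's description, not guaranteed by this patch; adjust them to match your server's actual configuration.

```python
# Minimal sketch of a client for a locally running `litgpt serve` instance.
# Assumed defaults (not confirmed by this patch): host 127.0.0.1, port 8000,
# a /predict route, and {"prompt": ...} request / {"output": ...} response JSON.
import json
import urllib.request


def build_request(prompt: str, url: str = "http://127.0.0.1:8000/predict"):
    """Build a POST request carrying the prompt as a JSON body."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )


if __name__ == "__main__":
    # Requires the server to be running; see `litgpt serve` in the tutorial.
    req = build_request("Fix typos in the following sentence: Example input")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))
```

For the streaming mode added in this tutorial, the same request shape applies, but the response would arrive incrementally rather than as a single JSON object.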