How to Install Lobe Chat - Multi LLM Frontend Framework

Overview

  • Most developers start with the OpenAI API when they begin building with LLMs. The OpenAI API operates separately from ChatGPT and gives access to the latest models, including GPT-4 Turbo (the gpt-4-turbo alias resolves to gpt-4-turbo-2024-04-09 at the time of writing); a minimal API call is sketched after this list.

  • Developers using the OpenAI API often want a frontend similar to ChatGPT. This article introduces Lobe Chat, an open-source frontend that lets you use the OpenAI API the way you would use ChatGPT. In my case, I run it locally so it is always available.

  • The advantage of Lobe Chat is that it provides a single frontend for models from nearly every major provider. Enter an API key and you can use them all through one unified UI, which is why I highly recommend it (see the configuration note after the Compose file below).
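
  • As a quick illustration of the API working independently of ChatGPT, the request below calls the Chat Completions endpoint with curl. This is a minimal sketch: it assumes your key is exported in the OPENAI_API_KEY environment variable, and the model name may have changed by the time you read this.

$ curl https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
          "model": "gpt-4-turbo",
          "messages": [{"role": "user", "content": "Hello!"}]
        }'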

Lobe Chat Installation

  • Lobe Chat is easy to install because it is distributed as a Docker image. It does not require a separate database, and memory usage while the service is running is around 150 MB.
$ mkdir lobe-chat
$ cd lobe-chat

# Write Docker Compose file
$ nano docker-compose.yml
services:
  lobe-chat:
    image: lobehub/lobe-chat:latest
    container_name: lobe-chat
    restart: always
    ports:
      - '3210:3210'                             # host:container; Lobe Chat listens on 3210
    environment:
      OPENAI_API_KEY: {your-openai-api-key}     # OpenAI API key used by the backend
      ACCESS_CODE: {your-password}              # password visitors must enter to use the UI

# Pull the image and start the Lobe Chat service in the background
$ docker-compose up -d
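
  • Lobe Chat can also serve models from other providers through the same UI. A sketch of the extended environment block is below; the extra variable names (ANTHROPIC_API_KEY, GOOGLE_API_KEY) are my recollection of the Lobe Chat documentation, so verify them against the current docs before relying on them.

    environment:
      OPENAI_API_KEY: {your-openai-api-key}
      ANTHROPIC_API_KEY: {your-anthropic-api-key}   # Claude models (variable name assumed)
      GOOGLE_API_KEY: {your-google-api-key}         # Gemini models (variable name assumed)
      ACCESS_CODE: {your-password}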

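  • To confirm that the container came up cleanly, check its status, memory usage, and logs with standard Docker commands:

# Check container status and memory footprint
$ docker ps --filter name=lobe-chat
$ docker stats --no-stream lobe-chat

# Follow the service logs
$ docker-compose logs -f lobe-chat
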
Running Lobe Chat

  • You can access Lobe Chat by navigating to http://localhost:3210 in a web browser. If you set ACCESS_CODE, the app will ask for that password before you can chat.
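
  • Because the Compose file tracks the latest tag, upgrading later is just a matter of pulling the new image and recreating the container:

$ docker-compose pull lobe-chat
$ docker-compose up -d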
