📄️ Create a basic LLM app
At the most basic level, an LLM completes text. That is why the input text is called a "prompt": the base model simply comes up with the next words that are likely to follow it. In this example, we will demonstrate this basic use case.
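To make this concrete, here is a minimal sketch of a completion app in Rust, assuming the wasmedge_wasi_nn crate and a GGUF model that the WasmEdge runtime has preloaded under the alias "default". The alias, the prompt, and the output buffer size are illustrative choices, not fixed by the API.

```rust
// Minimal sketch, assuming the wasmedge_wasi_nn crate and a GGUF model
// preloaded by the WasmEdge runtime under the alias "default".
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    let prompt = "Once upon a time, ";

    // Load the model that the runtime preloaded for us.
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")
        .expect("failed to load the model");
    let mut ctx = graph
        .init_execution_context()
        .expect("failed to init the execution context");

    // Feed the prompt in as a UTF-8 byte tensor and run inference.
    let tensor = prompt.as_bytes().to_vec();
    ctx.set_input(0, TensorType::U8, &[1], &tensor)
        .expect("failed to set input");
    ctx.compute().expect("inference failed");

    // Read the completion back out of the output tensor.
    let mut out = vec![0u8; 4096]; // buffer size is an arbitrary choice
    let n = ctx.get_output(0, &mut out).expect("failed to get output");
    println!("{}{}", prompt, String::from_utf8_lossy(&out[..n]));
}
```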
📄️ Create a chatbot LLM app
The most common LLM app has to be the chatbot. For that, the base LLM is fine-tuned on many back-and-forth conversation examples. The base LLM "learns" how to follow conversations and becomes a chat LLM. Since the conversation examples are fed into the LLM in a specific format, the chat LLM expects the input prompt to follow the same format. This format is called the prompt template. Let's see how that works.
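As an illustration, the sketch below assembles a multi-turn prompt by hand using the template that Llama-2 chat models were fine-tuned with. Other chat models expect different tags, so treat the exact strings as one example of the idea rather than a universal format.

```rust
// Illustrative sketch: building a Llama-2-style chat prompt by hand.
// The [INST] and <<SYS>> tags are specific to Llama-2 chat models.
fn build_prompt(system: &str, history: &[(String, String)], user: &str) -> String {
    // The system message wraps the start of the first turn.
    let mut prompt = format!("<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n");
    // Replay earlier question/answer turns in the same template.
    for (question, answer) in history {
        prompt.push_str(&format!("{question} [/INST] {answer} </s><s>[INST] "));
    }
    // Close with the new user question.
    prompt.push_str(&format!("{user} [/INST]"));
    prompt
}

fn main() {
    let prompt = build_prompt(
        "You are a helpful assistant.",
        &[],
        "What is the capital of France?",
    );
    println!("{prompt}");
}
```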
📄️ Create a multimodal app
Coming soon.
📄️ Create an embedding app
An important LLM task is to generate embeddings for natural language sentences. The model converts a sentence into a vector of numbers called an "embedding". The embedding vectors can then be stored in a vector database, which you can search later to find similar sentences.
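To show what "similar" means in practice, here is a small self-contained sketch that compares embedding vectors with cosine similarity, a measure vector databases commonly use to rank nearby sentences. The three-dimensional vectors are made-up toy values; real embeddings have hundreds of dimensions.

```rust
// Illustrative sketch: ranking embedding vectors by cosine similarity.
// The vectors below are toy values, not real model output.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    let cat = vec![0.10, 0.80, 0.30];
    let kitten = vec![0.15, 0.75, 0.35];
    let car = vec![0.90, 0.10, 0.20];
    // Similar sentences produce vectors that point in similar directions.
    println!("cat vs kitten: {:.3}", cosine_similarity(&cat, &kitten));
    println!("cat vs car:    {:.3}", cosine_similarity(&cat, &car));
}
```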
📄️ Create knowledge embeddings using the API server
The LlamaEdge API server project demonstrates how to support OpenAI-style APIs to upload, chunk, and create embeddings for a text document. In this guide, I will show you how to use those API endpoints as a developer.
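As a rough sketch of that workflow, the following Rust program uploads a document, asks the server to chunk it, and requests embeddings for the chunks over OpenAI-style endpoints. The base URL, request fields, and model name are assumptions for illustration; the guide documents the exact endpoint payloads.

```rust
// Hypothetical sketch, not the guide's exact code. Assumes a LlamaEdge
// API server listening at http://localhost:8080 with OpenAI-style endpoints.
use reqwest::blocking::{multipart, Client};
use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    // Step 1: upload the text document (file name is an example).
    let form = multipart::Form::new().file("file", "paris.txt")?;
    let upload: Value = client
        .post("http://localhost:8080/v1/files")
        .multipart(form)
        .send()?
        .json()?;
    let file_id = upload["id"].as_str().unwrap_or_default().to_string();

    // Step 2: ask the server to split the uploaded file into chunks
    // (request fields are assumptions for illustration).
    let chunks: Value = client
        .post("http://localhost:8080/v1/chunks")
        .json(&json!({ "id": file_id, "filename": "paris.txt" }))
        .send()?
        .json()?;

    // Step 3: create embeddings for the returned chunks.
    let embeddings: Value = client
        .post("http://localhost:8080/v1/embeddings")
        .json(&json!({
            "model": "all-MiniLM-L6-v2", // assumed embedding model name
            "input": chunks["chunks"],
        }))
        .send()?
        .json()?;
    println!("{embeddings}");
    Ok(())
}
```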
📄️ Implement your own RAG API server
Coming soon.