Quickstart

In this guide, you'll create your first AI-powered API endpoint using LLM Stack. We will walk you through the process of creating an account, deploying an API, and testing it.

For this guide, we'll create an API that processes any document and extracts data in a structured format.

As a prerequisite, sign in using your GitHub account at llmstack.dev.

1. Create Project

Projects are a way to organize your resources on LLM Stack. You can create a project for each application or use case. Projects also help you manage access control and billing across different teams.

Project Creation Screenshot

2. Project Dashboard

Once you create a project, click on it to open the project dashboard. Here you can see all the endpoints, logs, and a summary for your project.

Project Dashboard Screenshot

3. Create Endpoint

Click on the "Endpoints" tab → "Create Endpoint" to create a new endpoint.

Endpoints are the core building blocks of LLM Stack. They are the APIs that you call to get predictions from your models. Each endpoint is associated with an OCR model and an LLM.

The prompt uses Jinja templating to inject variable input data into the LLM. The input data can be text or a file.
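For example, a prompt template for the document-extraction use case might look like the sketch below. The field names are illustrative, and the variable name "data" is assumed to match the form field used in the curl command in step 4:

```jinja
Extract the vendor name, invoice date, and total amount from the
following document, and return them as a JSON object.

Document:
{{ data }}
```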

Endpoint Creation Screenshot
info

The OCR model is invoked only when the input is something other than plain text, such as an image file.
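For example, assuming the endpoint accepts plain text in the same "data" form field as in the curl command shown in step 4, a text-only call might look like the following sketch (the sample invoice text is illustrative). Because the input is plain text, the OCR step is skipped and the text is injected into the prompt directly:

```shell
curl --location 'https://api.llmstack.dev/dapi/endpoint/{endpoint-slug}/query' \
--header 'Authorization: {endpoint-token}' \
--form 'data="Invoice #1042, dated 2025-01-07, total $315.00"'
```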

4. Test It

Click on the "Test API" button to get the endpoint access details. You can test the endpoint using the provided curl command.

Test within LLM Stack

Send an image in the same "data" variable to get the structured data in the response. The following example calls the same endpoint with an image as input.

Test with Postman

Sample curl command to test the endpoint (replace {endpoint-slug} and {endpoint-token} with the values shown in the "Test API" dialog):

curl --location 'https://api.llmstack.dev/dapi/endpoint/{endpoint-slug}/query' \
--header 'Authorization: {endpoint-token}' \
--form 'data=@"./PHOTO-2025-01-07-21-08-01.jpg"'

🎉 That's it! Your first AI API is live.