# Globe AI
Globe AI is a Dart-first SDK for integrating large language models (LLMs) into Dart and Flutter apps. It supports text generation, streaming responses, and structured data extraction.
It is ideal for building AI-powered features, generating creative text, extracting information from documents, and validating structured data output against a schema.
This page covers setup, generating text, streaming responses, and working with structured data.
## Features
- Text Generation: Generate text from a simple prompt or a series of messages, including multimodal inputs like images and documents.
- Streaming: Stream text or structured data responses as they are generated for a real-time user experience.
- Structured Output: Reliably generate structured JSON output that is validated against a custom schema.
- Model Agnostic: Easily configure your preferred model provider, like OpenAI.
## 1. Install Globe Runtime

First, you need the Globe Runtime. You can install it using one of two methods:

- With the Globe CLI: If you have the Globe CLI installed, run the following command in your terminal:

  ```shell
  globe runtime install
  ```

- Manual installation: Download the `libglobe_runtime` dynamic library for your platform from the GitHub Releases page, then place the downloaded file in the `~/.globe/runtime/` directory.
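For the manual route, the steps above amount to creating the runtime directory and moving the downloaded library into it. A minimal sketch, assuming a download named `libglobe_runtime.so` in `~/Downloads` (the actual asset name varies by platform and release — check the release assets):

```shell
# Create the runtime directory if it does not already exist.
mkdir -p "$HOME/.globe/runtime"

# Move the downloaded library into place. The filename below is an assumption:
# use the asset for your platform (e.g. .so on Linux, .dylib on macOS).
if [ -f "$HOME/Downloads/libglobe_runtime.so" ]; then
  mv "$HOME/Downloads/libglobe_runtime.so" "$HOME/.globe/runtime/"
fi
```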
## 2. Add `globe_ai` to Your Project

Add the `globe_ai` dependency to your `pubspec.yaml` file:

```yaml
dependencies:
  globe_ai: ^<latest-version>
```
Then, run the appropriate command to install it:

- With Dart:

  ```shell
  dart pub get
  ```

- With Flutter:

  ```shell
  flutter pub get
  ```
## 3. Configure Your Model
Create an instance of your chosen model provider. For example, to use OpenAI, you can configure it like this:
```dart
import 'package:globe_ai/globe_ai.dart';

// Configure the model, e.g., OpenAI's 'gpt-4o'.
// The 'user' parameter is optional and can be used for tracking or moderation.
final model = openai.chat('gpt-4o', user: 'John');

// Or, for a simpler configuration:
// final model = openai('gpt-4o');
```
## Basic Text Generation
```dart
final result = await generateText(
  model: openai.chat('gpt-4o'),
  prompt: 'What is the capital of Ghana?',
);

print(result); // Output: Accra
```
## Generation with Multimodal Input
You can also send a combination of text and files (like PDFs or images) as input.
```dart
import 'dart:io'; // for File

final textWithPdf = await generateText(
  model: openai.chat('gpt-4o'),
  messages: [
    OpenAIMessage(
      role: 'user',
      content: [
        OpenAIInput(text: 'What is the title of this book?'),
        OpenAIInput(
          file: FileInput(
            data: File('path/to/your/doc.pdf').readAsBytesSync(),
            mimeType: 'application/pdf',
            name: 'ai.pdf',
          ),
        ),
      ],
    ),
  ],
);

print(textWithPdf);
```
## Streaming Text
For a more responsive user experience, stream the response as it's being generated by the model. This is great for chatbots or displaying long-form text.
```dart
import 'dart:io'; // for stdout

final stream = streamText(
  model: openai('o4-mini'),
  prompt: 'Describe the Mission burrito vs SF burrito debate.',
);

await for (final chunk in stream) {
  // Each 'chunk' is a piece of the response.
  stdout.write(chunk);
}
```
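If you also need the complete response once streaming finishes (for example, to persist a chat transcript), you can accumulate the chunks in a `StringBuffer` while displaying them. A minimal sketch, reusing the `streamText` call shown above:

```dart
import 'dart:io';

import 'package:globe_ai/globe_ai.dart';

Future<void> main() async {
  final stream = streamText(
    model: openai('o4-mini'),
    prompt: 'Describe the Mission burrito vs SF burrito debate.',
  );

  // Accumulate chunks so the full response is available after the stream ends.
  final buffer = StringBuffer();
  await for (final chunk in stream) {
    buffer.write(chunk); // keep for later
    stdout.write(chunk); // live display
  }

  final fullText = buffer.toString();
  // fullText now holds the entire generated response.
}
```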
## Structured Object Generation
Ensure you get reliable, machine-readable output by generating JSON that is validated against a schema. This is perfect for when you need a specific data structure.
### 1. Define Your Schema

First, define the shape of the JSON you expect using `l.schema`:
```dart
final schema = l.schema({
  'recipe': l.schema({
    'name': l.string(),
    'ingredients': l.list(validators: [
      l.schema({'name': l.string(), 'amount': l.string()}),
    ]),
    'steps': l.list(validators: [l.string()]),
  }),
});
```
### 2. Generate the Object

Then, call `generateObject` with your prompt and schema:
```dart
final result = await generateObject<Map<String, dynamic>>(
  model: openai('gpt-4.1'),
  prompt: 'Generate a lasagna recipe.',
  schema: schema,
);

// Safely access the validated data.
print(result['recipe']['name']); // e.g., 'Classic Lasagna'
```
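Because the result is a plain `Map<String, dynamic>` already validated against your schema, you can map it onto a typed Dart model. A sketch — the `Recipe` and `Ingredient` classes below are hypothetical helpers, not part of `globe_ai`:

```dart
// Hypothetical model classes mirroring the schema above -- not part of globe_ai.
class Ingredient {
  final String name;
  final String amount;
  Ingredient({required this.name, required this.amount});

  factory Ingredient.fromJson(Map<String, dynamic> json) => Ingredient(
        name: json['name'] as String,
        amount: json['amount'] as String,
      );
}

class Recipe {
  final String name;
  final List<Ingredient> ingredients;
  final List<String> steps;
  Recipe({required this.name, required this.ingredients, required this.steps});

  factory Recipe.fromJson(Map<String, dynamic> json) => Recipe(
        name: json['name'] as String,
        ingredients: (json['ingredients'] as List)
            .map((i) => Ingredient.fromJson(i as Map<String, dynamic>))
            .toList(),
        steps: (json['steps'] as List).cast<String>(),
      );
}

// Usage with the validated result from generateObject:
// final recipe = Recipe.fromJson(result['recipe'] as Map<String, dynamic>);
```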
## Streaming Structured Objects
Combine the power of streaming and structured data. This feature allows you to stream a JSON object and validate its structure incrementally.
```dart
final resultStream = streamObject<Map<String, dynamic>>(
  model: openai('gpt-4.1'),
  prompt: 'Generate a lasagna recipe.',
  schema: schema,
);

await for (final chunk in resultStream) {
  // 'chunk' is a validated partial output of your object.
  print(chunk);
}
```
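Since each chunk is a partial object, fields may not be present yet; a common pattern is to keep the most recent chunk as your current state and read fields defensively. A sketch under that assumption, reusing the `streamObject` call and `schema` from above:

```dart
import 'package:globe_ai/globe_ai.dart';

Future<void> main() async {
  // 'schema' is assumed to be the recipe schema defined earlier.
  final resultStream = streamObject<Map<String, dynamic>>(
    model: openai('gpt-4.1'),
    prompt: 'Generate a lasagna recipe.',
    schema: schema,
  );

  Map<String, dynamic> latest = {};
  await for (final chunk in resultStream) {
    latest = chunk; // the most recent partial object is the current state
    // Fields appear incrementally, so read them defensively.
    final recipe = latest['recipe'] as Map<String, dynamic>?;
    final name = recipe?['name'];
    if (name != null) print('Recipe so far: $name');
  }
}
```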