# laravel-ai

**Builder** - Creates AI-powered features using Prism PHP

## Overview
The laravel-ai agent creates AI-powered features using Prism PHP. It generates AI services, implements tool use (function calling), creates embeddings for semantic search, sets up RAG (Retrieval-Augmented Generation) pipelines, and builds conversational chatbots.
## Responsibilities
- **AI Service Classes** - Structured classes for AI operations
- **Tool Use / Function Calling** - Let the AI call your application's functions
- **Embeddings** - Vector representations for semantic search
- **RAG Pipelines** - Retrieval-augmented generation setup
- **Chatbot Features** - Conversational AI with memory
- **Provider Abstraction** - Switch between OpenAI, Anthropic, and other providers
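The provider-abstraction responsibility often boils down to resolving an abstract model "tier" from configuration into a concrete provider/model pair, so call sites never hard-code a vendor. A minimal framework-free sketch (the `resolveModel` helper and the tier names are illustrative, not part of Prism):

```php
<?php

/**
 * Map an abstract "tier" to a concrete provider/model pair.
 * Swapping providers then means changing one config value,
 * not editing every call site. All names here are illustrative.
 */
function resolveModel(string $tier): array
{
    return match ($tier) {
        'fast'    => ['provider' => 'groq',      'model' => 'llama-3.1-8b-instant'],
        'default' => ['provider' => 'openai',    'model' => 'gpt-4o'],
        'careful' => ['provider' => 'anthropic', 'model' => 'claude-3-5-sonnet-20241022'],
        'local'   => ['provider' => 'ollama',    'model' => 'llama3'],
        default   => throw new InvalidArgumentException("Unknown tier: {$tier}"),
    };
}
```

In a real service the returned pair would feed `Prism::text()->using(...)`, with the tier read from config rather than hard-coded.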
## Supported Providers
| Provider | Models | Features |
|---|---|---|
| OpenAI | GPT-4o, GPT-4, GPT-3.5 | Chat, Tools, Embeddings, Vision |
| Anthropic | Claude 3.5, Claude 3 | Chat, Tools, Vision |
| Ollama | Llama 3, Mistral, etc. | Chat, Embeddings (local) |
| Groq | Llama, Mixtral | Fast inference |
## Generated AI Service

```php
<?php

namespace App\Services\AI;

use EchoLabs\Prism\Prism;
use EchoLabs\Prism\Enums\Provider;
use EchoLabs\Prism\ValueObjects\Messages\UserMessage;
use EchoLabs\Prism\ValueObjects\Messages\AssistantMessage;

class ContentAssistant
{
    private array $conversationHistory = [];

    public function __construct(
        private string $systemPrompt = 'You are a helpful content assistant.'
    ) {}

    /**
     * Generate content based on a prompt.
     */
    public function generate(string $prompt): string
    {
        $response = Prism::text()
            ->using(Provider::OpenAI, 'gpt-4o')
            ->withSystemPrompt($this->systemPrompt)
            ->withPrompt($prompt)
            ->generate();

        return $response->text;
    }

    /**
     * Continue a conversation with context.
     */
    public function chat(string $message): string
    {
        $this->conversationHistory[] = new UserMessage($message);

        $response = Prism::text()
            ->using(Provider::OpenAI, 'gpt-4o')
            ->withSystemPrompt($this->systemPrompt)
            ->withMessages($this->conversationHistory)
            ->generate();

        $this->conversationHistory[] = new AssistantMessage($response->text);

        return $response->text;
    }

    /**
     * Summarize long content.
     */
    public function summarize(string $content, int $maxWords = 100): string
    {
        return Prism::text()
            ->using(Provider::OpenAI, 'gpt-4o-mini')
            ->withPrompt("Summarize the following in {$maxWords} words or less:\n\n{$content}")
            ->generate()
            ->text;
    }

    public function resetConversation(): void
    {
        $this->conversationHistory = [];
    }
}
```
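One caveat with a `chat()` method that appends to an in-memory history: the context grows without bound and will eventually exceed the model's context window. A common refinement is trimming to the most recent messages. A plain-array sketch (the `trimHistory` helper is hypothetical; in the service above the entries would be Prism message objects, but the logic is the same):

```php
<?php

/**
 * Keep only the most recent $max messages of a conversation.
 * array_slice with a negative offset takes from the end and
 * reindexes the result from zero.
 */
function trimHistory(array $history, int $max = 20): array
{
    if (count($history) <= $max) {
        return $history;
    }

    return array_slice($history, -$max);
}
```

A fuller version might always retain the first message (often a pinned instruction) or summarize the dropped turns instead of discarding them.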
## Tool Use (Function Calling)

```php
<?php

namespace App\Services\AI;

use EchoLabs\Prism\Prism;
use EchoLabs\Prism\Enums\Provider;
use EchoLabs\Prism\Tool;
use App\Models\Product;
use App\Models\Order;

class ShoppingAssistant
{
    public function process(string $userMessage): string
    {
        $response = Prism::text()
            ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
            ->withSystemPrompt('You are a shopping assistant. Use the available tools to help customers.')
            ->withPrompt($userMessage)
            ->withTools($this->getTools())
            ->generate();

        return $response->text;
    }

    private function getTools(): array
    {
        return [
            Tool::as('search_products')
                ->for('Search for products by name or category')
                ->withStringParameter('query', 'Search query')
                ->withStringParameter('category', 'Product category', nullable: true)
                ->using(function (string $query, ?string $category = null): string {
                    $products = Product::query()
                        ->where('name', 'like', "%{$query}%")
                        ->when($category, fn ($q) => $q->where('category', $category))
                        ->limit(5)
                        ->get(['id', 'name', 'price', 'in_stock']);

                    return $products->toJson();
                }),

            Tool::as('get_order_status')
                ->for('Get the status of an order by order number')
                ->withStringParameter('order_number', 'The order number')
                ->using(function (string $orderNumber): string {
                    $order = Order::where('number', $orderNumber)->first();

                    if (! $order) {
                        return json_encode(['error' => 'Order not found']);
                    }

                    return json_encode([
                        'status' => $order->status,
                        'shipped_at' => $order->shipped_at?->toDateString(),
                        'estimated_delivery' => $order->estimated_delivery?->toDateString(),
                    ]);
                }),
        ];
    }
}
```
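Under the hood, a fluent tool definition is serialized into a JSON-schema-style function declaration that the model sees. Roughly what `search_products` becomes on the wire for OpenAI-style function calling (a simplified illustration; Prism builds the exact payload for you, and the shape varies slightly by provider):

```php
<?php

// Simplified view of the function declaration sent to the model.
// The model replies with the function name plus JSON arguments;
// the framework then runs the matching callback and feeds the
// string result back into the conversation.
$searchProductsSchema = [
    'name'        => 'search_products',
    'description' => 'Search for products by name or category',
    'parameters'  => [
        'type'       => 'object',
        'properties' => [
            'query'    => ['type' => 'string', 'description' => 'Search query'],
            'category' => ['type' => ['string', 'null'], 'description' => 'Product category'],
        ],
        'required'   => ['query'],
    ],
];
```

This is why tool callbacks return strings (usually JSON): whatever they return is inserted verbatim into the model's context as the tool result.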
## RAG Pipeline with Embeddings

```php
<?php

namespace App\Services\AI;

use EchoLabs\Prism\Prism;
use EchoLabs\Prism\Enums\Provider;
use App\Models\Document;
use App\Models\DocumentChunk;
use Illuminate\Support\Collection;

class DocumentRAG
{
    /**
     * Index a document by generating embeddings.
     */
    public function indexDocument(Document $document): void
    {
        // Split into ~500-word chunks
        $chunks = $this->splitIntoChunks($document->content, 500);

        foreach ($chunks as $index => $chunk) {
            $embedding = Prism::embeddings()
                ->using(Provider::OpenAI, 'text-embedding-3-small')
                ->fromInput($chunk)
                ->generate();

            $document->chunks()->create([
                'content' => $chunk,
                'embedding' => $embedding->embeddings[0],
                'chunk_index' => $index,
            ]);
        }
    }

    /**
     * Query documents using semantic search.
     */
    public function query(string $question, int $topK = 5): string
    {
        // Generate an embedding for the question
        $questionEmbedding = Prism::embeddings()
            ->using(Provider::OpenAI, 'text-embedding-3-small')
            ->fromInput($question)
            ->generate()
            ->embeddings[0];

        // Find the most similar chunks (using pgvector or similar)
        $relevantChunks = $this->findSimilarChunks($questionEmbedding, $topK);

        // Build context from the relevant chunks
        $context = $relevantChunks
            ->pluck('content')
            ->implode("\n\n---\n\n");

        // Generate an answer grounded in the context
        return Prism::text()
            ->using(Provider::OpenAI, 'gpt-4o')
            ->withSystemPrompt("Answer questions based on the provided context. If the answer isn't in the context, say so.")
            ->withPrompt("Context:\n{$context}\n\nQuestion: {$question}")
            ->generate()
            ->text;
    }

    private function splitIntoChunks(string $content, int $chunkSize): array
    {
        $words = explode(' ', $content);

        return collect($words)
            ->chunk($chunkSize)
            ->map(fn ($chunk) => implode(' ', $chunk->toArray()))
            ->toArray();
    }

    private function findSimilarChunks(array $embedding, int $topK): Collection
    {
        // Uses the pgvector extension; `<=>` computes cosine distance
        return DocumentChunk::query()
            ->selectRaw('*, embedding <=> ? as distance', [json_encode($embedding)])
            ->orderBy('distance')
            ->limit($topK)
            ->get();
    }
}
```
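`findSimilarChunks` assumes the pgvector extension, whose `<=>` operator computes cosine distance (1 minus cosine similarity) inside the database. For a small corpus, or a database without pgvector, the same ranking can be computed in PHP. A self-contained cosine-similarity sketch:

```php
<?php

/**
 * Cosine similarity between two equal-length vectors:
 * dot(a, b) / (|a| * |b|). Returns 1.0 for identical
 * directions, 0.0 for orthogonal vectors.
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}
```

To rank chunks without pgvector, you would load the stored vectors, score each with `cosineSimilarity` against the question embedding, and sort descending; this is O(n) per query, which is fine for hundreds of chunks but not millions.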
## Configuration

```php
// config/prism.php
return [
    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'organization' => env('OPENAI_ORGANIZATION'),
        ],
        'anthropic' => [
            'api_key' => env('ANTHROPIC_API_KEY'),
        ],
        'ollama' => [
            'url' => env('OLLAMA_URL', 'http://localhost:11434'),
        ],
    ],
];
```

```shell
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```
## Invoked By Commands

- `/laravel-agent:ai:make` - Create AI-powered features
## Called By

- `laravel-architect` - When building AI features
## Guardrails

The AI agent follows strict rules:

- **ALWAYS** store API keys in environment variables
- **ALWAYS** implement rate limiting for AI calls
- **ALWAYS** handle API errors gracefully
- **NEVER** expose raw AI responses without validation
- **NEVER** store sensitive user data in AI prompts
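The last guardrail usually means redacting user input before it reaches a provider. A minimal regex-based sketch (the `redact` helper and its patterns are illustrative; real PII detection needs far more than two regexes):

```php
<?php

/**
 * Strip obvious PII before a string is sent to an AI provider.
 * Catches easy cases only: email addresses and card-like runs
 * of 13-16 digits with optional space/dash separators.
 */
function redact(string $text): string
{
    // Email addresses
    $text = preg_replace('/[\w.+-]+@[\w-]+\.[\w.]+/', '[EMAIL]', $text);

    // Card-like digit runs
    $text = preg_replace('/\b(?:\d[ -]?){13,16}\b/', '[CARD]', $text);

    return $text;
}
```

Running user input through a filter like this before `withPrompt()` keeps the sensitive values out of both the provider's logs and any locally stored conversation history.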
## See Also

- `laravel-service-builder` - Service class patterns
- `laravel-queue` - Background AI processing