AI

/laravel-agent:ai:make

Create an AI-powered feature using Prism PHP

Overview

The /ai:make command generates AI-powered Laravel features using Prism PHP. It creates complete AI services with support for conversational AI, embeddings, tool calling, structured output, and MCP server integration. Choose from multiple AI providers including OpenAI, Anthropic, and Ollama.

Usage

/laravel-agent:ai:make [Name] [type]

Examples

# Create a conversational AI service (default)
/laravel-agent:ai:make ChatBot

# Create semantic search/recommendations with embeddings
/laravel-agent:ai:make ProductRecommender embeddings

# Create AI with tool calling capabilities
/laravel-agent:ai:make OrderAssistant tools

# Create AI with structured JSON output
/laravel-agent:ai:make ContentModerator structured

AI Feature Types

The command supports multiple AI feature types, each optimized for specific use cases:

  • chat (default) - Conversational AI: streaming responses, conversation history, context management
  • embeddings - Semantic search & recommendations: vector embeddings, similarity search, content clustering
  • tools - AI with function/tool calling: external API integration, database queries, action execution
  • structured - Structured JSON output: schema validation, typed responses, data extraction
  • mcp - MCP server tools: Model Context Protocol integration, server-side tools
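As an illustration of the structured type, a service constrains the model's output with a schema. The sketch below assumes Prism's structured-output API (Prism::structured() and its Schema value objects); exact class names and methods vary between Prism versions, and the prompt is illustrative:

```php
<?php

use EchoLabs\Prism\Prism;
use EchoLabs\Prism\Schema\BooleanSchema;
use EchoLabs\Prism\Schema\ObjectSchema;
use EchoLabs\Prism\Schema\StringSchema;

// Describe the JSON shape the model must return.
$schema = new ObjectSchema(
    name: 'moderation_result',
    description: 'Moderation verdict for a piece of content',
    properties: [
        new BooleanSchema('flagged', 'Whether the content violates policy'),
        new StringSchema('reason', 'Short explanation of the verdict'),
    ],
    requiredFields: ['flagged', 'reason'],
);

$response = Prism::structured()
    ->using('anthropic', 'claude-3-5-sonnet-20241022')
    ->withSchema($schema)
    ->withPrompt('Moderate this comment: "Buy cheap watches now!!!"')
    ->generate();

// The decoded, schema-validated payload.
$verdict = $response->structured;
```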

What Gets Created

The command generates a complete AI feature with the following components:

  • AI Service (app/Services/AI/) - Main AI service class with Prism integration
  • Configuration (config/ai.php) - AI provider settings, models, and credentials
  • Controller (app/Http/Controllers/) - HTTP endpoints for AI interactions
  • Routes (routes/web.php) - API routes for the AI service
  • View (resources/views/ai/) - Frontend interface with streaming support
  • Tests (tests/Feature/) - Pest PHP tests for AI functionality
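The generated controller is typically a thin HTTP layer over the service. A minimal sketch of that shape (the route path, method name, and validation rules here are illustrative, not the exact generated code):

```php
<?php

namespace App\Http\Controllers;

use App\Services\AI\ChatBotService;
use Illuminate\Http\JsonResponse;
use Illuminate\Http\Request;

class ChatBotController extends Controller
{
    public function __construct(private ChatBotService $chatBot)
    {
    }

    // POST /ai/chatbot: validate input, delegate to the service, return JSON.
    public function chat(Request $request): JsonResponse
    {
        $validated = $request->validate([
            'message' => ['required', 'string', 'max:4000'],
        ]);

        $reply = $this->chatBot->chat($validated['message']);

        return response()->json(['reply' => $reply]);
    }
}
```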

AI Providers

Choose from multiple AI providers based on your needs:

  • OpenAI - GPT-4, GPT-3.5 models with function calling and vision
  • Anthropic - Claude models with extended context and tool use
  • Ollama - Local models for privacy and cost savings
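Provider selection is just the first argument to using(), so the same service code can target any configured provider. For example (model names are illustrative):

```php
use EchoLabs\Prism\Prism;

// Hosted providers
Prism::text()->using('openai', 'gpt-4-turbo-preview');
Prism::text()->using('anthropic', 'claude-3-5-sonnet-20241022');

// Local models via Ollama
Prism::text()->using('ollama', 'llama2');
```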

Available Features

Customize your AI service with these features:

  • streaming - Real-time response streaming for better UX
  • history - Conversation history and context management
  • tools - Function/tool calling for dynamic capabilities
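For the streaming feature, a controller method can forward the service's stream to the browser as server-sent events using Laravel's streamed responses. A rough sketch; it assumes each streamed chunk exposes a text property, which may differ by Prism version:

```php
use App\Services\AI\ChatBotService;
use Illuminate\Http\Request;
use Symfony\Component\HttpFoundation\StreamedResponse;

public function stream(Request $request, ChatBotService $chatBot): StreamedResponse
{
    return response()->stream(function () use ($request, $chatBot) {
        foreach ($chatBot->chatWithStreaming($request->input('message')) as $chunk) {
            // Emit each chunk as a server-sent event.
            echo 'data: ' . json_encode(['text' => $chunk->text]) . "\n\n";
            ob_flush();
            flush();
        }
    }, 200, [
        'Content-Type' => 'text/event-stream',
        'Cache-Control' => 'no-cache',
        'X-Accel-Buffering' => 'no',
    ]);
}
```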

Example Output Structure

For /laravel-agent:ai:make ChatBot chat:

app/
├── Services/AI/
│   └── ChatBotService.php
├── Http/Controllers/
│   └── ChatBotController.php
config/
└── ai.php
resources/views/ai/
└── chatbot.blade.php
routes/
└── web.php
tests/Feature/
└── ChatBotTest.php
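The routes file wires the controller in. A sketch of what routes/web.php might contain (paths and method names are illustrative):

```php
use App\Http\Controllers\ChatBotController;
use Illuminate\Support\Facades\Route;

Route::get('/ai/chatbot', [ChatBotController::class, 'index']);  // render the Blade view
Route::post('/ai/chatbot', [ChatBotController::class, 'chat']);  // JSON chat endpoint
```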

Generated Service Example

<?php

namespace App\Services\AI;

use EchoLabs\Prism\Prism;
use EchoLabs\Prism\ValueObjects\Messages\UserMessage;

class ChatBotService
{
    /**
     * Send a message and return the complete response text.
     *
     * @param  array  $history  Prior messages as Prism message value objects.
     */
    public function chat(string $message, array $history = []): string
    {
        $prism = Prism::text()
            ->using('anthropic', 'claude-3-5-sonnet-20241022')
            ->withSystemPrompt('You are a helpful AI assistant.')
            ->withMessages(array_merge($history, [
                new UserMessage($message),
            ]));

        return $prism->generate()->text;
    }

    /**
     * Send a message and stream the response chunk by chunk.
     */
    public function chatWithStreaming(string $message, array $history = [])
    {
        return Prism::text()
            ->using('anthropic', 'claude-3-5-sonnet-20241022')
            ->withSystemPrompt('You are a helpful AI assistant.')
            ->withMessages(array_merge($history, [
                new UserMessage($message),
            ]))
            ->stream();
    }
}
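Because live model output is non-deterministic, the generated Pest tests are easiest to keep stable by faking the service at the container level so no provider API is hit. A sketch (the route and expected reply are illustrative):

```php
<?php

use App\Services\AI\ChatBotService;

it('returns the assistant reply', function () {
    // Replace the real service so the test never calls the provider API.
    $this->mock(ChatBotService::class)
        ->shouldReceive('chat')
        ->once()
        ->andReturn('Hello! How can I help?');

    $this->postJson('/ai/chatbot', ['message' => 'Hi'])
        ->assertOk()
        ->assertJson(['reply' => 'Hello! How can I help?']);
});
```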

Best Practices

  1. Start with chat type - Use the default chat type for most conversational use cases
  2. Enable streaming - Provide better UX with real-time response streaming
  3. Manage context - Store and manage conversation history for coherent interactions
  4. Use tools wisely - Add tool calling when AI needs to perform actions or query data
  5. Test thoroughly - AI responses vary; test edge cases and failure scenarios
  6. Monitor costs - Track API usage and implement rate limiting for production
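For cost control (point 6), Laravel's RateLimiter facade can cap per-user calls before they reach the provider. A minimal sketch (the key and limits are illustrative):

```php
use Illuminate\Support\Facades\RateLimiter;

$key = 'ai-chat:' . $request->user()->id;

// Allow at most 30 AI calls per user per 60-second window;
// attempt() runs the callback or returns false when the limit is hit.
$reply = RateLimiter::attempt($key, 30, fn () => $chatBot->chat($message), 60);

if ($reply === false) {
    abort(429, 'Too many AI requests. Please slow down.');
}
```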

Configuration

The generated config/ai.php includes provider credentials and settings:

<?php

return [
    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'default_model' => 'gpt-4-turbo-preview',
        ],
        'anthropic' => [
            'api_key' => env('ANTHROPIC_API_KEY'),
            'default_model' => 'claude-3-5-sonnet-20241022',
        ],
        'ollama' => [
            'url' => env('OLLAMA_URL', 'http://localhost:11434'),
            'default_model' => 'llama2',
        ],
    ],
];
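Credentials are read from the environment (OPENAI_API_KEY, ANTHROPIC_API_KEY, OLLAMA_URL in .env), and any value in this file can then be read through Laravel's config helper:

```php
// Read settings anywhere in the app via the config helper.
$model = config('ai.providers.anthropic.default_model');
$apiKey = config('ai.providers.anthropic.api_key'); // from ANTHROPIC_API_KEY in .env
```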

Related Agent

This command uses the laravel-ai agent.

See Also