Abstract Factory Pattern

The Abstract Factory pattern is a natural fit for LLM applications that need to create families of related objects: different AI providers, model configurations, or complete AI stacks that must expose consistent interfaces.

Why Abstract Factory for LLM?

LLM applications often require:

  • Provider abstraction: Support multiple AI providers (OpenAI, Anthropic, Google, etc.)

  • Environment consistency: Different configurations for development, testing, and production

  • Family cohesion: Related components that work together (client + embeddings + tools)

  • Easy switching: Change entire AI stacks without code modifications

Key LLM Use Cases

1. Multi-Provider AI Client Factory

Creating consistent interfaces for different AI providers:

from abc import ABC, abstractmethod

# Abstract products
class LLMClient(ABC):
    @abstractmethod
    def generate(self, prompt, **kwargs):
        pass
    
    @abstractmethod
    def stream_generate(self, prompt, **kwargs):
        pass

class EmbeddingClient(ABC):
    @abstractmethod
    def embed_text(self, text):
        pass
    
    @abstractmethod
    def embed_documents(self, documents):
        pass

class ImageClient(ABC):
    @abstractmethod
    def generate_image(self, prompt):
        pass
    
    @abstractmethod
    def analyze_image(self, image):
        pass

# Abstract factory
class AIProviderFactory(ABC):
    @abstractmethod
    def create_llm_client(self):
        pass
    
    @abstractmethod
    def create_embedding_client(self):
        pass
    
    @abstractmethod
    def create_image_client(self):
        pass

# Concrete implementations for OpenAI
from openai import OpenAI

class OpenAILLMClient(LLMClient):
    def __init__(self):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
    
    def generate(self, prompt, **kwargs):
        return self.client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            **kwargs
        )
    
    def stream_generate(self, prompt, **kwargs):
        return self.client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            stream=True,
            **kwargs
        )

class OpenAIEmbeddingClient(EmbeddingClient):
    def __init__(self):
        self.client = OpenAI()
    
    def embed_text(self, text):
        return self.client.embeddings.create(
            model="text-embedding-ada-002",
            input=text
        )
    
    def embed_documents(self, documents):
        return [self.embed_text(doc) for doc in documents]

class OpenAIImageClient(ImageClient):
    def __init__(self):
        self.client = OpenAI()
    
    def generate_image(self, prompt):
        return self.client.images.generate(
            prompt=prompt,
            n=1,
            size="1024x1024"
        )
    
    def analyze_image(self, image):
        return self.client.chat.completions.create(
            model="gpt-4-vision-preview",
            messages=[{
                "role": "user",
                # image_url must be an object with a "url" key
                "content": [{"type": "image_url", "image_url": {"url": image}}]
            }]
        )

class OpenAIFactory(AIProviderFactory):
    def create_llm_client(self):
        return OpenAILLMClient()
    
    def create_embedding_client(self):
        return OpenAIEmbeddingClient()
    
    def create_image_client(self):
        return OpenAIImageClient()

# Concrete implementations for Anthropic
from anthropic import Anthropic

class AnthropicLLMClient(LLMClient):
    def __init__(self):
        self.client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    
    def generate(self, prompt, **kwargs):
        # Claude 3 models use the Messages API, not the legacy prompt format
        return self.client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
            **kwargs
        )
    
    def stream_generate(self, prompt, **kwargs):
        return self.client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
            stream=True,
            **kwargs
        )

class AnthropicFactory(AIProviderFactory):
    def create_llm_client(self):
        return AnthropicLLMClient()
    
    def create_embedding_client(self):
        # Anthropic doesn't have embeddings, use OpenAI
        return OpenAIEmbeddingClient()
    
    def create_image_client(self):
        # Anthropic doesn't have image generation, use OpenAI
        return OpenAIImageClient()

# Usage
class AIApplication:
    def __init__(self, factory: AIProviderFactory):
        self.llm = factory.create_llm_client()
        self.embeddings = factory.create_embedding_client()
        self.images = factory.create_image_client()
    
    def process_multimodal_request(self, text_prompt, image_url=None):
        if image_url:
            analysis = self.images.analyze_image(image_url)
            # Extract the analysis text rather than interpolating the raw response object
            image_summary = analysis.choices[0].message.content
            combined_prompt = f"{text_prompt}\n\nImage analysis: {image_summary}"
        else:
            combined_prompt = text_prompt
        
        return self.llm.generate(combined_prompt)

# Easy provider switching
openai_app = AIApplication(OpenAIFactory())
anthropic_app = AIApplication(AnthropicFactory())
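Switching can also be driven by configuration rather than hard-coded. A minimal sketch of a factory registry keyed off an environment variable (the `AI_PROVIDER` variable name is an assumption, and the stub classes stand in for the factories defined above so the sketch runs on its own):

```python
import os

# Stand-ins so this sketch is self-contained; in the article these are the
# OpenAIFactory and AnthropicFactory defined above.
class OpenAIFactory: ...
class AnthropicFactory: ...

# Registry mapping provider names to factory classes
FACTORIES = {
    "openai": OpenAIFactory,
    "anthropic": AnthropicFactory,
}

def factory_from_env(default="openai"):
    """Pick the whole AI stack from a single environment variable."""
    provider = os.environ.get("AI_PROVIDER", default).lower()
    try:
        return FACTORIES[provider]()
    except KeyError:
        raise ValueError(f"Unknown AI_PROVIDER: {provider!r}") from None

os.environ["AI_PROVIDER"] = "anthropic"
factory = factory_from_env()
print(type(factory).__name__)  # AnthropicFactory
```

Because the rest of the application only sees the abstract factory interface, changing providers becomes a deployment setting instead of a code change.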

Benefits:

  • Consistent interface across all AI providers

  • Easy switching between providers

  • Family of related clients work together

  • Centralized provider configuration

2. Environment-Based Configuration Factory

Different configurations for different environments:
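A minimal sketch of what such a factory could look like (the environment names, model names, and config fields are illustrative assumptions, not a prescribed setup):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical config object shared by every environment
@dataclass
class LLMConfig:
    model: str
    temperature: float
    max_tokens: int
    retries: int

class EnvironmentFactory(ABC):
    @abstractmethod
    def create_llm_config(self) -> LLMConfig: ...
    
    @abstractmethod
    def create_logging_config(self) -> dict: ...

class DevelopmentFactory(EnvironmentFactory):
    def create_llm_config(self):
        # Cheap, deterministic settings for local iteration
        return LLMConfig(model="gpt-4o-mini", temperature=0.0, max_tokens=256, retries=1)
    
    def create_logging_config(self):
        return {"level": "DEBUG", "log_prompts": True}

class ProductionFactory(EnvironmentFactory):
    def create_llm_config(self):
        # Higher-quality model with retries for resilience
        return LLMConfig(model="gpt-4o", temperature=0.7, max_tokens=2048, retries=3)
    
    def create_logging_config(self):
        # Never log raw prompts in production
        return {"level": "WARNING", "log_prompts": False}

dev = DevelopmentFactory()
prod = ProductionFactory()
print(dev.create_llm_config().model)          # gpt-4o-mini
print(prod.create_logging_config()["level"])  # WARNING
```

Each concrete factory guarantees that the model settings and logging policy it hands out belong together, so a deployment can never mix, say, debug-level prompt logging with production traffic.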

Benefits:

  • Environment-specific optimizations

  • Consistent configuration patterns

  • Easy environment switching

  • Centralized configuration management

3. RAG System Component Factory

Creating complete RAG systems with consistent component families:
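One way to sketch this (the domains, chunk sizes, and prompt templates are illustrative assumptions):

```python
from abc import ABC, abstractmethod

class RAGFactory(ABC):
    """Creates a matched family of RAG components for one domain."""
    @abstractmethod
    def create_chunker(self): ...
    
    @abstractmethod
    def create_prompt_template(self) -> str: ...

class SentenceChunker:
    def __init__(self, max_sentences):
        self.max_sentences = max_sentences
    
    def chunk(self, text):
        # Naive sentence split, grouped into fixed-size chunks
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        n = self.max_sentences
        return [". ".join(sentences[i:i + n]) for i in range(0, len(sentences), n)]

class LegalRAGFactory(RAGFactory):
    def create_chunker(self):
        # Long chunks: legal clauses lose meaning when split finely
        return SentenceChunker(max_sentences=8)
    
    def create_prompt_template(self):
        return "Answer citing the exact clause:\n{context}\n\nQuestion: {question}"

class SupportRAGFactory(RAGFactory):
    def create_chunker(self):
        # Short chunks: support answers are usually self-contained
        return SentenceChunker(max_sentences=2)
    
    def create_prompt_template(self):
        return "Answer concisely for a customer:\n{context}\n\nQuestion: {question}"

doc = "Reset your password. Go to settings. Click reset. Done."
chunks = SupportRAGFactory().create_chunker().chunk(doc)
print(len(chunks))  # 2
```

The point is that chunking strategy and prompt template vary together per domain; the factory keeps them from being combined inconsistently.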

Benefits:

  • Domain-optimized RAG components

  • Consistent component interfaces

  • Easy domain switching

  • Specialized configurations per use case

4. Testing and Evaluation Factory

Creating comprehensive testing suites for different scenarios:

Benefits:

  • Comprehensive testing capabilities

  • Consistent testing interfaces

  • Easy test type switching

  • Specialized testing for different concerns

Implementation Advantages

1. Provider Independence

  • Switch between AI providers without code changes

  • Consistent interfaces across different services

  • Easy migration between providers

  • Vendor lock-in prevention

2. Configuration Management

  • Environment-specific optimizations

  • Centralized configuration control

  • Easy deployment across environments

  • Consistent component relationships

3. Domain Specialization

  • Optimized component families for specific domains

  • Domain-specific configurations and behaviors

  • Easy domain switching and comparison

  • Specialized feature sets per domain

4. Testing and Quality Assurance

  • Comprehensive testing frameworks

  • Different testing strategies for different concerns

  • Consistent evaluation approaches

  • Easy comparison of different configurations

Real-World Impact

The Abstract Factory pattern in LLM applications provides:

  • Flexibility: Easy switching between providers, environments, and configurations

  • Consistency: Guaranteed compatible component families

  • Maintainability: Centralized creation logic and configuration management

  • Scalability: Easy extension to new providers and environments

This pattern is crucial for production LLM systems where you need to support multiple AI providers, different deployment environments, and domain-specific optimizations while maintaining consistent interfaces and behavior.