Building AI Chat Interfaces
Define emerging standards for AI/human conversational interfaces with streaming UX, context management, and multi-modal interactions.
When to Use
Use this skill when:
- Building ChatGPT-style conversational interfaces
- Creating AI assistants, copilots, or chatbots
- Implementing streaming text responses with markdown
- Managing conversation context and token limits
- Handling multi-modal inputs (text, images, files, voice)
- Dealing with AI-specific errors (hallucinations, refusals, limits)
- Adding feedback mechanisms (thumbs, regeneration, editing)
- Implementing conversation branching or threading
- Visualizing tool/function calling
Overview
This skill draws on meta-knowledge from building WITH Claude to establish patterns for streaming UX, context management, and multi-modal interactions. Because the industry has not yet settled on established patterns, it serves as a reference implementation.
Quick Start
Minimal AI chat interface in under 50 lines:
import { useChat } from '@ai-sdk/react';

export function MinimalAIChat() {
  // useChat manages message state, streaming updates, and the request lifecycle
  const { messages, input, handleInputChange, handleSubmit, isLoading, stop } = useChat();

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map(m => (
          <div key={m.id} className={`message ${m.role}`}>
            <div className="content">{m.content}</div>
          </div>
        ))}
        {isLoading && <div className="thinking">AI is thinking...</div>}
      </div>
      <form onSubmit={handleSubmit} className="input-form">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything..."
          disabled={isLoading}
        />
        {/* Swap the send button for a stop button while a response is streaming */}
        {isLoading ? (
          <button type="button" onClick={stop}>Stop</button>
        ) : (
          <button type="submit">Send</button>
        )}
      </form>
    </div>
  );
}
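The hook posts to /api/chat by default, so a matching server route is required. Below is a minimal sketch of one, assuming the AI SDK v4 packages from the Recommended Stack, a Next.js App Router route, and gpt-4o as an illustrative model choice; adapt the provider, model, and framework wiring to your setup.

// app/api/chat/route.js: minimal sketch assuming AI SDK v4 + Next.js App Router
import { streamText, convertToCoreMessages } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req) {
  const { messages } = await req.json();

  // Stream the completion back in the format useChat consumes
  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToCoreMessages(messages),
  });

  return result.toDataStreamResponse();
}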
Core Components
Message Display
- User message bubbles
- AI message with streaming markdown
- System messages
- Streaming cursor indicator
Input Components
- Multi-line textarea with auto-resize (see the sketch after this list)
- File attachment button
- Voice input toggle
- Send button (disabled when empty or loading)
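A minimal sketch of the auto-resizing textarea, using the common Enter-to-send, Shift+Enter-for-newline convention; the ChatTextarea name and its props are illustrative rather than part of any library.

import { useRef } from 'react';

// Illustrative input: grows with its content and submits on Enter
export function ChatTextarea({ value, onChange, onSubmit }) {
  const ref = useRef(null);

  const resize = () => {
    const el = ref.current;
    if (!el) return;
    el.style.height = 'auto';                 // reset so the box can shrink
    el.style.height = `${el.scrollHeight}px`; // then grow to fit the content
  };

  return (
    <textarea
      ref={ref}
      rows={1}
      value={value}
      onChange={e => { onChange(e.target.value); resize(); }}
      onKeyDown={e => {
        // Enter sends, Shift+Enter inserts a newline
        if (e.key === 'Enter' && !e.shiftKey) {
          e.preventDefault();
          onSubmit();
        }
      }}
      placeholder="Ask anything..."
    />
  );
}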
Response Controls
- Stop generating (during streaming)
- Regenerate response
- Continue generation
- Edit and resubmit message
Feedback Mechanisms
- Thumbs up/down for RLHF (combined with the response controls in the sketch after this list)
- Copy to clipboard
- Share conversation
- Report issues
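A sketch of an action bar that combines the response controls and feedback mechanisms above. The MessageActions component and its callback props are illustrative; wire them to your chat hook's regenerate action and your feedback endpoint.

// Illustrative action bar rendered under each assistant message
function MessageActions({ content, onRegenerate, onFeedback }) {
  return (
    <div className="message-actions" role="toolbar" aria-label="Message actions">
      {/* Copy the raw message text */}
      <button onClick={() => navigator.clipboard.writeText(content)}>Copy</button>
      {/* Re-run the request that produced this message */}
      <button onClick={onRegenerate}>Regenerate</button>
      {/* Lightweight thumbs feedback, typically logged for later model tuning */}
      <button aria-label="Good response" onClick={() => onFeedback('up')}>👍</button>
      <button aria-label="Bad response" onClick={() => onFeedback('down')}>👎</button>
    </div>
  );
}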
Streaming & Real-Time UX
Progressive rendering of AI responses requires special handling:
// Use Streamdown for AI streaming (handles incomplete markdown)
import { Streamdown } from 'streamdown';

// Smart auto-scroll heuristic: only follow the stream when the user is already
// near the bottom and has not scrolled up to read earlier messages
function shouldAutoScroll(container, hasUserScrolledUp) {
  const threshold = 100; // px from bottom
  const isNearBottom =
    container.scrollHeight - container.scrollTop - container.clientHeight < threshold;
  return isNearBottom && !hasUserScrolledUp;
}
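A sketch of wiring that heuristic into a hook; the useAutoScroll name is illustrative. It tracks whether the user has scrolled away and follows the stream only while they have not.

import { useEffect, useRef } from 'react';

// Illustrative hook: pass the streaming message content as the dependency list
export function useAutoScroll(deps) {
  const containerRef = useRef(null);
  const userScrolledUp = useRef(false);

  // Track whether the user has scrolled away from the bottom
  useEffect(() => {
    const el = containerRef.current;
    if (!el) return;
    const onScroll = () => {
      const distance = el.scrollHeight - el.scrollTop - el.clientHeight;
      userScrolledUp.current = distance > 100;
    };
    el.addEventListener('scroll', onScroll);
    return () => el.removeEventListener('scroll', onScroll);
  }, []);

  // Follow the stream only while the user stays near the bottom
  useEffect(() => {
    const el = containerRef.current;
    if (el && !userScrolledUp.current) {
      el.scrollTop = el.scrollHeight;
    }
  }, deps);

  return containerRef;
}

Attach the returned ref to the messages container and pass the latest streamed content as the dependency list.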
Context Management
Communicate token limits clearly to users:
function TokenIndicator({ used, total }) {
  const percentage = (used / total) * 100;
  const remaining = total - used;

  // ~250 tokens serves as a rough "one message / one page" metaphor
  return (
    <div className="token-indicator">
      <span>
        {percentage > 80
          ? `⚠️ About ${Math.floor(remaining / 250)} messages left`
          : `${Math.floor(remaining / 250)} pages remaining`}
      </span>
    </div>
  );
}
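Beyond showing remaining capacity, the conversation usually has to be trimmed to fit the model's context window. The sketch below uses the common rough estimate of about 4 characters per token and drops the oldest non-system messages first; both the estimate and the trimToTokenBudget helper are illustrative.

// Rough heuristic: ~4 characters per token for English text
const estimateTokens = (text) => Math.ceil(text.length / 4);

// Illustrative helper: keep the system prompt, drop the oldest messages until the rest fit
export function trimToTokenBudget(messages, budget) {
  const system = messages.filter(m => m.role === 'system');
  const rest = messages.filter(m => m.role !== 'system');

  let used = system.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  const kept = [];

  // Walk from newest to oldest so recent turns are preserved
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (used + cost > budget) break;
    used += cost;
    kept.unshift(rest[i]);
  }

  return [...system, ...kept];
}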
Multi-Modal Support
Handle images, files, and voice inputs:
// Image upload with drag-and-drop and local previews
import { useState } from 'react';

function ImageUpload({ onUpload }) {
  const [previews, setPreviews] = useState([]);

  // Create object URLs for previews and pass the raw files to the caller
  const addFiles = (fileList) => {
    const files = Array.from(fileList).filter(f => f.type.startsWith('image/'));
    setPreviews(prev => [
      ...prev,
      ...files.map(file => ({ id: crypto.randomUUID(), url: URL.createObjectURL(file) })),
    ]);
    onUpload(files);
  };

  return (
    <div
      className="upload-zone"
      onDragOver={e => e.preventDefault()}
      onDrop={e => { e.preventDefault(); addFiles(e.dataTransfer.files); }}
    >
      <input type="file" accept="image/*" multiple onChange={e => addFiles(e.target.files)} />
      {previews.map(preview => (
        <img key={preview.id} src={preview.url} alt="Upload preview" />
      ))}
    </div>
  );
}
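For the voice toggle, the browser's Web Speech API is one option where it is available (support varies by browser); the sketch below transcribes a single utterance into the input field. The VoiceButton name and onTranscript prop are illustrative.

// Illustrative voice-input toggle built on the browser Web Speech API
function VoiceButton({ onTranscript }) {
  const start = () => {
    // Vendor-prefixed in Chromium-based browsers; absent in some others
    const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
    if (!SpeechRecognition) return;

    const recognition = new SpeechRecognition();
    recognition.lang = 'en-US';
    recognition.interimResults = false;

    // Forward the transcribed utterance to the chat input
    recognition.onresult = (event) => onTranscript(event.results[0][0].transcript);
    recognition.start();
  };

  return <button type="button" aria-label="Voice input" onClick={start}>🎤</button>;
}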
Error Handling
Handle AI-specific errors gracefully:
// Refusal handling
if (response.type === 'refusal') {
  return (
    <div className="error refusal">
      <p>I cannot help with that request.</p>
      <details>
        <summary>Why?</summary>
        <p>{response.reason}</p>
      </details>
    </div>
  );
}
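Refusals are only one failure mode; rate limits, context overflows, and network errors also need user-facing handling. The sketch below maps common failures to messages with a retry affordance; the error shape and the classifyError helper are illustrative, so adapt them to what your backend actually returns.

// Illustrative mapping of AI-specific failures to user-facing copy
function classifyError(error) {
  if (error.status === 429) return 'rate_limit';
  if (error.message?.toLowerCase().includes('context length')) return 'context_overflow';
  if (error.status === undefined) return 'network';
  return 'unknown';
}

const ERROR_COPY = {
  rate_limit: 'Too many requests right now. Wait a moment and retry.',
  context_overflow: 'This conversation is too long. Start a new chat or remove attachments.',
  network: 'Connection lost. Check your network and retry.',
  unknown: 'Something went wrong while generating a response.',
};

// Banner with a retry affordance, e.g. wired to the chat hook's regenerate action
function ErrorBanner({ error, onRetry }) {
  return (
    <div className="error banner" role="alert">
      <p>{ERROR_COPY[classifyError(error)]}</p>
      <button onClick={onRetry}>Retry</button>
    </div>
  );
}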
Recommended Stack
# Core AI chat functionality
npm install ai @ai-sdk/react @ai-sdk/openai
# Streaming markdown rendering
npm install streamdown
# Syntax highlighting
npm install react-syntax-highlighter
# Security for LLM outputs
npm install dompurify
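The dompurify dependency matters whenever model output is rendered as raw HTML rather than escaped markdown. A small sketch of sanitizing before injection is shown below; the allow-list is illustrative and should be tightened or loosened to match what you actually render.

import DOMPurify from 'dompurify';

// Sanitize model-generated HTML before it reaches the DOM
function SafeHtml({ html }) {
  const clean = DOMPurify.sanitize(html, {
    // Illustrative allow-list: keep basic formatting, drop scripts, handlers, and iframes
    ALLOWED_TAGS: ['p', 'a', 'strong', 'em', 'code', 'pre', 'ul', 'ol', 'li'],
    ALLOWED_ATTR: ['href'],
  });
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}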
Performance Optimization
Critical for smooth streaming:
import { memo, useMemo } from 'react';
import debounce from 'lodash.debounce'; // or any small debounce helper

// Memoize message rendering: re-render only when content or streaming state changes
const MemoizedMessage = memo(Message, (prev, next) =>
  prev.content === next.content && prev.isStreaming === next.isStreaming
);

// Inside the chat component: debounce streaming updates to at most one render per 50 ms
const debouncedUpdate = useMemo(
  () => debounce(updateMessage, 50),
  []
);
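The biggest win usually comes from isolating the message that is still streaming so the completed history never re-renders. A sketch under that assumption, reusing MemoizedMessage from above; the Message component and message shape are illustrative.

// Completed messages render through the memoized component; only the
// in-flight message re-renders on every streamed token
function MessageList({ messages, isStreaming }) {
  const completed = isStreaming ? messages.slice(0, -1) : messages;
  const streaming = isStreaming ? messages[messages.length - 1] : null;

  return (
    <div className="messages">
      {completed.map(m => (
        <MemoizedMessage key={m.id} {...m} isStreaming={false} />
      ))}
      {streaming && <Message {...streaming} isStreaming />}
    </div>
  );
}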
Accessibility
Ensure AI chat is usable by everyone:
// ARIA live regions for screen readers
<div role="log" aria-live="polite" aria-relevant="additions">
  {messages.map(msg => (
    <article key={msg.id} role="article" aria-label={`${msg.role} message`}>
      {msg.content}
    </article>
  ))}
</div>

// Loading announcements
<div role="status" aria-live="polite" className="sr-only">
  {isLoading ? 'AI is responding' : ''}
</div>
Key Innovations
This skill provides industry-first solutions for:
- Memoized streaming rendering - 10-50x performance improvement
- Intelligent auto-scroll - User activity-aware scrolling
- Token metaphors - User-friendly context communication
- Incomplete markdown handling - Graceful partial rendering
- RLHF patterns - Effective feedback collection
- Accessibility-first - Built-in screen reader support
Strategic Importance
This is THE most critical skill because:
- Perfect timing - every app is adding AI (the 2024-2025 boom)
- No standards exist - an opportunity to define the patterns
- Meta-advantage - building WITH Claude gives first-hand UX knowledge
- Unique challenges - streaming, context limits, and hallucinations are all new problems
Related Skills
- Theming Components - Chat UI theming
- Building Forms - Input components and validation
- Providing Feedback - Loading states and error messages
- Managing Media - File and image attachments
References
- Full Skill Documentation
- Patterns: references/streaming-patterns.md, references/context-management.md
- Features: references/multimodal-input.md, references/feedback-loops.md
- Implementation: references/error-handling.md, references/tool-usage.md
- Performance: references/performance-optimization.md
- Examples: examples/basic-chat.tsx, examples/multimodal-chat.tsx