AStack Demo
Client-Side Rendering Demo
How It Works
This demo showcases AStack's client-side rendering pipeline for real-time AI avatar interactions.
Client-Side Rendering
- Avatar rendered in-browser (VRM or TalkingHead)
- Audio playback synchronized with blendshape animation
- WebSocket for data transfer (not WebRTC)
- 52 ARKit blendshapes drive facial animation
- Low bandwidth: only audio and blendshape data are streamed
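A rough back-of-envelope calculation shows why streaming only audio and blendshapes is cheap compared to video. The frame rate and audio bitrate below are illustrative assumptions, not measured numbers from AStack:

```typescript
// Bandwidth estimate for client-side rendering.
// Assumptions: 52 ARKit blendshape weights as float32 at 30 fps,
// plus a typical ~32 kbps voice-quality audio stream.
const BLENDSHAPES = 52;
const BYTES_PER_WEIGHT = 4; // float32
const FPS = 30;             // assumed animation frame rate
const AUDIO_KBPS = 32;      // assumed voice codec bitrate

const blendshapeKbps = (BLENDSHAPES * BYTES_PER_WEIGHT * FPS * 8) / 1000;
const totalKbps = blendshapeKbps + AUDIO_KBPS;

console.log(`blendshapes: ${blendshapeKbps} kbps, total: ${totalKbps} kbps`);
```

Under these assumptions the whole stream stays under ~100 kbps, versus several hundred kbps to multiple Mbps for a server-rendered video feed over WebRTC.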
AI Pipeline
- Real-time speech recognition (ASR)
- Large language model responses (LLM)
- Text-to-speech synthesis (TTS)
- Audio-to-face blendshape generation
- Sub-second response latency target
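The pipeline above can be sketched as a chain of stages. The stage types and stub implementations here are hypothetical, for illustration only; they are not the actual AStack provider API:

```typescript
// Each pipeline stage transforms one representation into the next.
type Stage<I, O> = (input: I) => Promise<O>;

// Stubs standing in for real ASR/LLM/TTS providers (hypothetical).
const asr: Stage<ArrayBuffer, string> = async () => "hello";
const llm: Stage<string, string> = async (text) => `You said: ${text}`;
const tts: Stage<string, Float32Array> = async () => new Float32Array(480);
const audioToFace: Stage<Float32Array, number[]> = async () =>
  new Array(52).fill(0); // one weight per ARKit blendshape

// One conversational turn: mic audio in, synthesized audio plus
// blendshape weights out, ready to stream to the client.
async function runTurn(mic: ArrayBuffer) {
  const transcript = await asr(mic);
  const reply = await llm(transcript);
  const audio = await tts(reply);
  const weights = await audioToFace(audio);
  return { reply, audio, weights };
}
```

Because each stage is just an async function, providers can be swapped independently, which is what the provider selection step in the demo flow relies on.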
Demo Flow
1. Click "Connect to AStack" to establish a WebSocket connection
2. Select AI providers (ASR, LLM, TTS)
3. Click "Start Call" and allow microphone access
4. Speak to the AI and watch the avatar respond
5. Use the text input for silent messaging
6. Switch between VRM and TalkingHead avatars
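The connect / start-call / end-call steps above can be modeled as a small connection state machine. The state and event names below are illustrative, not the SDK's actual API:

```typescript
// Connection states mirroring the demo's status display (hypothetical names).
type Status = "disconnected" | "connected" | "in-call";

// Pure transition function: given the current status and a UI event,
// return the next status; unknown events leave the status unchanged.
function transition(status: Status, event: string): Status {
  switch (status) {
    case "disconnected":
      return event === "connect" ? "connected" : status;
    case "connected":
      return event === "start-call" ? "in-call"
           : event === "disconnect" ? "disconnected" : status;
    case "in-call":
      return event === "end-call" ? "connected"
           : event === "disconnect" ? "disconnected" : status;
    default:
      return status;
  }
}
```

Keeping the transitions pure makes the UI's status label trivial to drive: every WebSocket or button event maps to at most one state change.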
Client-Side Rendering Enabled! This demo uses the @astack/client-sdk with client-side avatar rendering. Audio and blendshapes are streamed via WebSocket, while the 3D avatar is rendered entirely in your browser.
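Streaming audio and blendshapes over one WebSocket means the client must demultiplex incoming messages. The `{ type, payload }` message shape below is an assumption for illustration; the real `@astack/client-sdk` wire format may differ:

```typescript
// Assumed message envelope for the WebSocket stream (hypothetical format).
interface ServerMessage {
  type: "audio" | "blendshapes";
  payload: number[]; // audio samples, or 52 ARKit blendshape weights
}

// Route each message to the audio player or the avatar animator.
function dispatch(
  msg: ServerMessage,
  handlers: {
    onAudio: (samples: number[]) => void;
    onBlendshapes: (weights: number[]) => void;
  }
): void {
  if (msg.type === "audio") handlers.onAudio(msg.payload);
  else handlers.onBlendshapes(msg.payload);
}
```

In the browser, `onAudio` would feed a Web Audio buffer and `onBlendshapes` would update the VRM or TalkingHead model's morph targets each frame.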