Building Vibe Draw: combining ElevenLabs with FLUX Kontext for voice-powered image creation
Voice interfaces are changing how we communicate with AI. What if creating an image were as easy as describing it out loud?
That’s the idea that led me to create Vibe Draw as a weekend project: a voice-first creative tool that pairs ElevenLabs’ Voice AI with Black Forest Labs’ FLUX Kontext to turn spoken prompts into images.
FLUX Kontext represents a new class of image model. Unlike traditional text-to-image systems, Kontext handles both generation and editing. It can create new images from prompts, modify existing ones, and even merge multiple reference images into a single output.
While models like GPT-4o and Gemini 2.0 Flash offer multimodal capabilities, FLUX Kontext is purpose-built for high-quality visual manipulation. In testing, I could change individual letters in stylized text or reposition an object, just by describing the change.
That’s when I thought: “Why not do this with voice?” And what better foundation than ElevenLabs’ powerful voice technology?
The technical challenge
Building a voice-driven image system required solving five key problems:
- Natural language understanding — Differentiating between new creation and edits
- Contextual awareness — Maintaining continuity across interactions
- Audio management — Avoiding overlapping responses and managing queues
- Visual generation — Seamless transitions between generation and editing
- User experience — Making advanced AI interactions feel intuitive
Architecture overview
Vibe Draw runs entirely client-side and integrates the following components:
- Web Speech API for speech recognition
- ElevenLabs TTS API for voice responses
- FLUX Kontext API for image generation and editing
- Custom intent detection for understanding user input
This approach keeps the prototype lightweight, but production deployments should proxy requests server-side for security.
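To see how these pieces fit together, here is a simplified, synchronous sketch of the main loop: classify the transcript, call FLUX Kontext to generate or edit, and queue a spoken response. The handler names (`detectEditIntent`, `generateImage`, `editImage`, `speak`) are placeholders for illustration, not the project's actual function names, and real calls would be asynchronous:

```js
// Simplified sketch of the client-side flow. All handlers are injected
// placeholders; real implementations would call the Web Speech API,
// the FLUX Kontext API, and ElevenLabs TTS.
function handleUtterance(transcript, state, handlers) {
  // Only treat the prompt as an edit when an image already exists.
  const isEdit = state.currentImage !== null &&
    handlers.detectEditIntent(transcript);

  if (isEdit) {
    handlers.speak("editing");
    state.currentImage = handlers.editImage(state.currentImage, transcript);
  } else {
    handlers.speak("generating");
    state.currentImage = handlers.generateImage(transcript);
  }
  return state;
}
```

The key design point is that the current image is threaded through as state, so the same loop serves both fresh generations and follow-up edits.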
Implementing voice with ElevenLabs
Vibe Draw uses ElevenLabs’ text-to-speech API, tuned for conversational responsiveness:
```js
const voiceSettings = {
  model_id: "eleven_turbo_v2",
  voice_settings: {
    stability: 0.5,
    similarity_boost: 0.75
  }
};
```
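These settings ride along on every synthesis call. As a rough sketch, a request to ElevenLabs' text-to-speech REST endpoint can be assembled as below; the `buildTTSRequest` helper is illustrative (not from the original project), while the endpoint path and `xi-api-key` header follow ElevenLabs' documented API:

```js
// Illustrative helper: assembles a fetch() request for ElevenLabs'
// text-to-speech endpoint. The caller passes the voiceSettings object
// shown above; voiceId and apiKey come from your ElevenLabs account.
function buildTTSRequest(text, voiceId, apiKey, settings) {
  return {
    url: `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    options: {
      method: "POST",
      headers: {
        "xi-api-key": apiKey,
        "Content-Type": "application/json"
      },
      // The body carries the text to speak plus model and voice settings.
      body: JSON.stringify({ text, ...settings })
    }
  };
}

// Usage (sketch): fetch the URL with the options, then play the
// returned audio bytes in the browser.
// const { url, options } = buildTTSRequest("Hi!", voiceId, key, voiceSettings);
// const audio = await fetch(url, options);
```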
To create variety, voice responses are randomly selected from pre-defined templates:
```js
const responses = {
  generating: [
    "Ooh, I love that idea! Let me bring it to life...",
    "That sounds awesome! Creating it now...",
    "Great description! Working on it..."
  ],
  editing: [
    "Got it! Let me tweak that for you...",
    "Sure thing! Making those changes...",
    "No problem! Adjusting it now..."
  ]
};

function getRandomResponse(type) {
  const options = responses[type];
  return options[Math.floor(Math.random() * options.length)];
}
```
Managing audio playback
Overlapping voice responses break the illusion of conversation. Vibe Draw solves this with an audio queue system:
```js
let audioQueue = [];
let isPlayingAudio = false;

async function queueAudioResponse(text) {
  audioQueue.push(text);
  if (!isPlayingAudio) {
    playNextAudio();
  }
}

// Drains the queue one message at a time; assumes a speakText() helper
// that resolves once ElevenLabs playback has finished.
async function playNextAudio() {
  if (audioQueue.length === 0) {
    isPlayingAudio = false;
    return;
  }
  isPlayingAudio = true;
  await speakText(audioQueue.shift());
  playNextAudio();
}
```
Each message plays fully before triggering the next.
Intent detection and context management
The system uses keyword and context detection to decide whether a user prompt is a new image request or an edit:
const editKeywords = [ ... ];
const contextualEditPhrases = [