This workflow implements a message-batching buffer using Redis for temporary storage and GPT-4 for consolidated response generation. Incoming user messages are collected in a Redis list; once a configurable “inactivity” window elapses or a batch size threshold is reached, all buffered messages are sent to GPT-4 in a single prompt. The system then clears the buffer and returns the consolidated reply.
Key Features
– Redis-backed buffer to queue incoming messages per user session
– Dynamic wait time (shorter for long messages, longer for short messages)
– Batch trigger on inactivity timeout or minimum message count
– GPT-4 consolidation: merges all buffered messages into one coherent response
Setup Instructions
Map Input
– Rename node to “Extract Session & Message”
– Assign context_id and message from webhook or manual trigger
Compute Wait Time
– Rename node to “Determine Inactivity Timeout”
– JS Code:
```javascript
const wordCount = $json.message.split(' ').filter(w => w).length;
return [{
  json: {
    context_id: $json.context_id,
    message: $json.message,
    waitSeconds: wordCount < 5 ? 45 : 30
  }
}];
```

Buffer Message in Redis
– Push into list buffer_in:{{$json.context_id}}
– INCR key buffer_count:{{$json.context_id}} with TTL {{$json.waitSeconds + 60}}
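Since this step is just three Redis operations (RPUSH, INCR, EXPIRE), here is a minimal in-memory sketch of the same logic; a plain object stands in for Redis, and the key names mirror the workflow:

```javascript
// In-memory stand-in for Redis: lists, counters, and TTLs kept separately.
const store = { lists: {}, counters: {}, ttls: {} };

function bufferMessage(contextId, message, waitSeconds) {
  const listKey = `buffer_in:${contextId}`;
  const countKey = `buffer_count:${contextId}`;
  // RPUSH buffer_in:{context_id} <message>
  (store.lists[listKey] = store.lists[listKey] || []).push(message);
  // INCR buffer_count:{context_id}
  store.counters[countKey] = (store.counters[countKey] || 0) + 1;
  // EXPIRE buffer_count:{context_id} waitSeconds + 60
  store.ttls[countKey] = waitSeconds + 60;
  return store.counters[countKey]; // INCR returns the new count
}
```

In the real workflow these three calls are issued by the Redis node; the sketch only illustrates the key layout and TTL arithmetic.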
Mark Waiting State
– GET waiting_reply:{{$json.context_id}} → if null, SET it to true with TTL {{$json.waitSeconds}}
– Rename nodes to “Check Waiting Flag” / “Set Waiting Flag”
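Note that a separate GET followed by SET is racy under concurrent messages; in Redis this is usually collapsed into a single `SET waiting_reply:{context_id} true NX EX {waitSeconds}`, which sets the flag only if it does not already exist. A sketch of that semantics, with a Map standing in for Redis:

```javascript
// Map stand-in for Redis; values carry an expiry timestamp.
const flags = new Map();

// Returns true if this call acquired the flag (no batch already waiting
// for this session), false if the flag was already set -- the behavior
// of SET ... NX EX in real Redis.
function setWaitingFlag(contextId, waitSeconds) {
  const key = `waiting_reply:${contextId}`;
  if (flags.has(key)) return false; // NX: key exists, SET is a no-op
  flags.set(key, { value: 'true', expiresAt: Date.now() + waitSeconds * 1000 });
  return true;
}
```

Only the first message in a quiet session should proceed to the Wait node; later messages see `false` and simply stay buffered.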
Wait for Inactivity
– Wait node: pause for {{$json.waitSeconds}} seconds
Check Batch Trigger
– GET keys:
– last_seen:{{$json.context_id}}
– buffer_count:{{$json.context_id}}
– IF both:
– buffer_count >= 1
– (now - last_seen) >= waitSeconds * 1000
– Rename node to “Trigger Batch on Inactivity or Count”
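The trigger condition above can be expressed as a single predicate; timestamps are in milliseconds (e.g. from `Date.now()`):

```javascript
// Fire when at least one message is buffered AND the session has been
// quiet for the full wait window.
function shouldTriggerBatch(bufferCount, lastSeenMs, nowMs, waitSeconds) {
  return bufferCount >= 1 && (nowMs - lastSeenMs) >= waitSeconds * 1000;
}
```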
Fetch & Consolidate
– GET entire list buffer_in:{{$json.context_id}}
– Information Extractor → rename to “Consolidate Messages”
– System prompt: “You are an expert at merging multiple messages into one clear paragraph without duplicates.”
GPT-4 Chat
– OpenAI Chat Model (GPT-4)
Cleanup & Respond
– Delete Redis keys:
– buffer_in:{{$json.context_id}}
– waiting_reply:{{$json.context_id}}
– buffer_count:{{$json.context_id}}
– Return the consolidated reply to the user
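The cleanup mirrors a single Redis DEL over the three session keys. A sketch, with a Map standing in for Redis:

```javascript
// Delete the three per-session keys in one pass; returns the number of
// keys actually removed, like Redis DEL.
function cleanupSession(store, contextId) {
  const keys = [
    `buffer_in:${contextId}`,
    `waiting_reply:${contextId}`,
    `buffer_count:${contextId}`,
  ];
  let deleted = 0;
  for (const k of keys) {
    if (store.delete(k)) deleted++;
  }
  return deleted;
}
```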
Customization Guidance
– Batch Size Trigger: Add an additional IF to fire when buffer_count reaches your desired batch size.
– Timeout Policy: Adjust the word-count thresholds or replace with character-count logic.
– Multi-Channel Support: Change the trigger from a manual test node to any webhook (e.g., chat, SMS, email).
– Error Handling: Insert a fallback branch to catch Redis timeouts or OpenAI API errors and notify users.
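Combining the first two customizations, the flush condition becomes "batch size reached OR inactivity elapsed". A sketch (`BATCH_SIZE` is an assumed constant, not part of the original workflow):

```javascript
const BATCH_SIZE = 5; // assumed threshold; tune to your traffic

// Flush when the buffer is full, or when at least one message has been
// waiting through the whole inactivity window.
function shouldFlush(bufferCount, lastSeenMs, nowMs, waitSeconds) {
  const inactive = (nowMs - lastSeenMs) >= waitSeconds * 1000;
  return bufferCount >= BATCH_SIZE || (bufferCount >= 1 && inactive);
}
```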
🎧 Translate Audio with AI
Overview
This workflow takes some French text and converts it into spoken audio. It then transcribes that audio back into text, translates it into English, and