dotHello GenAI Addon

GenAI UI Catalog

32 interactive components for AI-powered interfaces. All components follow the dh-ai-* class convention, are fully dark-themed, and require no external dependencies.

v2.5.0 · 32 components · 4 groups · Dark theme
⬇ Download GenAI Addon
Inputs

Prompt Input

Multi-line textarea with toolbar actions, character counter, and send button. Includes shortcuts for file attachment, voice input, web search, and code insertion.

0 / 4096
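The counter and send gating above can be sketched as a small pure helper (names are hypothetical; only the 4096-character limit comes from the component itself):

```javascript
// Hypothetical helper: derive counter text and send-button state
// from the current prompt value.
const MAX_CHARS = 4096;

function promptState(text) {
  const length = [...text].length; // count code points, not UTF-16 units
  return {
    counter: `${length} / ${MAX_CHARS}`,
    overLimit: length > MAX_CHARS,
    canSend: length > 0 && length <= MAX_CHARS,
  };
}
```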
Inputs

System Prompt

Editable system prompt editor with token count display. Lets users define model persona, constraints, and output formatting instructions.

System Prompt ~42 tokens
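The ~42-token readout implies a client-side estimate. A common rough heuristic (an assumption here, not the addon's documented behaviour) is about four characters per token for English text:

```javascript
// Rough client-side token estimate: ~4 characters per token.
// An exact count would require the target model's tokenizer.
function estimateTokens(text) {
  return Math.ceil(text.trim().length / 4);
}
```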
Inputs

File Attachments

Inline file attachment chips with icon, filename, size, and remove action. Supports any MIME type and renders inline in the prompt area.

📄 Q3_Report.pdf 2.1 MB
🖼 diagram.png 340 KB
📊 data.csv 88 KB
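The size labels on the chips follow a familiar pattern; a minimal formatter consistent with the examples above (helper name hypothetical):

```javascript
// Human-readable file size: MB with one decimal, whole KB, raw bytes.
function formatSize(bytes) {
  if (bytes >= 1024 * 1024) return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
  if (bytes >= 1024) return `${Math.round(bytes / 1024)} KB`;
  return `${bytes} B`;
}
```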
Inputs

Slash Commands

Command palette triggered by a / or @ prefix. Groups commands by category with keyboard-shortcut hints.

Commands
📝 /summarize — Summarise selected text · S
🌐 /translate — Translate to another language · T
💻 /code — Generate or refactor code · C
🔍 /explain — Explain this in simple terms · E
Files
📄 @Q3_Report.pdf — Reference attached document
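Palette filtering can be modelled as a prefix match over the trigger strings, with / selecting commands and @ selecting attached files (a sketch; the entries mirror the list above, the helper itself is hypothetical):

```javascript
// Prefix filter over palette entries; '/' hits commands, '@' hits files.
const paletteEntries = [
  { trigger: '/summarize', label: 'Summarise selected text' },
  { trigger: '/translate', label: 'Translate to another language' },
  { trigger: '/code', label: 'Generate or refactor code' },
  { trigger: '/explain', label: 'Explain this in simple terms' },
  { trigger: '@Q3_Report.pdf', label: 'Reference attached document' },
];

function filterPalette(query) {
  const q = query.toLowerCase();
  return paletteEntries.filter((e) => e.trigger.toLowerCase().startsWith(q));
}
```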
Inputs

Voice Input

Push-to-talk voice recording with animated waveform visualiser and duration counter. Transcribes audio to text on release.

Tap to record
Recording state →
0:07 ● Recording…
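The duration counter is a simple m:ss formatter (helper name hypothetical):

```javascript
// Format elapsed whole seconds as m:ss for the recording counter.
function formatDuration(totalSeconds) {
  const m = Math.floor(totalSeconds / 60);
  const s = totalSeconds % 60;
  return `${m}:${String(s).padStart(2, '0')}`;
}
```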
Inputs

Prompt Library

Browsable, filterable library of reusable prompt templates. Supports category filters and one-click insertion into the prompt field.

Blog Post Outline · Writing
Generate a structured blog post outline with intro, 5 key sections, and a conclusion for the topic: [TOPIC].
Code Review · Code
Review the following code for bugs, performance issues, and style violations. Provide actionable suggestions: [CODE].
Data Analysis · Analysis
Analyse this dataset and identify the top 3 trends, anomalies, and actionable insights. Format as bullet points: [DATA].
Research Brief · Research
Write a concise research brief covering background, key findings, methodology, and recommendations for: [TOPIC].
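One-click insertion presumably substitutes the bracketed slots before the template lands in the prompt field. A minimal sketch (the [TOPIC]/[CODE]/[DATA] slot syntax is from the templates above; the helper itself is hypothetical):

```javascript
// Fill [SLOT] placeholders in a template; unknown slots are left intact.
function fillTemplate(template, values) {
  return template.replace(/\[([A-Z]+)\]/g,
    (match, key) => (key in values ? values[key] : match));
}
```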
Outputs

Streaming Text

Real-time streaming text renderer with animated cursor and token/speed metadata. Driven by the data-dh-ai-stream attribute.

claude-sonnet-4-6 · streaming 0 tok/s
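The tok/s figure in the metadata line is tokens received over elapsed time; a minimal sketch (helper name hypothetical):

```javascript
// Rolling tok/s readout for the streaming metadata line.
function tokensPerSecond(tokens, elapsedMs) {
  if (elapsedMs <= 0) return 0;
  return Math.round(tokens / (elapsedMs / 1000));
}
```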
Outputs

Thinking / Reasoning

Collapsible reasoning trace panel. Shows spinner while active, checkmark when done, with token and duration metadata.

Thinking… 3.2s
Analysing the user's question about transformer architectures. I need to consider the historical context, the Attention is All You Need paper, and subsequent scaling laws. Let me break down the key components: multi-head self-attention, positional encodings, feed-forward layers, and residual connections…
Reasoning complete 1.8s · 142 tok
Concluded that the best response involves citing the original Vaswani et al. paper and explaining scaling effects concisely.
Outputs

Tool Use

Displays tool invocations with input arguments and result payloads. Supports done, running, and error states.

🔍 web_search tool_call_01 done
query "WWDC 2025 announcements"
Result Apple announced AI-integrated Siri, visionOS 3.0, and significant Swift concurrency improvements at WWDC 2025.
⚙️ code_interpreter tool_call_02 running
code import pandas as pd; df.describe()
Running…
Outputs

Citations

Inline citation chips that reference a source panel below the response. Clicking a chip scrolls to the matching citation entry.

Large language models have demonstrated remarkable few-shot learning capabilities (Brown et al., 2020 [1]) and have since been scaled to trillions of parameters (Chowdhery et al., 2022 [2]), with significant downstream improvements.

[1] Language Models are Few-Shot Learners — Brown et al., NeurIPS 2020 — arxiv.org/abs/2005.14165
[2] PaLM: Scaling Language Modeling with Pathways — Chowdhery et al., 2022 — arxiv.org/abs/2204.02311
Outputs

Canvas / Artifact

Full-featured artifact viewer with code/preview tabs plus copy, download, and external-open actions. Renders code with syntax highlighting.

// Iterative Fibonacci sequence
function fibonacci(n) {
  if (n <= 1) return n;
  let [a, b] = [0, 1];
  for (let i = 2; i <= n; i++) {
    [a, b] = [b, a + b];
  }
  return b;
}
Outputs

Image Gen Preview

Image generation widget with prompt, model metadata, progress bar, and generated image display area.

Image Generation
🎨 Ready to generate
dall-e-3 · 1024×1024 · vivid
Outputs

Audio / TTS Player

Playback widget for text-to-speech output. Includes waveform progress track, voice name, and duration metadata.

0:00 / 0:45 Alloy · OpenAI TTS
Controls + Agentic

Model Selector

Dropdown picker for switching between LLM providers and model variants. Shows tier badge and model capability description.

🔮 claude-opus-4 — Most capable, complex reasoning · Premium
🧠 claude-sonnet-4-6 — Balanced speed and intelligence · Balanced
claude-haiku-4-5 — Fastest, lowest latency · Fast
🟢 gpt-4.1 — OpenAI flagship model · OpenAI
Controls + Agentic

Gen Parameters

Range sliders for adjusting temperature, max tokens, top-p, and frequency penalty. Each control shows live value readout.

Temperature 0.7
Controls randomness
Max Tokens 2048
Max response length
Top-p 0.95
Nucleus sampling
Frequency Penalty 0
Penalises repetition
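Reading the four sliders into a single request-options object might look like this (a sketch; the snake_case keys follow common LLM API conventions rather than a documented addon API, and the defaults mirror the values shown above):

```javascript
// Clamp and collect the four generation sliders into request options.
function genParams({ temperature = 0.7, maxTokens = 2048,
                     topP = 0.95, frequencyPenalty = 0 } = {}) {
  const clamp = (v, lo, hi) => Math.min(Math.max(v, lo), hi);
  return {
    temperature: clamp(temperature, 0, 2),
    max_tokens: maxTokens,
    top_p: clamp(topP, 0, 1),
    frequency_penalty: frequencyPenalty,
  };
}
```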
Controls + Agentic

Context Window

Visual usage bar showing how much of the model's context window is consumed, broken down by source category.

Context Used 45,230 / 200,000
System prompt — 2.1k Conversation — 31.4k Documents — 11.7k Available — 155k
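The segment widths for the usage bar are straightforward percentages of the window size. A sketch using the breakdown above (helper name hypothetical):

```javascript
// Turn per-source token counts into used/available totals and
// percentage widths for the bar segments (one decimal place).
function contextSegments(parts, windowSize) {
  const used = Object.values(parts).reduce((a, b) => a + b, 0);
  const pct = (n) => Math.round((n / windowSize) * 1000) / 10;
  return {
    used,
    available: windowSize - used,
    segments: Object.fromEntries(
      Object.entries(parts).map(([k, v]) => [k, pct(v)])),
  };
}
```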
Controls + Agentic

Persona / Tone

Pill-based tone selector that applies a personality preset to the system prompt. Only one pill is active at a time.

Controls + Agentic

Step Tracker

Vertical step progress indicator for agentic workflows. Supports done, active, error, and pending states per step.

Fetch context
Retrieved 3 documents
Analyse query
Identified intent
Generate response
Streaming…
Validate output
Pending
Deliver result
Pending
Controls + Agentic

Approval Gate

Human-in-the-loop confirmation dialog shown before the agent executes a destructive or irreversible action.

⚠ Action requires approval
The agent wants to execute: DELETE /api/users/inactive
This will permanently remove 847 inactive accounts from the database. This action cannot be undone.
Controls + Agentic

History Sidebar

Searchable conversation history list grouped by date. Click any item to resume a previous session.

Today
💬 Transformer architecture deep dive 2m ago
💬 Q3 report analysis and summary 1h ago
💬 Python code review — auth module 4h ago
Yesterday
💬 Blog post outline: AI in healthcare Yesterday
💬 Research brief: climate tech trends Yesterday
Controls + Agentic

Memory Panel

Persistent memory store showing facts the model has learned or been told. Items can be edited or deleted inline.

🧠 User prefers concise responses with bullet points over long paragraphs.
🧠 Previously discussed transformer architectures and scaling laws (GPT-4, Claude, PaLM).
🧠 Always cite papers in APA format when referencing academic research.
Controls + Agentic

Thread Branching

Branch selector for parallel conversation threads. Users can explore alternative directions without losing the main context.

Main thread 12 turns · active
Alternative approach 7 turns · branched 4m ago
Simplified version 3 turns · branched 9m ago
Controls + Agentic

Conversation Turns

Alternating user and assistant message bubbles with avatar, content, and timestamp metadata.

U
What is the attention mechanism in transformers?
10:42 AM
Attention allows the model to weigh the importance of different tokens when producing each output token, enabling it to capture long-range dependencies efficiently.
10:42 AM · claude-sonnet-4-6
U
How does multi-head attention differ from single-head?
10:43 AM
Multi-head attention runs several attention operations in parallel across different representation subspaces, then concatenates the results. This lets the model attend to multiple aspects of the input simultaneously — syntax, semantics, coreference, etc.
10:43 AM · 312 tok
Status + Safety

Feedback

Thumbs up/down vote buttons plus emoji reaction panel for granular response quality signals.

Status + Safety

Confidence

Model self-reported confidence score shown as a filled track with colour-coded level indicators.

Confidence · 92%
High confidence
Confidence · 61%
Medium confidence
Confidence · 28%
Low confidence
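Mapping a score to its level label is a threshold check. The cut-offs below are assumptions consistent with the three example scores, not documented values:

```javascript
// Assumed thresholds: >= 80 high, >= 50 medium, else low.
function confidenceLevel(pct) {
  if (pct >= 80) return 'High confidence';
  if (pct >= 50) return 'Medium confidence';
  return 'Low confidence';
}
```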
Status + Safety

Badges & States

Status badges in 6 colour variants. Used across all components to indicate operational state at a glance.

✓ Shipped ● Running ◆ Processing ★ Premium ✕ Error ○ Idle
Grounded Streaming Reasoning Vision Refused Cached
Status + Safety

Skeleton Loaders

Animated placeholder shapes shown while content is loading. Prevents layout shift and signals activity without a spinner.

Status + Safety

Toasts

Transient notification popups in 4 semantic types. Appear at the viewport edge and auto-dismiss after 4 seconds.
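Auto-dismiss can be modelled without timers as a time-to-live filter over a toast queue, evaluated on each render tick (a sketch; names hypothetical, the 4000 ms default comes from the description above):

```javascript
// Keep only toasts younger than the 4-second time-to-live.
function activeToasts(queue, nowMs, ttlMs = 4000) {
  return queue.filter((t) => nowMs - t.shownAt < ttlMs);
}
```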

Status + Safety

Refusal Block

Displayed when the model declines to answer. Shows policy reason and suggests alternative approaches the user can take.

🚫
Content Policy
I'm unable to assist with that request as it conflicts with our usage policies. If you believe this is an error, you can rephrase your question or contact support for clarification.
Status + Safety

Hallucination Warning

Inline warning surfaced when the model's output has low grounding confidence or cites unverifiable sources.

Possible Hallucination
One or more facts in this response could not be verified against retrieved sources. Please cross-check claims marked with ⚠ before using this output in production.
Status + Safety

Usage Dashboard

Token and cost analytics panel showing consumption across input, output, context, and API calls with a summary row.

Input Tokens — 820k / 1M
Output Tokens — 380k / 1M
Context Used — 45.2k / 200k
API Calls — 247 / 500
Total Tokens — 1.2M
Est. Cost — $0.84
Requests — 247
Avg Latency — 1.3s
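The summary row aggregates the meters above. Cost depends on per-model pricing, so this sketch covers only the token and latency totals (names hypothetical):

```javascript
// Derive the summary-row figures from raw counters.
function usageSummary({ inputTokens, outputTokens, requests, latenciesMs }) {
  const total = inputTokens + outputTokens;
  const avg = latenciesMs.length
    ? latenciesMs.reduce((a, b) => a + b, 0) / latenciesMs.length
    : 0;
  return {
    totalTokens: total >= 1e6
      ? `${(total / 1e6).toFixed(1)}M`
      : `${Math.round(total / 1e3)}k`,
    requests,
    avgLatency: `${(avg / 1000).toFixed(1)}s`,
  };
}
```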
Status + Safety

Latency Indicator

Colour-coded round-trip latency readout with dot, value, and label. Green for fast, amber for medium, red for slow.

340ms Fast · 1.2s Medium · 4.8s Slow
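Bucketing a round-trip time into the three levels is another threshold check. The cut-offs below are assumptions consistent with the three examples, not documented values:

```javascript
// Assumed thresholds: < 1s fast (green), < 3s medium (amber), else slow (red).
function latencyLevel(ms) {
  if (ms < 1000) return { label: 'Fast', colour: 'green' };
  if (ms < 3000) return { label: 'Medium', colour: 'amber' };
  return { label: 'Slow', colour: 'red' };
}
```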