A major intelligence and cross-platform release: Memory v2 is now live, the web app can reach back to your Mac for local actions, managed integrations expanded, and the iOS experience is smoother across login, settings, attachments, and notifications.
Memory v2 is live: your assistant now builds richer concept pages instead of isolated fact snippets, retrieves the right context more reliably, consolidates what it learns in the background, and gives you cleaner memory surfaces for browsing and searching what it knows
Use your Mac from the web app: when your desktop assistant is connected, web conversations can route local actions back through your Mac for workspace files, terminal commands, browser control, and other host-side work without forcing you to switch clients
Live integrations now include Discord, GitHub, Google, HubSpot, Linear, Notion, Twitter, Asana, Outlook / Microsoft, and Todoist, with smoother native OAuth handoff on iOS and desktop
Model settings now include Fireworks and OpenRouter in the provider catalog, making it easier to configure assistants that use your preferred model provider
iOS got a serious polish pass: Sign in with Apple, native OAuth completion, APNs push notification groundwork, better mobile settings layouts, improved attachment preview and download behavior, file downloads through the native share sheet, and cleaner login screens
Everyday reliability improved across notifications, trust-rule chips, document persistence, skill loading, gateway routing, OAuth setup, memory maintenance, and macOS signing and update flows
v0.7.2
A broad reliability release focused on the Chrome extension and cloud connection path, macOS workflow polish, contact and communication flows, heartbeat and schedule reliability, and tighter gateway security.
Chrome extension and cloud reliability improved across the board: requests now send fresh session tokens consistently, platform API calls use the right headers and base URLs, self-hosted pairing is fixed, and the extension is prioritized over the macOS SSE bridge for browser control
Gateway and security hardening tightened local and cloud boundaries, including stricter loopback handling, known-origin checks for pairing and CORS, safer token refresh behavior, secret redaction before recall evidence reaches the model, and reduced elevated capabilities in Docker-mode assistant containers
Contact, messaging, and document workflows got smoother with contact prompt commands and macOS panels, gateway-owned Slack contact upserts, missed @mention catch-up after socket reconnects, stronger trusted contact checks, and PDF export for Document Writer
Conversation workflows are more resilient with reliable notification deep links, a Mark as read action in the conversation context menu, a cleaner conversation-switch loading skeleton, visible trust-rule save errors, and the new burst-based "Worked for X.Ys" activity model
Open-source visibility improved in the macOS app with a new Settings card for the public GitHub repo and a View on GitHub link in the About panel
Scheduling, heartbeat, and operations are more robust: scheduled tasks can retry after failures, heartbeat runs use cron-style timing with missed-run detection, SSE disconnects are detected faster with a heartbeat watchdog, and `assistant gateway logs tail` makes gateway debugging easier
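The secret redaction mentioned above (scrubbing credentials before recall evidence reaches the model) comes down to pattern substitution. A minimal sketch of the idea; the patterns and function name here are illustrative, not Vellum's actual implementation:

```python
import re

# Illustrative patterns; a real redactor would cover many more credential shapes.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),            # OpenAI-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),            # GitHub personal access tokens
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._\-]+"),  # bearer tokens in headers
]

def redact_secrets(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything that looks like a credential before it reaches the model."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Running the scrub before evidence is assembled means even a memory that accidentally captured a key never exposes it to the model.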
v0.7.1
A focused polish release with stronger Language Model controls, more reliable trust rule editing, Chrome extension fixes, macOS quality-of-life improvements, and credential infrastructure upgrades for smoother local and cloud assistant setups.
Language Model controls got more precise: profile-specific context budgets, model-aware max token sliders, refreshed context metadata, and effective context handling across the main agent loop, wake paths, and slash commands make model profiles behave more predictably
Your Own mode is more reliable: explicit user saves are now respected, main agent profiles can override static defaults, profile context and slider fallbacks are preserved, and actual provider metadata is stamped correctly when routing picks a non-default provider
Trust rules and approvals are easier to understand: rule editor copy now shows natural language instead of structured tool data for non-bash tools, trust badge taps open the right edit flow, and approval provenance is tracked with mode, reason, and risk threshold fields
Chrome extension reliability improved with fixes for status probing, activity isolation, privileged tabs, popup icon paths, self-hosted UX, and hostname handling across local and cloud modes
macOS polish and workflow fixes: turn-end notifications now fire when the app is unfocused, task progress widgets default inline with a pop-out option, the profile editor is cleaner, toolbar state is better isolated from conversation list updates, and account deletion requests can be started from the app
Credential and CLI infrastructure moved forward with credential account key normalization, a direct CES credential management CLI, API key migration work, and a new `vellum upgrade --latest` flag for pulling the newest available version
v0.7.0
A redesigned trust and permissions system with a new v3 rules engine, Gemini 3 model support, GPT-5.5 as the new default OpenAI model, a wide range of new CLI commands, and a reworked browser extension experience.
Redesigned trust and permissions system: a new v3 Trust Rules engine replaces the older permissions model, with cleaner presets (Conservative, Relaxed, Autonomous), suggested rules with an Allow & Create Rule button, directory-scoped rules, and a fully redesigned Trust Rules management UI
Gemini 3 model support and catalog improvements: Gemini 3 models are now available with correct pricing, thought signature capture, and tool-call metadata; the default OpenAI model is now GPT-5.5; and OpenAI reasoning effort can now be explicitly disabled or set to an extra-high tier
Expanded CLI: new commands for inspecting installed skills, managing trust rules, registering and listing webhooks, setting and getting environment variables, verbose exec, and SSH and exec support for managed instances
Reworked browser extension: a proper onboarding flow, SSE-based event transport, and direct gateway pairing replacing the legacy native messaging host, for a smoother and more reliable connection between your assistant and Chrome
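SSE, the event transport the reworked extension now uses, is a simple line-oriented format: events are blank-line separated, with optional `event:` and one or more `data:` fields. A minimal parser sketch (not Vellum's code) showing how a raw stream becomes events:

```python
def parse_sse(stream: str):
    """Parse a Server-Sent Events stream into (event, data) pairs.

    Events are separated by blank lines; each event carries an optional
    'event:' type and one or more 'data:' lines, per the SSE format.
    """
    events = []
    event_type, data_lines = "message", []
    for line in stream.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # a blank line dispatches the accumulated event
            if data_lines:
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
    return events
```

Because SSE rides over a plain long-lived HTTP response, it avoids the separate native messaging host the legacy setup required.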
v0.6.5
The Vellum browser extension is now one-click install from the Chrome Web Store, X/Twitter is generally available as a managed integration, and Slack setup is one click with expanded permissions so your assistant can triage your messages, not just post. Plus a new Voice settings tab on the web platform, a conversation Refresh action, and a wide range of macOS stability and polish fixes.
Vellum browser extension on the Chrome Web Store: install the Vellum Assistant Browser Relay from the Chrome Web Store in one click, no more loading an unpacked extension in developer mode. The extension bridges your assistant to your live Chrome tabs for reading, clicking, filling, and extracting on any site you are already signed into, and now also supports cloud sign-in directly from the extension
X/Twitter integration is generally available: connect X to your assistant through the managed integration without toggling a flag, with expanded scopes so your assistant can read likes and bookmarks in addition to posting and browsing
Slack is more capable and easier to set up: creating a Slack app is now a single step with the redirect URL baked into the manifest, and the install flow has been simplified so broken links during app creation are gone. Permissions were also expanded so your assistant can now read your Slack messages for triage and summarization in addition to posting, and the bot automatically joins public channels instead of requiring an invite
Voice settings on the web platform: the web platform now has a dedicated Voice settings tab matching the macOS app, with push-to-talk presets, and the Test button in voice settings is always enabled so you can preview voices at any time
Conversation Refresh action: a new Refresh action in the conversation menu lets you reload a conversation when its state gets out of sync with the server
macOS stability and polish: dedicated network session for background streaming to fix stalled responses, cold-launch avatar caching, fixed scroll white space and push-to-top jitter during streaming, sidebar skeleton that waits for restoration instead of a timer, smoother home panel transitions, and a wide range of reliability improvements across memory compaction, onboarding, credentials, and OAuth
v0.6.4
Claude Opus 4.7 support, a migration to OpenAI's newer Responses API, major macOS performance fixes, a smoother chat scroll experience, configurable log retention, improved Gmail cleanup, and broad stability polish.
Claude Opus 4.7 support: the newest Anthropic model is now available across the app and is automatically used when your assistant needs its strongest quality-oriented reasoning
OpenAI provider moved to the Responses API: upgraded the underlying connection to OpenAI models for better streaming, tool calls, and compatibility with newer features
Major macOS performance improvements: resolved multiple two-second-plus app hangs caused by layout, font, avatar, sound, and menu-bar initialization, plus a new inverted scroll architecture for noticeably smoother chat scrolling
Configurable LLM log retention: choose how long request logs are kept on your device from Settings > Permissions & Privacy. Options are 1, 7, 30, or 90 days, or never expire
Faster, more accurate voice transcription: Google Gemini speech-to-text now streams over the Live API for real-time partial transcripts, with support for speaker labels when the provider offers them
Smarter Gmail cleanup: persistent blocklist and safelist preferences, a new cold-outreach workflow with automatic classification and enrichment, and more reliable archiving behavior
Conversation archive: archiving and unarchiving conversations now syncs reliably with the server, with archived items sorted by when they were archived
Thinking time in progress cards: the progress card now includes a thinking sub-row and counts thinking time toward the total duration, so you can see where the assistant is spending its time
Polish and stability fixes: cleaner thinking block layout within the chat column, Reflections grouped under Background in the sidebar, wider model names in the Usage breakdown, a fixed Web Search API key field, smoother onboarding, and a wide range of reliability improvements across memory, credentials, and OAuth
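A retention policy like the one described above (1, 7, 30, or 90 days, or never expire) reduces to a cutoff comparison against each log's timestamp. A sketch of that logic, with `None` standing in for "never expire"; names and types are illustrative:

```python
from datetime import datetime, timedelta

def expired_logs(logs: dict, retention_days, now: datetime) -> list:
    """Return names of log entries older than the retention window.

    logs maps log name -> written-at timestamp.
    retention_days of None means 'never expire', so nothing is deleted.
    """
    if retention_days is None:
        return []
    cutoff = now - timedelta(days=retention_days)
    return [name for name, written_at in logs.items() if written_at < cutoff]
```

A periodic cleanup task would call this and delete whatever it returns.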
v0.6.3
Dramatically improved chat performance and stability on macOS, a redesigned onboarding flow, real assistant names throughout the UI, a refreshed integrations panel, and broad UX polish.
Dramatically improved chat performance and stability on macOS: resolved a wide range of rendering hangs, layout freezes, and scroll issues, including fixes for blank chat on conversation switch, streaming scroll corrections, and smoother message rendering overall
Redesigned onboarding flow: the setup experience now focuses on what you want your assistant to do and how it should behave, replacing the previous personality quiz with a more practical, goal-oriented approach
Assistant identity improvements: assistants now display real names instead of IDs throughout the app, with random name generation for new assistants and the ability to edit role and description directly from the Identity panel
Refreshed integrations panel: the integrations page has been redesigned with a cleaner grid layout and moved into Settings as a dedicated tab, with provider logos for easier recognition
Smoother scrolling and chat polish: a new scroll model brings smoother conversation scrolling, better alignment throughout the chat, animated conversation names on hover, a new sidebar groups divider, and archived conversations sorted by archive time
Memory and context improvements: memory recall is now more configurable, consolidation is less aggressive to prevent memory loss, and deleted memories are now recoverable
Broad stability and UX fixes: simplified permission controls, skill previews now work before installation, Slack gained /new command support, and numerous fixes across modals, attachments, sidebars, and the composer
v0.6.2
Introduces a referral program for earning credits, Linear integration, cleaner thinking blocks, configurable knowledge base injection, improved API key management, a polished chat and sidebar experience, and broad stability fixes.
Referral program: invite friends with your unique referral link and earn credits when they sign up. Accessible from the Billing tab and the sidebar, with stats tracking and program details built in
Linear integration: connect to Linear through the platform without configuring your own OAuth app, enabled by default during onboarding alongside Outlook
Thinking blocks rendered as markdown and collapsed by default: the assistant's reasoning is now displayed as clean, formatted text that starts collapsed, making responses easier to read
Configurable knowledge base injection: control which knowledge base files are automatically included in conversations, with fixes for duplicate and out-of-order content
Improved API key management: a more reliable system for storing and reading API keys across settings and services
Chat and sidebar polish: a new animated typing indicator, spell checking in the composer, a Recents group replacing ungrouped conversations, and cleaner sidebar text handling
Stability and correctness fixes: numerous fixes across scrolling, history loading, dictation, audio handling, and OAuth flows, reducing crashes and stale data issues
v0.6.1
Introduces the Personal Knowledge Base for reliable fact recall, a refreshed design system, major macOS chat performance fixes, background agents for parallel task execution, and numerous stability improvements.
Personal Knowledge Base (PKB) introduced: your assistant now has a persistent memory system that files and retrieves important information across conversations, with a fully redesigned Memory inspector showing what your assistant remembers and how confident it is
Refreshed design system: updated color palette, new typography, and polished components throughout the app for a cleaner, more modern look
Major chat performance improvements: resolved several severe slowdowns on macOS, including multi-minute freezes when switching conversations and layout bottlenecks that caused the app to hang
Background agents for parallel work: your assistant can now spawn background agents to handle tasks independently, with different roles for different types of work like research, coding, or planning, and the ability to report results back when finished
Stability and rendering fixes: smoother thinking block animations, improved markdown formatting, better sidebar behavior, and more reliable scrolling throughout the app
v0.6.0
The biggest release yet: Vellum goes open source, introduces platform-hosted assistants, a completely revamped memory system with image support, Outlook feature parity, conversation folders, and proactive assistant check-ins.
Open source launch: the Vellum Assistant repository is now publicly available, inviting the community to explore, contribute, and build on the platform
Platform-hosted assistants: assistants can now run fully hosted on the Vellum platform, removing the need for local infrastructure and enabling seamless cloud-based operation
Revamped memory system with image support: the memory system has been completely rebuilt with support for image references, smarter search that combines multiple retrieval strategies, and more reliable memory consolidation
Outlook Calendar and Email reach full feature parity with Google: Outlook Calendar and Outlook Email integrations are now generally available, matching the functionality previously available only for Google Calendar and Gmail
Conversation folders and sidebar improvements: organize conversations into folders with automatic grouping by source, improved icons and count badges, and easy group management
Proactive assistant check-ins: your assistant can now periodically review its notes, reflect on recent conversations, and reach out when it has something worth sharing, enabled by default
Skills system redesign: skills have been rebuilt with better discovery, easier installation, and more reliable behavior across the board
Performance and stability improvements across macOS and iOS: faster conversation loading, smoother scrolling, and reduced memory usage through better caching and background processing
v0.5.16
Major macOS performance and stability improvements, Outlook messaging support, smarter assistant memory, security hardening, and polished UI components.
Significant macOS performance and stability improvements: fixes for chat scroll freezes, sidebar lag, and app responsiveness issues, resulting in a noticeably smoother experience
Outlook messaging support: Vellum can now connect to Microsoft Outlook as a messaging provider, joining the existing Slack integration and expanding where your assistant can be reached
Smarter assistant memory: the assistant now remembers its capabilities and available tools at startup, with improved search for finding relevant memories
Security hardening: removed the ability to bypass permission prompts, tightened access controls across the app, and added stricter validation for sensitive operations
Polished UI components: redesigned skill detail page, improved file browser, better dropdowns and navigation, and a context window indicator showing how much conversation space is left
v0.5.15
Signing key handling improvements and automatic migration for smoother upgrades.
Improved signing key handling to prevent potential authentication issues during normal operation
Automatic key migration when upgrading from older versions, ensuring a smooth upgrade experience without manual steps
v0.5.14
Thinking blocks in chat, overhauled memory and retrieval, /compact command, expanded model support, and collapsible sidebar sections.
Thinking blocks are now visible in chat: see your assistant's reasoning process inline as collapsible blocks, giving you transparency into how responses are formed, enabled by default
Significantly improved memory and retrieval: smarter memory extraction, better search diversity to surface unexpectedly relevant memories, and improved formatting of recalled information
New /compact command and context window indicator: manually trigger conversation compaction at any time, with a color-coded bar in the toolbar showing how much context space is remaining
Expanded model support: DeepSeek, Qwen, Mistral, Meta, Moonshot, and Amazon models added through OpenRouter; Anthropic 1M context window beta and fast mode now supported
Collapsible sidebar sections and channel conversations: Scheduled and Background sidebar sections can now be collapsed, and channel-bound conversations display with a read-only indicator
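The color-coded context indicator described above maps a usage fraction to a display color. A hypothetical sketch; the thresholds here are made up for illustration, not Vellum's actual cutoffs:

```python
def context_color(used_tokens: int, context_window: int) -> str:
    """Bucket context usage into a display color for a toolbar indicator."""
    fraction = used_tokens / context_window
    if fraction < 0.6:
        return "green"
    if fraction < 0.85:
        return "yellow"
    return "red"  # a good time to run /compact
```

The same fraction could drive an automatic compaction suggestion once the bar turns red.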
Ready to raise yours?
Pick a name and share your world. Then watch the relationship grow.