Msty Changelog

v1.0.1

July 11, 2024

  • New: Import local GGUF file
  • New: Support BakLLaVA and other LLaVA vision models
  • New: Documentation site is now live (available at https://docs.msty.app/) (work-in-progress)
  • Improve: Allow Python, Shell, and other code files as attachments
  • Improve: Better model fetching for OpenAI API-compatible endpoints
  • Fix: Empty system message incompatibility with some OpenAI API-compatible providers
  • Fix: Closing and reopening the app takes you back to the Onboarding UI on Mac
  • Fix: Microphone (STT) is no longer available
Note: Msty should update itself automatically in the background. Let it run for a few minutes. For the latest Local AI service, you'll need to manually update the service under Local AI settings.

v1.0

July 9, 2024

  • New: Real Time Data (watch one of our videos that covers this feature)
  • New: Improved Knowledge Stack
    • Show progress of composition for each category and individual files
    • Show last successful composition date and time
    • Show the number of files composed for folders and Obsidian vaults
    • Better display of a Knowledge Stack's current status
    • Abort composition
    • Fix some composition issues, including the "table not found" error
  • New: Quick actions context menu for Local AI
  • New: New Attachments UI
    • Sticky image attachments across sessions and across new messages
    • Show and manage all of a chat's image attachments, not just the last ones
    • Attach documents of type .txt, .md, .pdf, .docx, .html, .csv, .json, .ts, .js, .tsx, .vue, and many more!
    • Drag-and-dropped documents are now treated as attachments rather than being composed into a Knowledge Stack
  • New: 1-click scroll to bottom of chat
  • New: Show size and other params for Ollama models
  • New: Revamped onboarding UI
    • Select a different model when onboarding
    • Use Ollama's models directory without downloading a model
  • New: New featured models - Claude 3.5 Sonnet, Qwen 2, LLaVA Llama, LLaVA Phi, DeepSeek Coder v2, and more
  • New: New General UI settings
    • Set whether to auto-generate titles for new chats with non-local models
    • Easily update the app, fetch the latest featured models info, etc.
    • Show app paths for logs and data
    • Reset settings to default
    • Make it easy to fetch model info without having to install the Local AI service
    • Show the app's version
  • New: New Local AI settings
    • Set service configs such as the number of parallel chats, max loaded models, etc.
    • Pass any configuration supported by Ollama to the service as a JSON object
    • Change Allowed Origins
    • Set the amount of time to keep a model alive
    • Pass any custom configuration to models globally
    • Show service version
    • Watch one of our videos that covers new settings changes in more detail
  • New: Make Local AI service available on the network with a simple toggle
  • New: Revamped Remote Models UI
    • Select models of your choice for providers such as OpenAI, Mistral, Claude, etc.
    • Edit API keys
    • Dedicated Msty remote provider to make it easy to connect to a remotely hosted Msty service
    • Dedicated Ollama remote provider to easily connect without worrying about what the full URL should be
    • Models are fetched automatically for both Msty remote and Ollama remote
  • New: AMD GPU support for Windows
  • New: Pass extra model params to Local and Remote models as a JSON object (see the sketch after this list)
  • New: Easily add a new remote model provider from within the model selector without making a trip to the settings
  • Improve: Copy an installed model's name to the clipboard
  • Improve: File names are now added to a Knowledge Stack during composition; this should improve the experience of referring to a file during a query
  • Improve: Type in model params rather than using a slider to adjust the value
  • Improve: Show icons for all remote model providers
  • Improve: Latest Ollama/Local AI service for all platforms
  • Improve: App icon
  • Improve: App polishing
  • Fix: Can remove an item during Knowledge Stack composition
  • Fix: Model options not getting passed to some OpenAI compatible providers
  • Fix: Clicking on Chat icon creates a new chat
  • Fix: Adding a key fails silently on Linux
  • Removed: Featured model variants. Users are encouraged to download them directly from the models hub
  • Removed: Embedded changelog. Users are encouraged to visit the website for the latest changelog
  • Many more bug fixes and improvements. Read our blog post for some of the highlights or watch a video that covers the new features and improvements.
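
For illustration, here is a minimal sketch of what such a JSON object of extra model params might look like. The key names follow Ollama's documented options (temperature, num_ctx, top_p); whether Msty forwards them verbatim is an assumption, not something this changelog confirms.

```python
# Hypothetical extra model params expressed as a JSON object.
# Key names follow Ollama's "options" fields; values are arbitrary examples.
import json

extra_params = {
    "temperature": 0.7,   # sampling temperature
    "num_ctx": 8192,      # context window size
    "top_p": 0.9,         # nucleus sampling cutoff
}
print(json.dumps(extra_params, indent=2))
```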

v0.9

Jun 3, 2024

  • New: Hugging Face integration - search and use any GGUF models from Hugging Face
  • New: Create Ollama compatible models from Hugging Face
  • New: Search Ollama and pull any model and any tag
  • New: OpenRouter support
  • New: Rerank RAG chunks using Jina AI API key
  • New: Set model instructions at the folder level
  • New: OpenAI-compatible providers. This makes it possible to use endpoints from Ollama remote and other online providers such as DeepSeek AI (see the sketch after this list).
  • New: New models - Mistral Codestral, Cohere Aya 23, Microsoft Phi 3, IBM Granite
  • New: Allow resending a user message - very helpful when a message fails
  • Improve: Revamped Local AI UI
  • Improve: Faster inference on Windows and Mac using Flash Attention
  • Improve: Preserve input message when switching chats
  • Improve: Streaming scrolling
  • Improve: Allow stopping message regeneration
  • Improve: Lighter Windows installer
  • Improve: Add .md, .markdown to the list of supported file extensions when browsing files
  • Fix: Can't drop Markdown files into a Knowledge Stack
  • Fix: Close model settings popover when clicking outside and if there are no changes
  • Fix: Groq API issues
  • Fix: 500 errors on Windows for some users
  • Fix: Clicking outside temporarily reverts an edited chat title
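
As an illustration of the OpenAI-compatible pattern mentioned above, here is a minimal sketch using the standard openai Python client pointed at a different base URL. The URL, key, and model name are placeholders, not Msty defaults.

```python
# Sketch: any OpenAI-compatible endpoint can be used by pointing the
# standard client at a different base URL. Values below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. an Ollama server's OpenAI-compatible endpoint
    api_key="not-needed-for-local",        # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```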

v0.8

May 20, 2024

  • New: Introducing Knowledge Stacks (RAG) in Msty
    • Create and manage multiple Knowledge Stacks
    • Upload files and folders
    • Link Obsidian vaults
    • Add custom notes
    • Link YouTube videos
    • Attach multiple Knowledge Stacks while chatting with LLMs
    • Process documents by characters or sentences (see the sketch after this list)
    • Get citations from your sources
    • Analytics report on processed files and links
  • New: Real-time parallel chatting with models
  • New: Branch-off conversation into a new split chat
  • New: Search conversations
  • New: Perplexity AI integration
  • New: Claude vision models support
  • New: GPT-4o support
  • New: Latest Gemini models from Google
  • New: Increased context window across all models
  • Improve: Markdown table styles
  • Improve: Message UI alignments
  • Improve: Highlight the user's message for better visibility
  • Improve: Audio recorder
  • Improve: App navigation
  • Fix: Allow dialogs to close on outside click and Esc
  • Fix: Command prompt window popping up from the Ollama server on Windows
  • Fix: Bulk actions enabled in a new chat
  • Fix: Chat message scroll to the bottom
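
To illustrate the character vs. sentence processing options in the Knowledge Stacks list above, here is a rough sketch of the two chunking strategies. The chunk sizes are arbitrary examples, not Msty's internal defaults.

```python
# Illustrative document chunking, by character count or by sentence count.
# This is not Msty's actual implementation.
import re

def chunk_by_characters(text: str, size: int = 500) -> list[str]:
    """Split text into fixed-size character windows."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def chunk_by_sentences(text: str, per_chunk: int = 5) -> list[str]:
    """Group a naive sentence split into chunks of a few sentences each."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [" ".join(sentences[i:i + per_chunk])
            for i in range(0, len(sentences), per_chunk)]
```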

v0.7

April 15, 2024

  • New: Revamped Prompts Library
    • Organize prompts into system, user, and refinement categories
    • Search prompts by tags, title, or description
    • Example user inputs and model outputs
  • New: Revamped quick prompts menu
    • Context-relevant prompts display and search
  • New: Set system prompt from empty chat page
  • New: Start new chats inside a folder
  • New: Transcribe audio recordings using OpenAI's Whisper model
  • New: Use OpenAI's GPT-4 to chat with images
  • New: Thumbnails for splits in a chat session - hide/unhide splits
  • New: Create a new split chat using a user message as the pre-filled input message
  • New: Clone message before flattening a chat tree
  • New: Delete assistant message branch in a chat tree
  • New: New Local AI models: Command R from Cohere and CodeGemma from Google
  • Improve: Remember the collapsed state of system prompts in a new chat
  • Improve: Change Local AI's icon based on health status
  • Improve: Confirm split preset delete
  • Improve: Message Formatting
  • Fix: Unknown models from Ollama not working as expected
  • Fix: Selecting a non-recent chat collapses its folder
  • Fix: Performance issue when opening a saved conversation
  • Fix: Multiple hover cards in the Text Module's model downloads
  • Fix: Updating the Text Module stalls on Windows

v0.6

March 30, 2024

  • New: Save splits layout as presets and start a new conversation with a specific layout
  • New: New chat starts with the last used splits layout
  • New: Select a model when regenerating a message
  • New: Flatten a chat tree to discard hallucinating or non-preferred conversation branches
  • New: Move app settings to a modal dialog to make them more accessible
  • Improve: Remember the hide/unhide state of model instructions in the chat tree
  • Improve: Clean up some UI and icons
  • Fix: An issue with app update
  • Fix: Claude Haiku model issue
  • Fix: App getting hung in offline mode
  • Fix: Sidebar buttons hiding in smaller window sizes
  • Fix: Shift+Enter in input boxes when editing messages

v0.5

March 27, 2024

  • New: Regenerate assistant messages
  • New: Switch to a different model in the middle of a conversation
  • New: Refine prompts before sending
  • New: Branch user prompts inline
  • New: Branch assistant prompts inline
  • New: Edit user messages
  • New: Edit assistant messages
  • New: Make updates more visible
  • New: Flatten a chat subtree by deleting
  • New: Make bookmarked prompts more visible
  • New: Show a code block's language
  • New: Introduce a non-pitch-dark theme
  • Improve: Resizable sidebar
  • Improve: Visible system prompt
  • Improve: Scroll code blocks instead of wrapping the text
  • Fix: Icons in offline mode
  • Fix: Quick view icon on uploaded images

v0.4

March 6, 2024

  • New: Allow zooming in/out and restoring the zoom level of the overall app.
  • New: You can now chat with mind-blowingly fast models from Groq AI.
  • New: You can now chat with mind-blowingly smart models from Anthropic AI.
  • New: New code models: DeepSeek Coder, StarCoder2, and DolphinCoder.
  • New: Text Module service: Updated to the latest version.
  • New: New look for the bulk chat session actions UI.
  • Improve: Overall app performance improvements with lower CPU usage and a huge improvement in typing latency.
  • Improve: Save and restore sync state of a chat session in split chat mode.
  • Improve: Allow showing all hidden chats.
  • Improve: Move message options such as copy, split, etc., to the bottom of the message.
  • Improve: Swap the location of image attachment and quick prompt buttons.
  • Improve: Text Module service: Improved LLaVA model response and performance.
  • Improve: Text Module service: Better determination of VRAM on macOS.
  • Fix: Text Module service: Issue with Local Service hanging when switching models repeatedly.
  • Fix: Model options pane scrolling issue.
  • Fix: The "Save for this chat" option was saving configs globally.
  • Misc: UI/UX improvements.

v0.3

March 1, 2024

  • EPIC: 🌋 LLaVA model. You can chat with your images now! Read more on our blog.
  • EPIC: Auto-detection of the Ollama models location, including the OLLAMA_MODELS env variable, if defined. We now allow changing the default models location as well (see the sketch after this list).
  • EPIC: Msty is now available on Linux. 🎉 Both AppImage and .deb packages are available for download. The AppImage comes with auto-update but the .deb doesn't.
Linux is experimental at this time. Please talk to us on Discord if you run into any issues.

  • New: Allow adding a system prompt. You can set it per chat or globally per model.
  • New: Revamped prompt library with a new UI, showing some prompts when starting a chat, and a dedicated bookmarked prompts section.
  • New: Add support for Gemini models from Google.
  • New: Allow clearing recent prompts, as well as a way to disable saving new prompts.
  • New: Latest model definitions and the latest Msty Local service, including Mistral Large.
  • New: Check the health status of the Text Module service and get the port address it is running on. This makes it easy to connect to Msty from other services.
  • Improve: Scroll to bottom of chat when loading it.
  • Improve: Focus message input box after selecting a model.
  • Improve: Fix the download models icon.
  • Fix: Bookmarked prompts are not saved.
  • Fix: Double Enter sends an empty message.
  • Fix: Massive performance improvements when loading a chat session. When testing internally, a chat with 300 messages took less than 3 seconds, where before it was taking about two and a half minutes 🤯
  • Fix: Bring back some missing menu items, such as Hide, Hide Others, etc., on macOS.
  • Fix: Formatting of chat messages, giving them more breathing space.
  • Fix: Enter from the numeric keypad doesn't submit a message but instead adds a new line.
  • Misc: UI/UX improvements.
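
For context on the models-location detection above, here is a minimal sketch of the kind of lookup involved, assuming Ollama's standard convention (honor OLLAMA_MODELS when set, otherwise fall back to ~/.ollama/models). This is illustrative only, not Msty's actual implementation.

```python
# Sketch of Ollama models-directory detection: prefer the OLLAMA_MODELS
# environment variable, else use Ollama's default location.
import os
from pathlib import Path

def detect_models_dir() -> Path:
    env_dir = os.environ.get("OLLAMA_MODELS")
    if env_dir:
        return Path(env_dir).expanduser()
    return Path.home() / ".ollama" / "models"

print(detect_models_dir())
```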

v0.2

Feb 19, 2024

  • New: Gemma model from Google DeepMind
  • New: Refine AI-generated output using a custom prompt
  • New: One-click modification of AI output using pre-defined prompts
  • New: Chat refinement revisions with sleek navigation for switching between refined messages
  • New: Windows GPU (CUDA) installer including the latest CUDA drivers; no more manual installation
  • New: The chat model is now kept loaded in memory for 10 minutes for faster responses
  • New: One-key shortcut to message actions such as D to delete a message when your cursor is on the message
  • Improve: Deleting a message doesn't reload the chat conversation but only removes the message from the chat
  • Improve: Sidebar navigation is now more intuitive and less jarring
  • Fix: Output message isn't formatted properly if it contains an unordered list
  • Fix: An exception when chat models generate language that is not supported by the app's code highlighter
  • Fix: A couple of issues with conversation fork and split
  • Fix: Disable title edit when a model isn't available
  • Misc: UI/UX improvements

v0.1

Feb 17, 2024

  • New: Windows support with GPU
  • New: Prompts library with placeholders
  • New: Sticky prompt
  • New: Split conversation to a new chat session
  • New: Fork chat session
  • New: Delete a message from a chat session

v0.0.4

Jan 29, 2024

  • New: Organize chat sessions into folders
    • Create, edit, and delete folders
    • Drag and drop chat sessions to move them to a different folder
  • New: Merge chat sessions into a new session
  • New: Chat sessions bulk actions
    • Select multiple sessions using shift (⇧) or cmd (⌘) keys
    • Bulk delete, merge, and move
  • New: Remember the last used model
  • New: Recent chat input history for pasting convenience
  • New: Edit chat session title
    • Generate contextual titles using AI
  • New: Delete a chat from a session
  • New: Chat split panel for resizing chats in a multi-chat session
  • New: Auto expand chat input box
  • New: Remember pinned sidebar state
    • Double-click on the app nav bar or press ⌘1 to toggle
  • New: Copy user message
  • New: Latest Text Module service
    • This requires users to manually update the Text Module service under Local AI
  • New: Local AI text chat models (Tiny Dolphin, Qwen, and DuckDB NSQL)
  • New: In-app changelog
  • Fix: Chat history actions when the sidebar is hovered during a collapsed state
  • Fix: In-app Discord invite link

v0.0.3

Jan 10, 2024

  • Fix: Model logos

v0.0.2

Jan 10, 2024

  • New: Configurable text chat parameters
  • New: Local AI text chat models (Mixtral 8x7B, Vicuna, Mistral Openorca, etc.)
  • New: Submit chat on Enter
  • New: Latest Text Module service
    • This requires users to manually update the Text Module service under Local AI
  • Fix: Message streaming flashes
  • Fix: App window drag issue

v0.0.1

Jan 3, 2024

  • New: Initial release