Msty 1.0

We're excited to announce the release of Msty 1.0! This is our best release yet, packed with new features, improvements, and bug fixes. Here are just some of the highlights of what's new in Msty 1.0:

1.0 is here! Polished so much that it became a feature of its own.

Real Time Data

Since LLMs are trained on a fixed dataset with a cutoff date while the world around us keeps changing, they can't provide the latest information on a topic. To address this, we've added a new feature called Real Time Data. It fetches real-time information from the web, so you can get the most up-to-date answer on any topic.

If you have ever used Perplexity, this is very similar but with the power of using any model you want and keeping your data private.

Real Time Data is just a click away.

Enabling this feature couldn't be easier - just click on the Use Real Time Data button below the chat input box, and you're good to go! Once you submit your query, Msty fetches the most recent information from the web, feeds it to the model, and provides you with the AI's response along with the sources it used.

Or, if you want even more focused information than what is presented, you can remove some sources and re-generate the response; the new response is shown as a separate branch. You can watch one of our videos to learn more about this feature here.

New Attachments UI

We've also added the ability to attach documents and images to your chats. We already supported image attachments before, but the images were never sticky: you had to re-attach them with every message, and there was no way to see all the attached images or manage the selection.

Attachments UI is now more user-friendly and supports documents and images.

Not just that, we have introduced a new UI that makes attaching images and documents very easy. Document attachments support many file types, such as PDF, DOCX, TXT, JS, TS, TSX, MD, and more. Once attached, Msty reads the contents of the documents and sends them to the model as context. This is very helpful when you want to provide more context to the model, especially if you are using a model with a large context window, where you can provide far more data than you would get using a Knowledge Stack.

Knowledge Stack Improvements

You can now see the progress of a Knowledge Stack composition, so you know how far along a stack is while it is being built.

A circular progress bar is shown for each individual file within a folder or an Obsidian vault. Other small improvements include showing the total number of composed files within folders and vaults, the date and time of the last successful composition, and the ability to abort a composition in progress.

A cool new feature that is hidden away but very useful is the ability to exclude files and folders from composition using a .mstyignore file. If you have used .gitignore before, this works the same way, using the same syntax.
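For instance, a .mstyignore placed at the root of a folder might look like this (the patterns follow .gitignore syntax; the entries below are purely illustrative):

```
# Skip scratch notes and build output
drafts/
build/
*.tmp

# But keep one file from an otherwise ignored folder
!drafts/keep-this.md
```

As with .gitignore, trailing slashes match directories, `*` is a wildcard, and a leading `!` re-includes a path that an earlier pattern excluded.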

Revamped Settings UI

We've also revamped the settings UI to make it easier to find and change settings - both general settings and Local AI settings. There are so many new features packed into the settings UI that we can't list them all here; each one deserves its own dedicated blog post. Some of the highlights include the ability to automatically generate chat titles for non-local models (local models always auto-generate titles), an easy way to check for updates, fetching the latest models info, resetting settings, and more.

Similarly, the Local AI settings come with even more goodies: service configurations such as the number of parallel chats and the maximum number of loaded models, plus a free-form Advanced Configs section where you can put in any configuration you want to pass to the service. You can also set global values for local models, such as the keep-alive value, and we plan to add more in the future.

One of the big features that's tucked away in the settings is the ability to make the Local AI service available on your network so that other devices can access it for inference. This makes it possible to have a single powerful machine running Msty while less powerful devices, like a phone or a tablet, use its service for inference over the network.

And as always, we made it very easy to enable this feature: just a click of a button and you're good to go. For your convenience, Msty also shows the network IP address of the machine running Msty for your copy-paste pleasure. Here's a quick video that covers the new settings UI.
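To give a feel for what a client on another device might do, here is a minimal sketch that builds a request for an Ollama-style `/api/generate` endpoint. This assumes the shared service speaks an Ollama-compatible API; the IP address, port, and model name below are placeholders, not values Msty guarantees:

```python
import json
from urllib import request


def build_generate_request(host: str, model: str, prompt: str):
    """Build the URL and JSON body for an Ollama-style /api/generate call."""
    url = f"http://{host}/api/generate"
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response instead of a stream
    }).encode()
    return url, body


# Point at the network IP that Msty shows in settings (placeholder address/port).
url, body = build_generate_request("192.168.1.50:10000", "llama3", "Why is the sky blue?")

# To actually send it (requires the service to be reachable on your network):
# req = request.Request(url, data=body, headers={"Content-Type": "application/json"})
# print(json.loads(request.urlopen(req).read())["response"])
```

The sending step is left commented out since it depends on a live service on your LAN; the point is simply that any HTTP client on the network can talk to the shared instance.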

Revamped Onboarding Experience

It's important that new users have a great experience when they first start using Msty, so we've revamped the onboarding experience to make it even easier to get started. It is still a one-click experience, but now you can choose one of the hand-picked models, like Llama3, Phi3, Codestral, LLaVa, etc., to start with. Before today, we'd automatically download TinyDolphin for you, since it strikes a good balance between size and quality.

We obsessed over the UI that most of you won't see more than once.

Also, if you already have Ollama models available, Msty uses some heuristics to find the default models directory, and if it finds one, it gives you the option to continue with that instead. And, yes, still just one click!

Context Menu for Local AI Service

We've added a new context menu for the Local AI service, which makes it easier to access the service and perform common tasks. You can now right-click on the Local AI icon in the sidebar to start and stop the service, open the settings, and more - all without having to open the settings window.

Making simple things easier - Local AI Context Menu makes it easier to access the service and perform common tasks.

Improved Remote Model Providers UI

API Keys have now been renamed to Remote Model Providers, since a provider isn't necessarily an API key - it could be a remote service, like Msty running on a different machine, for which you don't need a key. There have been a number of other improvements to the UI as well, such as the ability to edit a key and modify the list of models available for chat, dedicated Msty Remote and Ollama Remote providers that fetch models automatically for you, showing each model's purpose and capabilities when possible, and more.

Polished UI

Msty is already known for its clean and polished UI, but we've made many small improvements to make it even better. We've removed any cruft that you don't need to see, tweaked icons, improved tooltips, and made many small tweaks to the UI to make it even more user-friendly.

We took things away until there was nothing more to be removed, and then we removed some more. If you have used Msty before, the polished UI is not going to escape your eyes, and you will notice it right away - a perfect harmony of form and function.

These are just some of the highlights of what's new in Msty 1.0. We've also made lots of other improvements and bug fixes, so be sure to check out the full release notes for all the details. We hope you enjoy using Msty 1.0 as much as we enjoyed making it, and we can't wait to hear your feedback!

Interact with any AI model with just a click of a button