- Goes back to a max line length of 100
- Makes whitespace insignificant in Svelte files (see below)
- Avoids parens around single-argument closures in TS when unnecessary
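For reference, a minimal sketch of what the resulting Prettier config could look like. `printWidth` and `arrowParens` are standard Prettier options; the Svelte whitespace behavior described below comes from the Svelte plugin, and the exact plugin setup shown here is an assumption, not the actual config:
```json
{
  "printWidth": 100,
  "arrowParens": "avoid",
  "plugins": ["prettier-plugin-svelte"]
}
```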
*Whitespace Significance*
In HTML, `<p><b> 1 </b></p>` and `<p><b>1</b></p>` are not the same
thing. In the former, the "1" has spaces around it. If you were to try
to split this into multiple lines...
```html
<p>
  <b>
    1
  </b>
</p>
```
...you would lose that whitespace. The newlines reset any significance
around individual tokens. This meant prettier would format that code
as...
```html
<p>
  <b
  > 1 </b>
</p>
```
...which is insane and hideous.
We're now saying all whitespace is insignificant, meaning prettier no longer
needs to retain it and can format that code like a sane person would.
This means that if you actually need whitespace around something, you must
make it explicit with `&nbsp;` characters, OR wrap the content in a `span`
and handle the spacing with CSS.
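For instance, a small sketch of both escape hatches (the markup here is illustrative):
```html
<!-- Explicit non-breaking spaces survive reformatting -->
<p><b>&nbsp;1&nbsp;</b></p>

<!-- Or keep the markup whitespace-free and add spacing via CSS -->
<p><span style="padding: 0 0.25em"><b>1</b></span></p>
```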
TL;DR: do not rely on whitespace significance in HTML.
Tome
A magical tool for using local LLMs with MCP servers
Tome is the simplest way to get started with local LLMs and MCP. Tome manages your MCP servers so there's no fiddling with uv/npm or JSON files: connect it to Ollama (or OpenAI/Gemini), find an MCP server via our Smithery marketplace integration (or paste your own uvx/npx command), and chat with an MCP-powered model in seconds.
This is a Technical Preview so bear in mind things will be rough around the edges. Join us on Discord to share tips, tricks, and issues you run into. Star this repo to stay on top of updates and feature releases!
Features
- Instant connection to Ollama (local or remote) for local model management
- Integration with cloud AI providers like OpenAI and Google Gemini
- Chat with MCP-powered models, customize context window and temperature
- Install MCP servers by pasting in a command (e.g., `uvx mcp-server-fetch`) or through the built-in Smithery marketplace, which offers thousands of servers via a single click
Getting Started
Requirements
- macOS or Windows (Linux coming soon!)
- Ollama, or an OpenAI or Gemini API key
- The latest release of Tome
Quickstart
- Install Tome and Ollama (or add an OpenAI or Gemini API key)
- Install a model that supports tools (we're partial to Qwen3, either 14B or 8B depending on your RAM)
- Open the MCP tab in Tome and install your first MCP server (Fetch is an easy one to get started with; just paste `uvx mcp-server-fetch` into the server field).
- Chat with your MCP-powered model! Ask it to fetch the top story on Hacker News.
Vision
We want to make local LLMs and MCP accessible to everyone. We're building a tool that allows you to be creative with LLMs, regardless of whether you're an engineer, tinkerer, hobbyist, or anyone in between.
Core Principles
- Tome is local first: You are in control of where your data goes.
- Tome is for everyone: You shouldn't have to manage programming languages, package managers, or JSON config files.
What's Next
- Model support: Tome currently uses Ollama for model management, but we'd like to expand support to other LLM engines and possibly even more cloud models. Let us know if you have any requests.
- Operating system support: macOS and Windows are supported today, with Linux support coming next.
- App builder: We believe that, long term, the best experiences won't be confined to a chat interface. We plan to add tools that let you create powerful applications and workflows.
- ??? Let us know what you'd like to see! Join our community via the links below; we'd love to hear from you.