diff --git a/README.md b/README.md
index a37b345..9d52b96 100644
--- a/README.md
+++ b/README.md
@@ -20,13 +20,13 @@ This is our very first Technical Preview so bear in mind things will be rough ar
 
 - MacOS (Sequoia 15.0 or higher recommended)
 - [Ollama](https://ollama.com/) (Either local or remote, you can configure any Ollama URL in settings)
-- [Download the latest release of Tome](#)
+- [Download the latest release of Tome](https://github.com/runebookai/tome/releases/download/v0.1.0/Tome_0.1.0_aarch64.dmg)
 
 ## Quickstart
 
 We'll be updating our [home page](https://runebook.ai) in the coming weeks with docs and an end-to-end tutorial, here's a quick getting started guide in the meantime.
 
-1. Install [Tome](#) and [Ollama](https://ollama.com)
+1. Install [Tome](https://github.com/runebookai/tome/releases/download/v0.1.0/Tome_0.1.0_aarch64.dmg) and [Ollama](https://ollama.com)
 2. Install a [Tool supported model](https://ollama.com/search?c=tools) (we're partial to Qwen2.5, either 14B or 7B depending on your RAM)
 3. Open the MCP tab in Tome and install your first [MCP server](https://github.com/modelcontextprotocol/servers) (Fetch is an easy one to get started with, just paste `uvx mcp-server-fetch` into the server field)
 4. Chat with your MCP-powered model! Ask it to fetch the top story on Hacker News.
diff --git a/static/images/screenshot.png b/static/images/screenshot.png
index a560341..fd6bd53 100644
Binary files a/static/images/screenshot.png and b/static/images/screenshot.png differ