mirror of
https://github.com/runebookai/tome.git
synced 2025-07-21 00:27:30 +03:00
Adds Release links to README and revised screenshot
@@ -20,13 +20,13 @@ This is our very first Technical Preview so bear in mind things will be rough ar
 - MacOS (Sequoia 15.0 or higher recommended)
 - [Ollama](https://ollama.com/) (Either local or remote, you can configure any Ollama URL in settings)
-- [Download the latest release of Tome](#)
+- [Download the latest release of Tome](https://github.com/runebookai/tome/releases/download/v0.1.0/Tome_0.1.0_aarch64.dmg)
 
 ## Quickstart
 
 We'll be updating our [home page](https://runebook.ai) in the coming weeks with docs and an end-to-end tutorial, here's a quick getting started guide in the meantime.
 
-1. Install [Tome](#) and [Ollama](https://ollama.com)
+1. Install [Tome](https://github.com/runebookai/tome/releases/download/v0.1.0/Tome_0.1.0_aarch64.dmg) and [Ollama](https://ollama.com)
 2. Install a [Tool supported model](https://ollama.com/search?c=tools) (we're partial to Qwen2.5, either 14B or 7B depending on your RAM)
 3. Open the MCP tab in Tome and install your first [MCP server](https://github.com/modelcontextprotocol/servers) (Fetch is an easy one to get started with, just paste `uvx mcp-server-fetch` into the server field)
 4. Chat with your MCP-powered model! Ask it to fetch the top story on Hacker News.
 
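The Ollama side of step 2 comes down to a couple of terminal commands. A minimal sketch, assuming you picked the Qwen2.5 model the README suggests (swap in `qwen2.5:7b` if you have less RAM):

```shell
# Pull a tool-capable model into Ollama (the 14B variant suggested in step 2)
ollama pull qwen2.5:14b

# Confirm the model is available locally before pointing Tome at it
ollama list
```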
Binary file not shown.

Before Size: 995 KiB | After Size: 934 KiB