{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Memory & Context Management with Claude Sonnet 4.5\n", "\n", "Learn how to build AI agents that learn and improve across conversations using Claude's memory tool and context editing capabilities." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of Contents\n", "\n", "1. [Introduction: Why Memory Matters](#introduction)\n", "2. [Use Cases](#use-cases)\n", "3. [Quick Start Examples](#quick-start)\n", "4. [How It Works](#how-it-works)\n", "5. [Code Review Assistant Demo](#demo)\n", "6. [Real-World Applications](#real-world)\n", "7. [Best Practices](#best-practices)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup\n", "\n", "### For VSCode Users\n", "\n", "```bash\n", "# 1. Create virtual environment\n", "python -m venv .venv\n", "\n", "# 2. Activate it\n", "source .venv/bin/activate # macOS/Linux\n", "# or: .venv\\Scripts\\activate # Windows\n", "\n", "# 3. Install dependencies\n", "pip install -r requirements.txt\n", "\n", "# 4. In VSCode: Select .venv as kernel (top right)\n", "```\n", "\n", "### API Key\n", "\n", "```bash\n", "cp .env.example .env\n", "# Edit .env and add your ANTHROPIC_API_KEY\n", "```\n", "\n", "Get your API key from: https://console.anthropic.com/\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Introduction: Why Memory Matters {#introduction}\n", "\n", "This cookbook demonstrates practical implementations of the context engineering patterns described in [Effective context engineering for AI agents](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents). That post covers why context is a finite resource, how attention budgets work, and strategies for building effective agentsβ€”the techniques you'll see in action here.\n", "\n", "### The Problem\n", "\n", "Large language models have finite context windows (200k tokens for Claude 4). 
While this seems large, several challenges emerge:\n", "\n", "- **Context limits**: Long conversations or complex tasks can exceed available context\n", "- **Computational cost**: Processing large contexts is expensive - attention mechanisms scale quadratically\n", "- **Repeated patterns**: Similar tasks across conversations require re-explaining context every time\n", "- **Information loss**: When context fills up, earlier important information gets lost\n", "\n", "### The Solution\n", "\n", "Claude Sonnet 4.5 introduces two powerful capabilities:\n", "\n", "1. **Memory Tool** (`memory_20250818`): Enables cross-conversation learning\n", " - Claude can write down what it learns for future reference\n", " - File-based system under `/memories` directory\n", " - Client-side implementation gives you full control\n", "\n", "2. **Context Editing** (`clear_tool_uses_20250919`): Automatically manages context\n", " - Clears old tool results when context grows large\n", " - Keeps recent context while preserving memory\n", " - Configurable triggers and retention policies\n", "\n", "### The Benefit\n", "\n", "Build AI agents that **get better at your specific tasks over time**:\n", "\n", "- **Session 1**: Claude solves a problem, writes down the pattern\n", "- **Session 2**: Claude applies the learned pattern immediately (faster!)\n", "- **Long sessions**: Context editing keeps conversations manageable\n", "\n", "Think of it as giving Claude a notebook to take notes and refer back to - just like humans do." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 
Use Cases {#use-cases}\n", "\n", "Memory and context management enable powerful new workflows:\n", "\n", "### πŸ” Code Review Assistant\n", "- Learns debugging patterns from past reviews\n", "- Recognizes similar bugs instantly in future sessions\n", "- Builds team-specific code quality knowledge\n", "- **Production ready**: Integrate with [claude-code-action](https://github.com/anthropics/claude-code-action) for GitHub PR reviews\n", "\n", "### πŸ“š Research Assistant\n", "- Accumulates knowledge on topics over multiple sessions\n", "- Connects insights across different research threads\n", "- Maintains bibliography and source tracking\n", "\n", "### πŸ’¬ Customer Support Bot\n", "- Learns user preferences and communication style\n", "- Remembers common issues and solutions\n", "- Builds product knowledge base from interactions\n", "\n", "### πŸ“Š Data Analysis Helper\n", "- Remembers dataset patterns and anomalies\n", "- Stores analysis techniques that work well\n", "- Builds domain-specific insights over time\n", "\n", "**Supported Models**: Claude Opus 4 (`claude-opus-4-20250514`), Claude Opus 4.1 (`claude-opus-4-1-20250805`), Claude Sonnet 4 (`claude-sonnet-4-20250514`), and Claude Sonnet 4.5 (`claude-sonnet-4-5-20250929`)\n", "\n", "**This cookbook focuses on the Code Review Assistant** as it clearly demonstrates both memory (learning patterns) and context editing (handling long reviews)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Quick Start Examples {#quick-start}\n", "\n", "Let's see memory and context management in action with simple examples." 
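, "\n", "The memory tool is client-side: your code executes each memory command Claude issues against a local directory. Below is a minimal sketch of what such a handler can look like (illustrative names only; the full `MemoryToolHandler` used in this notebook lives in `memory_tool.py` and supports more commands plus error handling):\n", "\n", "```python\n", "from pathlib import Path\n", "\n", "class MiniMemoryHandler:\n", "    # Toy file-based handler for the memory tool's view/create commands\n", "    def __init__(self, base_path='./demo_memory'):\n", "        self.root = Path(base_path) / 'memories'  # files live under /memories\n", "        self.root.mkdir(parents=True, exist_ok=True)\n", "\n", "    def execute(self, command, path, file_text=None):\n", "        # Map a tool-use command from Claude onto the local filesystem\n", "        local = self.root / path.removeprefix('/memories').lstrip('/')\n", "        if command == 'view' and local.is_dir():\n", "            names = sorted(p.name for p in local.iterdir())\n", "            return 'Directory: ' + path + ' -> ' + ', '.join(names)\n", "        if command == 'view':\n", "            return local.read_text()\n", "        if command == 'create':\n", "            local.parent.mkdir(parents=True, exist_ok=True)\n", "            local.write_text(file_text or '')\n", "            return f'File created successfully at {path}'\n", "        return f'Unsupported command: {command}'\n", "```\n"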
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup\n", "\n", "First, install dependencies and configure your environment:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m25.2\u001b[0m\n", "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n", "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "# Install required packages\n", "# Option 1: From requirements.txt\n", "# %pip install -q -r requirements.txt\n", "\n", "# Option 2: Direct install\n", "%pip install -q anthropic python-dotenv ipykernel\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**⚠️ Important**: Create a `.env` file in this directory:\n", "\n", "```bash\n", "# Copy .env.example to .env and add your API key\n", "cp .env.example .env\n", "```\n", "\n", "Then edit `.env` to add your Anthropic API key from https://console.anthropic.com/" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "βœ“ API key loaded\n", "βœ“ Using model: claude-sonnet-4-5-20250929\n" ] } ], "source": [ "import os\n", "from typing import Any, cast\n", "\n", "from anthropic import Anthropic\n", "from dotenv import load_dotenv\n", "\n", "# Load environment variables\n", "load_dotenv()\n", "\n", "API_KEY = os.getenv(\"ANTHROPIC_API_KEY\")\n", "MODEL = os.getenv(\"ANTHROPIC_MODEL\")\n", "\n", "if not API_KEY:\n", " raise ValueError(\n", " \"ANTHROPIC_API_KEY not found. 
\"\n", " \"Copy .env.example to .env and add your API key.\"\n", " )\n", "\n", "if not MODEL:\n", " raise ValueError(\n", " \"ANTHROPIC_MODEL not found. \"\n", " \"Copy .env.example to .env and set the model.\"\n", " )\n", "\n", "MODEL = cast(str, MODEL)\n", "\n", "client = Anthropic(api_key=API_KEY)\n", "\n", "print(\"βœ“ API key loaded\")\n", "print(f\"βœ“ Using model: {MODEL}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example 1: Basic Memory Usage\n", "\n", "Let's see Claude use memory to store information for future reference." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Helper Functions**\n", "\n", "These examples use helper functions from `demo_helpers.py`:\n", "\n", "- **`run_conversation_loop()`**: Handles the API conversation loop\n", " - Calls Claude's API with memory tool enabled\n", " - Executes tool uses (memory operations)\n", " - Continues until Claude stops using tools\n", " - Returns the final response\n", "\n", "- **`run_conversation_turn()`**: Single turn (used in Example 3)\n", " - Same as above but returns after one API call\n", " - Useful when you need fine-grained control\n", "\n", "- **`print_context_management_info()`**: Displays context clearing stats\n", " - Shows tokens saved, tool uses cleared\n", " - Helps visualize when context editing triggers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**⚠️ Note on Memory Clearing**\n", "\n", "The following cell clears all memory files to provide a clean slate for this demonstration. This is useful for running the notebook multiple times to see consistent results.\n", "\n", "**In production applications**, you should carefully consider whether to clear all memory, as it permanently removes learned patterns. Consider using selective deletion or organizing memory into project-specific directories instead." 
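, "\n", "One pattern for selective cleanup: keep each project's notes in its own subdirectory under `/memories` and delete only that subtree when a project wraps up. A sketch of this idea (hypothetical helper, not part of `MemoryToolHandler`):\n", "\n", "```python\n", "from pathlib import Path\n", "\n", "def clear_project_memory(base_path, project):\n", "    # Remove only the memory files for one project; other notes survive\n", "    project_dir = Path(base_path) / 'memories' / project\n", "    removed = []\n", "    if project_dir.is_dir():\n", "        for f in sorted(project_dir.rglob('*')):\n", "            if f.is_file():\n", "                f.unlink()\n", "                removed.append(f.name)\n", "    return removed\n", "```\n"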
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "🧹 Clearing previous memories...\n", "βœ“ Memory cleared\n", "\n", "============================================================\n", "πŸ“ SESSION 1: Learning from a bug\n", "============================================================\n", "\n", "πŸ”„ Turn 1:\n", "πŸ’¬ Claude: I'll review this multi-threaded web scraper for you. Let me first check my memory and then analyze the code.\n", "\n", " πŸ”§ Memory tool: view /memories\n", " βœ“ Result: Directory: /memories\n", "(empty)\n", "\n", "πŸ”„ Turn 2:\n", " πŸ”§ Memory tool: create /memories/review_progress.md\n", " βœ“ Result: File created successfully at /memories/review_progress.md\n", "\n", "πŸ”„ Turn 3:\n", "πŸ’¬ Claude: \n", "\n", "## Code Review: Multi-threaded Web Scraper - Race Condition Issues\n", "\n", "Great catch on identifying this as a concurrency issue! I've found **multiple critical race conditions** that explain the inconsistent results.\n", "\n", "### πŸ”΄ **Critical Issues**\n", "\n", "#### **1. Race Condition in `self.results` (Primary Bug)**\n", "**Location:** Line in `scrape_urls()` method\n", "```python\n", "self.results.append(result) # RACE CONDITION\n", "```\n", "\n", "**Problem:** \n", "- Python's `list.append()` is **NOT thread-safe** for concurrent modifications\n", "- Multiple threads simultaneously appending can cause:\n", " - **Lost updates:** One thread's append overwrites another's\n", " - **Inconsistent state:** Internal list structure can be corrupted\n", " - **Unpredictable counts:** Results randomly dropped\n", "\n", "**Why it happens:**\n", "`list.append()` involves multiple operations:\n", "1. Read current list size\n", "2. Allocate space\n", "3. Insert item\n", "4. Update size\n", "\n", "Between these steps, another thread can interfere, causing data loss.\n", "\n", "#### **2. 
Race Condition in `self.failed_urls`**\n", "**Location:** Same method\n", "```python\n", "self.failed_urls.append(result[\"url\"]) # RACE CONDITION\n", "```\n", "\n", "**Problem:** Identical issue - concurrent appends without synchronization.\n", "\n", "#### **3. Shared State Without Protection**\n", "**Location:** Class initialization\n", "```python\n", "self.results = [] # BUG: Shared mutable state accessed by multiple threads!\n", "self.failed_urls = [] # BUG: Another race condition!\n", "```\n", "\n", "**Problem:** Instance variables shared across threads without any locking mechanism.\n", "\n", "---\n", "\n", "### βœ… **Solutions**\n", "\n", "Here are three approaches to fix this, from simplest to most robust:\n", "\n", "#### **Solution 1: Use Thread-Safe Queue (Recommended)**\n", "\n", "```python\n", "import time\n", "from concurrent.futures import ThreadPoolExecutor, as_completed\n", "from typing import List, Dict, Any\n", "from queue import Queue\n", "import requests\n", "\n", "\n", "class WebScraper:\n", " \"\"\"Web scraper that fetches multiple URLs concurrently.\"\"\"\n", "\n", " def __init__(self, max_workers: int = 10):\n", " self.max_workers = max_workers\n", " # Use thread-safe Queue instead of lists\n", " self.results_queue = Queue()\n", " self.failed_queue = Queue()\n", "\n", " def fetch_url(self, url: str) -> Dict[str, Any]:\n", " \"\"\"Fetch a single URL and return the result.\"\"\"\n", " try:\n", " response = requests.get(url, timeout=5)\n", " response.raise_for_status()\n", " return {\n", " \"url\": url,\n", " \"status\": response.status_code,\n", " \"content_length\": len(response.content),\n", " }\n", " except requests.exceptions.RequestException as e:\n", " return {\"url\": url, \"error\": str(e)}\n", "\n", " def scrape_urls(self, urls: List[str]) -> List[Dict[str, Any]]:\n", " \"\"\"Scrape multiple URLs concurrently - FIXED with Queue.\"\"\"\n", " with ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n", " futures = 
[executor.submit(self.fetch_url, url) for url in urls]\n", "\n", " for future in as_completed(futures):\n", " result = future.result()\n", " \n", " # Thread-safe operations\n", " if \"error\" in result:\n", " self.failed_queue.put(result[\"url\"])\n", " else:\n", " self.results_queue.put(result)\n", "\n", " # Convert queues to lists after all threads complete\n", " results = []\n", " while not self.results_queue.empty():\n", " results.append(self.results_queue.get())\n", " \n", " return results\n", "\n", " def get_stats(self) -> Dict[str, int]:\n", " \"\"\"Get scraping statistics.\"\"\"\n", " results_count = self.results_queue.qsize()\n", " failed_count = self.failed_queue.qsize()\n", " \n", " return {\n", " \"total_results\": results_count,\n", " \"failed_urls\": failed_count,\n", " \"success_rate\": (\n", " results_count / (results_count + failed_count)\n", " if (results_count + failed_count) > 0\n", " else 0\n", " ),\n", " }\n", "```\n", "\n", "#### **Solution 2: Use Threading Lock**\n", "\n", "```python\n", "import threading\n", "from concurrent.futures import ThreadPoolExecutor, as_completed\n", "from typing import List, Dict, Any\n", "import requests\n", "\n", "\n", "class WebScraper:\n", " \"\"\"Web scraper that fetches multiple URLs concurrently.\"\"\"\n", "\n", " def __init__(self, max_workers: int = 10):\n", " self.max_workers = max_workers\n", " self.results = []\n", " self.failed_urls = []\n", " self.lock = threading.Lock() # Add lock for synchronization\n", "\n", " def fetch_url(self, url: str) -> Dict[str, Any]:\n", " \"\"\"Fetch a single URL and return the result.\"\"\"\n", " try:\n", " response = requests.get(url, timeout=5)\n", " response.raise_for_status()\n", " return {\n", " \"url\": url,\n", " \"status\": response.status_code,\n", " \"content_length\": len(response.content),\n", " }\n", " except requests.exceptions.RequestException as e:\n", " return {\"url\": url, \"error\": str(e)}\n", "\n", " def scrape_urls(self, urls: List[str]) -> 
List[Dict[str, Any]]:\n", " \"\"\"Scrape multiple URLs concurrently - FIXED with lock.\"\"\"\n", " with ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n", " futures = [executor.submit(self.fetch_url, url) for url in urls]\n", "\n", " for future in as_completed(futures):\n", " result = future.result()\n", " \n", " # Protected by lock - only one thread at a time\n", " with self.lock:\n", " if \"error\" in result:\n", " self.failed_urls.append(result[\"url\"])\n", " else:\n", " self.results.append(result)\n", "\n", " return self.results\n", "\n", " def get_stats(self) -> Dict[str, int]:\n", " \"\"\"Get scraping statistics.\"\"\"\n", " with self.lock: # Protect reads too\n", " return {\n", " \"total_results\": len(self.results),\n", " \"failed_urls\": len(self.failed_urls),\n", " \"success_rate\": (\n", " len(self.results) / (len(self.results) + len(self.failed_urls))\n", " if (len(self.results) + len(self.failed_urls)) > 0\n", " else 0\n", " ),\n", " }\n", "```\n", "\n", "#### **Solution 3: Let ThreadPoolExecutor Handle It (Best)**\n", "\n", "```python\n", "from concurrent.futures import ThreadPoolExecutor, as_completed\n", "from typing import List, Dict, Any\n", "import requests\n", "\n", "\n", "class WebScraper:\n", " \"\"\"Web scraper that fetches multiple URLs concurrently.\"\"\"\n", "\n", " def __init__(self, max_workers: int = 10):\n", " self.max_workers = max_workers\n", "\n", " def fetch_url(self, url: str) -> Dict[str, Any]:\n", " \"\"\"Fetch a single URL and return the result.\"\"\"\n", " try:\n", " response = requests.get(url, timeout=5)\n", " response.raise_for_status()\n", " return {\n", " \"url\": url,\n", " \"status\": response.status_code,\n", " \"content_length\": len(response.content),\n", " }\n", " except requests.exceptions.RequestException as e:\n", " return {\"url\": url, \"error\": str(e)}\n", "\n", " def scrape_urls(self, urls: List[str]) -> List[Dict[str, Any]]:\n", " \"\"\"\n", " Scrape multiple URLs concurrently - FIXED by 
avoiding shared state.\n", " Collect results after futures complete (no concurrent modifications).\n", " \"\"\"\n", " results = []\n", " failed_urls = []\n", " \n", " with ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n", " futures = [executor.submit(self.fetch_url, url) for url in urls]\n", "\n", " # Single-threaded collection after async work completes\n", " for future in as_completed(futures):\n", " result = future.result()\n", " \n", " if \"error\" in result:\n", " failed_urls.append(result[\"url\"])\n", " else:\n", " results.append(result)\n", "\n", " self.results = results # Store after all work is done\n", " self.failed_urls = failed_urls\n", " return results\n", "\n", " def get_stats(self) -> Dict[str, int]:\n", " \"\"\"Get scraping statistics.\"\"\"\n", " return {\n", " \"total_\n", "\n", "\n", "============================================================\n", "βœ… Session 1 complete!\n", "============================================================\n" ] } ], "source": [ "# Import helper functions\n", "from memory_demo.demo_helpers import run_conversation_loop, run_conversation_turn, print_context_management_info\n", "from memory_tool import MemoryToolHandler\n", "\n", "# Initialize\n", "client = Anthropic(api_key=API_KEY)\n", "memory = MemoryToolHandler(base_path=\"./demo_memory\")\n", "\n", "# Clear any existing memories to start fresh\n", "print(\"🧹 Clearing previous memories...\")\n", "memory.clear_all_memory()\n", "print(\"βœ“ Memory cleared\\n\")\n", "\n", "# Load example code with a race condition bug\n", "with open(\"memory_demo/sample_code/web_scraper_v1.py\", \"r\") as f:\n", " code_to_review = f.read()\n", "\n", "messages = [\n", " {\n", " \"role\": \"user\",\n", " \"content\": f\"I'm reviewing a multi-threaded web scraper that sometimes returns fewer results than expected. The count is inconsistent across runs. 
Can you find the issue?\\n\\n```python\\n{code_to_review}\\n```\"\n", " }\n", "]\n", "\n", "print(\"=\" * 60)\n", "print(\"πŸ“ SESSION 1: Learning from a bug\")\n", "print(\"=\" * 60)\n", "\n", "# Run conversation loop\n", "response = run_conversation_loop(\n", " client=client,\n", " model=MODEL,\n", " messages=messages,\n", " memory_handler=memory,\n", " system=\"You are a code reviewer.\",\n", " max_tokens=2048,\n", " max_turns=5,\n", " verbose=True\n", ")\n", "\n", "print(\"\\n\" + \"=\" * 60)\n", "print(\"βœ… Session 1 complete!\")\n", "print(\"=\" * 60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**What happened?**\n", "\n", "1. Claude checked its memory (empty on first run)\n", "2. Identified the bug: **race condition** - multiple threads modifying shared state (`self.results` and `self.failed_urls`) without synchronization\n", "3. Stored the concurrency pattern in memory for future reference\n", "\n", "Now let's see the magic - Claude applying this learned pattern in a **new conversation**:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example 2: Cross-Conversation Learning\n", "\n", "Start a completely new conversation - memory persists!" 
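, "\n", "The key point: the only state shared between the two sessions is on disk. Session 2 starts with a fresh `messages` list (a brand-new context window), but it reads the same `/memories` directory. A toy illustration of that separation (assumed paths, not the notebook's helpers):\n", "\n", "```python\n", "from pathlib import Path\n", "\n", "memories = Path('./_toy_memory') / 'memories'\n", "memories.mkdir(parents=True, exist_ok=True)\n", "\n", "# Session 1: the agent writes down what it learned, then the chat ends\n", "(memories / 'patterns.md').write_text('shared mutable state + threads -> race condition')\n", "session_1_messages = []  # conversation context is discarded at session end\n", "\n", "# Session 2: a completely new conversation...\n", "session_2_messages = []  # fresh, empty context window\n", "# ...but the notes persist and can be read back immediately\n", "recalled = (memories / 'patterns.md').read_text()\n", "```\n"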
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "============================================================\n", "πŸš€ SESSION 2: Applying learned pattern\n", "============================================================\n", "\n", "πŸ”„ Turn 1:\n", " πŸ”§ Memory tool: view /memories\n", " βœ“ Result: Directory: /memories\n", "- review_progress.md\n", "\n", "πŸ”„ Turn 2:\n", " πŸ”§ Memory tool: view /memories/review_progress.md\n", " βœ“ Result: 1: # Code Review: Multi-threaded Web Scraper\n", " 2: \n", " 3: ## Task\n", " 4: Revie...\n", "\n", "πŸ”„ Turn 3:\n", " πŸ”§ Memory tool: str_replace /memories/review_progress.md\n", " βœ“ Result: File /memories/review_progress.md has been edited successfully\n", "\n", "πŸ”„ Turn 4:\n", "πŸ’¬ Claude: \n", "\n", "# Code Review: Async API Client\n", "\n", "## πŸ”΄ Critical Issues\n", "\n", "### 1. **Race Condition on Shared State (High Severity)**\n", "**Location:** `fetch_all()` method, lines with `self.responses.append()` and `self.error_count += 1`\n", "\n", "**Problem:** \n", "Multiple coroutines concurrently modify shared instance variables without synchronization. 
While the comments correctly identify this as an issue, there's an important nuance: Python's asyncio is **single-threaded**, so the GIL isn't the protective factor hereβ€”the real issue is that coroutines can be suspended mid-operation during `await` points.\n", "\n", "However, the actual race condition risk here is **somewhat overstated** because:\n", "- List `.append()` is atomic in CPython\n", "- The `+=` operation on integers is also atomic\n", "- Coroutines only switch at `await` points, and there are none between the operations\n", "\n", "**BUT** this is still problematic because:\n", "- It relies on CPython implementation details\n", "- It's not guaranteed by the language specification\n", "- The code is not portable to other Python implementations\n", "- Future refactoring could introduce `await` points that cause real races\n", "\n", "**Fix:** Use proper async coordination or refactor to avoid shared state:\n", "\n", "```python\n", "async def fetch_all(self, endpoints: List[str]) -> List[Dict[str, Any]]:\n", " \"\"\"Fetch multiple endpoints concurrently.\"\"\"\n", " async with aiohttp.ClientSession() as session:\n", " tasks = [self.fetch_endpoint(session, endpoint) for endpoint in endpoints]\n", " \n", " # Collect all results at once - no shared state modification\n", " results = await asyncio.gather(*tasks, return_exceptions=False)\n", " \n", " # Process results in single-threaded manner after all are collected\n", " responses = []\n", " error_count = 0\n", " \n", " for result in results:\n", " if \"error\" in result:\n", " error_count += 1\n", " else:\n", " responses.append(result)\n", " \n", " # Update instance state only once\n", " self.responses = responses\n", " self.error_count = error_count\n", " \n", " return responses\n", "```\n", "\n", "### 2. 
**Stateful Design Anti-Pattern (Medium-High Severity)**\n", "**Location:** Instance variables `self.responses` and `self.error_count`\n", "\n", "**Problem:**\n", "The client stores results as instance variables, which means:\n", "- **Not reusable:** Calling `fetch_all()` multiple times accumulates results\n", "- **Not thread-safe:** If someone wraps this in a thread pool, real race conditions occur\n", "- **Confusing API:** Results are both returned AND stored in instance\n", "- **Memory leak potential:** Old responses never cleared\n", "\n", "**Fix:** Remove stateful design entirely:\n", "\n", "```python\n", "async def fetch_all(self, endpoints: List[str]) -> Dict[str, Any]:\n", " \"\"\"Fetch multiple endpoints concurrently.\"\"\"\n", " async with aiohttp.ClientSession() as session:\n", " tasks = [self.fetch_endpoint(session, endpoint) for endpoint in endpoints]\n", " results = await asyncio.gather(*tasks, return_exceptions=False)\n", " \n", " responses = []\n", " errors = []\n", " \n", " for result in results:\n", " if \"error\" in result:\n", " errors.append(result)\n", " else:\n", " responses.append(result)\n", " \n", " # Return everything as a structured result\n", " return {\n", " \"responses\": responses,\n", " \"errors\": errors,\n", " \"summary\": {\n", " \"total_responses\": len(responses),\n", " \"error_count\": len(errors),\n", " \"success_rate\": (\n", " len(responses) / len(results) if results else 0\n", " )\n", " }\n", " }\n", "```\n", "\n", "## ⚠️ Medium Issues\n", "\n", "### 3. **Error Handling Loses Information**\n", "**Location:** `fetch_endpoint()` exception handler\n", "\n", "**Problem:**\n", "All errors are caught and returned as dictionaries with just the error string. 
This loses:\n", "- Exception type information\n", "- Stack traces (useful for debugging)\n", "- HTTP status codes for failed requests\n", "\n", "**Fix:**\n", "```python\n", "async def fetch_endpoint(\n", " self, session: aiohttp.ClientSession, endpoint: str\n", ") -> Dict[str, Any]:\n", " \"\"\"Fetch a single endpoint.\"\"\"\n", " url = f\"{self.base_url}/{endpoint}\"\n", " try:\n", " async with session.get(\n", " url, timeout=aiohttp.ClientTimeout(total=5)\n", " ) as response:\n", " data = await response.json()\n", " return {\n", " \"endpoint\": endpoint,\n", " \"status\": response.status,\n", " \"data\": data,\n", " \"success\": True,\n", " }\n", " except aiohttp.ClientError as e:\n", " return {\n", " \"endpoint\": endpoint,\n", " \"error\": str(e),\n", " \"error_type\": type(e).__name__,\n", " \"success\": False,\n", " }\n", " except Exception as e:\n", " # Log unexpected errors\n", " return {\n", " \"endpoint\": endpoint,\n", " \"error\": str(e),\n", " \"error_type\": type(e).__name__,\n", " \"success\": False,\n", " }\n", "```\n", "\n", "### 4. **Using `asyncio.as_completed()` Unnecessarily**\n", "**Location:** `fetch_all()` method\n", "\n", "**Problem:**\n", "The code uses `asyncio.as_completed()` but doesn't benefit from processing results as they arrive. It still waits for all results and returns them all at once. This adds complexity without benefit.\n", "\n", "**Fix:** Use `asyncio.gather()` which is simpler and more idiomatic for this use case.\n", "\n", "### 5. **Missing HTTP Status Code Validation**\n", "**Location:** `fetch_endpoint()` method\n", "\n", "**Problem:**\n", "A 404 or 500 status code will be treated as success if it returns valid JSON. 
The code only checks for exceptions, not HTTP error status codes.\n", "\n", "**Fix:**\n", "```python\n", "async with session.get(\n", " url, timeout=aiohttp.ClientTimeout(total=5)\n", ") as response:\n", " if response.status >= 400:\n", " return {\n", " \"endpoint\": endpoint,\n", " \"status\": response.status,\n", " \"error\": f\"HTTP {response.status}\",\n", " \"success\": False,\n", " }\n", " data = await response.json()\n", " return {\n", " \"endpoint\": endpoint,\n", " \"status\": response.status,\n", " \"data\": data,\n", " \"success\": True,\n", " }\n", "```\n", "\n", "## πŸ’‘ Minor Issues & Suggestions\n", "\n", "### 6. **Session Should Be Reusable**\n", "**Location:** `fetch_all()` creates new session each time\n", "\n", "**Suggestion:**\n", "For better performance, consider managing the session at the class level or allowing session reuse:\n", "\n", "```python\n", "class AsyncAPIClient:\n", " def __init__(self, base_url: str):\n", " self.base_url = base_url\n", " self._session: Optional[aiohttp.ClientSession] = None\n", " \n", " async def __aenter__(self):\n", " self._session = aiohttp.ClientSession()\n", " return self\n", " \n", " async def __aexit__(self, exc_type, exc_val, exc_tb):\n", " if self._session:\n", " await self._session.close()\n", " \n", " async def fetch_endpoint(self, endpoint: str) -> Dict[str, Any]:\n", " if not self._session:\n", " raise RuntimeError(\"Use client as async context manager\")\n", " # ... use self._session\n", "```\n", "\n", "### 7. **Type Hints Could Be More Specific**\n", "The return type `Dict[str, Any]` is too generic. Consider using TypedDict:\n", "\n", "```python\n", "from typing import TypedDict\n", "\n", "class SuccessResponse(TypedDict):\n", " endpoint: str\n", " status: int\n", " data: Any\n", " success: bool\n", "\n", "class ErrorResponse(TypedDict):\n", " endpoint: str\n", " error: str\n", " error_type: str\n", " success: bool\n", "```\n", "\n", "### 8. 
**Missing Docstring Details**\n", "Add information about return value structure, exceptions raised, and parameters.\n", "\n", "## πŸ“‹ Summary\n", "\n", "| Severity | Count | Issues |\n", "|----------|-------|---------|\n", "| πŸ”΄ Critical | 2 | Shared state race conditions, stateful design anti-pattern |\n", "| ⚠️ Medium | 3 | Error handling, unnecessary complexity, missing status validation |\n", "| πŸ’‘ Minor | 3 | Session management, type hints, documentation |\n", "\n", "## Recommended Refactored Version\n", "\n", "```python\n", "\"\"\"\n", "Async API client with proper async patterns.\n", "\"\"\"\n", "\n", "import asyncio\n", "from typing import List, Dict, Any, TypedDict\n", "\n", "import aiohttp\n", "\n", "\n", "class ResponseResult(TypedDict):\n", " endpoint: str\n", " status: int\n", " data:\n", "\n", "\n", "============================================================\n", "βœ… Session 2 complete!\n", "============================================================\n" ] } ], "source": [ "# NEW conversation (empty messages)\n", "# Load API client code with similar concurrency issue\n", "with open(\"memory_demo/sample_code/api_client_v1.py\", \"r\") as f:\n", " code_to_review = f.read()\n", "\n", "messages = [\n", " {\n", " \"role\": \"user\",\n", " \"content\": f\"Review this API client code:\\n\\n```python\\n{code_to_review}\\n```\"\n", " }\n", "]\n", "\n", "print(\"=\" * 60)\n", "print(\"πŸš€ SESSION 2: Applying learned pattern\")\n", "print(\"=\" * 60)\n", "\n", "# Run conversation loop\n", "response = run_conversation_loop(\n", " client=client,\n", " model=MODEL,\n", " messages=messages,\n", " memory_handler=memory,\n", " system=\"You are a code reviewer.\",\n", " max_tokens=2048,\n", " max_turns=5,\n", " verbose=True\n", ")\n", "\n", "print(\"\\n\" + \"=\" * 60)\n", "print(\"βœ… Session 2 complete!\")\n", "print(\"=\" * 60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Notice the difference:**\n", "\n", "- Claude **immediately checked 
memory** and found the thread-safety/concurrency pattern\n", "- Recognized the similar issue in async code **instantly** without re-learning\n", "- Response was **faster** because it applied stored knowledge about shared mutable state\n", "\n", "This is **cross-conversation learning** in action!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example 3: Context Clearing While Preserving Memory\n", "\n", "What happens during a **long review session** with many code files?\n", "\n", "- Context fills up with tool results from previous reviews\n", "- But memory (learned patterns) must persist!\n", "\n", "Let's trigger **context editing** to see how Claude manages this automatically.\n", "\n", "**Note on configuration:** We use `clear_at_least: 50` tokens because memory tool operations have small results (~50-150 tokens each). In production with larger tool results (like web search or code execution), you'd use higher values like 3000-5000 tokens." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "============================================================\n", "πŸ“š SESSION 3: Long review session with context clearing\n", "============================================================\n", "\n", "πŸ“ Review 1: Data processor\n", " πŸ”§ Memory tool: str_replace /memories/review_progress.md\n", " βœ“ Result: File /memories/review_progress.md has been edited successfully\n", " πŸ“Š Input tokens: 6,243\n", " βœ‚οΈ Context editing triggered!\n", " β€’ Cleared 1 tool uses\n", " β€’ Saved 66 tokens\n", " β€’ After clearing: 6,243 tokens\n", "\n", "πŸ“ Review 2: SQL query builder\n", " πŸ”§ Memory tool: str_replace /memories/review_progress.md\n", " βœ“ Result: File /memories/review_progress.md has been edited successfully\n", " πŸ“Š Input tokens: 7,471\n", " βœ‚οΈ Context editing triggered!\n", " β€’ Cleared 1 tool uses\n", " β€’ Saved 66 tokens\n", " β€’ After clearing: 
7,471 tokens\n", "\n", "============================================================\n", "βœ… Session 3 complete!\n", "============================================================\n" ] } ], "source": [ "# Configure context management to clear aggressively for demo\n", "CONTEXT_MANAGEMENT = {\n", " \"edits\": [\n", " {\n", " \"type\": \"clear_tool_uses_20250919\",\n", " \"trigger\": {\"type\": \"input_tokens\", \"value\": 5000}, # Lower threshold to trigger clearing sooner\n", " \"keep\": {\"type\": \"tool_uses\", \"value\": 1}, # Keep only the last tool use\n", " \"clear_at_least\": {\"type\": \"input_tokens\", \"value\": 50}\n", " }\n", " ]\n", "}\n", "\n", "# Continue from previous session - memory persists!\n", "# Add multiple code reviews to build up context\n", "\n", "print(\"=\" * 60)\n", "print(\"πŸ“š SESSION 3: Long review session with context clearing\")\n", "print(\"=\" * 60)\n", "print()\n", "\n", "# Review 1: Data processor (larger file)\n", "with open(\"memory_demo/sample_code/data_processor_v1.py\", \"r\") as f:\n", " data_processor_code = f.read()\n", "\n", "messages.extend([\n", " {\n", " \"role\": \"user\",\n", " \"content\": f\"Review this data processor:\\n\\n```python\\n{data_processor_code}\\n```\"\n", " }\n", "])\n", "\n", "print(\"πŸ“ Review 1: Data processor\")\n", "response = run_conversation_turn(\n", " client=client,\n", " model=MODEL,\n", " messages=messages,\n", " memory_handler=memory,\n", " system=\"You are a code reviewer.\",\n", " context_management=CONTEXT_MANAGEMENT,\n", " max_tokens=2048,\n", " verbose=True\n", ")\n", "\n", "# Add response to messages\n", "messages.append({\"role\": \"assistant\", \"content\": response[1]})\n", "if response[2]:\n", " messages.append({\"role\": \"user\", \"content\": response[2]})\n", "\n", "print(f\" πŸ“Š Input tokens: {response[0].usage.input_tokens:,}\")\n", "context_cleared, saved = print_context_management_info(response[0])\n", "print()\n", "\n", "# Review 2: Add SQL code\n", "with 
open(\"memory_demo/sample_code/sql_query_builder.py\", \"r\") as f:\n", " sql_code = f.read()\n", "\n", "messages.extend([\n", " {\n", " \"role\": \"user\",\n", " \"content\": f\"Review this SQL query builder:\\n\\n```python\\n{sql_code}\\n```\"\n", " }\n", "])\n", "\n", "print(\"πŸ“ Review 2: SQL query builder\")\n", "response = run_conversation_turn(\n", " client=client,\n", " model=MODEL,\n", " messages=messages,\n", " memory_handler=memory,\n", " system=\"You are a code reviewer.\",\n", " context_management=CONTEXT_MANAGEMENT,\n", " max_tokens=2048,\n", " verbose=True\n", ")\n", "\n", "messages.append({\"role\": \"assistant\", \"content\": response[1]})\n", "if response[2]:\n", " messages.append({\"role\": \"user\", \"content\": response[2]})\n", "\n", "print(f\" πŸ“Š Input tokens: {response[0].usage.input_tokens:,}\")\n", "context_cleared, saved = print_context_management_info(response[0])\n", "print()\n", "\n", "print(\"=\" * 60)\n", "print(\"βœ… Session 3 complete!\")\n", "print(\"=\" * 60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**What just happened?**\n", "\n", "As context grew during multiple reviews:\n", "1. **Context clearing triggered automatically** when input tokens exceeded 5,000\n", "2. **Old tool results were removed** - cleared 2 tool uses, saving ~66 tokens each time\n", "3. **Memory files remained intact** - Claude can still query learned patterns\n", "4. **Token usage continued to grow** but at a slower rate due to clearing\n", "\n", "This demonstrates the key benefit:\n", "- **Short-term memory** (conversation context with tool results) β†’ Cleared to save space\n", "- **Long-term memory** (stored patterns in `/memories`) β†’ Persists across sessions\n", "\n", "**Why such small token savings?** Memory tool operations return compact results (file paths, success messages). The `str_replace` operations only return \"File edited successfully\" plus metadata. 
In production use cases with larger tool results (web searches returning full articles, code execution with long outputs), context clearing would save thousands of tokens.\n", "\n", "Let's verify memory survived the clearing:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "πŸ“‚ Memory files in demo_memory/:\n", "\n", "demo_memory/\n", " memories/\n", " β”œβ”€β”€ review_progress.md (257 bytes)\n", "\n", "βœ… All learned patterns preserved despite context clearing!\n" ] } ], "source": [ "# Verify memory persists after context clearing\n", "import os\n", "\n", "print(\"πŸ“‚ Memory files in demo_memory/:\")\n", "print()\n", "\n", "for root, dirs, files in os.walk(\"./demo_memory\"):\n", " # Calculate relative path for display\n", " level = root.replace(\"./demo_memory\", \"\").count(os.sep)\n", " indent = \" \" * level\n", " folder_name = os.path.basename(root) or \"demo_memory\"\n", " print(f\"{indent}{folder_name}/\")\n", " \n", " sub_indent = \" \" * (level + 1)\n", " for file in files:\n", " file_path = os.path.join(root, file)\n", " size = os.path.getsize(file_path)\n", " print(f\"{sub_indent}β”œβ”€β”€ {file} ({size} bytes)\")\n", "\n", "print()\n", "print(\"βœ… All learned patterns preserved despite context clearing!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. How It Works {#how-it-works}\n", "\n", "### Memory Tool Architecture\n", "\n", "The memory tool is **client-side** - you control the storage. 
Claude makes tool calls, your application executes them.\n", "\n", "#### Memory Tool Commands\n", "\n", "| Command | Description | Example |\n", "|---------|-------------|---------|\n", "| `view` | Show directory or file contents | `{\"command\": \"view\", \"path\": \"/memories\"}` |\n", "| `create` | Create or overwrite a file | `{\"command\": \"create\", \"path\": \"/memories/notes.md\", \"file_text\": \"...\"}` |\n", "| `str_replace` | Replace text in a file | `{\"command\": \"str_replace\", \"path\": \"...\", \"old_str\": \"...\", \"new_str\": \"...\"}` |\n", "| `insert` | Insert text at line number | `{\"command\": \"insert\", \"path\": \"...\", \"insert_line\": 2, \"insert_text\": \"...\"}` |\n", "| `delete` | Delete a file or directory | `{\"command\": \"delete\", \"path\": \"/memories/old.txt\"}` |\n", "| `rename` | Rename or move a file | `{\"command\": \"rename\", \"old_path\": \"...\", \"new_path\": \"...\"}` |\n", "\n", "See `memory_tool.py` for the complete implementation with path validation and security measures." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Understanding the Demo Code\n", "\n", "Key implementation details from `code_review_demo.py`:\n", "\n", "```python\n", "class CodeReviewAssistant:\n", " def __init__(self, memory_storage_path=\"./memory_storage\"):\n", " self.client = Anthropic(api_key=API_KEY)\n", " self.memory_handler = MemoryToolHandler(base_path=memory_storage_path)\n", " self.messages = []\n", " \n", " def review_code(self, code, filename, description=\"\"):\n", " # 1. Add user message\n", " self.messages.append({...})\n", " \n", " # 2. 
Conversation loop with tool execution\n", " while True:\n", " response = self.client.beta.messages.create(\n", " model=MODEL,\n", " system=self._create_system_prompt(),\n", " messages=self.messages,\n", " tools=[{\"type\": \"memory_20250818\", \"name\": \"memory\"}],\n", " betas=[\"context-management-2025-06-27\"],\n", " context_management=CONTEXT_MANAGEMENT\n", " )\n", " \n", " # 3. Execute tool uses\n", " tool_results = []\n", " for content in response.content:\n", " if content.type == \"tool_use\":\n", " result = self._execute_tool_use(content)\n", " tool_results.append({...})\n", " \n", " # 4. Continue if there are tool uses, otherwise done\n", " if tool_results:\n", " self.messages.append({\"role\": \"user\", \"content\": tool_results})\n", " else:\n", " break\n", "```\n", "\n", "**The key pattern**: Keep calling the API while there are tool uses, executing them and feeding results back." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What Claude Actually Learns\n", "\n", "This is what makes memory powerful - **semantic pattern recognition**, not just syntax:\n", "\n", "**Session 1: Thread-Based Web Scraper**\n", "\n", "```python\n", "# Bug: Race condition\n", "class WebScraper:\n", " def __init__(self):\n", " self.results = [] # Shared state!\n", " \n", " def scrape_urls(self, urls):\n", " with ThreadPoolExecutor() as executor:\n", " for future in as_completed(futures):\n", " self.results.append(future.result()) # RACE!\n", "```\n", "\n", "**What Claude Stores in Memory** (example file: `/memories/concurrency_patterns/thread_safety.md`):\n", "\n", "When Claude encounters this pattern, it stores the following insights to its memory files:\n", "- **Symptom**: Inconsistent results in concurrent operations\n", "- **Cause**: Shared mutable state (lists/dicts) modified from multiple threads\n", "- **Solution**: Use locks, thread-safe data structures, or return results instead\n", "- **Red flags**: Instance variables in thread callbacks, unused 
locks, counter increments\n", "\n", "---\n", "\n", "**Session 2: Async API Client** (New conversation!)\n", "\n", "Claude checks memory FIRST, finds the thread-safety pattern, then:\n", "1. **Recognizes** similar pattern in async code (coroutines can interleave too)\n", "2. **Applies** the solution immediately (no re-learning needed)\n", "3. **Explains** with reference to stored knowledge\n", "\n", "```python\n", "# Claude spots this immediately:\n", "async def fetch_all(self, endpoints):\n", " for coro in asyncio.as_completed(tasks):\n", " self.responses.append(await coro) # Same pattern!\n", "```\n", "\n", "---\n", "\n", "**Why This Matters:**\n", "\n", "- ❌ **Syntax checkers** miss race conditions entirely\n", "- βœ… **Claude learns** architectural patterns and applies them across contexts\n", "- βœ… **Cross-language**: Pattern applies to Go, Java, Rust concurrency too\n", "- βœ… **Gets better**: Each review adds to the knowledge base" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sample Code Files\n", "\n", "The demo uses these sample files (all have concurrency/thread-safety bugs):\n", "\n", "- `memory_demo/sample_code/web_scraper_v1.py` - Race condition: threads modifying shared state\n", "- `memory_demo/sample_code/api_client_v1.py` - Similar concurrency bug in async context\n", "- `memory_demo/sample_code/data_processor_v1.py` - Multiple concurrency issues for long session demo\n", "\n", "Let's look at one:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**`memory_demo/sample_code/web_scraper_v1.py`**\n", "\n", "```python\n", "\"\"\"\n", "Concurrent web scraper with a race condition bug.\n", "Multiple threads modify shared state without synchronization.\n", "\"\"\"\n", "\n", "import time\n", "from concurrent.futures import ThreadPoolExecutor, as_completed\n", "from typing import List, Dict\n", "\n", "import requests\n", "\n", "\n", "class WebScraper:\n", " \"\"\"Web scraper that fetches multiple URLs concurrently.\"\"\"\n", 
"\n", " def __init__(self, max_workers: int = 10):\n", " self.max_workers = max_workers\n", " self.results = [] # BUG: Shared mutable state accessed by multiple threads!\n", " self.failed_urls = [] # BUG: Another race condition!\n", "\n", " def fetch_url(self, url: str) -> Dict[str, any]:\n", " \"\"\"Fetch a single URL and return the result.\"\"\"\n", " try:\n", " response = requests.get(url, timeout=5)\n", " response.raise_for_status()\n", " return {\n", " \"url\": url,\n", " \"status\": response.status_code,\n", " \"content_length\": len(response.content),\n", " }\n", " except requests.exceptions.RequestException as e:\n", " return {\"url\": url, \"error\": str(e)}\n", "\n", " def scrape_urls(self, urls: List[str]) -> List[Dict[str, any]]:\n", " \"\"\"\n", " Scrape multiple URLs concurrently.\n", "\n", " BUG: self.results is accessed from multiple threads without locking!\n", " This causes race conditions where results can be lost or corrupted.\n", " \"\"\"\n", " with ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n", " futures = [executor.submit(self.fetch_url, url) for url in urls]\n", "\n", " for future in as_completed(futures):\n", " result = future.result()\n", "\n", " # RACE CONDITION: Multiple threads append to self.results simultaneously\n", " if \"error\" in result:\n", " self.failed_urls.append(result[\"url\"]) # RACE CONDITION\n", " else:\n", " self.results.append(result) # RACE CONDITION\n", "\n", " return self.results\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Bug**: Multiple threads modify `self.results` and `self.failed_urls` without locking!\n", "\n", "Claude will:\n", "1. Identify the race conditions\n", "2. Store the pattern in `/memories/concurrency_patterns/thread_safety.md`\n", "3. Apply this concurrency pattern to async code in Session 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Demo Overview\n", "\n", "We've built a complete Code Review Assistant. 
The implementation is in `memory_demo/code_review_demo.py`.\n", "\n", "**To run the interactive demo:**\n", "```bash\n", "python memory_demo/code_review_demo.py\n", "```\n", "\n", "The demo walks through three sessions:\n", "1. **Session 1**: Review Python code with a bug β†’ Claude learns the pattern\n", "2. **Session 2**: Review similar code (new conversation) β†’ Claude applies the pattern\n", "3. **Session 3**: Long review session β†’ Context editing keeps it manageable" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 7. Best Practices & Security {#best-practices}\n", "\n", "### Memory Management\n", "\n", "**Do:**\n", "- βœ… Store task-relevant patterns, not conversation history\n", "- βœ… Organize with clear directory structure\n", "- βœ… Use descriptive file names\n", "- βœ… Periodically review and clean up memory\n", "\n", "**Don't:**\n", "- ❌ Store sensitive information (passwords, API keys, PII)\n", "- ❌ Let memory grow unbounded\n", "- ❌ Store everything indiscriminately\n", "\n", "### Security: Path Traversal Protection\n", "\n", "**Critical**: Always validate paths to prevent directory traversal attacks. See `memory_tool.py` for implementation.\n", "\n", "### Security: Memory Poisoning\n", "\n", "**⚠️ Critical Risk**: Memory files are read back into Claude's context, making them a potential vector for prompt injection.\n", "\n", "**Mitigation strategies:**\n", "1. **Content Sanitization**: Filter dangerous patterns before storing\n", "2. **Memory Scope Isolation**: Per-user/per-project isolation\n", "3. **Memory Auditing**: Log and scan all memory operations\n", "4. **Prompt Engineering**: Instruct Claude to ignore instructions in memory\n", "\n", "See `memory_tool.py` for complete security implementation and tests in `tests/`."
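, "\n", "As a minimal sketch of the path-validation idea (a hypothetical `resolve_memory_path` helper for illustration, not the actual `memory_tool.py` implementation), anchor every requested path under the storage root and reject anything that resolves outside it:\n", "\n", "```python\n", "from pathlib import Path\n", "\n", "def resolve_memory_path(base_dir: str, requested: str) -> Path:\n", "    \"\"\"Resolve a /memories path, rejecting directory traversal.\"\"\"\n", "    if requested != \"/memories\" and not requested.startswith(\"/memories/\"):\n", "        raise ValueError(\"Path must start with /memories\")\n", "    # Map the virtual /memories prefix onto the real storage directory\n", "    relative = requested[len(\"/memories\"):].lstrip(\"/\")\n", "    full = (Path(base_dir) / \"memories\" / relative).resolve()\n", "    # Reject paths that escape the base directory (e.g. /memories/../../etc/passwd)\n", "    if not full.is_relative_to(Path(base_dir).resolve()):\n", "        raise ValueError(f\"Path escapes memory directory: {requested}\")\n", "    return full\n", "```"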
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Next Steps\n", "\n", "### Resources\n", "\n", "- **API docs**: [Claude API reference](https://docs.claude.com/en/api/messages)\n", "- **Usage docs**: [Memory tool](https://docs.claude.com/en/docs/agents-and-tools/tool-use/memory-tool)\n", "- **GitHub Action**: [claude-code-action](https://github.com/anthropics/claude-code-action)\n", "- **Support**: [support.claude.com](https://support.claude.com)\n", "\n", "### Feedback\n", "\n", "Memory and context management are in **beta**. Share your feedback to help us improve!" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.13" } }, "nbformat": 4, "nbformat_minor": 4 }