working docs
@@ -1,7 +1,7 @@
 <picture>
-<source media="(prefers-color-scheme: dark)" srcset="docs/_static/ell-wide-dark.png">
-<source media="(prefers-color-scheme: light)" srcset="docs/_static/ell-wide-light.png">
-<img alt="ell logo that inverts based on color scheme" src="docs/_static/ell-wide.png">
+<source media="(prefers-color-scheme: dark)" srcset="docs/src/_static/ell-wide-dark.png">
+<source media="(prefers-color-scheme: light)" srcset="docs/src/_static/ell-wide-light.png">
+<img alt="ell logo that inverts based on color scheme" src="docs/src/_static/ell-wide.png">
 </picture>

 --------------------------------------------------------------------------------

@@ -5,7 +5,7 @@
 # from the environment for the first two.
 SPHINXOPTS ?=
 SPHINXBUILD ?= sphinx-build
-SOURCEDIR = .
+SOURCEDIR = src
 BUILDDIR = _build

 # Put it first so that "make" without argument is like "make help".

@@ -1,40 +0,0 @@
.. ell documentation master file, created by
   sphinx-quickstart on Thu Aug 29 13:45:32 2024.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

============
Introduction
============

Welcome to the documentation for ``ell``, a lightweight, functional prompt engineering framework.

Overview
--------

``ell`` is built on a few core principles:

1. **Prompts are programs, not strings**: In ``ell``, we think of using a language model as a discrete subroutine called a **language model program** (LMP).

2. **Prompts are parameters of a machine learning model**: ``ell`` treats prompts as learnable parameters.

3. **Every call to a language model is valuable**: ``ell`` emphasizes the importance of each language model interaction.

Key Features
------------

- Functional approach to prompt engineering
- Visualization and tracking of prompts using ``ell-studio``
- Support for various storage backends (SQLite, PostgreSQL)
- Integration with popular language models

For installation instructions, usage examples, and contribution guidelines, please refer to the project's GitHub repository.

Contents
--------

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   reference/index

@@ -7,7 +7,7 @@ REM Command file for Sphinx documentation
 if "%SPHINXBUILD%" == "" (
     set SPHINXBUILD=sphinx-build
 )
-set SOURCEDIR=.
+set SOURCEDIR=src
 set BUILDDIR=_build

 %SPHINXBUILD% >NUL 2>NUL

docs/src/.old/index.rst (new file, 54 lines)
@@ -0,0 +1,54 @@

Key Features
~~~~~~~~~~~~

- **Prompts as Programs**: ell treats prompts as encapsulated functions rather than simple strings, allowing for more complex and reusable prompt logic.
- **Automatic Versioning and Tracking**: Every change to your Language Model Programs (LMPs) is automatically versioned and tracked, enabling easy inspection and comparison of different iterations.
- **Multimodal Support**: Seamlessly handle text, images, and other modalities in both inputs and outputs.
- **Tool Integration**: Define and use tools within your language model interactions, expanding the capabilities of your LMPs.
- **Structured Outputs**: Generate and work with structured data from language models using Pydantic models.
- ``ell`` **Studio**: TensorBoard for prompt engineering. An open-source, built-in visualization tool for inspecting your prompts, versions, and invocations.

Design Philosophy
~~~~~~~~~~~~~~~~~

1. **Encapsulation**: Prompts are programs, not strings. ell forces implementers to encapsulate prompt generation in single functions or methods.
2. **High-Quality Tooling**: Prompt engineering deserves the same high-quality tooling and design patterns as machine learning engineering.
3. **Efficient Iteration**: Every call to a language model is valuable. ell provides tools for versioning, tracing, and analyzing prompt iterations.
4. **Real-World Use Cases**: ell is designed for complex, multi-step language model interactions, not just one-shot demos.

How to Use This Documentation
-----------------------------

This documentation is organized to help you get started quickly and then dive deeper into ell's features:

1. **Installation**: Begin by installing ell on your system.
2. **Getting Started**: Learn the basics of creating and running your first Language Model Program.
3. **Core Concepts**: Understand the fundamental ideas and components that make up ell, including the Message API and tool use.
4. **Advanced Features**: Explore more complex functionalities like multimodal inputs, versioning, and tracing.
5. **ell Studio**: Learn how to use the built-in visualization tool for analyzing your LMPs.
6. **Best Practices**: Discover tips and strategies for effectively using ell in your projects.
7. **API Reference**: Find detailed information about ell's functions and classes.
8. **Tutorials**: Walk through comprehensive examples of building real-world applications with ell.
9. **Troubleshooting**: Get help with common issues and learn debugging techniques.

Whether you're new to prompt engineering or an experienced practitioner, this documentation will guide you in leveraging ell to create sophisticated and efficient language model applications.

Table of Contents
-----------------

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   installation
   getting_started/index
   core_concepts/index
   advanced_features/index
   ell_studio/index
   best_practices/index
   api_reference/index
   tutorials/index
   troubleshooting
   glossary
   changelog

BIN docs/src/_static/compositionality.webp (new file, 141 KiB)
BIN modified image (132 KiB, size unchanged)
BIN modified image (183 KiB, size unchanged)
BIN docs/src/_static/ell_studio.webp (new file, 1.6 MiB)
BIN docs/src/_static/gif1.webp (new file, 479 KiB)
BIN docs/src/_static/invocations.webp (new file, 594 KiB)
BIN modified image (383 KiB, size unchanged)
BIN docs/src/_static/useitanywhere.webp (new file, 3.3 MiB)
BIN docs/src/_static/useitanywhere_compressed.webp (new file, 1.2 MiB)
BIN docs/src/_static/versions.webp (new file, 2.8 MiB)
BIN docs/src/_static/versions_small.webp (new file, 461 KiB)

docs/src/advanced_features/multimodal_inputs.rst (new file, 137 lines)
@@ -0,0 +1,137 @@

Multimodal Inputs in ell
========================

Introduction
------------

ell supports multimodal inputs, allowing Language Model Programs (LMPs) to work with various types of data beyond just text. This feature enables more complex and rich interactions with language models, particularly useful for tasks involving images, audio, or structured data.

Supported Input Types
---------------------

ell currently supports the following input types:

1. Text
2. Images
3. Structured Data (via Pydantic models)

Future versions may include support for additional modalities like audio or video.

Working with Multimodal Inputs
------------------------------

Text Inputs
^^^^^^^^^^^

Text inputs are the most basic form and are handled as strings:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def text_lmp(text_input: str) -> str:
       return f"Analyze this text: {text_input}"

Image Inputs
^^^^^^^^^^^^

To work with images, use the ``PIL.Image.Image`` type:

.. code-block:: python

   from PIL import Image

   @ell.simple(model="gpt-4-vision-preview")
   def image_analysis_lmp(image: Image.Image) -> str:
       return [
           ell.system("Analyze the given image and describe its contents."),
           ell.user([
               ell.ContentBlock(text="What do you see in this image?"),
               ell.ContentBlock(image=image)
           ])
       ]

   # Usage
   image = Image.open("example.jpg")
   description = image_analysis_lmp(image)
   print(description)

Structured Data Inputs
^^^^^^^^^^^^^^^^^^^^^^

For structured data, use Pydantic models:

.. code-block:: python

   from typing import List

   from pydantic import BaseModel

   class UserProfile(BaseModel):
       name: str
       age: int
       interests: List[str]

   @ell.simple(model="gpt-4")
   def profile_analysis_lmp(profile: UserProfile) -> str:
       return f"Analyze this user profile:\n{profile.model_dump_json()}"

   # Usage
   user = UserProfile(name="Alice", age=30, interests=["reading", "hiking"])
   analysis = profile_analysis_lmp(user)
   print(analysis)

Combining Multiple Input Types
------------------------------

You can combine different input types in a single LMP:

.. code-block:: python

   @ell.complex(model="gpt-4-vision-preview")
   def multi_input_lmp(text: str, image: Image.Image, profile: UserProfile) -> List[ell.Message]:
       return [
           ell.system("You are an AI assistant capable of analyzing text, images, and user profiles."),
           ell.user([
               ell.ContentBlock(text=f"Analyze this text: {text}"),
               ell.ContentBlock(image=image),
               ell.ContentBlock(text=f"Consider this user profile: {profile.model_dump_json()}")
           ])
       ]

   # Usage
   text_input = "This is a sample text."
   image_input = Image.open("example.jpg")
   profile_input = UserProfile(name="Bob", age=25, interests=["sports", "music"])

   response = multi_input_lmp(text_input, image_input, profile_input)
   print(response.text)

Best Practices for Multimodal Inputs
------------------------------------

1. **Type Annotations**: Always use proper type annotations for your inputs to ensure ell handles them correctly.
2. **Input Validation**: For structured data, leverage Pydantic's validation capabilities to ensure data integrity (see the sketch after this list).
3. **Clear Instructions**: When combining multiple input types, provide clear instructions to the language model on how to process each input.
4. **Model Compatibility**: Ensure the chosen language model supports the input types you're using (e.g., using a vision-capable model for image inputs).
5. **Input Size**: Be mindful of input sizes, especially for images, as there may be limitations on the maximum size supported by the model.
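
As a minimal sketch of point 2, you can validate raw data with Pydantic before it ever reaches the model. The helper name here is illustrative; ``UserProfile`` and ``profile_analysis_lmp`` are the definitions from above:

.. code-block:: python

   from pydantic import ValidationError

   def analyze_profile_safely(raw: dict) -> str:
       # Pydantic raises ValidationError for malformed payloads,
       # so invalid data never triggers a language model call.
       try:
           profile = UserProfile(**raw)
       except ValidationError as e:
           return f"Invalid profile data: {e}"
       return profile_analysis_lmp(profile)

   # Usage: the bad "age" value is rejected before any model call is made
   print(analyze_profile_safely({"name": "Alice", "age": "not a number", "interests": []}))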

Handling Large Inputs
---------------------

For large inputs, especially images, you may need to resize or compress them before passing to the LMP:

.. code-block:: python

   from PIL import Image

   def prepare_image(image_path: str, max_size: tuple = (1024, 1024)) -> Image.Image:
       with Image.open(image_path) as img:
           img.thumbnail(max_size)
           return img

   # Usage
   prepared_image = prepare_image("large_image.jpg")
   result = image_analysis_lmp(prepared_image)

Conclusion
----------

Multimodal inputs in ell greatly expand the capabilities of your Language Model Programs, allowing them to process and analyze various types of data. By effectively combining different input modalities, you can create more sophisticated and context-aware AI applications.

docs/src/audio_transcript.txt (new file, 85 lines)

docs/src/best_practices/designing_effective_lmps.rst (new file, 250 lines)
@@ -0,0 +1,250 @@

Designing Effective Language Model Programs
===========================================

Introduction
------------

Creating effective Language Model Programs (LMPs) is crucial for leveraging the full power of large language models in your applications. This guide will cover best practices and strategies for designing LMPs that are efficient, maintainable, and produce high-quality results.

Principles of Effective LMP Design
----------------------------------

1. Single Responsibility
^^^^^^^^^^^^^^^^^^^^^^^^

Each LMP should have a single, well-defined purpose. This makes your programs easier to understand, test, and maintain.

Good example:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def summarize_article(article: str) -> str:
       """Summarize the given article in three sentences."""
       return f"Please summarize this article in three sentences:\n\n{article}"

2. Clear and Concise Prompts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Write clear, concise prompts that give the language model specific instructions.

Good example:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def generate_product_name(description: str) -> str:
       """You are a creative marketing expert. Generate a catchy product name."""
       return f"Create a catchy, memorable name for a product with this description: {description}. The name should be no more than 3 words long."

3. Leverage System Messages
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the system message (the function docstring in ``@ell.simple``) to set the context and role for the language model.

Good example:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def code_reviewer(code: str) -> str:
       """You are an experienced software engineer conducting a code review. Your feedback should be constructive, specific, and actionable."""
       return f"Review the following code and provide feedback:\n\n```python\n{code}\n```"

4. Use Strong Typing
^^^^^^^^^^^^^^^^^^^^

Leverage Python's type hints to make your LMPs more robust and self-documenting.

Good example:

.. code-block:: python

   from typing import List

   @ell.simple(model="gpt-4")
   def categorize_items(items: List[str]) -> List[str]:
       """Categorize each item in the list."""
       items_str = "\n".join(items)
       return f"Categorize each of the following items into one of these categories: Food, Clothing, Electronics, or Other.\n\n{items_str}"

5. Modular Design
^^^^^^^^^^^^^^^^^

Break complex tasks into smaller, composable LMPs.

Good example:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def extract_key_points(text: str) -> str:
       """Extract the key points from the given text."""
       return f"Extract the 3-5 most important points from this text:\n\n{text}"

   @ell.simple(model="gpt-4")
   def generate_summary(key_points: str) -> str:
       """Generate a summary based on key points."""
       return f"Create a coherent summary paragraph using these key points:\n\n{key_points}"

   def summarize_long_text(text: str) -> str:
       points = extract_key_points(text)
       return generate_summary(points)

6. Error Handling
^^^^^^^^^^^^^^^^^

Design your LMPs to handle potential errors gracefully.

Good example:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def answer_question(question: str) -> str:
       """You are a helpful AI assistant answering user questions."""
       return f"""
       Answer the following question. If you're not sure about the answer, say "I'm not sure" and explain why:

       Question: {question}
       """

7. Consistent Formatting
^^^^^^^^^^^^^^^^^^^^^^^^

Maintain consistent formatting in your prompts for better readability and maintainability.

Good example:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def analyze_sentiment(text: str) -> str:
       """Analyze the sentiment of the given text."""
       return f"""
       Analyze the sentiment of the following text. Respond with one of these options:
       - Positive
       - Neutral
       - Negative

       Then provide a brief explanation for your choice.

       Text: {text}
       """

Advanced Techniques
-------------------

1. Few-Shot Learning
^^^^^^^^^^^^^^^^^^^^

Provide examples in your prompts to guide the model's responses.

Example:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def generate_poem(topic: str) -> str:
       """You are a skilled poet. Generate a short poem on the given topic."""
       return f"""
       Write a short, four-line poem about {topic}. Here's an example format:

       Topic: Sun
       Poem:
       Golden orb in azure sky,
       Warming earth as day goes by,
       Life-giving light, nature's friend,
       Day's journey to night's soft end.

       Now, create a poem about: {topic}
       """

2. Chain of Thought Prompting
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Encourage the model to show its reasoning process.

Example:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def solve_math_problem(problem: str) -> str:
       """You are a math tutor helping a student solve a problem."""
       return f"""
       Solve the following math problem. Show your work step-by-step, explaining each step clearly.

       Problem: {problem}

       Solution:
       1) [First step]
       2) [Second step]
       ...

       Final Answer: [Your answer here]
       """

3. Iterative Refinement
^^^^^^^^^^^^^^^^^^^^^^^

Use multiple LMPs in sequence to refine outputs.

Example:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def draft_essay(topic: str) -> str:
       """Create a first draft of an essay on the given topic."""
       return f"Write a first draft of a short essay about {topic}."

   @ell.simple(model="gpt-4")
   def improve_essay(essay: str) -> str:
       """Improve the given essay draft."""
       return f"""
       Improve the following essay draft. Focus on:
       1. Clarifying main points
       2. Improving transitions between paragraphs
       3. Enhancing the conclusion

       Essay draft:
       {essay}
       """

   def create_polished_essay(topic: str) -> str:
       first_draft = draft_essay(topic)
       return improve_essay(first_draft)

Best Practices for Complex LMPs
-------------------------------

When working with ``@ell.complex`` and multi-turn conversations:

1. **Maintain Context**: Ensure that relevant information is carried through the conversation.

2. **Use Tools Judiciously**: When integrating tools, provide clear instructions on when and how to use them.

3. **Handle Ambiguity**: Design your LMPs to ask for clarification when inputs are ambiguous.

4. **Stateful Interactions**: For stateful LMPs, clearly define what information should be maintained between turns.

Example of a complex LMP:

.. code-block:: python

   @ell.complex(model="gpt-4", tools=[get_weather, search_database])
   def travel_assistant(message_history: List[ell.Message]) -> List[ell.Message]:
       return [
           ell.system("""
           You are a travel assistant helping users plan their trips.
           Use the get_weather tool to check weather conditions and the search_database tool to find information about destinations.
           Always confirm the user's preferences before making recommendations.
           """),
           *message_history
       ]

Conclusion
----------

Designing effective Language Model Programs is both an art and a science. By following these principles and techniques, you can create LMPs that are more efficient, maintainable, and produce higher quality results. Remember to iterate on your designs, test thoroughly, and always consider the end-user experience when crafting your prompts.

@@ -22,8 +22,8 @@ exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
 html_theme = "sphinxawesome_theme"

 # Configure syntax highlighting for Awesome Sphinx Theme
-pygments_style = "friendly"
-pygments_style_dark = "monokai"
+pygments_style = "default"
+pygments_style_dark = "dracula"

 # Additional theme configuration
 html_theme_options = {
@@ -47,4 +47,6 @@ html_theme_options = {

 templates_path = ['_templates']

docs/src/core_concepts/decorators.rst (new file, 132 lines)
@@ -0,0 +1,132 @@

Decorators in ell
=================

Introduction
------------

Decorators are a fundamental concept in ell, used to transform regular Python functions into Language Model Programs (LMPs). ell provides two main decorators: ``@ell.simple`` and ``@ell.complex``. Understanding these decorators is crucial for effectively using ell in your projects.

@ell.simple Decorator
---------------------

The ``@ell.simple`` decorator is used for straightforward, text-in-text-out interactions with language models.

Syntax
^^^^^^

.. code-block:: python

   @ell.simple(model: str, client: Optional[openai.Client] = None, exempt_from_tracking=False, **api_params)
   def my_simple_lmp(input_param: str) -> str:
       """System prompt goes here."""
       return f"User prompt based on {input_param}"

Parameters
^^^^^^^^^^

- ``model``: The name or identifier of the language model to use (e.g., "gpt-4").
- ``client``: An optional OpenAI client instance. If not provided, a default client will be used.
- ``exempt_from_tracking``: If True, the LMP usage won't be tracked. Default is False.
- ``**api_params``: Additional keyword arguments to pass to the underlying API call (e.g., temperature, max_tokens).

Usage
^^^^^

.. code-block:: python

   @ell.simple(model="gpt-4", temperature=0.7)
   def generate_haiku(topic: str) -> str:
       """You are a master haiku poet. Create a haiku about the given topic."""
       return f"Write a haiku about: {topic}"

   haiku = generate_haiku("autumn leaves")
   print(haiku)

Key Points
^^^^^^^^^^

1. The function's docstring becomes the system prompt.
2. The return value of the function becomes the user prompt.
3. The decorated function returns a string, which is the model's response.
4. Supports multimodal inputs but always returns text.

@ell.complex Decorator
----------------------

The ``@ell.complex`` decorator is used for more advanced scenarios, including multi-turn conversations, tool usage, and structured outputs.

Syntax
^^^^^^

.. code-block:: python

   @ell.complex(model: str, client: Optional[openai.Client] = None, exempt_from_tracking=False, tools: Optional[List[Callable]] = None, **api_params)
   def my_complex_lmp(message_history: List[ell.Message]) -> List[ell.Message]:
       return [
           ell.system("System message here"),
           *message_history
       ]

Parameters
^^^^^^^^^^

- ``model``: The name or identifier of the language model to use.
- ``client``: An optional OpenAI client instance.
- ``exempt_from_tracking``: If True, the LMP usage won't be tracked.
- ``tools``: A list of tool functions that can be used by the LLM.
- ``**api_params``: Additional API parameters.

Usage
^^^^^

.. code-block:: python

   @ell.complex(model="gpt-4", tools=[get_weather])
   def weather_assistant(message_history: List[ell.Message]) -> List[ell.Message]:
       return [
           ell.system("You are a weather assistant with access to real-time weather data."),
           *message_history
       ]

   message_history = [ell.user("What's the weather like in New York?")]
   response = weather_assistant(message_history)
   print(response.text)

   if response.tool_calls:
       tool_results = response.call_tools_and_collect_as_message()
       final_response = weather_assistant(message_history + [response, tool_results])
       print(final_response.text)

Key Points
^^^^^^^^^^

1. Works with lists of ``ell.Message`` objects for more complex interactions.
2. Supports tool integration for expanded capabilities.
3. Can handle multi-turn conversations and maintain context.
4. Allows for structured inputs and outputs using Pydantic models.
5. Supports multimodal inputs and outputs.

Choosing Between Simple and Complex
-----------------------------------

- Use ``@ell.simple`` for:

  - Single-turn, text-in-text-out interactions
  - Quick prototyping and simple use cases
  - When you don't need tool integration or structured outputs

- Use ``@ell.complex`` for:

  - Multi-turn conversations
  - Integrating tools or external data sources
  - Working with structured data (using Pydantic models)
  - Multimodal inputs or outputs
  - Advanced control over the interaction flow

Best Practices
--------------

1. Start with ``@ell.simple`` for basic tasks and migrate to ``@ell.complex`` as your needs grow.
2. Use clear and concise docstrings to provide effective system prompts.
3. Leverage type hints for better code clarity and error catching.
4. When using ``@ell.complex``, break down complex logic into smaller, composable LMPs.
5. Use the ``exempt_from_tracking`` parameter judiciously, as tracking provides valuable insights.

By mastering these decorators, you'll be able to create powerful and flexible Language Model Programs tailored to your specific needs.

docs/src/core_concepts/index.rst (new file, 47 lines)
@@ -0,0 +1,47 @@

Core Concepts in ell
====================

Welcome to the Core Concepts section of the ell documentation. This section covers the fundamental ideas and components that form the backbone of ell. Understanding these concepts is crucial for effectively using ell in your projects.

In this section:
----------------

1. :doc:`Language Model Programs (LMPs) <language_model_programs>`

   - What are Language Model Programs?
   - The philosophy behind treating prompts as programs

2. :doc:`Decorators <decorators>`

   - The @ell.simple decorator
   - The @ell.complex decorator
   - Choosing between simple and complex LMPs

3. :doc:`Messages and Content Blocks <messages_and_content_blocks>`

   - Understanding the Message system
   - Working with different types of ContentBlocks

4. :doc:`Tools <tools>`

   - Defining and using tools in ell
   - Integrating tools with Language Model Programs

.. 5. :doc:`Structured Outputs <structured_outputs>`

..    - Using Pydantic models for structured data
..    - Benefits of working with structured outputs in LLM interactions

By mastering these core concepts, you'll have a solid foundation for building sophisticated applications with ell. Each concept builds upon the others, so we recommend going through them in order.

Let's start by exploring Language Model Programs, the fundamental building blocks of ell!

.. toctree::
   :maxdepth: 1
   :caption: Core Concepts:

   language_model_programs
   decorators
   messages_and_content_blocks
   tools
   .. structured_outputs

docs/src/core_concepts/language_model_programs.rst (new file, 93 lines)
@@ -0,0 +1,93 @@

Language Model Programs (LMPs)
==============================

Introduction
------------

Language Model Programs (LMPs) are a core concept in ell. They represent a paradigm shift in how we interact with large language models, treating prompts not as simple strings, but as full-fledged programs with logic, structure, and reusability.

What are Language Model Programs?
---------------------------------

An LMP in ell is a Python function decorated with either ``@ell.simple`` or ``@ell.complex``. This function encapsulates the logic for generating a prompt or a series of messages to be sent to a language model.

Key characteristics of LMPs:

1. **Encapsulation**: All the logic for creating a prompt is contained within a single function.
2. **Reusability**: LMPs can be easily reused across different parts of your application.
3. **Versioning**: ell automatically versions your LMPs, allowing you to track changes over time.
4. **Tracing**: Every invocation of an LMP is traced, providing insights into your application's behavior.

Simple vs Complex LMPs
----------------------

ell provides two main types of LMPs:

Simple LMPs
^^^^^^^^^^^

Simple LMPs are created using the ``@ell.simple`` decorator. They are designed for straightforward, single-turn interactions with a language model.

Example of a Simple LMP:

.. code-block:: python

   @ell.simple(model="gpt-4")
   def summarize_text(text: str) -> str:
       """You are an expert at summarizing text."""
       return f"Please summarize the following text:\n\n{text}"

Key points:

- The function's docstring becomes the system prompt.
- The return value becomes the user prompt.
- The LMP returns a single string response from the model.

Complex LMPs
^^^^^^^^^^^^

Complex LMPs are created using the ``@ell.complex`` decorator. They allow for more advanced scenarios, including:

- Multi-turn conversations
- Tool usage
- Structured inputs and outputs
- Multimodal interactions

Example of a Complex LMP:

.. code-block:: python

   @ell.complex(model="gpt-4", tools=[some_tool])
   def interactive_assistant(message_history: List[ell.Message]) -> List[ell.Message]:
       return [
           ell.system("You are a helpful assistant with access to tools."),
       ] + message_history

Key points:

- Complex LMPs work with lists of ``ell.Message`` objects.
- They can integrate tools and handle multi-turn conversations.
- They offer more control over the interaction with the language model.

Benefits of Language Model Programs
-----------------------------------

1. **Modularity**: LMPs encourage breaking down complex prompt engineering tasks into manageable, reusable components.
2. **Versioning**: Automatic versioning allows you to track changes and compare different iterations of your prompts.
3. **Tracing**: Invocation tracing helps in debugging and optimizing your language model interactions.
4. **Type Safety**: By using Python's type hints, LMPs provide better code clarity and catch potential errors early.
5. **Testability**: LMPs can be easily unit tested, improving the reliability of your prompt engineering process.

Best Practices for LMPs
-----------------------

1. Keep each LMP focused on a single task or concept.
2. Use descriptive names for your LMP functions.
3. Leverage the function's docstring to provide clear instructions to the language model.
4. Use type hints to clarify the expected inputs and outputs of your LMPs.
5. For complex interactions, break down your logic into multiple LMPs that can be composed together, as shown in the sketch below.
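
As a brief sketch of points 1, 2, and 5, here are two small, focused LMPs composed by a plain Python function. The function names and prompts are illustrative, not part of the ell API:

.. code-block:: python

   import ell

   @ell.simple(model="gpt-4")
   def extract_key_claims(text: str) -> str:
       """You are a careful analyst."""
       return f"List the factual claims made in this text:\n\n{text}"

   @ell.simple(model="gpt-4")
   def draft_fact_check(claims: str) -> str:
       """You are a meticulous fact checker."""
       return f"For each claim below, note what evidence would verify it:\n\n{claims}"

   def fact_check(text: str) -> str:
       # Each LMP does one job; composition happens in ordinary Python.
       return draft_fact_check(extract_key_claims(text))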

Conclusion
----------

Language Model Programs are a powerful abstraction that allows you to work with language models in a more structured, maintainable, and scalable way. By thinking of prompts as programs, ell enables you to apply software engineering best practices to the field of prompt engineering.

docs/src/core_concepts/messages_and_content_blocks.rst (new file, 130 lines)
@@ -0,0 +1,130 @@

Messages and Content Blocks in ell
==================================

In ell, the Message and ContentBlock classes are fundamental to handling communication with language models, especially in multi-turn conversations and when dealing with various types of content. This guide will help you understand how to work with these important components.

Messages
--------

A Message in ell represents a single interaction in a conversation with a language model. It has two main components:

1. A role (e.g., "system", "user", "assistant")
2. Content (represented by a list of ContentBlocks)

Creating Messages
^^^^^^^^^^^^^^^^^

ell provides helper functions to create messages with specific roles:

.. code-block:: python

   import ell

   system_message = ell.system("You are a helpful AI assistant.")
   user_message = ell.user("What's the weather like today?")
   assistant_message = ell.assistant("I'm sorry, I don't have access to real-time weather information.")

You can also create messages directly:

.. code-block:: python

   from ell import Message, ContentBlock

   message = Message(role="user", content=[ContentBlock(text="Hello, world!")])

Content Blocks
--------------

ContentBlocks are used to represent different types of content within a message. They can contain:

- Text
- Images
- Audio (future support)
- Tool calls
- Tool results
- Structured data (parsed content)

Creating Content Blocks
^^^^^^^^^^^^^^^^^^^^^^^

Here are examples of creating different types of ContentBlocks:

.. code-block:: python

   from ell import ContentBlock
   from PIL import Image

   # Text content block
   text_block = ContentBlock(text="This is a text message.")

   # Image content block
   image = Image.open("example.jpg")
   image_block = ContentBlock(image=image)

   # Tool call content block
   tool_call_block = ContentBlock(tool_call=some_tool_call)

   # Parsed content block (structured data)
   from pydantic import BaseModel

   class UserInfo(BaseModel):
       name: str
       age: int

   user_info = UserInfo(name="Alice", age=30)
   parsed_block = ContentBlock(parsed=user_info)

Working with Content Blocks in Messages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can combine multiple ContentBlocks in a single Message:

.. code-block:: python

   multi_content_message = ell.user([
       ContentBlock(text="Here's an image of a cat:"),
       ContentBlock(image=cat_image)
   ])

Using Messages and Content Blocks in LMPs
-----------------------------------------

In complex LMPs, you'll often work with lists of Messages:

.. code-block:: python

   @ell.complex(model="gpt-4")
   def chat_bot(message_history: List[ell.Message]) -> List[ell.Message]:
       return [
           ell.system("You are a friendly chat bot."),
           *message_history,
           ell.assistant("How can I help you today?")
       ]

Accessing Message Content
-------------------------

You can access the content of a Message in different ways:

.. code-block:: python

   # Get all text content
   text_content = message.text

   # Get only the text content, excluding non-text elements
   text_only = message.text_only

   # Access specific content types
   tool_calls = message.tool_calls
   tool_results = message.tool_results
   parsed_content = message.parsed_content

Best Practices
--------------

1. Use the appropriate ContentBlock type for each piece of content.
2. When working with complex LMPs, always return a list of Messages.
3. Use the helper functions (ell.system, ell.user, ell.assistant) for clarity.
4. When dealing with multimodal content, combine different ContentBlock types in a single Message.

By mastering Messages and ContentBlocks, you'll be able to create sophisticated interactions with language models, handle various types of data, and build complex conversational flows in your ell applications.

docs/src/core_concepts/tools.rst (new file, 123 lines)
@@ -0,0 +1,123 @@

Tools in ell
============

Introduction
------------

Tools in ell are a powerful feature that allows Language Model Programs (LMPs) to interact with external functions or APIs. This enables LMPs to access real-world data, perform computations, or take actions based on the language model's decisions.

Defining Tools
--------------

Tools are defined using the ``@ell.tool()`` decorator. Here's the basic structure:

.. code-block:: python

   from pydantic import Field

   @ell.tool()
   def tool_name(param1: str, param2: int = Field(description="Parameter description")) -> str:
       """Tool description goes here."""
       # Tool implementation
       return "Result"

Key Points:

1. Use the ``@ell.tool()`` decorator to define a tool.
2. Provide type annotations for all parameters.
3. Use Pydantic's ``Field`` for additional parameter metadata.
4. Write a clear docstring describing the tool's purpose and usage.
5. The return type should be one of: ``str``, JSON-serializable object, Pydantic model, or ``List[ell.ContentBlock]``.

Example: Weather Tool
---------------------

Let's create a simple weather tool:

.. code-block:: python

   from pydantic import Field

   @ell.tool()
   def get_weather(
       location: str = Field(description="City name or coordinates"),
       unit: str = Field(description="Temperature unit: 'celsius' or 'fahrenheit'", default="celsius")
   ) -> str:
       """Get the current weather for a given location."""
       # Implement actual weather API call here
       return f"The weather in {location} is sunny and 25°{unit[0].upper()}"

Using Tools in LMPs
-------------------

To use tools in your LMPs, you need to:

1. Pass the tools to the ``@ell.complex`` decorator.
2. Handle tool calls in your LMP logic.

Here's an example:

.. code-block:: python

   @ell.complex(model="gpt-4", tools=[get_weather])
   def weather_assistant(message_history: List[ell.Message]) -> List[ell.Message]:
       return [
           ell.system("You are a helpful weather assistant. Use the get_weather tool when asked about weather."),
           *message_history
       ]

   # Using the weather assistant
   message_history = [ell.user("What's the weather like in Paris?")]
   response = weather_assistant(message_history)

   if response.tool_calls:
       tool_results = response.call_tools_and_collect_as_message()
       final_response = weather_assistant(message_history + [response, tool_results])
       print(final_response.text)

Tool Results
------------

When a tool is called, it returns a ``ToolResult`` object, which contains:

- ``tool_call_id``: A unique identifier for the tool call.
- ``result``: A list of ``ContentBlock`` objects representing the tool's output.

You can access tool results using the ``tool_results`` property of the response message:

.. code-block:: python

   for tool_result in response.tool_results:
       print(f"Tool call ID: {tool_result.tool_call_id}")
       for content_block in tool_result.result:
           print(f"Result: {content_block.text}")

Parallel Tool Execution
-----------------------

For efficiency, ell supports parallel execution of multiple tool calls:

.. code-block:: python

   if response.tool_calls:
       tool_results = response.call_tools_and_collect_as_message(parallel=True, max_workers=3)

This can significantly speed up operations when multiple independent tool calls are made.

Best Practices for Tools
------------------------

1. **Atomic Functionality**: Design tools to perform single, well-defined tasks.
2. **Clear Documentation**: Provide detailed docstrings explaining the tool's purpose, parameters, and return value.
3. **Error Handling**: Implement robust error handling within your tools to gracefully manage unexpected inputs or API failures.
4. **Type Safety**: Use type annotations and Pydantic models to ensure type safety and clear interfaces.
5. **Stateless Design**: Where possible, design tools to be stateless to simplify usage and avoid unexpected behavior.
6. **Performance Considerations**: For tools that may be time-consuming, consider implementing caching or optimizing for repeated calls.

Limitations and Considerations
------------------------------

- Tools are only available in LMPs decorated with ``@ell.complex``.
- The language model decides when and how to use tools based on the conversation context and tool descriptions.
- Ensure that sensitive operations are properly secured, as tool usage is determined by the language model.

By effectively using tools, you can greatly extend the capabilities of your Language Model Programs, allowing them to interact with real-world data and systems in powerful ways.

docs/src/getting_started/basic_usage.rst (new file, 86 lines)
@@ -0,0 +1,86 @@

Basic Usage
===========

This guide will walk you through creating and running your first Language Model Program (LMP) using ell. We'll start with a simple example and gradually introduce more features.

Creating Your First Language Model Program
------------------------------------------

Let's create a simple LMP that generates a short story based on a given prompt.

.. code-block:: python

   import ell

   @ell.simple(model="gpt-4")
   def generate_story(prompt: str) -> str:
       """You are a helpful AI assistant."""
       return f"Write a short story based on this prompt: {prompt}"

   # Use the LMP
   story = generate_story("A time traveler's first day in the future")
   print(story)

Let's break down this example:

1. We import the ``ell`` library.
2. We define a function called ``generate_story`` and decorate it with ``@ell.simple``.
3. The decorator specifies that we want to use the "gpt-4" model.
4. Our function takes a ``prompt`` as input and returns a string.
5. The function's docstring becomes the system prompt for the language model.
6. The return value of the function becomes the user prompt.
7. We call the function with a prompt and print the result.

Understanding the Output
------------------------

When you run this code, ell will:

1. Construct the full prompt by combining the system prompt (the docstring) and the user prompt (the return value of the function).
2. Send this prompt to the specified language model (in this case, GPT-4).
3. Receive the generated text from the model.
4. Return this text as the result of the ``generate_story`` function call.

The output you see will be a short story generated by the language model based on the prompt you provided.
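
Conceptually, the request ell assembles for this example looks like the chat payload below. This is shown as plain data purely for illustration; ell constructs and sends it for you:

.. code-block:: python

   messages = [
       # System prompt: taken from the function's docstring
       {"role": "system", "content": "You are a helpful AI assistant."},
       # User prompt: taken from the function's return value
       {"role": "user", "content": "Write a short story based on this prompt: "
                                   "A time traveler's first day in the future"},
   ]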

Using Verbose Mode
--------------------

While developing, it helps to see exactly what ell sends to and receives from the model. Enable verbose mode when initializing ell:

.. code-block:: python

   import ell

   ell.init(verbose=True)

   @ell.simple(model="gpt-4")
   def generate_story(prompt: str) -> str:
       """You are a helpful AI assistant."""
       return f"Write a short story based on this prompt: {prompt}"

   # Use the LMP
   story = generate_story("A time traveler's first day in the future")
   print(story)

With ``verbose=True``, ell prints detailed logging for each language model call, including the constructed prompts and the model's responses, which makes it much easier to debug and refine your LMPs.

Next Steps
----------

You've now created your first Language Model Program with ell! From here, you can explore more advanced features such as:

- Using the ``@ell.complex`` decorator for multi-turn conversations and tool use.
- Working with structured inputs and outputs.
- Integrating multimodal inputs like images.
- Leveraging ell's versioning and tracing capabilities.

Check out the Core Concepts section to dive deeper into these topics.

docs/src/getting_started/configuration.rst (new file, 92 lines)
@@ -0,0 +1,92 @@

Configuring ell
===============

Proper configuration is crucial for getting the most out of ell. This guide will walk you through the process of setting up ell for your project, including storage configuration for versioning and tracing.

Basic Configuration
-------------------

To initialize ell with basic settings, use the ``ell.init()`` function in your project:

.. code-block:: python

   import ell

   ell.init(
       store='./logdir',
       verbose=True,
       autocommit=True
   )

Let's break down these parameters:

- ``store``: Specifies the directory for storing versioning and tracing data.
- ``verbose``: When set to ``True``, enables detailed logging of ell operations.
- ``autocommit``: When ``True``, automatically saves versions and traces.

Storage Configuration
---------------------

ell uses a storage backend to keep track of LMP versions and invocations. By default, it uses a local SQLite database. To set up storage:

.. code-block:: python

   ell.set_store('./logdir', autocommit=True)

This creates a SQLite database in the ``./logdir`` directory. For production environments, you might want to use a more robust database like PostgreSQL:

.. code-block:: python

   from ell.stores.sql import PostgresStore

   postgres_store = PostgresStore("postgresql://user:password@localhost/db_name")
   ell.set_store(postgres_store, autocommit=True)

Customizing Default Parameters
------------------------------

You can set default parameters for your language model calls:

.. code-block:: python

   ell.set_default_lm_params(
       temperature=0.7,
       max_tokens=150
   )

These parameters will be used as defaults for all LMPs unless overridden.

Setting a Default System Prompt
-------------------------------

To set a default system prompt for all your LMPs:

.. code-block:: python

   ell.set_default_system_prompt("You are a helpful AI assistant.")

Advanced Configuration
----------------------

For more advanced use cases, you can configure ell to use specific OpenAI clients for different models:

.. code-block:: python

   import openai

   gpt4_client = openai.Client(api_key="your-gpt4-api-key")
   gpt3_client = openai.Client(api_key="your-gpt3-api-key")

   ell.config.register_model("gpt-4", gpt4_client)
   ell.config.register_model("gpt-3.5-turbo", gpt3_client)

Configuration Best Practices
----------------------------

1. Always set up proper storage for versioning and tracing in production environments.
2. Use environment variables for sensitive information like API keys (see the sketch after this list).
3. Configure logging to help with debugging and monitoring.
4. Set sensible defaults for language model parameters to ensure consistent behavior across your project.
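
For point 2, here is a minimal sketch of reading an API key from the environment instead of hard-coding it. The variable name ``OPENAI_API_KEY`` is the usual OpenAI convention, and ``ell.config.register_model`` is the registration call shown above:

.. code-block:: python

   import os

   import ell
   import openai

   # Fail fast with a clear message if the key is missing.
   api_key = os.environ.get("OPENAI_API_KEY")
   if api_key is None:
       raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")

   ell.config.register_model("gpt-4", openai.Client(api_key=api_key))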

By properly configuring ell, you'll be able to leverage its full capabilities and streamline your development process. Remember to adjust these settings as your project grows and your needs evolve.

docs/src/getting_started/index.rst (new file, 34 lines)
@@ -0,0 +1,34 @@

Getting Started
===============

Welcome to the Getting Started guide for ell! This section will help you begin your journey with ell, from setting up your environment to creating your first Language Model Program (LMP).

In this section:
----------------

1. :doc:`Basic Usage <basic_usage>`

   - Learn how to create and run your first Language Model Program
   - Understand the structure of simple LMPs

2. :doc:`Configuration <configuration>`

   - Set up ell for your project
   - Configure storage for versioning and tracing
   - Customize ell's behavior to suit your needs

By the end of this guide, you'll have a solid foundation for working with ell and be ready to explore more advanced features.

Let's begin with Basic Usage to create your first LMP!

.. toctree::
   :maxdepth: 3
   :caption: Getting Started:
   :hidden:

   Basic Usage <basic_usage>
   Configuration <configuration>

docs/src/index.rst (new file, 192 lines)
@@ -0,0 +1,192 @@

.. raw:: html

   <style>
   .rounded-image {
       border-radius: 10px;
       overflow: hidden;
   }
   </style>

.. raw:: html

   <script>
   function invertImage(dark) {
       var images = document.querySelectorAll('.invertible-image img');
       images.forEach(function(image) {
           if (!dark) {
               image.style.filter = 'invert(100%) hue-rotate(160deg)';
           } else {
               image.style.filter = 'none';
           }
       });
   }

   // Run when the 'dark' class is added or removed from the <html> element
   const htmlElement = document.documentElement;

   // Use a MutationObserver to detect changes in the class attribute
   const observer = new MutationObserver((mutations) => {
       mutations.forEach((mutation) => {
           invertImage(document.documentElement.classList.contains('dark'));
       });
   });

   observer.observe(htmlElement, { attributes: true, attributeFilter: ['class'] });
   </script>

.. _introduction:

===========================================
ell: The Language Model Programming Library
===========================================

.. title:: Introduction

``ell`` is a lightweight, functional prompt engineering library built on a few key principles.

Prompts are programs, not strings
----------------------------------

.. code-block:: python

   import ell
   ell.init(verbose=True)

   @ell.simple(model="gpt-4o-mini")
   def hello(world: str):
       """You are a helpful assistant"""  # System prompt
       name = world.capitalize()
       return f"Say hello to {name}!"  # User prompt

   hello("sam altman")  # just a str, "Hello Sam Altman! ..."

.. image:: _static/gif1.webp
   :alt: ell demonstration
   :class: rounded-image invertible-image
   :width: 100%

Prompts aren't just strings; they are all the code that leads to strings being sent to a language model. In ell, we think of one particular way of using a language model as a discrete subroutine called a **language model program** (LMP).

LMPs are fully encapsulated functions that produce either a string prompt or a list of messages to be sent to various multimodal language models. This encapsulation creates a clean interface for users, who only need to be aware of the required data specified to the LMP.

Prompt engineering libraries shouldn't interfere with your workflow
--------------------------------------------------------------------

``ell`` is designed to be a lightweight and unobtrusive library. It doesn't require you to change your coding style or use special editors.

.. image:: _static/useitanywhere_compressed.webp
   :alt: ell demonstration
   :class: rounded-image
   :width: 100%

You can continue to use regular Python code in your IDE to define and modify your prompts, while leveraging ell's features to visualize and analyze your prompts. Migrate from langchain to ``ell`` one function at a time.

Prompt engineering is an optimization process
------------------------------------------------

The process of prompt engineering involves many iterations, similar to the optimization processes in machine learning. Because LMPs are just functions, ``ell`` provides rich tooling for this process.

.. image:: _static/versions_small.webp
   :alt: ell demonstration
   :class: rounded-image invertible-image
   :width: 100%

``ell`` provides **automatic versioning and serialization of prompts** through static and dynamic analysis. This process is similar to `checkpointing` in a machine learning training loop, but it doesn't require any special IDE or editor - it's all done with regular Python code.

.. code-block:: python
   :emphasize-lines: 3,3

   import ell

   ell.init(store='./logdir')  # Versions your LMPs and their calls

   # ... define your lmps

   hello("strawberry")  # the source code of the LMP and this call are saved to the store
Every call to a language model is valuable
|
||||
------------------------------------------------
|
||||
Every call to a language model is worth its weight in credits. In practice, LLM invocations are used for fine tuning, distillation, k-shot prompting, reinforcement learning from human feedback, and more. A good prompt engineering system should capture these as first class concepts.
|
||||
|
||||
.. image:: _static/invocations.webp
|
||||
:alt: ell demonstration
|
||||
:class: rounded-image invertible-image
|
||||
:width: 100%
|
||||
|
||||
|
||||
In addition to storing the source code of every LMP, ``ell`` optionally saves every call to a language model locally. This allows you to generate invocaiton datasets, compare LMP outputs by version, and generally do more with the full spectrum of prompt engineering artifacts.
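
A minimal sketch of how invocations accumulate, reusing the ``hello`` LMP from above (the on-disk layout of the store is an implementation detail):

.. code-block:: python

   import ell

   ell.init(store='./logdir')  # every call below is serialized to the store

   for name in ["alice", "bob", "carol"]:
       hello(name)  # each invocation's inputs and outputs are saved

   # The resulting ./logdir store can then be browsed in ell-studio or
   # exported, e.g. as a dataset of (input, output) pairs for fine-tuning.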

Complexity when you need it, simplicity when you don't
--------------------------------------------------------

Using language models is **just passing strings around, except when it's not.**

.. code-block:: python
   :emphasize-lines: 8

   import requests
   import ell

   @ell.tool()
   def scrape_website(url: str):
       return requests.get(url).text

   @ell.complex(model="gpt-5-omni", tools=[scrape_website])
   def get_news_story(topic: str):
       return [
           ell.system("""Use the web to find a news story about the topic"""),
           ell.user(f"Find a news story about {topic}.")
       ]

   message_response = get_news_story("stock market")
   if message_response.tool_calls:
       for tool_call in message_response.tool_calls:
           ...  # handle each tool call
   if message_response.text:
       print(message_response.text)
   if message_response.audio:
       # message_response.play_audio(); support for multimodal outputs
       # will work as soon as the LLM supports it
       pass

Using ``@ell.simple`` causes LMPs to yield **simple string outputs.** But when more complex or multimodal output is needed, ``@ell.complex`` can be used to yield ``Message`` object responses from language models.
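
``@ell.complex`` also covers structured outputs. A minimal sketch, assuming a model with structured-output support (the ``.parsed`` accessor is an assumption here, and the Pydantic fields are illustrative):

.. code-block:: python

   import ell
   from pydantic import BaseModel, Field

   class Person(BaseModel):
       name: str = Field(description="The name of the person")
       age: int = Field(description="The age of the person")

   @ell.complex(model="gpt-4o-2024-08-06", response_format=Person)
   def extract_person(bio: str):
       return f"Extract a person from this bio: {bio}"

   message = extract_person("Sam is a thirty-year-old engineer.")
   person = message.parsed  # assumed accessor for the parsed Person instance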

----------------------------

To get started with ``ell``, see the :doc:`Getting Started <getting_started/index>` section, or head straight to :doc:`Installation <installation>` to get ell installed.

.. toctree::
   :maxdepth: 3
   :caption: User Guide:
   :hidden:

   Introduction <self>
   installation
   getting_started/index
   core_concepts/index
   advanced_features/index
   ell_studio/index
   best_practices/index

113
docs/src/installation.rst
Normal file
@@ -0,0 +1,113 @@
Installation
============

This guide will walk you through the process of installing ell on your system.

Steps
------

1. **Install ell using pip:**

   Open your terminal or command prompt and run the following command:

   .. code-block:: bash

      pip install ell

   This will download and install the latest stable version of ell and its dependencies.

2. **Verify the installation:**

   After the installation is complete, you can verify that ell was installed correctly by running:

   .. code-block:: bash

      python -c "import ell; print(ell.__version__)"

   This should print the version number of ell that you just installed.


Setting Up API Keys
-------------------

To use ``ell`` with various language models, you'll need to set up API keys for the models you want to use. Below are the steps to configure API keys for OpenAI and Anthropic.

1. **OpenAI API Key:**

   To use OpenAI's models, you need an API key from OpenAI. Follow these steps:

   a. Sign up or log in to your OpenAI account at https://beta.openai.com/signup/.

   b. Navigate to the API section and generate a new API key.

   c. Set the API key as an environment variable:

      - On Windows:

        Open Command Prompt and run:

        .. code-block:: batch

           setx OPENAI_API_KEY "your-openai-api-key"

      - On macOS and Linux:

        Add the following line to your shell profile (e.g., ``.bashrc``, ``.zshrc``):

        .. code-block:: bash

           export OPENAI_API_KEY='your-openai-api-key'

   d. Restart your terminal or command prompt to apply the changes.

2. **Anthropic API Key:**

   To use Anthropic's models, you need an API key from Anthropic. Follow these steps:

   a. Sign up or log in to your Anthropic account at https://www.anthropic.com/.

   b. Navigate to the API section and generate a new API key.

   c. Set the API key as an environment variable:

      - On Windows:

        Open Command Prompt and run:

        .. code-block:: batch

           setx ANTHROPIC_API_KEY "your-anthropic-api-key"

      - On macOS and Linux:

        Add the following line to your shell profile (e.g., ``.bashrc``, ``.zshrc``):

        .. code-block:: bash

           export ANTHROPIC_API_KEY='your-anthropic-api-key'

   d. Restart your terminal or command prompt to apply the changes.

Once you have set up your API keys, ell will automatically use them to access the respective language models. You are now ready to start creating and running Language Model Programs with ell!
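
If you prefer not to rely on environment variables, a client can also be supplied explicitly; a minimal sketch, assuming the OpenAI Python SDK and the decorator's ``client`` parameter:

.. code-block:: python

   import openai
   import ell

   # Construct a client with an explicit key and hand it to the decorator
   # instead of setting OPENAI_API_KEY in the environment.
   client = openai.Client(api_key="your-openai-api-key")

   @ell.simple(model="gpt-4o-mini", client=client)
   def hello(name: str):
       """You are a helpful assistant"""
       return f"Say hello to {name}!"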

Troubleshooting Installation Issues
-----------------------------------

If you encounter any issues during installation, try the following:

1. Ensure you have the latest version of pip:

   .. code-block:: bash

      pip install --upgrade pip

2. If you're using a virtual environment, make sure it's activated before installing ell.

3. On some systems, you may need to use ``pip3`` instead of ``pip`` to ensure you're using Python 3.

4. If you encounter permission errors, you may need to use ``sudo`` on Unix-based systems or run your command prompt as an administrator on Windows.

If you continue to have problems, please check the Troubleshooting section of this documentation or file an issue on the ell GitHub repository.

Next Steps
----------

Now that you have ell installed, you're ready to start using it! Head over to the Getting Started guide to create your first Language Model Program.

@@ -50,8 +50,8 @@ export function CodeHighlighter({
   const diffRenderer = useCallback(
     ({ stylesheet, useInlineStyles }) =>
       DiffRenderer({
-        previousCode: code,
-        code: previousCode,
+        previousCode: previousCode,
+        code: code,
         stylesheet,
         useInlineStyles,
         startingLineNumber,

@@ -6,9 +6,9 @@ ell.config.verbose = True


 class Test(BaseModel):
-    name: str
-    age: int
-    height_precise: float
+    name: str = Field(description="The name of the person")
+    age: int = Field(description="The age of the person")
+    height_precise: float = Field(description="The height of the person in meters")
     is_cool: bool


 @ell.complex(model='gpt-4o-2024-08-06', response_format=Test)

@@ -6,7 +6,8 @@ from ell.stores.sql import SQLiteStore

ell.config.verbose = True
ell.set_store('./logdir', autocommit=True)

# equivalent to
# ell.init(store='./logdir', autocommit=True, verbose=True)


def get_random_length():

18
examples/server_example.py
Normal file
@@ -0,0 +1,18 @@
from flask import Flask

import ell


@ell.simple(model="gpt-4o-mini")
def hello(name: str):
    """You are a helpful assistant"""
    return f"Write a welcome message for {name}."


app = Flask(__name__)


@app.route('/')
def home():
    return hello("world")


if __name__ == '__main__':
    app.run(debug=True)

@@ -15,5 +15,3 @@ import ell.models

# Import everything from configurator
from ell.configurator import *

@@ -11,7 +11,7 @@ _config_logger = logging.getLogger(__name__)

 class Config(BaseModel):
     model_config = ConfigDict(arbitrary_types_allowed=True)
-    model_registry: Dict[str, openai.Client] = Field(default_factory=dict)
+    registry: Dict[str, openai.Client] = Field(default_factory=dict)
     verbose: bool = False
     wrapped_logging: bool = True
     override_wrapped_logging_width: Optional[int] = None

@@ -29,7 +29,7 @@ class Config(BaseModel):

     def register_model(self, model_name: str, client: openai.Client) -> None:
         with self._lock:
-            self.model_registry[model_name] = client
+            self.registry[model_name] = client

     @property
     def has_store(self) -> bool:

@@ -41,7 +41,7 @@ class Config(BaseModel):
             self._local.stack = []

         with self._lock:
-            current_registry = self._local.stack[-1] if self._local.stack else self.model_registry
+            current_registry = self._local.stack[-1] if self._local.stack else self.registry
             new_registry = current_registry.copy()
             new_registry.update(overrides)

@@ -52,7 +52,7 @@ class Config(BaseModel):
         self._local.stack.pop()

     def get_client_for(self, model_name: str) -> Optional[openai.Client]:
-        current_registry = self._local.stack[-1] if hasattr(self._local, 'stack') and self._local.stack else self.model_registry
+        current_registry = self._local.stack[-1] if hasattr(self._local, 'stack') and self._local.stack else self.registry
        client = current_registry.get(model_name)
        fallback = False
        if model_name not in current_registry.keys():

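For context: after this rename, custom clients are registered on ``config.registry``. A minimal sketch of registering a client for a custom model name, assuming ``ell.config`` is the module-level ``Config`` instance and using the OpenAI SDK's ``base_url`` parameter:

import openai
import ell

# Register a client for a model name ell doesn't know about, so LMPs
# using that name resolve to this client instead of the OpenAI default.
my_client = openai.Client(base_url="http://localhost:8000/v1", api_key="sk-local")
ell.config.register_model("my-local-model", my_client)
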
@@ -11,7 +11,7 @@ import time


 def main():
-    parser = ArgumentParser(description="ELL Studio Data Server")
+    parser = ArgumentParser(description="ell studio")
     parser.add_argument("--storage-dir", default=None,
                         help="Directory for filesystem serializer storage (default: current directory)")
     parser.add_argument("--pg-connection-string", default=None,

@@ -49,7 +49,7 @@ def main():
                 await app.notify_clients("database_updated")
             else:
                 # Use a threshold for time comparison to account for filesystem differences
-                time_threshold = 1  # 1 second threshold
+                time_threshold = 0.1  # 0.1 second threshold
                 time_changed = abs(current_stat.st_mtime - last_stat.st_mtime) > time_threshold
                 size_changed = current_stat.st_size != last_stat.st_size
                 inode_changed = current_stat.st_ino != last_stat.st_ino

@@ -34,7 +34,7 @@ def _warnings(model, fn, default_client_from_decorator):
     if not default_client_from_decorator:
         # Check to see if the model is registered and warn the user we're going to default to OpenAI.

-        if model not in config.model_registry:
+        if model not in config.registry:
             logger.warning(f"""{Fore.LIGHTYELLOW_EX}WARNING: Model `{model}` is used by LMP `{fn.__name__}` but no client could be found that supports `{model}`. Defaulting to use the OpenAI client `{config._default_openai_client}` for `{model}`. This is likely because you've spelled the model name incorrectly or are using a newer model from a provider added after this ell version was released.

* If this is a mistake either specify a client explicitly in the decorator:

@@ -50,5 +50,5 @@ or explicitly specify the client when the calling the LMP:
 ell.lm(model, client=my_client)(...)
 ```
 {Style.RESET_ALL}""")
-    elif (client_to_use := config.model_registry[model]) is None or not client_to_use.api_key:
+    elif (client_to_use := config.registry[model]) is None or not client_to_use.api_key:
         logger.warning(_no_api_key_warning(model, fn.__name__, client_to_use or '', long=False))

@@ -104,7 +104,7 @@ def model_usage_logger_pre(
     logger.info(f"Invoking LMP: {invoking_lmp.__name__} (hash: {lmp_hash[:8]})")

     print(f"{PIPE_COLOR}╔{'═' * (terminal_width - 2)}╗{RESET}")
-    print(f"{PIPE_COLOR}║ {color}{BOLD}{UNDERLINE}{invoking_lmp.__name__}{RESET}{color}({formatted_params}) # ({lmp_hash[:8]}...){RESET}")
+    print(f"{PIPE_COLOR}║ {color}{BOLD}{UNDERLINE}{invoking_lmp.__name__}{RESET}{color}({formatted_params}){RESET}")
     print(f"{PIPE_COLOR}╠{'═' * (terminal_width - 2)}╣{RESET}")
     print(f"{PIPE_COLOR}║ {BOLD}Prompt:{RESET}")
     print(f"{PIPE_COLOR}╟{'─' * (terminal_width - 2)}╢{RESET}")