---
title: "Python Quickstart"
description: "Get started with HumanLayer using Python and OpenAI"
icon: "python"
---
### Setup Environment
Create a working directory for the demo:
```bash
mkdir humanlayer-demo && cd humanlayer-demo
```
Install the required packages with pip:
```bash
pip install humanlayer openai
```
<AccordionGroup>
<Accordion icon="sparkles" title="Using uv (faster alternative)">
[uv](https://github.com/astral-sh/uv) is a fast Python package installer and resolver.
[Install uv](https://github.com/astral-sh/uv#installation):
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
Then use uv to install the packages:
```bash
uv init
uv add humanlayer openai
```
</Accordion>
<Accordion icon="terminal" title="Using a virtual environment">
We recommend using a virtual environment to manage your dependencies. Create and activate a virtual environment:
```bash
python -m venv env
source env/bin/activate # On Windows, use `env\Scripts\activate`
```
Then install the packages:
```bash
pip install humanlayer openai
```
</Accordion>
</AccordionGroup>
Next, get your OpenAI API key from the [OpenAI dashboard](https://platform.openai.com/api-keys).
```bash
export OPENAI_API_KEY=<your-openai-api-key>
```
Then set your HumanLayer API key:
<Accordion icon="video" title="Getting your HumanLayer API key">
Head over to the [HumanLayer dashboard](https://app.humanlayer.dev) and create
a new project to get your API key.
<Frame>
<img
src="https://www.humanlayer.dev/get-token.gif"
alt="HumanLayer Get Started"
/>
</Frame>
</Accordion>
```bash
export HUMANLAYER_API_KEY=<your-humanlayer-api-key>
```
### Create a simple agent
First, let's create a simple math agent that can perform basic arithmetic operations.
<Accordion icon="file" title="main.py">
```python
import json
import logging

from openai import OpenAI

PROMPT = "multiply 2 and 5, then add 32 to the result"


def add(x: int, y: int) -> int:
    """Add two numbers together."""
    return x + y


def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y


math_tools_map = {
    "add": add,
    "multiply": multiply,
}

math_tools_openai = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers together.",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "multiply two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
]

logger = logging.getLogger(__name__)


def run_chain(prompt: str, tools_openai: list[dict], tools_map: dict) -> str:
    client = OpenAI()
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools_openai,
        tool_choice="auto",
    )

    while response.choices[0].finish_reason != "stop":
        response_message = response.choices[0].message
        tool_calls = response_message.tool_calls
        if tool_calls:
            messages.append(response_message)  # extend conversation with assistant's reply
            logger.info(
                "last message led to %s tool calls: %s",
                len(tool_calls),
                [(tool_call.function.name, tool_call.function.arguments) for tool_call in tool_calls],
            )
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_to_call = tools_map[function_name]
                function_args = json.loads(tool_call.function.arguments)
                logger.info("CALL tool %s with %s", function_name, function_args)

                function_response_json: str
                try:
                    function_response = function_to_call(**function_args)
                    function_response_json = json.dumps(function_response)
                except Exception as e:
                    function_response_json = json.dumps(
                        {
                            "error": str(e),
                        }
                    )

                logger.info(
                    "tool %s responded with %s",
                    function_name,
                    function_response_json[:200],
                )
                messages.append(
                    {
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "name": function_name,
                        "content": function_response_json,
                    }
                )  # extend conversation with function response

        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools_openai,
        )

    return response.choices[0].message.content


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    result = run_chain(PROMPT, math_tools_openai, math_tools_map)
    print("\n\n----------Result----------\n\n")
    print(result)
```
</Accordion>
<Accordion icon="key" title="Using a .env file">
You can also use a `.env` file to store your keys:
```bash
echo "OPENAI_API_KEY=<your-openai-api-key>" >> .env
echo "HUMANLAYER_API_KEY=<your-humanlayer-api-key>" >> .env
```
Install the `python-dotenv` package:
```bash
pip install python-dotenv
```
then load the keys near the top of your script:
```python
from dotenv import load_dotenv
load_dotenv()
```
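If you want to fail fast when a key is missing, an optional sanity check (plain standard library, nothing HumanLayer-specific) can go right after `load_dotenv()`:
```python
import os

# optional: fail early if either key is missing from the environment
for key in ("OPENAI_API_KEY", "HUMANLAYER_API_KEY"):
    if not os.environ.get(key):
        raise RuntimeError(f"{key} is not set - check your .env file")
```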
</Accordion>
You can run this script with:
```bash
python main.py
```
Or, with uv:
```bash
uv run main.py
```
You should see output like:
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('multiply', '{"x":2,"y":5}')]
INFO:__main__:CALL tool multiply with {'x': 2, 'y': 5}
INFO:__main__:tool multiply responded with 10
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('add', '{"x":10,"y":32}')]
INFO:__main__:CALL tool add with {'x': 10, 'y': 32}
INFO:__main__:tool add responded with 42
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"

----------Result----------

The result of multiplying 2 and 5, and then adding 32, is 42.
```
Sure, not a particularly useful agent, but enough to get you started with tool calling.
### Add HumanLayer
Now, let's add a human approval step to the script to see how it works.
We'll wrap the multiply function with `@hl.require_approval()` so that the LLM has to get your approval before it can run the function.
At the top of the script, import HumanLayer and instantiate it:
```python
from humanlayer import HumanLayer

hl = HumanLayer(
    verbose=True,
    run_id="openai-math-example",
)
```
Then, wrap the multiply function with `@hl.require_approval()`:
```diff
def add(x: int, y: int) -> int:
    """Add two numbers together."""
    return x + y

+ @hl.require_approval()
def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y
```
<Accordion icon="file" title="full main.py script">
```python
import json
import logging

from openai import OpenAI

from humanlayer import HumanLayer

hl = HumanLayer(
    verbose=True,
    # run_id is optional - it can be used to identify the agent in approval history
    run_id="openai-math-example",
)

PROMPT = "multiply 2 and 5, then add 32 to the result"


def add(x: int, y: int) -> int:
    """Add two numbers together."""
    return x + y


@hl.require_approval()
def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y


math_tools_map = {
    "add": add,
    "multiply": multiply,
}

math_tools_openai = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers together.",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "multiply two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
]

logger = logging.getLogger(__name__)


def run_chain(prompt: str, tools_openai: list[dict], tools_map: dict) -> str:
    client = OpenAI()
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools_openai,
        tool_choice="auto",
    )

    while response.choices[0].finish_reason != "stop":
        response_message = response.choices[0].message
        tool_calls = response_message.tool_calls
        if tool_calls:
            messages.append(response_message)  # extend conversation with assistant's reply
            logger.info(
                "last message led to %s tool calls: %s",
                len(tool_calls),
                [(tool_call.function.name, tool_call.function.arguments) for tool_call in tool_calls],
            )
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_to_call = tools_map[function_name]
                function_args = json.loads(tool_call.function.arguments)
                logger.info("CALL tool %s with %s", function_name, function_args)

                function_response_json: str
                try:
                    function_response = function_to_call(**function_args)
                    function_response_json = json.dumps(function_response)
                except Exception as e:
                    function_response_json = json.dumps(
                        {
                            "error": str(e),
                        }
                    )

                logger.info(
                    "tool %s responded with %s",
                    function_name,
                    function_response_json[:200],
                )
                messages.append(
                    {
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "name": function_name,
                        "content": function_response_json,
                    }
                )  # extend conversation with function response

        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools_openai,
        )

    return response.choices[0].message.content


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    result = run_chain(PROMPT, math_tools_openai, math_tools_map)
    print("\n\n----------Result----------\n\n")
    print(result)
```
</Accordion>
Now, when you run the script, it will prompt you for approval before performing the multiply operation.
Since we enabled verbose mode, you'll see a note in the console:
```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('multiply', '{"x":2,"y":5}')]
INFO:__main__:CALL tool multiply with {'x': 2, 'y': 5}
HumanLayer: waiting for approval for multiply
```
Try rejecting the tool call with something like "use 7 instead of 5" to see how your feedback is passed into the agent.
<Accordion icon="video" title="Rejecting a tool call">
When you reject a tool call, HumanLayer will pass your feedback back to the agent, which can then adjust its approach based on your input.
<Frame>
<img src="https://www.humanlayer.dev/reject-tool.gif" alt="Rejecting a tool call" />
</Frame>
</Accordion>
You should see some more output:
```
HumanLayer: human denied multiply with message: use 7 instead of 5
INFO:__main__:tool multiply responded with "User denied multiply with message: use 7 instead of 5"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('multiply', '{"x":2,"y":7}')]
INFO:__main__:CALL tool multiply with {'x': 2, 'y': 7}
HumanLayer: waiting for approval for multiply
```
This time, you can approve the tool call.
<Accordion icon="video" title="Approving a tool call">
<Frame>
<img
src="https://www.humanlayer.dev/approve-tool.gif"
alt="Approving a tool call"
/>
</Frame>
</Accordion>
That's it! You've now added a human approval step to your tool-calling agent.
### Connect Slack or try an email approval
Head over to the integrations panel to [connect Slack as an approval channel](/channels/slack) and run the script again.
Or you can try an [email approval](/channels/email) by setting the contact channel in `@hl.require_approval()`.
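If you go the email route, the shape is roughly the sketch below. It assumes the SDK's `ContactChannel` and `EmailContactChannel` models and the `contact_channel` argument to `require_approval` (see the [email channel docs](/channels/email) for the exact fields); the address is a placeholder.
```python
from humanlayer import ContactChannel, EmailContactChannel

# placeholder inbox - replace with whoever should approve these calls
approver = ContactChannel(
    email=EmailContactChannel(address="approvals@example.com"),
)


@hl.require_approval(contact_channel=approver)
def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y
```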
### Integrate your favorite Agent Framework
HumanLayer is designed to work with any LLM agent framework. See [Frameworks](/integrations) for more information.