docs and examples updates for vercel, chainlit, and more

- Add [quickstart guide for using the humanlayer typescript sdk](https://humanlayer.dev/docs/quickstart-typescript)
- Add [guide for using Function Calls for Classification](https://humanlayer.dev/docs/core/classifications) with human-in-the-loop
- Add [framework guide for using humanlayer with chainlit](https://humanlayer.dev/docs/frameworks/chainlit)
- Add [framework guide for using humanlayer with the vercel ai sdk](https://humanlayer.dev/docs/frameworks/vercel-ai-sdk)
- Update [humanlayer-ts readme](https://github.com/humanlayer/humanlayer-ts) to include quickstart guide

- Add [example of a fullstack chat app with nextjs and the vercel ai sdk](https://github.com/humanlayer/humanlayer/tree/main/examples/vercel_ai_nextjs)
- Simplify the [chainlit example](https://github.com/humanlayer/humanlayer/tree/main/examples/chainlit) by using `AsyncHumanLayer`
- Update [langchain email example](https://github.com/humanlayer/humanlayer/tree/main/examples/langchain/09-email-contact.py) to showcase the use of custom jinja templates for email payloads
dexhorthy
2025-02-09 20:20:38 -08:00
parent 50a16ad7b8
commit 021149207b
25 changed files with 4170 additions and 52 deletions


@@ -1,3 +1,21 @@
## 0.7.5-alpha
(pre-release notes)
### Documentation
- Add [quickstart guide for using the humanlayer typescript sdk](https://humanlayer.dev/docs/quickstart-typescript)
- Add [guide for using Function Calls for Classification](https://humanlayer.dev/docs/core/classifications) with human-in-the-loop
- Add [framework guide for using humanlayer with chainlit](https://humanlayer.dev/docs/frameworks/chainlit)
- Add [framework guide for using humanlayer with the vercel ai sdk](https://humanlayer.dev/docs/frameworks/vercel-ai-sdk)
- Update [humanlayer-ts readme](https://github.com/humanlayer/humanlayer-ts) to include quickstart guide
### Examples
- Add [example of a fullstack chat app with nextjs and the vercel ai sdk](https://github.com/humanlayer/humanlayer/tree/main/examples/vercel_ai_nextjs)
- Simplify the [chainlit example](https://github.com/humanlayer/humanlayer/tree/main/examples/chainlit) by using `AsyncHumanLayer`
- Update [langchain email example](https://github.com/humanlayer/humanlayer/tree/main/examples/langchain/09-email-contact.py) to showcase the use of custom jinja templates for email payloads
## 0.7.4
- Add [v1beta2 webhook payload](https://humanlayer.dev/docs/core/response-webhooks) types to ts and python sdks


@@ -1,8 +0,0 @@
---
title: "Classifications"
description: "Agent-to-human outreach to collect or confirm structured data labels"
---
## Private Beta
Classifications are currently in private beta. To request access, please contact us at [contact@humanlayer.dev](mailto:contact@humanlayer.dev).


@@ -18,13 +18,6 @@ description: "API Reference for Humanlayer Agent Endpoints"
>
Generic agent-to-human outreach for help or feedback
</Card>
<Card
title="Classifications"
icon="tag"
href="/api-reference/classifications"
>
Agent-to-human outreach to collect structured data labels
</Card>
</CardGroup>
## Authentication


@@ -0,0 +1,119 @@
---
title: "Classifications"
description: "Agent-to-human outreach to collect or confirm structured data labels"
icon: "tags"
---
## Overview
Classifications enable agents to collect structured data labels from humans. Common use cases:
- Content moderation
- Sentiment analysis
- Support ticket triage
- Data validation
- Training data collection
## Basic Example
```typescript
import { humanlayer, ResponseOption } from "humanlayer";

// Initialize with descriptive run ID
const hl = humanlayer({ runId: "email-classifier" });

// Define clear, mutually exclusive options
const priorities = [
  new ResponseOption({
    name: "urgent",
    title: "Urgent",
    description: "Requires immediate attention (SLA < 1 hour)",
    promptFill: "This is urgent because...",
  }),
  new ResponseOption({
    name: "high",
    title: "High Priority",
    description: "Important but not time-critical (SLA < 24 hours)",
  }),
  new ResponseOption({
    name: "normal",
    title: "Normal",
    description: "Standard priority (SLA < 72 hours)",
  }),
];

// Create classification tool
const classifyTicket = hl.humanAsTool({
  responseOptions: priorities,
});

// Use in your agent
const priority = await classifyTicket(
  "Please classify the priority of this support ticket:\n\n" +
    "Subject: Service Down\n" +
    "Message: Our production API is returning 500 errors",
);

console.log(`Ticket classified as: ${priority}`);
```
## Synchronous vs Async
### Synchronous (Default)
Agent waits for human input:
```typescript
const result = await classifyEmail("Please classify this email's sentiment");
```
### Asynchronous
Agent continues while waiting for classification:
```typescript
// Request classification
const callId = await classifyEmail.createRequest(
  "Please classify this email's sentiment",
);
// Continue processing other tasks...
// Later, fetch the result
const result = await classifyEmail.getResponse(callId);
```
## Best Practices
1. **Clear Options**
   - Use mutually exclusive categories
   - Include specific examples in descriptions
   - Set clear criteria for each option
2. **Structured Feedback**
   - Use `promptFill` for consistent responses
   - Guide humans to provide specific details
   - Collect rationale for important decisions
3. **Quality Control**
   - Consider multiple reviewers for critical data
   - Track inter-rater agreement
   - Use `runId` to group related classifications
4. **Efficient Workflows**
   - Batch similar items together
   - Use async mode for large volumes (see the sketch below)
   - Provide sufficient context in prompts
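For high volumes, the async API shown earlier can fan out a batch of requests and collect results later. A minimal sketch, assuming `classifyEmail` is created with `hl.humanAsTool(...)` and that `createRequest`/`getResponse` behave as described above:
```typescript
import { humanlayer } from "humanlayer";

const hl = humanlayer({ runId: "email-classifier" });
const classifyEmail = hl.humanAsTool();

// Fan out a batch of classification requests, then gather the responses.
async function classifyBatch(emails: string[]): Promise<string[]> {
  // Kick off every request without waiting for humans to respond
  const callIds = await Promise.all(
    emails.map((email) =>
      classifyEmail.createRequest(
        `Please classify this email's sentiment:\n\n${email}`,
      ),
    ),
  );
  // Collect the results once each classification comes back
  return Promise.all(callIds.map((id) => classifyEmail.getResponse(id)));
}
```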
## Next Steps
- [Configure contact channels](/channels/introduction)
- [Customize response options](/core/customize-response-options)
- [See email classifier example](https://github.com/humanlayer/humanlayer/tree/main/examples/ts_email_classifier)
## Private Beta
Classifications are currently in private beta. To request access, please contact us at [contact@humanlayer.dev](mailto:contact@humanlayer.dev).


@@ -235,6 +235,30 @@ Core concepts around contact channels:
- Components handle their own authentication flow
- Minimal configuration required in parent components
- Keep token management internal to components where possible
### Async Framework Integration
- Use AsyncHumanLayer for async frameworks (FastAPI, Chainlit, etc.)
- All HumanLayer methods become async (create_function_call, get_function_call, etc.)
- No need for make_async wrappers or other async adapters
- Polling loops should use framework-specific sleep functions (e.g., cl.sleep for Chainlit)
### Vercel AI SDK Integration
- Use raw JSON schema for tool parameters instead of zod
- Tools should be defined with parameters in OpenAI function format
- Streaming responses require OpenAIStream and StreamingTextResponse from 'ai'
- Tool execution should be async and return strings
- Tool definitions don't use zod schemas directly, convert to JSON schema format
- For injecting messages during tool calls (see the sketch after this list):
  - Use TransformStream to modify the stream
  - Add newlines around injected messages for clean separation
  - Track first chunk if special handling is needed
  - Use TextEncoder for converting messages to stream format
  - Return text-delta type chunks for proper streaming
  - Inject messages after the original chunk to maintain flow
- Authentication handled at multiple levels:
  - JWT token generation in framework-specific auth endpoints
  - Signing key configuration in HumanLayer dashboard
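A minimal sketch of the injection pattern above (the chunk shape follows the AI SDK's text-delta convention; the helper name is illustrative):
```typescript
// Inject a one-time status message into a stream of text-delta chunks.
type TextDeltaChunk = { type: "text-delta"; textDelta: string };

function withInjectedMessage(message: string) {
  let injected = false; // track the first chunk so we only inject once
  return new TransformStream<TextDeltaChunk, TextDeltaChunk>({
    transform(chunk, controller) {
      // Pass the original chunk through first to maintain flow
      controller.enqueue(chunk);
      if (!injected) {
        injected = true;
        // Inject the message as its own text-delta, padded with newlines
        controller.enqueue({ type: "text-delta", textDelta: `\n${message}\n` });
      }
    },
  });
}
```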


@@ -0,0 +1,216 @@
---
title: "Chainlit"
description: "Use Humanlayer with Chainlit"
icon: "message-bot"
---
## Overview
[Chainlit](https://github.com/Chainlit/chainlit) is a Python framework for building chat interfaces. HumanLayer adds human oversight to your AI applications.
## Installation
```bash
pip install humanlayer chainlit openai python-dotenv
```
## Complete Example
Create a customer service chat interface with human oversight:
```python
import json
import logging

from openai import AsyncOpenAI
from humanlayer import AsyncHumanLayer, FunctionCallSpec
import chainlit as cl
from dotenv import load_dotenv

load_dotenv()

hl = AsyncHumanLayer(verbose=True)


# Functions that don't require approval
@cl.step(type="tool")
async def fetch_active_orders(email: str) -> list[dict]:
    """Fetch active orders."""
    return [{"order_id": "123", "status": "active", "amount": 100}]


# Functions requiring approval
@cl.step(type="tool")
async def reimburse_order(order_id: str, reason: str) -> str:
    """Process a refund with human approval"""
    call = await hl.create_function_call(
        spec=FunctionCallSpec(
            fn="reimburse_order",
            kwargs={"order_id": order_id, "reason": reason},
        ),
    )
    with cl.Step(name="Checking with Human") as step:
        while (not call.status) or (call.status.approved is None):
            await cl.sleep(3)
            call = await hl.get_function_call(call_id=call.call_id)
        if call.status.approved:
            # Reimbursement logic here
            function_response_json = json.dumps(True)
        else:
            function_response_json = json.dumps(
                {"error": f"call {call.spec.fn} not approved, comment was {call.status.comment}"}
            )
        step.output = function_response_json
    return function_response_json


# Define available tools
tools_map = {
    "reimburse_order": reimburse_order,
    "fetch_active_orders": fetch_active_orders,
}

# Define OpenAI function schemas
tools_openai = [
    {
        "type": "function",
        "function": {
            "name": "reimburse_order",
            "description": "Reimburses an order.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string"},
                    "reason": {"type": "string"},
                },
                "required": ["order_id", "reason"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "fetch_active_orders",
            "description": "Fetches active orders using the user's email.",
            "parameters": {
                "type": "object",
                "properties": {
                    "email": {"type": "string"},
                },
            },
        },
    },
]

logger = logging.getLogger(__name__)


async def run_chain(messages: list[dict], tools_openai: list[dict], tools_map: dict) -> str:
    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        tools=tools_openai,
        tool_choice="auto",
    )
    while response.choices[0].finish_reason != "stop":
        response_message = response.choices[0].message
        tool_calls = response_message.tool_calls
        if tool_calls:
            messages.append(response_message)
            logger.info("last message led to %s tool calls", len(tool_calls))
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_to_call = tools_map[function_name]
                function_args = json.loads(tool_call.function.arguments)
                logger.info("CALL tool %s with %s", function_name, function_args)

                function_response_json: str
                try:
                    function_response = await function_to_call(**function_args)
                    function_response_json = json.dumps(function_response)
                except Exception as e:
                    function_response_json = json.dumps({"error": str(e)})

                logger.info("tool %s responded with %s", function_name, function_response_json[:200])
                messages.append(
                    {
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "name": function_name,
                        "content": function_response_json,
                    }
                )
        response = await client.chat.completions.create(
            model="gpt-4",
            temperature=0.3,
            messages=messages,
            tools=tools_openai,
        )
    return response.choices[0].message.content


@cl.on_chat_start
def start_chat():
    cl.user_session.set(
        "message_history",
        [
            {
                "role": "system",
                "content": "You are a helpful assistant. If the user asks for anything that requires order information, you should use the fetch_active_orders tool first.",
            }
        ],
    )


@cl.on_message
async def on_message(message: cl.Message):
    message_history = cl.user_session.get("message_history")
    message_history.append({"role": "user", "content": message.content})
    logging.basicConfig(level=logging.INFO)
    result = await run_chain(message_history, tools_openai, tools_map)
    await cl.Message(content=result).send()
```
## Configuration
```bash
# .env
OPENAI_API_KEY=your-openai-key
HUMANLAYER_API_KEY=your-humanlayer-key
```
## Key Features
1. **Human Oversight**
   - Require approval for sensitive operations (reimburse_order)
   - Safe read-only operations (fetch_active_orders)
   - Native async/await support with AsyncHumanLayer
2. **Tool Integration**
   - OpenAI function calling
   - Chainlit steps for visibility
   - Error handling and logging
3. **State Management**
   - Message history tracking
   - User session management
   - Async/await support
## Running the Example
1. Set up environment variables in `.env`
2. Run the app:
```bash
chainlit run app.py
```
## Next Steps
- [Configure contact channels](/channels/introduction)
- [Customize response options](/core/customize-response-options)
- [See complete example](https://github.com/humanlayer/humanlayer/tree/main/examples/chainlit)


@@ -0,0 +1,269 @@
---
title: "Vercel AI SDK"
description: "Use Humanlayer with Vercel AI SDK"
icon: "bolt"
---
## Overview
The [Vercel AI SDK](https://sdk.vercel.ai/docs) enables streaming AI responses in Next.js applications. HumanLayer adds human oversight to your AI features.
## Installation
```bash
npm install humanlayer humanlayer-vercel-ai-sdk ai zod
```
## Basic Example
Here's a simple example showing how to use HumanLayer with the Vercel AI SDK:
```typescript
import { tool, generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { humanlayer } from "humanlayer-vercel-ai-sdk";
import { z } from "zod";

const hl = humanlayer({
  verbose: true,
});

// Simple math operations
const add = tool({
  parameters: z.object({
    a: z.number(),
    b: z.number(),
  }),
  execute: async (args) => {
    return args.a + args.b;
  },
});

// Multiply requires approval
const multiplyTool = tool({
  parameters: z.object({
    a: z.number(),
    b: z.number(),
  }),
  execute: async (args) => {
    return args.a * args.b;
  },
});

// Wrap multiply with approval requirement
const multiply = hl.requireApproval({ multiplyTool });

// Human consultation tool
const contactHuman = hl.humanAsTool();

const openai = createOpenAI({
  compatibility: "strict",
});

// Generate text with tool access
const { text, steps } = await generateText({
  model: openai("gpt-4"),
  tools: {
    add,
    multiply,
    contactHuman,
  },
  maxSteps: 5,
  prompt: "multiply 2 and 5, then add 32 to the result",
});
```
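With `maxSteps` greater than 1, `generateText` keeps invoking tools until the model produces a final answer, so this prompt triggers `multiply` (pausing for human approval) before `add`; the returned `steps` array holds the intermediate tool calls and results if you want to inspect them.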
## Complete Next.js Example
### API Route (app/api/chat/route.ts)
```typescript
import { humanlayer } from "humanlayer";
import { StreamingTextResponse, LangChainStream } from "ai";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { AIMessage, HumanMessage, SystemMessage } from "langchain/schema";

// Initialize with contact channel
const hl = humanlayer({
  runId: "support-chat",
  contactChannel: {
    slack: {
      channelOrUserId: "C123456",
      contextAboutChannelOrUser: "the support team",
    },
  },
});

// Functions requiring approval
const updateSubscription = hl.requireApproval(
  async (userId: string, plan: string) => {
    // Subscription logic here
    return `Updated ${userId} to ${plan}`;
  },
);

const issueCredit = hl.requireApproval(
  async (userId: string, amount: number) => {
    // Credit logic here
    return `Issued $${amount} credit to ${userId}`;
  },
);

// Support team consultation
const askSupport = hl.humanAsTool({
  responseOptions: [
    {
      name: "approve",
      title: "Approve Request",
      description: "Grant the customer's request",
    },
    {
      name: "deny",
      title: "Deny Request",
      description: "Deny with explanation",
    },
    {
      name: "escalate",
      title: "Escalate",
      description: "Escalate to management",
    },
  ],
});

export async function POST(req: Request) {
  const { messages, userId } = await req.json();
  const { stream, handlers } = LangChainStream();

  const llm = new ChatOpenAI({
    streaming: true,
    callbacks: [handlers],
  });

  llm.call(
    [
      new SystemMessage("You are a helpful customer support assistant."),
      ...messages.map((m: any) =>
        m.role === "user"
          ? new HumanMessage(m.content)
          : new AIMessage(m.content),
      ),
    ],
    {
      tools: [
        {
          name: "updateSubscription",
          description: "Update a user's subscription plan",
          parameters: {
            type: "object",
            properties: {
              userId: { type: "string" },
              plan: { type: "string" },
            },
            required: ["userId", "plan"],
          },
        },
        {
          name: "issueCredit",
          description: "Issue account credit to user",
          parameters: {
            type: "object",
            properties: {
              userId: { type: "string" },
              amount: { type: "number" },
            },
            required: ["userId", "amount"],
          },
        },
        {
          name: "askSupport",
          description: "Ask support team for help",
          parameters: {
            type: "object",
            properties: {
              message: { type: "string" },
            },
            required: ["message"],
          },
        },
      ],
    },
  );

  return new StreamingTextResponse(stream);
}
```
### Client Component (app/page.tsx)
```typescript
"use client";
import { useChat } from "ai/react";
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
<div className="p-4 max-w-lg mx-auto">
<div className="space-y-4">
{messages.map(m => (
<div key={m.id} className={`p-4 rounded-lg ${
m.role === "user" ? "bg-blue-100" : "bg-gray-100"
}`}>
<p className="font-semibold">{m.role}</p>
<p>{m.content}</p>
</div>
))}
</div>
<form onSubmit={handleSubmit} className="mt-4">
<input
value={input}
onChange={handleInputChange}
placeholder="How can I help?"
className="w-full p-2 border rounded"
/>
<button
type="submit"
className="mt-2 px-4 py-2 bg-blue-500 text-white rounded"
>
Send
</button>
</form>
</div>
);
}
```
## Key Features
1. **Streaming Responses**
   - Real-time AI output
   - Responsive UI
   - Error handling
2. **Human Oversight**
   - Approval workflows
   - Support team consultation
   - Structured responses
3. **Type Safety**
   - Full TypeScript support
   - Validated parameters
   - Error boundaries
## Configuration
```bash
# .env.local
OPENAI_API_KEY=your-openai-key
HUMANLAYER_API_KEY=your-humanlayer-key
```
## Next Steps
- [Configure contact channels](/channels/introduction)
- [Customize response options](/core/customize-response-options)
- [See complete example](https://github.com/humanlayer/humanlayer/tree/main/examples/ts_vercel_ai_sdk)


@@ -64,7 +64,8 @@
"core/run-ids-and-call-ids",
"core/agent-webhooks",
"core/response-webhooks",
"core/state-management"
"core/state-management",
"core/classifications"
]
},
{
@@ -83,7 +84,9 @@
"frameworks/open-ai",
"frameworks/langchain",
"frameworks/crewai",
"frameworks/controlflow"
"frameworks/controlflow",
"frameworks/chainlit",
"frameworks/vercel-ai-sdk"
]
},
{
@@ -91,8 +94,7 @@
"pages": [
"api-reference/introduction",
"api-reference/function-calls",
"api-reference/human-contacts",
"api-reference/classifications"
"api-reference/human-contacts"
]
}
],


@@ -6,6 +6,111 @@ icon: "npm"
HumanLayer has [a dedicated client for TypeScript](https://github.com/humanlayer/humanlayer/tree/main/humanlayer-ts).
## Installation
Install the HumanLayer TypeScript SDK:
```bash
npm install humanlayer openai
```
## Basic Example
Here's a minimal example using HumanLayer with OpenAI; the full code is available in the [humanlayer repo](https://github.com/humanlayer/humanlayer/tree/main/examples/ts_openai_client).
### Configuration
Set your API keys in your environment:
```bash
export OPENAI_API_KEY=your-openai-key
export HUMANLAYER_API_KEY=your-humanlayer-key
```
### Basic TS Agent
This basic example wraps an email-sending function with `requireApproval` and exposes it to an OpenAI chat completion as a tool.
```typescript
import { humanlayer } from "humanlayer";
import OpenAI from "openai";

// Initialize clients
const hl = humanlayer({ runId: "quickstart", verbose: true });
const openai = new OpenAI();

// Define a function that requires approval
const sendEmail = hl.requireApproval(
  async (to: string, subject: string, body: string) => {
    // Your email sending logic here
    return `Email sent to ${to}`;
  },
);

// Use in an OpenAI chat completion
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Send a welcome email to new@example.com" },
];

const completion = await openai.chat.completions.create({
  messages,
  model: "gpt-3.5-turbo",
  tools: [
    {
      type: "function",
      function: {
        name: "sendEmail",
        description: "Send an email to a user",
        parameters: {
          type: "object",
          properties: {
            to: { type: "string", description: "Recipient email" },
            subject: { type: "string", description: "Subject line" },
            body: { type: "string", description: "Email content" },
          },
          required: ["to", "subject", "body"],
        },
      },
    },
  ],
});

// Handle tool calls
const message = completion.choices[0].message;
if (message.tool_calls) {
  for (const toolCall of message.tool_calls) {
    if (toolCall.type === "function") {
      const args = JSON.parse(toolCall.function.arguments);
      await sendEmail(args.to, args.subject, args.body);
    }
  }
}
```
## Contact Channels
Configure how approvals are requested:
```typescript
import { humanlayer, ContactChannel, SlackContactChannel } from "humanlayer";

const hl = humanlayer({
  runId: "quickstart",
  contactChannel: new ContactChannel({
    slack: new SlackContactChannel({
      channelOrUserId: "C123456",
      contextAboutChannelOrUser: "the compliance team",
    }),
  }),
});
```
## Next Steps
- Learn about [require_approval](/core/require-approval)
- Configure [contact channels](/channels/slack)
- Explore [response options](/core/customize-response-options)
- See more [TypeScript examples](https://github.com/humanlayer/humanlayer/tree/main/examples#typescript-examples)
- Try the [complete quickstart example](https://github.com/humanlayer/humanlayer/tree/main/examples/ts_openai_client/02-human-as-tool.ts)


@@ -1,31 +1,42 @@
 import json
 import logging
 from openai import AsyncOpenAI
-from humanlayer import HumanLayer, FunctionCallSpec
+from humanlayer import AsyncHumanLayer, FunctionCallSpec
 import chainlit as cl
 from dotenv import load_dotenv
-import os, time
 load_dotenv()
-hl = HumanLayer.cloud(verbose=True)
+hl = AsyncHumanLayer(verbose=True)
-# add can be called without approval
+# Functions that don't require approval
 @cl.step(type="tool")
-async def fetch_active_orders(email: str) -> int:
+async def fetch_active_orders(email: str) -> list[dict]:
     """Fetch active orders."""
-    return [1]
+    return [
+        {
+            "order_id": "123",
+            "status": "active",
+            "amount": 100,
+            "created_at": "2021-01-01",
+            "updated_at": "2021-01-01",
+        },
+        {
+            "order_id": "456",
+            "status": "cancelled",
+            "amount": 200,
+            "created_at": "2021-01-01",
+            "updated_at": "2021-01-01",
+        },
+    ]
+# Functions requiring approval
 @cl.step(type="tool")
-async def reimburse_order(order_id, reason) -> int:
-    import uuid
-    call = await cl.make_async(hl.create_function_call)(
+async def reimburse_order(order_id, reason) -> str:
+    """Process a refund with human approval"""
+    call = await hl.create_function_call(
         spec=FunctionCallSpec(
             fn="reimburse_order",
             kwargs={"order_id": order_id, "reason": reason},
@@ -34,29 +45,33 @@ async def reimburse_order(order_id, reason) -> int:
     with cl.Step(name="Checking with Human") as step:
         while (not call.status) or (call.status.approved is None):
             await cl.sleep(3)
-            call = hl.get_function_call(call_id=call.call_id)
+            call = await hl.get_function_call(call_id=call.call_id)
     if call.status.approved:
-        # some reimbursement logic here
-        function_response_json = json.dumps(True)
+        # Reimbursement logic here
+        function_response_json = json.dumps("Reimbursement approved")
+        step.output = function_response_json
+        return function_response_json
     else:
         function_response_json = json.dumps(
             {"error": f"call {call.spec.fn} not approved, comment was {call.status.comment}"}
         )
-    step.output = function_response_json
-    return function_response_json
+        step.output = "Reimbursement not approved"
+        return function_response_json
-math_tools_map = {
+# Define available tools
+tools_map = {
     "reimburse_order": reimburse_order,
     "fetch_active_orders": fetch_active_orders,
 }
-math_tools_openai = [
+# Define OpenAI function schemas
+tools = [
     {
         "type": "function",
         "function": {
             "name": "reimburse_order",
-            "description": "Reimburses an order.",
+            "description": "Reimburses an order, ensure you get a reason from the human before calling.",
             "parameters": {
                 "type": "object",
                 "properties": {
@@ -88,7 +103,7 @@ logger = logging.getLogger(__name__)
 async def run_chain(messages: list[dict], tools_openai: list[dict], tools_map: dict) -> str:
     client = AsyncOpenAI()
     response = await client.chat.completions.create(
-        model="gpt-4o",
+        model="gpt-4",
         messages=messages,
         tools=tools_openai,
         tool_choice="auto",
@@ -98,7 +113,7 @@ async def run_chain(messages: list[dict], tools_openai: list[dict], tools_map: d
         response_message = response.choices[0].message
         tool_calls = response_message.tool_calls
         if tool_calls:
-            messages.append(response_message)  # extend conversation with assistant's reply
+            messages.append(response_message)
             logger.info("last message led to %s tool calls", len(tool_calls))
             for tool_call in tool_calls:
                 function_name = tool_call.function.name
@@ -125,9 +140,9 @@ async def run_chain(messages: list[dict], tools_openai: list[dict], tools_map: d
                         "name": function_name,
                         "content": function_response_json,
                     }
-                )  # extend conversation with function response
+                )
         response = await client.chat.completions.create(
-            model="gpt-4o",
+            model="gpt-4",
             temperature=0.3,
             messages=messages,
             tools=tools_openai,
@@ -140,11 +155,10 @@ async def run_chain(messages: list[dict], tools_openai: list[dict], tools_map: d
 def start_chat():
     cl.user_session.set(
         "message_history",
-        # could pass user email as a parameter
         [
             {
                 "role": "system",
-                "content": f"You are a helpful assistant, helping john@gmail.com. If the user asks for anything that requires order information, you should use the fetch_active_orders tool first.",
+                "content": "You are a helpful assistant. If the user asks for anything that requires order information, you should use the fetch_active_orders tool first.",
             }
         ],
     )
@@ -157,5 +171,5 @@ async def on_message(message: cl.Message):
     logging.basicConfig(level=logging.INFO)
-    result = await run_chain(message_history, math_tools_openai, math_tools_map)
+    result = await run_chain(message_history, tools, tools_map)
     await cl.Message(content=result).send()


@@ -0,0 +1,104 @@
aiofiles==23.2.1
aiohappyeyeballs==2.4.6
aiohttp==3.11.12
aiosignal==1.3.2
annotated-types==0.7.0
anyio==4.8.0
asyncer==0.0.7
attrs==25.1.0
backports.tarfile==1.2.0
bidict==0.23.1
build==1.2.2.post1
CacheControl==0.14.2
certifi==2025.1.31
cffi==1.17.1
chainlit==2.2.0
charset-normalizer==3.4.1
chevron==0.14.0
cleo==2.1.0
click==8.1.8
crashtest==0.4.1
dataclasses-json==0.6.7
Deprecated==1.2.18
distlib==0.3.9
distro==1.9.0
dulwich==0.22.7
fastapi==0.115.8
fastjsonschema==2.21.1
filelock==3.17.0
filetype==1.2.0
frozenlist==1.5.0
googleapis-common-protos==1.66.0
grpcio==1.70.0
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
humanlayer==0.7.4
idna==3.10
importlib_metadata==8.5.0
installer==0.7.0
jaraco.classes==3.4.0
jaraco.context==6.0.1
jaraco.functools==4.1.0
jiter==0.8.2
keyring==25.6.0
Lazify==0.4.0
literalai==0.1.103
marshmallow==3.26.1
more-itertools==10.6.0
msgpack==1.1.0
multidict==6.1.0
mypy-extensions==1.0.0
nest-asyncio==1.6.0
openai==1.61.1
opentelemetry-api==1.29.0
opentelemetry-exporter-otlp==1.29.0
opentelemetry-exporter-otlp-proto-common==1.29.0
opentelemetry-exporter-otlp-proto-grpc==1.29.0
opentelemetry-exporter-otlp-proto-http==1.29.0
opentelemetry-instrumentation==0.50b0
opentelemetry-proto==1.29.0
opentelemetry-sdk==1.29.0
opentelemetry-semantic-conventions==0.50b0
packaging==24.2
pkginfo==1.12.0
platformdirs==4.3.6
poetry==2.0.1
poetry-core==2.0.1
propcache==0.2.1
protobuf==5.29.3
pycparser==2.22
pydantic==2.10.6
pydantic_core==2.27.2
PyJWT==2.10.1
pyproject_hooks==1.2.0
python-dotenv==1.0.1
python-engineio==4.11.2
python-multipart==0.0.18
python-slugify==8.0.4
python-socketio==5.12.1
RapidFuzz==3.12.1
requests==2.32.3
requests-toolbelt==1.0.0
shellingham==1.5.4
simple-websocket==1.1.0
sniffio==1.3.1
starlette==0.41.3
syncer==2.0.3
text-unidecode==1.3
tomli==2.2.1
tomlkit==0.13.2
tqdm==4.67.1
trove-classifiers==2025.1.15.22
typing-inspect==0.9.0
typing_extensions==4.12.2
uptrace==1.29.0
urllib3==2.3.0
uvicorn==0.34.0
virtualenv==20.29.1
watchfiles==0.20.0
wrapt==1.17.2
wsproto==1.2.0
xattr==1.1.4
yarl==1.18.3
zipp==3.21.0


@@ -20,12 +20,143 @@ hl = HumanLayer(
email=EmailContactChannel(
address=os.getenv("HL_EXAMPLE_CONTACT_EMAIL", "dexter@humanlayer.dev"),
context_about_user="the user you are helping",
template="""
<!DOCTYPE html>
<html>
<head>
<style>
body {
font-family: Arial, sans-serif;
line-height: 1.6;
color: #333;
max-width: 800px;
margin: 0 auto;
padding: 20px;
background-color: #faf5ff;
}
.greeting {
font-size: 1.2em;
color: #6b46c1;
border-bottom: 2px solid #9f7aea;
padding-bottom: 8px;
display: inline-block;
}
.content {
margin: 20px 0;
background-color: white;
padding: 20px;
border-radius: 8px;
border-left: 4px solid #805ad5;
}
.signature {
color: #553c9a;
font-style: italic;
text-shadow: 1px 1px 2px rgba(107, 70, 193, 0.1);
}
</style>
</head>
<body>
<div class="content">
{{event.spec.msg}}
</div>
</body>
</html>
""",
)
),
# run_id is optional - it can be used to identify the agent in approval history
run_id=run_id,
)
approval_channel = ContactChannel(
email=EmailContactChannel(
address=os.getenv("HL_EXAMPLE_CONTACT_EMAIL", "dexter@humanlayer.dev"),
context_about_user="the user you are helping",
template="""
<!DOCTYPE html>
<html>
<head>
<style>
body {
font-family: Arial, sans-serif;
line-height: 1.6;
color: #333;
max-width: 800px;
margin: 0 auto;
padding: 20px;
background-color: #f8f4ff;
}
.header {
background: linear-gradient(135deg, #6b46c1, #805ad5);
color: white;
padding: 20px;
border-radius: 8px;
margin-bottom: 30px;
box-shadow: 0 4px 6px rgba(107, 70, 193, 0.2);
}
.function-name {
font-size: 1.5em;
font-weight: bold;
margin-bottom: 10px;
text-shadow: 1px 1px 2px rgba(0,0,0,0.2);
}
.parameters {
background-color: white;
padding: 20px;
border-radius: 8px;
box-shadow: 0 2px 8px rgba(107, 70, 193, 0.15);
border-left: 4px solid #9f7aea;
}
.param-row {
display: flex;
padding: 10px 0;
border-bottom: 1px solid #e9d8fd;
transition: background-color 0.2s;
}
.param-row:hover {
background-color: #faf5ff;
}
.param-key {
font-weight: bold;
width: 200px;
color: #553c9a;
}
.param-value {
flex: 1;
color: #2d3748;
}
.signature {
margin-top: 30px;
color: #6b46c1;
font-style: italic;
border-top: 2px solid #9f7aea;
padding-top: 15px;
}
</style>
</head>
<body>
<div class="header">
<div class="function-name">Function Call: {{event.spec.fn}}</div>
</div>
<div class="parameters">
{% for key, value in event.spec.kwargs.items() %}
<div class="param-row">
<div class="param-key">{{key}}</div>
<div class="param-value">{{value}}</div>
</div>
{% endfor %}
</div>
<div class="signature">
Best regards,<br>
Your Assistant
</div>
</body>
</html>
""",
)
)
task_prompt = f"""
You are the ceo's assistant, he contacts you via email with various tasks.
@@ -84,7 +215,7 @@ def get_linear_assignees() -> Any:
]
-@hl.require_approval()
+@hl.require_approval(contact_channel=approval_channel)
def create_linear_ticket(title: str, assignee: str, description: str, project: str, due_date: str) -> str:
"""create a ticket in linear"""
if project != "operations":


@@ -0,0 +1,8 @@
# OpenAI API Key - https://platform.openai.com/account/api-keys
OPENAI_API_KEY=
# HumanLayer API Key - https://app.humanlayer.dev
HUMANLAYER_API_KEY=
# Optional: Override HumanLayer API Base URL
# HUMANLAYER_API_BASE=https://api.humanlayer.dev/humanlayer/v1

examples/ts_vercel_ai_nextjs/.gitignore (new vendored file, 30 lines)

@@ -0,0 +1,30 @@
# dependencies
/node_modules
/.pnp
.pnp.js
# testing
/coverage
# next.js
/.next/
/out/
# production
/build
# misc
.DS_Store
*.pem
# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# local env files
.env*.local
# typescript
*.tsbuildinfo
next-env.d.ts


@@ -0,0 +1,50 @@
# HumanLayer Next.js Chat Example
This example shows how to use HumanLayer with the Vercel AI SDK in a Next.js application to create a chat interface with human oversight.
## Features
- Real-time streaming responses
- Human approval for sensitive operations
- Order lookup with `fetch_active_orders`
- Refunds gated on approval with `reimburse_order`
## Getting Started
1. Install dependencies:
```bash
npm install
```
2. Configure environment variables:
```bash
cp .env.local.example .env.local
```
Then edit `.env.local` with your API keys:
- `OPENAI_API_KEY`: Your OpenAI API key
- `HUMANLAYER_API_KEY`: Your HumanLayer API key
3. Run the development server:
```bash
npm run dev
```
4. Open [http://localhost:3000](http://localhost:3000) in your browser.
## How It Works
- The chat interface uses the Vercel AI SDK's `useChat` hook for streaming responses (see the sketch below)
- Sensitive operations (refunds via `reimburse_order`) require human approval
- Read-only operations (`fetch_active_orders`) run without approval
- All human interactions are managed through HumanLayer's contact channels
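A minimal sketch of that client-side wiring (the full component lives in `app/page.tsx`):
```typescript
"use client";
import { useChat } from "ai/react";

export default function Chat() {
  // useChat streams messages from the /api/chat route by default
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <p key={m.id}>
          {m.role}: {m.content}
        </p>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}
```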
## Learn More
- [HumanLayer Documentation](https://docs.humanlayer.dev)
- [Vercel AI SDK Documentation](https://sdk.vercel.ai/docs)


@@ -0,0 +1,77 @@
import { openai } from "@ai-sdk/openai";
import { Message as AIMessage, streamText } from "ai";
import { z } from "zod";
import { humanlayer } from "humanlayer-vercel-ai-sdk";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

const hl = humanlayer({
  verbose: true,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    maxSteps: 10,
    tools: {
      // Functions that don't require approval
      fetch_active_orders: {
        description: "Fetch active orders using the user's email.",
        parameters: z.object({
          email: z.string().describe("The user's email address"),
        }),
        execute: async ({ email }: { email: string }) => {
          console.log(`[API] Fetching active orders for ${email}`);
          return [
            {
              order_id: "123",
              status: "active",
              amount: 100,
              created_at: "2021-01-01",
              updated_at: "2021-01-01",
            },
            {
              order_id: "456",
              status: "cancelled",
              amount: 200,
              created_at: "2021-01-01",
              updated_at: "2021-01-01",
            },
          ]; // Simulated active orders
        },
      },
      // Functions requiring approval
      reimburse_order: hl.requireApproval({
        reimburse_order: {
          description: "Process a refund with human approval",
          parameters: z.object({
            order_id: z.string().describe("The order ID to reimburse"),
            reason: z.string().describe("The reason for the reimbursement"),
          }),
          execute: async ({
            order_id,
            reason,
          }: {
            order_id: string;
            reason: string;
          }) => {
            console.log(
              `[API] Processing refund for order ${order_id} with reason: ${reason}`,
            );
            // Reimbursement logic would go here
            return "refund processed";
          },
        },
      }),
    },
    system:
      "You are a helpful assistant. If the user asks for anything that requires order information, you should use the fetch_active_orders tool first.",
  });

  return result.toDataStreamResponse();
}


@@ -0,0 +1,187 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
:root {
  --foreground-rgb: 0, 0, 0;
  --background-start-rgb: 250, 250, 250;
  --background-end-rgb: 255, 255, 255;
  --primary-glow: conic-gradient(
    from 180deg at 50% 50%,
    #42669933 0deg,
    #42669933 55deg,
    #42669933 120deg,
    #42669933 160deg,
    transparent 360deg
  );
}

@media (prefers-color-scheme: dark) {
  :root {
    --foreground-rgb: 255, 255, 255;
    --background-start-rgb: 17, 24, 39;
    --background-end-rgb: 13, 17, 23;
    --primary-glow: radial-gradient(
      rgba(66, 102, 153, 0.4),
      rgba(66, 102, 153, 0)
    );
  }
}

body {
  color: rgb(var(--foreground-rgb));
  background: linear-gradient(
    to bottom,
    rgb(var(--background-start-rgb)),
    rgb(var(--background-end-rgb))
  );
  min-height: 100vh;
}

/* Smooth scrolling */
html {
  scroll-behavior: smooth;
}

/* Custom scrollbar */
::-webkit-scrollbar {
  width: 8px;
  height: 8px;
}
::-webkit-scrollbar-track {
  background: transparent;
}
::-webkit-scrollbar-thumb {
  background: rgba(156, 163, 175, 0.5);
  border-radius: 20px;
  border: 2px solid transparent;
  background-clip: content-box;
}
::-webkit-scrollbar-thumb:hover {
  background: rgba(156, 163, 175, 0.7);
  border: 2px solid transparent;
  background-clip: content-box;
}
@media (prefers-color-scheme: dark) {
  ::-webkit-scrollbar-thumb {
    background: rgba(75, 85, 99, 0.5);
    border: 2px solid transparent;
    background-clip: content-box;
  }
  ::-webkit-scrollbar-thumb:hover {
    background: rgba(75, 85, 99, 0.7);
    border: 2px solid transparent;
    background-clip: content-box;
  }
}

/* Message animations */
@keyframes slideIn {
  from {
    opacity: 0;
    transform: translateY(10px);
  }
  to {
    opacity: 1;
    transform: translateY(0);
  }
}
.message-animate {
  animation: slideIn 0.3s ease-out forwards;
}

/* Loading animation */
@keyframes typing {
  0% {
    transform: scale(0.8);
    opacity: 0.5;
  }
  50% {
    transform: scale(1.2);
    opacity: 1;
  }
  100% {
    transform: scale(0.8);
    opacity: 0.5;
  }
}
.typing-dot {
  animation: typing 1.4s infinite;
  animation-fill-mode: both;
}
.typing-dot:nth-child(2) {
  animation-delay: 0.2s;
}
.typing-dot:nth-child(3) {
  animation-delay: 0.4s;
}

/* Button and input styles */
.button-hover {
  transition: all 0.2s cubic-bezier(0.4, 0, 0.2, 1);
}
.button-hover:hover:not(:disabled) {
  transform: translateY(-1px);
  box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
}
.button-hover:active:not(:disabled) {
  transform: translateY(0);
}
@media (prefers-color-scheme: dark) {
  .button-hover:hover:not(:disabled) {
    box-shadow: 0 4px 12px rgba(0, 0, 0, 0.3);
  }
}

/* Focus styles (ring-* are Tailwind utilities, not CSS properties, so emulate the ring with box-shadow) */
input:focus,
button:focus {
  outline: none;
  box-shadow: 0 0 0 2px rgb(255, 255, 255), 0 0 0 4px rgb(59, 130, 246);
}
@media (prefers-color-scheme: dark) {
  input:focus,
  button:focus {
    box-shadow: 0 0 0 2px rgb(17, 24, 39), 0 0 0 4px rgb(59, 130, 246);
  }
}

/* Glass effect for containers */
.glass-effect {
  background: rgba(255, 255, 255, 0.05);
  backdrop-filter: blur(10px);
  border: 1px solid rgba(255, 255, 255, 0.1);
}
@media (prefers-color-scheme: dark) {
  .glass-effect {
    background: rgba(0, 0, 0, 0.2);
  }
}

/* Responsive font sizes */
@media screen and (max-width: 640px) {
  html {
    font-size: 14px;
  }
}
@media screen and (min-width: 1280px) {
  html {
    font-size: 16px;
  }
}


@@ -0,0 +1,24 @@
import "./globals.css";
export const metadata = {
title: "HumanLayer + Vercel AI SDK + Next.js",
description: "Chat example with human oversight",
};
export default function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
return (
<html lang="en">
<body>
<header className="bg-[#426699] text-white p-4 text-center">
<h1 className="text-2xl font-bold">
HumanLayer + Vercel AI SDK + Next.js
</h1>
</header>
{children}
</body>
</html>
);
}


@@ -0,0 +1,110 @@
"use client";
import { useChat } from "ai/react";
import { useEffect, useRef } from "react";
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit, isLoading } =
useChat({
onToolCall: async ({ toolCall }) => {
// Just log when we see the tool call
console.log("Tool call:", toolCall);
},
});
// Auto-scroll to bottom
const messagesEndRef = useRef<HTMLDivElement>(null);
const scrollToBottom = () => {
messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
};
useEffect(() => {
scrollToBottom();
}, [messages]);
return (
<div className="flex flex-col w-full max-w-4xl mx-auto h-[90vh] p-4">
<div className="flex-1 bg-white dark:bg-gray-800 rounded-lg shadow-xl overflow-hidden flex flex-col">
{/* Messages Container */}
<div className="flex-1 overflow-y-auto p-6 space-y-4">
{messages.map((m) => (
<div
key={m.id}
className={`flex ${
m.role === "user" ? "justify-end" : "justify-start"
}`}
>
<div
className={`p-4 rounded-lg max-w-[80%] ${
m.role === "user"
? "bg-[#426699] text-white"
: "bg-gray-100 dark:bg-gray-700 text-gray-800 dark:text-gray-100"
}`}
>
<div className="text-sm opacity-75 mb-1">
{m.role === "user" ? "You" : "Assistant"}
</div>
<p className="whitespace-pre-wrap">{m.content}</p>
</div>
</div>
))}
{isLoading && (
<div className="flex justify-start">
<div className="bg-gray-100 dark:bg-gray-700 text-gray-800 dark:text-gray-100 p-4 rounded-lg max-w-[80%]">
<div className="text-sm opacity-75 mb-1">Assistant</div>
<div className="flex items-center h-6 space-x-2">
<div className="w-2 h-2 bg-gray-500 rounded-full animate-bounce [animation-delay:-0.3s]" />
<div className="w-2 h-2 bg-gray-500 rounded-full animate-bounce [animation-delay:-0.15s]" />
<div className="w-2 h-2 bg-gray-500 rounded-full animate-bounce" />
</div>
</div>
</div>
)}
<div ref={messagesEndRef} /> {/* Auto-scroll anchor */}
</div>
{/* Input Form */}
<div className="p-4 bg-gray-50 dark:bg-gray-900 border-t dark:border-gray-700">
<form onSubmit={handleSubmit} className="flex gap-2">
<input
value={input}
onChange={handleInputChange}
placeholder="Ask about orders or request a refund..."
className="flex-1 p-4 rounded-lg border border-gray-200 dark:border-gray-700 focus:outline-none focus:ring-2 focus:ring-[#426699] dark:bg-gray-800 dark:text-gray-100 transition-colors"
disabled={isLoading}
/>
<button
type="submit"
disabled={isLoading || !input.trim()}
className={`px-6 py-4 rounded-lg font-medium transition-colors ${
isLoading || !input.trim()
? "bg-gray-300 dark:bg-gray-600 cursor-not-allowed"
: "bg-[#426699] hover:bg-[#35547d] text-white shadow-lg hover:shadow-xl"
}`}
>
Send
</button>
</form>
{/* Example Queries */}
<div className="mt-4 flex flex-wrap gap-2">
{[
"Show me my active orders",
"I need a refund for order #123",
"Can you help me get a refund? The product was damaged",
].map((query) => (
<button
key={query}
onClick={() => {
handleInputChange({ target: { value: query } } as any);
}}
className="text-sm px-3 py-1.5 bg-gray-100 dark:bg-gray-800 text-gray-700 dark:text-gray-300 rounded-full hover:bg-gray-200 dark:hover:bg-gray-700 transition-colors"
>
{query}
</button>
))}
</div>
</div>
</div>
</div>
);
}

File diff suppressed because it is too large


@@ -0,0 +1,28 @@
{
  "name": "ts-vercel-ai-nextjs",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "@ai-sdk/openai": "^1.1.9",
    "ai": "^4",
    "humanlayer-vercel-ai-sdk": "^0.1.0-alpha1",
    "next": "^14.0.0",
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "zod": "^3.22.0"
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "@types/react": "^18.0.0",
    "@types/react-dom": "^18.0.0",
    "autoprefixer": "^10.4.14",
    "postcss": "^8.4.24",
    "tailwindcss": "^3.3.2",
    "typescript": "^5.0.4"
  }
}


@@ -0,0 +1,6 @@
module.exports = {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
};


@@ -0,0 +1,13 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: [
    "./pages/**/*.{js,ts,jsx,tsx,mdx}",
    "./components/**/*.{js,ts,jsx,tsx,mdx}",
    "./app/**/*.{js,ts,jsx,tsx,mdx}",
  ],
  darkMode: "media", // This enables dark mode based on system preferences
  theme: {
    extend: {},
  },
  plugins: [],
};


@@ -0,0 +1,25 @@
{
  "compilerOptions": {
    "target": "ES6",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "noEmit": true,
    "esModuleInterop": true,
    "module": "esnext",
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "jsx": "preserve",
    "incremental": true,
    "plugins": [
      {
        "name": "next"
      }
    ]
  },
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
  "exclude": ["node_modules"]
}


@@ -1,3 +1,161 @@
# HumanLayer TypeScript SDK
The official TypeScript SDK for [HumanLayer](https://humanlayer.dev), providing human oversight for AI applications.
## Installation
```bash
npm install humanlayer
```
## Key Features
- Human approval workflows for sensitive operations
- Structured feedback collection from humans
- Multiple contact channels (Slack, Email, etc.)
- Full TypeScript support
- Async/await API
- Framework integrations
## Basic Usage
```typescript
import { humanlayer } from 'humanlayer'

const hl = humanlayer({
  runId: 'my-agent',
  contactChannel: {
    slack: {
      channelOrUserId: 'C123456',
      contextAboutChannelOrUser: 'the compliance team',
    },
  },
})

// Require approval for sensitive functions
const sendEmail = hl.requireApproval(async (to: string, subject: string) => {
  // Email sending logic here
})

// Get human input during execution
const support = hl.humanAsTool({
  responseOptions: [
    { name: 'approve', title: 'Approve' },
    { name: 'deny', title: 'Deny' },
  ],
})
```
## Framework Support
- OpenAI function calling
- LangChain.js
- Vercel AI SDK
## Contact Channels
Configure how humans are contacted:
```typescript
// Slack
const slackChannel = {
  slack: {
    channelOrUserId: 'C123456',
    contextAboutChannelOrUser: 'the support team',
  },
}

// Email
const emailChannel = {
  email: {
    address: 'support@company.com',
    contextAboutUser: 'the support team',
  },
}

// Multiple channels
const multiChannel = {
  all_of: [slackChannel, emailChannel],
}
```
## Response Options
Structure human responses:
```typescript
const options = [
  {
    name: 'approve',
    title: 'Approve',
    description: 'Approve the action',
  },
  {
    name: 'deny',
    title: 'Deny',
    description: 'Deny with feedback',
    promptFill: 'Denied because...',
  },
]

const approval = await hl.requireApproval(myFunction, {
  responseOptions: options,
})
```
## Error Handling
The SDK provides detailed error types:
```typescript
try {
  await hl.requireApproval(myFunction)()
} catch (error) {
  if (error instanceof HumanLayerException) {
    // Handle HumanLayer-specific errors
    console.error('HumanLayer error:', error.message)
  } else {
    // Handle other errors
    console.error('Unexpected error:', error)
  }
}
```
## Environment Variables
Required:
- `HUMANLAYER_API_KEY`: Your HumanLayer API key
Optional:
- `HUMANLAYER_API_BASE`: API base URL (default: https://api.humanlayer.dev/humanlayer/v1)
- `HUMANLAYER_HTTP_TIMEOUT_SECONDS`: HTTP timeout in seconds (default: 30)
## Examples
See the [examples directory](https://github.com/humanlayer/humanlayer/tree/main/examples#typescript-examples) for complete working examples:
- [OpenAI function calling](https://github.com/humanlayer/humanlayer/tree/main/examples/ts_openai_client)
- [Email classification](https://github.com/humanlayer/humanlayer/tree/main/examples/ts_email_classifier)
- [Vercel AI SDK integration](https://github.com/humanlayer/humanlayer/tree/main/examples/ts_vercel_ai_sdk)
## Development
```bash
# Install dependencies
npm install
# Run tests
npm test
# Build
npm run build
# Type check
npm run check
```
## License
Apache 2.0 - see [LICENSE](../LICENSE)