Merge branch 'main' into patch-1

Philipp Schmid authored on 2025-06-18 15:40:00 +02:00 (committed by GitHub)
15 changed files with 148 additions and 69 deletions

View File

@@ -2,7 +2,7 @@
This project demonstrates a fullstack application using a React frontend and a LangGraph-powered backend agent. The agent is designed to perform comprehensive research on a user's query by dynamically generating search terms, querying the web using Google Search, reflecting on the results to identify knowledge gaps, and iteratively refining its search until it can provide a well-supported answer with citations. This application serves as an example of building research-augmented conversational AI using LangGraph and Google's Gemini models.
-![Gemini Fullstack LangGraph](./app.png)
+<img src="./app.png" title="Gemini Fullstack LangGraph" alt="Gemini Fullstack LangGraph" width="90%">
## Features
@@ -12,7 +12,7 @@ This project demonstrates a fullstack application using a React frontend and a L
- 🌐 Integrated web research via Google Search API.
- 🤔 Reflective reasoning to identify knowledge gaps and refine searches.
- 📄 Generates answers with citations from gathered sources.
-- 🔄 Hot-reloading for both frontend and backend development.
+- 🔄 Hot-reloading for both frontend and backend during development.
## Project Structure
@@ -28,7 +28,7 @@ Follow these steps to get the application running locally for development and te
**1. Prerequisites:**
- Node.js and npm (or yarn/pnpm)
-- Python 3.8+
+- Python 3.11+
- **`GEMINI_API_KEY`**: The backend agent requires a Google Gemini API key.
1. Navigate to the `backend/` directory.
2. Create a file named `.env` by copying the `backend/.env.example` file.
@@ -65,7 +65,7 @@ _Alternatively, you can run the backend and frontend development servers separat
The core of the backend is a LangGraph agent defined in `backend/src/agent/graph.py`. It follows these steps:
-![Agent Flow](./agent.png)
+<img src="./agent.png" title="Agent Flow" alt="Agent Flow" width="50%">
1. **Generate Initial Queries:** Based on your input, it generates a set of initial search queries using a Gemini model.
2. **Web Research:** For each query, it uses the Gemini model with the Google Search API to find relevant web pages.
@@ -73,13 +73,25 @@ The core of the backend is a LangGraph agent defined in `backend/src/agent/graph
4. **Iterative Refinement:** If gaps are found or the information is insufficient, it generates follow-up queries and repeats the web research and reflection steps (up to a configured maximum number of loops).
5. **Finalize Answer:** Once the research is deemed sufficient, the agent synthesizes the gathered information into a coherent answer, including citations from the web sources, using a Gemini model.
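Taken together, the five steps above form a generate → search → reflect loop with a conditional exit. Below is a minimal, self-contained sketch of how such a loop can be wired with LangGraph's `StateGraph`; the node names mirror the steps above, but the Gemini and Google Search calls are stubbed out, so this is an illustration rather than the repository's actual `graph.py`:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class ResearchState(TypedDict):
    question: str
    queries: list[str]
    findings: list[str]
    loops: int


def generate_queries(state: ResearchState) -> dict:
    # Stub for the Gemini call that produces the initial queries.
    return {"queries": [f"{state['question']} overview"], "loops": 0}


def web_research(state: ResearchState) -> dict:
    # Stub for the Google Search-backed research step.
    return {"findings": state["findings"] + [f"result for {q}" for q in state["queries"]]}


def reflect(state: ResearchState) -> dict:
    # Stub for the knowledge-gap analysis; here it only counts loops.
    return {"loops": state["loops"] + 1}


def is_sufficient(state: ResearchState) -> str:
    # Loop back to research until the configured maximum is reached.
    return "finalize" if state["loops"] >= 2 else "web_research"


def finalize(state: ResearchState) -> dict:
    # Stub for the answer-synthesis step.
    return {"findings": state["findings"] + ["final answer"]}


builder = StateGraph(ResearchState)
builder.add_node("generate_queries", generate_queries)
builder.add_node("web_research", web_research)
builder.add_node("reflect", reflect)
builder.add_node("finalize", finalize)
builder.add_edge(START, "generate_queries")
builder.add_edge("generate_queries", "web_research")
builder.add_edge("web_research", "reflect")
builder.add_conditional_edges("reflect", is_sufficient)
builder.add_edge("finalize", END)
graph = builder.compile()
```

Invoking it with `graph.invoke({"question": "solar trends", "queries": [], "findings": [], "loops": 0})` runs two research loops and returns the final state.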
+## CLI Example
+
+For quick one-off questions, you can execute the agent from the command line. The
+script `backend/examples/cli_research.py` runs the LangGraph agent and prints the
+final answer:
+
+```bash
+cd backend
+python examples/cli_research.py "What are the latest trends in renewable energy?"
+```
## Deployment
In production, the backend server serves the optimized static frontend build. LangGraph requires a Redis instance and a Postgres database. Redis is used as a pub-sub broker to enable streaming real-time output from background runs. Postgres is used to store assistants, threads, runs, persist thread state and long-term memory, and to manage the state of the background task queue with 'exactly once' semantics. For more details on how to deploy the backend server, take a look at the [LangGraph Documentation](https://langchain-ai.github.io/langgraph/concepts/deployment_options/). Below is an example of how to build a Docker image that includes the optimized frontend build and the backend server, and how to run it via `docker-compose`.
_Note: For the docker-compose.yml example you need a LangSmith API key; you can get one from [LangSmith](https://smith.langchain.com/settings)._
-_Note: If you are not running the docker-compose.yml example or exposing the backend server to the public internet, you update the `apiUrl` in the `frontend/src/App.tsx` file your host. Currently the `apiUrl` is set to `http://localhost:8123` for docker-compose or `http://localhost:2024` for development._
+_Note: If you are not running the docker-compose.yml example or exposing the backend server to the public internet, you should update the `apiUrl` in the `frontend/src/App.tsx` file to your host. Currently the `apiUrl` is set to `http://localhost:8123` for docker-compose or `http://localhost:2024` for development._
**1. Build the Docker Image:**

View File

@@ -0,0 +1,43 @@
+import argparse
+
+from langchain_core.messages import HumanMessage
+
+from agent.graph import graph
+
+
+def main() -> None:
+    """Run the research agent from the command line."""
+    parser = argparse.ArgumentParser(description="Run the LangGraph research agent")
+    parser.add_argument("question", help="Research question")
+    parser.add_argument(
+        "--initial-queries",
+        type=int,
+        default=3,
+        help="Number of initial search queries",
+    )
+    parser.add_argument(
+        "--max-loops",
+        type=int,
+        default=2,
+        help="Maximum number of research loops",
+    )
+    parser.add_argument(
+        "--reasoning-model",
+        default="gemini-2.5-pro-preview-05-06",
+        help="Model for the final answer",
+    )
+    args = parser.parse_args()
+
+    state = {
+        "messages": [HumanMessage(content=args.question)],
+        "initial_search_query_count": args.initial_queries,
+        "max_research_loops": args.max_loops,
+        "reasoning_model": args.reasoning_model,
+    }
+
+    result = graph.invoke(state)
+    messages = result.get("messages", [])
+    if messages:
+        print(messages[-1].content)
+
+
+if __name__ == "__main__":
+    main()

View File

@@ -1,8 +1,7 @@
# mypy: disable-error-code="no-untyped-def,misc"
import pathlib
-from fastapi import FastAPI, Request, Response
+from fastapi import FastAPI, Response
from fastapi.staticfiles import StaticFiles
import fastapi.exceptions
# Define the FastAPI app
app = FastAPI()
@@ -18,7 +17,6 @@ def create_frontend_router(build_dir="../frontend/dist"):
A Starlette application serving the frontend.
"""
build_path = pathlib.Path(__file__).parent.parent.parent / build_dir
-    static_files_path = build_path / "assets"  # Vite uses 'assets' subdir
if not build_path.is_dir() or not (build_path / "index.html").is_file():
print(
@@ -36,21 +34,7 @@ def create_frontend_router(build_dir="../frontend/dist"):
return Route("/{path:path}", endpoint=dummy_frontend)
build_dir = pathlib.Path(build_dir)
-    react = FastAPI(openapi_url="")
-    react.mount(
-        "/assets", StaticFiles(directory=static_files_path), name="static_assets"
-    )
-
-    @react.get("/{path:path}")
-    async def handle_catch_all(request: Request, path: str):
-        fp = build_path / path
-        if not fp.exists() or not fp.is_file():
-            fp = build_path / "index.html"
-        return fastapi.responses.FileResponse(fp)
-
-    return react
+    return StaticFiles(directory=build_path, html=True)
# Mount the frontend under /app to not conflict with the LangGraph API routes
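For reference, the simplified router above leans on Starlette's `StaticFiles` with `html=True`, which answers directory requests with `index.html`. A minimal sketch of the pattern (the paths are assumed for illustration, not the repository's exact layout):

```python
import pathlib

from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# Assumed location of the Vite build output.
build_path = pathlib.Path(__file__).parent / "frontend" / "dist"

# html=True makes StaticFiles serve index.html for directory requests
# such as "/app/", which is what the removed handle_catch_all route did
# by hand for the root path.
app.mount("/app", StaticFiles(directory=build_path, html=True), name="frontend")
```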

View File

@@ -16,14 +16,14 @@ class Configuration(BaseModel):
)
reflection_model: str = Field(
default="gemini-2.5-flash-preview-04-17",
default="gemini-2.5-flash",
metadata={
"description": "The name of the language model to use for the agent's reflection."
},
)
answer_model: str = Field(
default="gemini-2.5-pro-preview-05-06",
default="gemini-2.5-pro",
metadata={
"description": "The name of the language model to use for the agent's answer."
},
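These defaults sit on a Pydantic model that `Configuration.from_runnable_config` reads, so a caller can override them per run through the `configurable` dict of the `RunnableConfig`. A hedged sketch (the import path assumes the backend environment; the field names come from the diff above):

```python
from langchain_core.messages import HumanMessage

from agent.graph import graph

# Per-run overrides travel in RunnableConfig's "configurable" dict and are
# picked up by Configuration.from_runnable_config inside each node.
config = {
    "configurable": {
        "reflection_model": "gemini-2.5-flash",
        "answer_model": "gemini-2.5-pro",
    }
}
result = graph.invoke(
    {"messages": [HumanMessage(content="What is LangGraph?")]},
    config=config,
)
print(result["messages"][-1].content)
```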

View File

@@ -42,9 +42,9 @@ genai_client = Client(api_key=os.getenv("GEMINI_API_KEY"))
# Nodes
def generate_query(state: OverallState, config: RunnableConfig) -> QueryGenerationState:
"""LangGraph node that generates a search queries based on the User's question.
"""LangGraph node that generates search queries based on the User's question.
Uses Gemini 2.0 Flash to create an optimized search query for web research based on
Uses Gemini 2.0 Flash to create an optimized search queries for web research based on
the User's question.
Args:
@@ -52,7 +52,7 @@ def generate_query(state: OverallState, config: RunnableConfig) -> QueryGenerati
config: Configuration for the runnable, including LLM provider settings
Returns:
-        Dictionary with state update, including search_query key containing the generated query
+        Dictionary with state update, including search_query key containing the generated queries
"""
configurable = Configuration.from_runnable_config(config)
@@ -78,7 +78,7 @@ def generate_query(state: OverallState, config: RunnableConfig) -> QueryGenerati
)
# Generate the search queries
result = structured_llm.invoke(formatted_prompt)
return {"query_list": result.query}
return {"search_query": result.query}
def continue_to_web_research(state: QueryGenerationState):
@@ -88,7 +88,7 @@ def continue_to_web_research(state: QueryGenerationState):
"""
return [
Send("web_research", {"search_query": search_query, "id": int(idx)})
-        for idx, search_query in enumerate(state["query_list"])
+        for idx, search_query in enumerate(state["search_query"])
]
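The rename only works because `continue_to_web_research` reads the same key that `generate_query` now returns. A stripped-down illustration of the `Send` fan-out (plain dict state instead of the full typed state):

```python
from langgraph.types import Send


def continue_to_web_research(state: dict) -> list[Send]:
    # One web_research task is dispatched per query; the key read here must
    # match the "search_query" key returned by generate_query.
    return [
        Send("web_research", {"search_query": q, "id": int(idx)})
        for idx, q in enumerate(state["search_query"])
    ]


tasks = continue_to_web_research(
    {"search_query": ["langgraph deployment", "gemini api pricing"]}
)
print([t.node for t in tasks])  # ['web_research', 'web_research']
```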
@@ -153,7 +153,7 @@ def reflection(state: OverallState, config: RunnableConfig) -> ReflectionState:
configurable = Configuration.from_runnable_config(config)
# Increment the research loop count and get the reasoning model
state["research_loop_count"] = state.get("research_loop_count", 0) + 1
-    reasoning_model = state.get("reasoning_model") or configurable.reasoning_model
+    reasoning_model = state.get("reasoning_model", configurable.reflection_model)
# Format the prompt
current_date = get_current_date()
@@ -231,7 +231,7 @@ def finalize_answer(state: OverallState, config: RunnableConfig):
Dictionary with state update, including running_summary key containing the formatted final summary with sources
"""
configurable = Configuration.from_runnable_config(config)
-    reasoning_model = state.get("reasoning_model") or configurable.reasoning_model
+    reasoning_model = state.get("reasoning_model") or configurable.answer_model
# Format the prompt
current_date = get_current_date()
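Note that the two fallbacks above behave differently: `dict.get(key, default)` only falls back when the key is missing entirely, while `value or default` also falls back when the stored value is falsy, such as an explicit `None`. A quick illustration:

```python
state = {"reasoning_model": None}

# .get() with a default: the key exists, so the stored None is returned.
state.get("reasoning_model", "gemini-2.5-flash")    # -> None

# `or` fallback: None is falsy, so the configured default wins instead.
state.get("reasoning_model") or "gemini-2.5-flash"  # -> "gemini-2.5-flash"
```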

View File

@@ -17,7 +17,7 @@ Instructions:
- Query should ensure that the most current information is gathered. The current date is {current_date}.
Format:
-- Format your response as a JSON object with ALL three of these exact keys:
+- Format your response as a JSON object with BOTH of these exact keys:
- "rationale": Brief explanation of why these queries are relevant
- "query": A list of search queries
@@ -87,7 +87,7 @@ Instructions:
- You have access to all the information gathered from the previous steps.
- You have access to the user's question.
- Generate a high-quality answer to the user's question based on the provided summaries and the user's question.
-- you MUST include all the citations from the summaries in the answer correctly.
+- You MUST include all the citations from the summaries in the answer correctly.
User Context:
- {research_topic}

View File

@@ -8,8 +8,6 @@ from typing_extensions import Annotated
import operator
-from dataclasses import dataclass, field
-from typing_extensions import Annotated
class OverallState(TypedDict):
@@ -37,7 +35,7 @@ class Query(TypedDict):
class QueryGenerationState(TypedDict):
-    query_list: list[Query]
+    search_query: list[Query]
class WebSearchState(TypedDict):

View File

@@ -4,6 +4,7 @@ volumes:
services:
langgraph-redis:
image: docker.io/redis:6
+    container_name: langgraph-redis
healthcheck:
test: redis-cli ping
interval: 5s
@@ -11,6 +12,7 @@ services:
retries: 5
langgraph-postgres:
image: docker.io/postgres:16
+    container_name: langgraph-postgres
ports:
- "5433:5432"
environment:
@@ -27,6 +29,7 @@ services:
interval: 5s
langgraph-api:
image: gemini-fullstack-langgraph
+    container_name: langgraph-api
ports:
- "8123:8000"
depends_on:

View File

@@ -4,6 +4,7 @@ import { useState, useEffect, useRef, useCallback } from "react";
import { ProcessedEvent } from "@/components/ActivityTimeline";
import { WelcomeScreen } from "@/components/WelcomeScreen";
import { ChatMessagesView } from "@/components/ChatMessagesView";
+import { Button } from "@/components/ui/button";
export default function App() {
const [processedEventsTimeline, setProcessedEventsTimeline] = useState<
@@ -14,7 +15,7 @@ export default function App() {
>({});
const scrollAreaRef = useRef<HTMLDivElement>(null);
const hasFinalizeEventOccurredRef = useRef(false);
+  const [error, setError] = useState<string | null>(null);
const thread = useStream<{
messages: Message[];
initial_search_query_count: number;
@@ -26,15 +27,12 @@ export default function App() {
: "http://localhost:8123",
assistantId: "agent",
messagesKey: "messages",
-    onFinish: (event: any) => {
-      console.log(event);
-    },
onUpdateEvent: (event: any) => {
let processedEvent: ProcessedEvent | null = null;
if (event.generate_query) {
processedEvent = {
title: "Generating Search Queries",
-          data: event.generate_query.query_list.join(", "),
+          data: event.generate_query?.search_query?.join(", ") || "",
};
} else if (event.web_research) {
const sources = event.web_research.sources_gathered || [];
@@ -52,11 +50,7 @@ export default function App() {
} else if (event.reflection) {
processedEvent = {
title: "Reflection",
-          data: event.reflection.is_sufficient
-            ? "Search successful, generating final answer."
-            : `Need more information, searching for ${event.reflection.follow_up_queries.join(
-                ", "
-              )}`,
+          data: "Analysing Web Research Results",
};
} else if (event.finalize_answer) {
processedEvent = {
@@ -72,6 +66,9 @@ export default function App() {
]);
}
},
+    onError: (error: any) => {
+      setError(error.message);
+    },
});
useEffect(() => {
@@ -154,18 +151,27 @@ export default function App() {
return (
<div className="flex h-screen bg-neutral-800 text-neutral-100 font-sans antialiased">
<main className="flex-1 flex flex-col overflow-hidden max-w-4xl mx-auto w-full">
<div
className={`flex-1 overflow-y-auto ${
thread.messages.length === 0 ? "flex" : ""
}`}
>
<main className="h-full w-full max-w-4xl mx-auto">
{thread.messages.length === 0 ? (
<WelcomeScreen
handleSubmit={handleSubmit}
isLoading={thread.isLoading}
onCancel={handleCancel}
/>
+        ) : error ? (
+          <div className="flex flex-col items-center justify-center h-full">
+            <div className="flex flex-col items-center justify-center gap-4">
+              <h1 className="text-2xl text-red-400 font-bold">Error</h1>
+              <p className="text-red-400">{JSON.stringify(error)}</p>
+              <Button
+                variant="destructive"
+                onClick={() => window.location.reload()}
+              >
+                Retry
+              </Button>
+            </div>
+          </div>
) : (
<ChatMessagesView
messages={thread.messages}
@@ -177,7 +183,6 @@ export default function App() {
historicalActivities={historicalActivities}
/>
)}
-        </div>
</main>
</div>
);

View File

@@ -203,7 +203,9 @@ const AiMessageBubble: React.FC<AiMessageBubbleProps> = ({
</ReactMarkdown>
<Button
variant="default"
className="cursor-pointer bg-neutral-700 border-neutral-600 text-neutral-300 self-end"
className={`cursor-pointer bg-neutral-700 border-neutral-600 text-neutral-300 self-end ${
message.content.length > 0 ? "visible" : "hidden"
}`}
onClick={() =>
handleCopy(
typeof message.content === "string"
@@ -250,10 +252,9 @@ export function ChatMessagesView({
console.error("Failed to copy text: ", err);
}
};
return (
<div className="flex flex-col h-full">
<ScrollArea className="flex-grow" ref={scrollAreaRef}>
<ScrollArea className="flex-1 overflow-y-auto" ref={scrollAreaRef}>
<div className="p-4 md:p-6 space-y-2 max-w-4xl mx-auto pt-16">
{messages.map((message, index) => {
const isLast = index === messages.length - 1;

View File

@@ -35,10 +35,9 @@ export const InputForm: React.FC<InputFormProps> = ({
setInternalInputValue("");
};
-  const handleInternalKeyDown = (
-    e: React.KeyboardEvent<HTMLTextAreaElement>
-  ) => {
-    if (e.key === "Enter" && !e.shiftKey) {
+  const handleKeyDown = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {
+    // Submit with Ctrl+Enter (Windows/Linux) or Cmd+Enter (Mac)
+    if (e.key === "Enter" && (e.ctrlKey || e.metaKey)) {
e.preventDefault();
handleInternalSubmit();
}
@@ -49,7 +48,7 @@ export const InputForm: React.FC<InputFormProps> = ({
return (
<form
onSubmit={handleInternalSubmit}
-      className={`flex flex-col gap-2 p-3 `}
+      className={`flex flex-col gap-2 p-3 pb-4`}
>
<div
className={`flex flex-row items-center justify-between text-white rounded-3xl rounded-bl-sm ${
@@ -59,9 +58,9 @@ export const InputForm: React.FC<InputFormProps> = ({
<Textarea
value={internalInputValue}
onChange={(e) => setInternalInputValue(e.target.value)}
-        onKeyDown={handleInternalKeyDown}
+        onKeyDown={handleKeyDown}
placeholder="Who won the Euro 2024 and scored the most goals?"
-        className={`w-full text-neutral-100 placeholder-neutral-500 resize-none border-0 focus:outline-none focus:ring-0 outline-none focus-visible:ring-0 shadow-none
+        className={`w-full text-neutral-100 placeholder-neutral-500 resize-none border-0 focus:outline-none focus:ring-0 outline-none focus-visible:ring-0 shadow-none
md:text-base min-h-[56px] max-h-[200px]`}
rows={1}
/>

View File

@@ -15,7 +15,7 @@ export const WelcomeScreen: React.FC<WelcomeScreenProps> = ({
onCancel,
isLoading,
}) => (
<div className="flex flex-col items-center justify-center text-center px-4 flex-1 w-full max-w-3xl mx-auto gap-4">
<div className="h-full flex flex-col items-center justify-center text-center px-4 flex-1 w-full max-w-3xl mx-auto gap-4">
<div>
<h1 className="text-5xl md:text-6xl font-semibold text-neutral-100 mb-3">
Welcome.

View File

@@ -17,6 +17,7 @@ function ScrollArea({
<ScrollAreaPrimitive.Viewport
data-slot="scroll-area-viewport"
className="focus-visible:ring-ring/50 size-full rounded-[inherit] transition-[color,box-shadow] outline-none focus-visible:ring-[3px] focus-visible:outline-1"
+        style={{ overscrollBehavior: 'none' }}
>
{children}
</ScrollAreaPrimitive.Viewport>
@@ -38,16 +39,16 @@ function ScrollBar({
className={cn(
"flex touch-none p-px transition-colors select-none",
orientation === "vertical" &&
"h-full w-2.5 border-l border-l-transparent",
"h-full w-1.5 border-l border-l-transparent",
orientation === "horizontal" &&
"h-2.5 flex-col border-t border-t-transparent",
"h-1.5 flex-col border-t border-t-transparent",
className
)}
{...props}
>
<ScrollAreaPrimitive.ScrollAreaThumb
data-slot="scroll-area-thumb"
className="bg-border relative flex-1 rounded-full"
className="bg-neutral-600/30 relative flex-1 rounded-full"
/>
</ScrollAreaPrimitive.ScrollAreaScrollbar>
)

View File

@@ -116,6 +116,13 @@
}
body {
@apply bg-background text-foreground;
+    /* Prevent scroll bounce/overscroll on mobile */
+    overscroll-behavior: none;
+    -webkit-overflow-scrolling: touch;
}
+  html {
+    /* Prevent scroll bounce on the entire page */
+    overscroll-behavior: none;
+  }
}
@@ -150,5 +157,31 @@
animation: fadeInUpSmooth 0.3s ease-out forwards;
}
+/* Prevent scroll bounce on scroll areas */
+[data-radix-scroll-area-viewport] {
+  overscroll-behavior: none !important;
+  -webkit-overflow-scrolling: touch;
+}
+
+/* Hide any white space that might appear during scroll bounce */
+[data-radix-scroll-area-viewport]::-webkit-scrollbar {
+  width: 0px;
+  background: transparent;
+}
+
+/* Subtle scroll bar styling */
+[data-slot="scroll-area-scrollbar"] {
+  opacity: 0.3;
+  transition: opacity 0.2s ease;
+}
+
+[data-slot="scroll-area"]:hover [data-slot="scroll-area-scrollbar"] {
+  opacity: 0.6;
+}
+
+[data-slot="scroll-area-thumb"] {
+  background-color: rgb(115 115 115 / 0.2) !important;
+}
+
+/* Ensure your body or html has a dark background if not already set, e.g.: */
+/* body { background-color: #0c0c0d; } */ /* This is similar to neutral-950 */

View File

@@ -9,7 +9,7 @@ export default defineConfig({
base: "/app/",
resolve: {
alias: {
"@": path.resolve(new URL(".", import.meta.url).pathname, "./src"),
"@": path.resolve(__dirname, "./src"),
},
},
server: {