Add Live Web Search to a LangGraph Agent with Prismfy
ai agents · web search api · tool calling · fresh web context · agent workflow · prismfy

Add Live Web Search to a LangGraph Agent with Prismfy for fresh public-web evidence.

Prismfy Team

May 7, 2026

4 min read


This guide shows a practical way to use Prismfy as a live web search API inside an agent workflow so your assistant can fetch fresh public evidence instead of relying on stale model memory.

Problem framing

LangGraph is a good fit when you want deterministic control over an agent's state and routing. The graph can decide whether to answer directly, search the web, or ask for clarification. The problem is that many graph examples stop at static tools or local documents, which leaves the whole flow stale for public-web tasks.

Prismfy gives the graph a simple live search node. The graph can route a current question to POST /v1/search, transform the response into evidence, and feed that evidence into the next reasoning step.
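Before wiring the graph, it helps to see the request that search node will send. This is a minimal sketch kept as a pure function so it can be inspected and tested without touching the network; the endpoint, Bearer-token auth, and body fields follow the example later in this guide rather than a tested client library.

```python
import os

def build_search_request(query: str, time_range: str = "month", page: int = 1) -> dict:
    # Assemble everything the search node passes to the HTTP client.
    # PRISMFY_API_KEY is assumed to hold your key.
    return {
        "url": "https://api.prismfy.io/v1/search",
        "headers": {
            "Authorization": f"Bearer {os.environ.get('PRISMFY_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "json": {"query": query, "timeRange": time_range, "page": page},
    }

req = build_search_request("langgraph release notes")
print(req["json"])
```

Keeping request construction separate from the HTTP call makes the search node easier to unit-test inside the graph.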

Why this matters now

Graph-based agents are increasingly used for workflows where the answer must depend on the latest public data: competitor pages, documentation, release notes, and web research. These are not archive problems. They are routing problems.

If the graph cannot distinguish between "answer from memory" and "look it up now," it will eventually produce confident but outdated answers. A live search node is the cleanest fix because it keeps freshness as an explicit branch in the graph.

Step-by-step solution

  1. Model the state. Include question, needs_search, search_query, evidence, and final_answer.
  2. Add a router node. Let the router decide whether the question is time-sensitive or domain-specific enough to require live search.
  3. Add a Prismfy search node. Call POST /v1/search with a focused query and optional domain or timeRange.
  4. Add an evidence formatting node. Convert the response into a compact structure that the answer node can read.
  5. Add the answer node. Generate the final response from evidence, not from the graph state alone.

Code example

This example keeps the graph small on purpose. It shows the routing idea without tying you to any private architecture or unsupported assumptions.

import os
import requests
from typing import TypedDict

from langgraph.graph import StateGraph, END

class GraphState(TypedDict, total=False):
    question: str
    needs_search: bool
    search_query: str
    evidence: list[dict]
    final_answer: str

def route_question(state: GraphState):
    q = state["question"].lower()
    needs_search = any(word in q for word in ["latest", "current", "pricing", "docs", "launch", "change"])
    return {"needs_search": needs_search}

def build_query(state: GraphState):
    return {"search_query": state["question"]}

def prismfy_search_node(state: GraphState):
    res = requests.post(
        "https://api.prismfy.io/v1/search",
        headers={
            "Authorization": f"Bearer {os.environ['PRISMFY_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "query": state["search_query"],
            "timeRange": "month",
            "page": 1,
        },
        timeout=30,
    )
    res.raise_for_status()
    data = res.json()

    evidence = []
    for item in data["results"][:5]:
        evidence.append({
            "title": item["title"],
            "url": item["url"],
            "snippet": item["content"][:240],
            "engine": item["engine"],
        })

    return {"evidence": evidence}

def answer_node(state: GraphState):
    # No evidence means the router decided memory was enough; in a real
    # agent this branch would generate the answer directly.
    if not state.get("evidence"):
        return {"final_answer": "No live search needed. Answer directly from the graph state."}

    lines = []
    for item in state["evidence"]:
        lines.append(f"- {item['title']} ({item['url']}): {item['snippet']}")

    return {
        "final_answer": (
            "Use the following live public-web evidence and answer conservatively:\n"
            + "\n".join(lines)
        )
    }

graph = StateGraph(GraphState)
graph.add_node("route", route_question)
graph.add_node("build_query", build_query)
graph.add_node("search", prismfy_search_node)
graph.add_node("answer", answer_node)

graph.set_entry_point("route")
graph.add_conditional_edges("route", lambda s: "build_query" if s["needs_search"] else "answer")
graph.add_edge("build_query", "search")
graph.add_edge("search", "answer")
graph.add_edge("answer", END)

app = graph.compile()

The key design choice is the conditional edge. If the question does not depend on current public data, the graph skips search. If it does, the graph calls Prismfy and turns the response into evidence. That keeps the search behavior explicit and auditable.

Practical notes and caveats

Do not let the graph turn every question into a search query. That is expensive and unnecessary. Route only when freshness or domain scope matters.

Also keep the evidence format stable. In graph workflows, downstream nodes are easier to maintain when they always receive the same shape: title, URL, short snippet, engine, and maybe a freshness hint such as timeRange.
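One way to pin that shape down is a TypedDict plus a normalizer that every raw result passes through. A sketch, with field names taken from the example above:

```python
from typing import TypedDict

class Evidence(TypedDict):
    title: str
    url: str
    snippet: str
    engine: str

def normalize_result(item: dict, snippet_len: int = 240) -> Evidence:
    # Downstream nodes always receive exactly these four keys,
    # no matter what extra fields the search response carries.
    return {
        "title": item.get("title", ""),
        "url": item.get("url", ""),
        "snippet": item.get("content", "")[:snippet_len],
        "engine": item.get("engine", ""),
    }

raw = {"title": "Docs", "url": "https://example.com", "content": "A" * 500,
       "engine": "web", "extra_field": 1}
ev = normalize_result(raw)
print(sorted(ev.keys()))   # ['engine', 'snippet', 'title', 'url']
print(len(ev["snippet"]))  # 240
```

With this in place, the answer node never has to defend against missing or extra fields.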

For docs or product pages, add a domain filter. For newsy or launch-related questions, keep timeRange tight. The node should be a live lookup, not a broad internet dump.
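In the request body, that amounts to two payload profiles. A sketch; the domain field name is an assumption based on the optional filters mentioned earlier, so verify it against the API reference:

```python
def docs_payload(query: str, site: str) -> dict:
    # Docs lookup: pin the domain, allow a wider time range.
    return {"query": query, "domain": site, "timeRange": "year", "page": 1}

def news_payload(query: str) -> dict:
    # Launch or news lookup: no domain pin, but keep the window tight.
    return {"query": query, "timeRange": "week", "page": 1}

print(docs_payload("rate limits", "docs.example.com"))
print(news_payload("product launch"))
```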

Why Prismfy fits this workflow

Prismfy fits LangGraph because it behaves like a clean retrieval node in the graph rather than a separate subsystem. You call POST /v1/search, capture structured evidence, and route the state forward.

That makes the graph easier to reason about: the moment the answer needs current public data, the graph branches into a live search path and comes back with sources the model can trust more than its memory.

FAQ

When should an agent call web search instead of answering from memory?

Use live search when the task depends on current public information such as pricing pages, release notes, launch posts, or documentation updates. That is the safest time to route the agent workflow through Prismfy instead of relying on model memory alone.

Can Prismfy work as a tool inside an agent loop?

Yes. Prismfy is easiest to use as a runtime search step: the agent decides a question is time-sensitive, calls POST /v1/search, trims the returned evidence, and answers from those public sources.
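That loop-style usage can be sketched as a single callable that an agent framework registers as a tool. The transport parameter is a hypothetical seam added here so the function can be exercised without network access; in production, swap in a small wrapper around requests.post:

```python
def prismfy_search_tool(query: str, transport=None) -> list[dict]:
    """Live web search: returns trimmed public-web evidence for a query."""
    payload = {"query": query, "timeRange": "month", "page": 1}
    if transport is None:
        raise RuntimeError("wire a real HTTP transport (e.g. requests.post) here")
    data = transport("https://api.prismfy.io/v1/search", payload)
    # Trim to the few fields the agent actually needs.
    return [
        {"title": r["title"], "url": r["url"], "snippet": r["content"][:240]}
        for r in data.get("results", [])[:5]
    ]

# Exercise the tool with a stub transport standing in for the HTTP call.
def fake_transport(url: str, payload: dict) -> dict:
    return {"results": [{"title": "T", "url": "https://e.com",
                         "content": "body", "engine": "web"}]}

print(prismfy_search_tool("latest pricing", transport=fake_transport))
```

The stub keeps the tool unit-testable; the agent loop only ever sees the trimmed evidence list.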

Try Prismfy

Create a Prismfy key, test POST /v1/search, and wire the search step into the workflow you care about first.
