A LangChain ReAct agent that autonomously decides when to check gates, pay for content, and review payment history.

Setup

pip install "xenarch[agent,langchain]" "langchain>=0.3.0" "langchain-openai>=0.3.0" langgraph python-dotenv
Create .env:
XENARCH_PRIVATE_KEY=0xYOUR_PRIVATE_KEY
OPENAI_API_KEY=sk-...
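Both variables are required at runtime, so it can help to fail fast if either is missing. A minimal sketch (assuming exactly the two variable names from the .env above; `missing_vars` is an illustrative helper, not part of Xenarch):

```python
import os

REQUIRED_VARS = ("XENARCH_PRIVATE_KEY", "OPENAI_API_KEY")

def missing_vars(env=None, required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]
```

Call `missing_vars()` right after `load_dotenv()` and abort with a clear message if it returns anything.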

Code

main.py
"""LangChain agent with Xenarch payment tools."""

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

from xenarch.tools.langchain import CheckGateTool, GetHistoryTool, PayTool

load_dotenv()

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [CheckGateTool(), PayTool(), GetHistoryTool()]

agent = create_react_agent(
    llm,
    tools,
    prompt=(
        "You are a helpful assistant with access to Xenarch payment tools. "
        "You can check if URLs have payment gates, pay for gated content "
        "using USDC micropayments on Base, and review payment history. "
        "When asked to access gated content, check the gate first, then pay if needed."
    ),
)

if __name__ == "__main__":
    result = agent.invoke(
        {"messages": [{"role": "user", "content": "Check if https://gate.xenarch.dev/sample-page/ has a paywall. If it does, pay for access."}]}
    )
    for msg in result["messages"]:
        # Tool messages can carry non-string content; coerce for printing.
        content = msg.content if isinstance(msg.content, str) else str(msg.content)
        print(f"[{msg.type}] {content}")
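The prompt tells the agent to check the gate before paying. Stripped of LangChain, that check-then-pay policy reduces to a small function; `check_gate` and `pay` below are hypothetical stand-ins for the Xenarch tools' behavior, not the library's actual API:

```python
def access_gated_content(url, check_gate, pay, max_price_usdc=0.10):
    """Check-then-pay flow: pay only if a gate exists and the price fits the budget.

    check_gate(url) is assumed to return e.g. {"gated": True, "price": 0.01};
    pay(url, price) is assumed to return a receipt. Both are illustrative stand-ins.
    """
    gate = check_gate(url)
    if not gate.get("gated"):
        return {"paid": False, "reason": "no paywall"}
    if gate["price"] > max_price_usdc:
        return {"paid": False, "reason": "price exceeds budget"}
    return {"paid": True, "receipt": pay(url, gate["price"])}
```

The agent arrives at the same ordering on its own because the prompt states it; the budget cap is an extra guard you would enforce in your own wrapper, not something the prompt above provides.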

Run

python main.py

Using other LLMs

Swap the LLM by changing the import:
# Anthropic Claude (pip install langchain-anthropic)
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-sonnet-4-20250514")

# Local model via Ollama (pip install langchain-ollama;
# pick a model that supports tool calling, e.g. llama3.1)
from langchain_ollama import ChatOllama
llm = ChatOllama(model="llama3.1")