This directory contains ready-to-run Python scripts demonstrating various GraphBit workflow patterns using both local and cloud LLMs.
- Set up your environment:

  For local models (Ollama):

  ```bash
  ollama serve
  ollama pull llama3.2
  ```

  For Perplexity (cloud):

  ```bash
  export PERPLEXITY_API_KEY="your-api-key"
  ```

- Run an example:

  ```bash
  python examples/tasks_examples/simple_task_local_model.py
  ```

All example scripts in this directory use the GraphBit Python API to build and run AI workflows. Here is a minimal step-by-step guide to the core GraphBit workflow pattern, as seen in these scripts:
```python
from graphbit import LlmConfig, Executor, Workflow, Node
import uuid  # used below to generate agent IDs
```

Importing `graphbit` sets up the GraphBit runtime and logging.
Configure an LLM provider:

- OpenAI (cloud):

  ```python
  llm_config = LlmConfig.openai(model="gpt-3.5-turbo", api_key=api_key)
  ```

- Ollama (local):

  ```python
  llm_config = LlmConfig.ollama("llama3.2")
  ```

- Perplexity (cloud):

  ```python
  llm_config = LlmConfig.perplexity(api_key, "sonar")
  ```
Choose the executor type based on your use case:

```python
# Low-latency, lightweight execution:
executor = Executor(llm_config, lightweight_mode=True)

# High-throughput pipelines:
executor = Executor(llm_config, timeout_seconds=60)

# Memory-intensive tasks:
executor = Executor(llm_config, timeout_seconds=300)

# Optionally configure additional settings:
executor.configure(timeout_seconds=300, max_retries=3, enable_metrics=True, debug=False)
```

Create a workflow and add agent nodes:
```python
workflow = Workflow("My Example Workflow")

agent_id1 = str(uuid.uuid4())
node1 = Node.agent(
    name="Summarizer",
    prompt="Summarize: {input}",
    agent_id=agent_id1,
)

agent_id2 = str(uuid.uuid4())
node2 = Node.agent(
    name="Task Executor",
    prompt="Summarize this text: {input}",
    agent_id=agent_id2,
)

workflow.add_node(node1)
workflow.add_node(node2)
workflow.connect(node1, node2)
workflow.validate()
```

For multi-step or complex workflows, add more nodes and connect them as needed.
Execute the workflow and handle the result:

```python
result = executor.execute(workflow)
if result.is_failed():
    print("Workflow failed:", result.state())
else:
    print("Output:", result.variables())
```

Putting it all together in a complete script:

```python
from graphbit import LlmConfig, Executor, Workflow, Node
import uuid

llm_config = LlmConfig.ollama("llama3.2")
executor = Executor.new_low_latency(llm_config)

workflow = Workflow("Simple Task")

agent_id1 = str(uuid.uuid4())
node1 = Node.agent(
    name="Summarizer",
    prompt="Summarize: {input}",
    agent_id=agent_id1,
)

agent_id2 = str(uuid.uuid4())
node2 = Node.agent(
    name="Task Executor",
    prompt="Summarize this text: {input}",
    agent_id=agent_id2,
)

workflow.add_node(node1)
workflow.add_node(node2)
workflow.connect(node1, node2)
workflow.validate()

result = executor.execute(workflow)
print("Result:", result.variables())
```

Explore the scripts in this folder for more advanced patterns:
- Real-time web search with Perplexity (`simple_task_perplexity.py`)
- Memory-optimized large prompt tasks (`memory_task_local_model.py`)
- Multi-step and dependency-based workflows (`sequential_task_local_model.py`, `complex_workflow_local_model.py`)
- Conditional branching: a personal financial advisor with balance intake and tiered advisors (`conditional_branch_local_model.py`)
For more details, see the GraphBit Python API documentation.
`simple_task_local_model.py`
Single-agent workflow using the local Llama 3.2 model via Ollama. Summarizes a fictional journal entry.
Requires Ollama running locally.
`sequential_task_local_model.py`
Sequential multi-step pipeline using Llama 3.2. Each step addresses a different aspect of software IP protection, with outputs chained stepwise.
Requires Ollama running locally.
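The stepwise chaining this script uses can be sketched independently of GraphBit: each stage consumes the previous stage's output. The stage functions below are hypothetical stand-ins for the LLM agents, not the script's actual prompts.

```python
from typing import Callable, List

def run_pipeline(stages: List[Callable[[str], str]], initial: str) -> str:
    """Feed each stage's output into the next, like a sequential workflow."""
    text = initial
    for stage in stages:
        text = stage(text)
    return text

# Hypothetical stand-ins for the LLM agent steps:
stages = [
    lambda t: f"[identified risks for: {t}]",
    lambda t: f"[mitigations based on: {t}]",
    lambda t: f"[summary of: {t}]",
]

print(run_pipeline(stages, "software IP protection"))
```

In the real script, each stage is an agent node and GraphBit's `workflow.connect` calls express the same chaining.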
`complex_workflow_local_model.py`
Complex, multi-step workflow with explicit dependencies between tasks, covering a comprehensive IP protection strategy.
Requires Ollama running locally.
`conditional_branch_local_model.py`
A condition node after an intake agent calls `get_financial_details()` (a stub that returns the `DEMO_ACCOUNT_BALANCE` constant defined in the script). A Python handler routes on the returned balance so that exactly one of three advisors runs: Budget Planner (under $1k), Small Investment Advisor ($1k–$50k), or Large Investment Advisor (over $50k), each with its own tools and tone. The script loads `.env` for API keys and uses OpenAI when `OPENAI_API_KEY` is set, otherwise Ollama (tool calling is more reliable with OpenAI).
Requires either OPENAI_API_KEY or Ollama.
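The routing decision can be sketched as a plain Python function using the thresholds described above. The function name and branch labels are illustrative, not the script's actual identifiers, and the boundary handling (inclusive $1k and $50k for the middle tier) is an assumption.

```python
def route_by_balance(balance: float) -> str:
    """Pick exactly one advisor branch based on account balance."""
    if balance < 1_000:
        return "budget_planner"
    elif balance <= 50_000:  # assumed inclusive upper bound for the middle tier
        return "small_investment_advisor"
    else:
        return "large_investment_advisor"

assert route_by_balance(500) == "budget_planner"
assert route_by_balance(25_000) == "small_investment_advisor"
assert route_by_balance(100_000) == "large_investment_advisor"
```

In the script itself, this logic lives in the Python handler attached to the condition node, so GraphBit executes only the matching advisor branch.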
`memory_task_local_model.py`
Memory-intensive, single-agent task with a large prompt, using Llama 3.2 via Ollama. Provides a deep legal/technical analysis.
Requires Ollama running locally.
`simple_task_perplexity.py`
Single-agent workflow using Perplexity’s cloud models (with real-time web search). Summarizes recent AI/ML developments.
Requires PERPLEXITY_API_KEY environment variable.
`chatbot`
A conversational AI chatbot with vector database integration for context retrieval and memory storage. Includes a FastAPI backend and Streamlit frontend.
Requires OpenAI API key and ChromaDB.
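The context-retrieval idea behind the chatbot can be illustrated without ChromaDB: store (text, embedding) pairs and return the nearest entries by cosine similarity. The tiny hand-written embeddings below are stand-ins; the real app uses a proper embedding model with ChromaDB.

```python
import math
from typing import List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class TinyVectorStore:
    """Minimal in-memory stand-in for a vector database."""

    def __init__(self) -> None:
        self.entries: List[Tuple[str, List[float]]] = []

    def add(self, text: str, embedding: List[float]) -> None:
        self.entries.append((text, embedding))

    def query(self, embedding: List[float], top_k: int = 1) -> List[str]:
        ranked = sorted(self.entries, key=lambda e: cosine(e[1], embedding), reverse=True)
        return [text for text, _ in ranked[:top_k]]

store = TinyVectorStore()
store.add("GraphBit builds LLM workflows", [1.0, 0.0])
store.add("Ollama serves local models", [0.0, 1.0])
print(store.query([0.9, 0.1]))  # nearest stored text first
```

The chatbot applies the same pattern at a larger scale: embed the user's message, query the store for the most similar past context, and prepend the hits to the LLM prompt.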
`llm_guided_browser_automation.py`
Automates browser interactions using LLMs to guide actions. Demonstrates how to use GraphBit for real-time decision-making in web automation tasks.
Requires Selenium and a configured LLM provider.
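The control loop this script uses, where the LLM proposes one browser action per step, can be sketched with stubs. The `FakeBrowser` and the scripted planner below are illustrative only; the real script drives Selenium with a live LLM provider.

```python
from typing import Callable, Dict, List

class FakeBrowser:
    """Stand-in for a Selenium driver: records the actions taken."""

    def __init__(self) -> None:
        self.log: List[str] = []

    def perform(self, action: Dict[str, str]) -> str:
        self.log.append(f"{action['op']}:{action.get('target', '')}")
        return f"page state after {action['op']}"

def run_guided_session(plan_next: Callable[[str], Dict[str, str]],
                       browser: FakeBrowser, max_steps: int = 10) -> List[str]:
    """Ask the planner (an LLM in the real script) for one action per step."""
    state = "initial page"
    for _ in range(max_steps):
        action = plan_next(state)
        if action["op"] == "done":  # planner signals completion
            break
        state = browser.perform(action)
    return browser.log

# Scripted planner standing in for live LLM decisions:
script = iter([
    {"op": "navigate", "target": "https://example.com"},
    {"op": "click", "target": "#search"},
    {"op": "done"},
])
browser = FakeBrowser()
print(run_guided_session(lambda state: next(script), browser))
```

The `max_steps` cap is a safety bound so a confused planner cannot loop forever, which is a common guard in real LLM-driven automation.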