LOGBOOK LOG-410
WORKING · SOFTWARE
MARIMO · PYTHON · STRANDS · AGENTS · ANTHROPIC · SECOND-BRAIN

Building a Second-Brain Agent in Marimo with Strands

Wiring the Strands agent framework into a marimo notebook — custom tools for querying a local content library, Claude Haiku, and mo.ui.chat.

What This Is

A marimo notebook that runs a local second-brain agent using the Strands framework. The interface is mo.ui.chat; the agent loop is handled by Strands rather than raw API calls. Claude Haiku is wired to two custom tools that query a local content library of markdown notes.

Stack

  • Strands — AWS open-source agent loop framework. Handles tool dispatch, message history, and model calls.
  • AnthropicModel from strands.models.anthropic — wrapper around the Anthropic SDK.
  • strands_tools — built-in tools; file_read used here for reading note files.
  • marimo — reactive notebook runtime; mo.ui.chat provides the chat UI.
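
For reference, the whole stack installs from PyPI; a likely one-liner (package names as published at the time of writing, worth verifying against current docs — the anthropic extra pulls in the SDK):

pip install marimo 'strands-agents[anthropic]' strands-agents-tools python-dotenv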

The Two Custom Tools

from strands import tool

@tool
def list_notes(collection: str = "bench") -> str:
    """Count notes in a content collection. Valid collections: bench, ideas, signals, library, engine."""
    folder = CONTENT_PATH / collection
    count = len(list(folder.glob("*.md")))
    return f"{collection}: {count} notes"

@tool
def search_notes(query: str, collection: str = "bench") -> str:
    """Search notes by keyword. Returns matching slugs."""
    folder = CONTENT_PATH / collection
    matches = [f.stem for f in folder.glob("*.md") if query.lower() in f.read_text(encoding="utf-8").lower()]
    return "\n".join(matches[:10]) if matches else "No matches found."

CONTENT_PATH comes from a .env variable, which decouples the notebook from the repo location. The @tool decorator is all Strands needs to register a function as a tool; the docstring becomes the description the model sees.
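
For reference, a minimal way that wiring might look, assuming python-dotenv and a CONTENT_PATH entry in .env:

import os
from pathlib import Path

from dotenv import load_dotenv

load_dotenv()  # reads .env from the notebook's working directory
CONTENT_PATH = Path(os.environ["CONTENT_PATH"])  # e.g. /path/to/content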

Wiring the Agent

from strands.models.anthropic import AnthropicModel

model = AnthropicModel(
    client_args={"api_key": ANTHROPIC_KEY},
    max_tokens=1028,
    model_id="claude-haiku-4-5-20251001",
    params={"temperature": 0.7}
)

from strands import Agent
from strands_tools import file_read

agent = Agent(
    model=model,
    tools=[list_notes, search_notes, file_read],
    system_prompt="""You are a second brain assistant.
When the user asks about notes, topics, or content — always use search_notes.
For summaries or if asked to give the content, look for it in the bench folder and use file_read.
Never make up note contents. Only return what the tools give you."""
)

Strands manages the tool call / observe / respond loop. You pass the agent a message, it decides whether to call tools, collects results, and returns the final response — all in one call.
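
In practice that is one line; a hypothetical invocation (the question and the trace in the comment are illustrative):

result = agent("do we have any notes on butterflies?")
# behind this single call: the model asked for search_notes("butterflies", "bench"),
# Strands executed it, fed the result back, and the model wrote the reply
print(str(result))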

Chat Integration

def chat_respond(messages):
    # marimo passes the full history; the agent keeps its own message
    # state, so only the newest user message needs forwarding
    last = messages[-1].content
    response = agent(last)
    return str(response)

chat = mo.ui.chat(chat_respond)
chat  # last expression in the cell, so marimo renders the widget

mo.ui.chat expects a callback that receives the full message list and returns a string. Only the newest message is forwarded because the Strands agent object carries its own conversation history across calls. The call is synchronous; one agent(last) covers however many tool invocations the model decides to make before answering.

Example Session

A real three-turn exchange:

Turn 1

hey, how many bench notes do we have?

list_notes("bench")bench: 417 notes

Turn 2

do we have any notes on butterflies?

search_notes("butterflies", "bench")extended-evolutionary-synthesis

Turn 3

yes, give a summary

file_read(...) on the matched file → full summary of the Extended Evolutionary Synthesis note, including the butterfly wing pattern example under developmental plasticity.

Each turn used exactly the right tool for the job — count, search, read — without any explicit instruction per turn. The system prompt’s routing rules (“always use search_notes”, “use file_read for content”) handled the dispatch.

Observations

Strands vs raw API: The main win is the tool loop. Without a framework, you’d write the while tool_calls_pending loop yourself — check for tool use blocks, dispatch, append results, call again. Strands collapses that to a single function call.
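
For contrast, a minimal sketch of that manual loop against the raw Anthropic SDK. TOOL_SCHEMAS (hand-written JSON schemas) and run_tool (your own dispatcher) are hypothetical stand-ins for what Strands generates from the @tool decorator:

import anthropic

client = anthropic.Anthropic()
messages = [{"role": "user", "content": "how many bench notes do we have?"}]

while True:
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=1024,
        tools=TOOL_SCHEMAS,  # hand-written JSON schema per tool
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        break  # final answer, no tool calls pending
    # echo the assistant turn, then dispatch each requested tool
    messages.append({"role": "assistant", "content": response.content})
    results = [
        {"type": "tool_result", "tool_use_id": block.id,
         "content": run_tool(block.name, block.input)}  # your own dispatcher
        for block in response.content if block.type == "tool_use"
    ]
    messages.append({"role": "user", "content": results})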

Haiku is fast enough: For search-and-retrieve tasks like this, Haiku’s latency is imperceptible. Heavier reasoning (summarising a note, synthesising across multiple) would warrant Sonnet.

Marimo as shell: The reactive model means you can tweak the system prompt or swap the model in one cell and the agent cell updates — no restart, no re-import. Faster iteration than a script during prototyping.
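
A sketch of that cell split (boundaries shown as comments; marimo tracks the SYSTEM_PROMPT reference and re-runs the agent cell whenever it changes):

# cell 1: edit freely
SYSTEM_PROMPT = """You are a second brain assistant. ..."""

# cell 2: re-executes automatically because it reads SYSTEM_PROMPT
agent = Agent(
    model=model,
    tools=[list_notes, search_notes, file_read],
    system_prompt=SYSTEM_PROMPT,
)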

file_read from strands_tools: Built-in tool from the Strands standard library. The agent can call it to read any file by path — useful when search returns a slug and the user wants the full content.

What stuck: The @tool decorator pattern is the right abstraction — a function with a docstring is already self-documenting, and the same description serves both the human reader and the model.