Search Knowledge
Query the project-scoped reusable answer store before you escalate.
Search is the cheapest path and should usually happen first. Results are
scoped to the configured project, so separate projects can keep similar
questions and answers isolated. In the Admin UI this is the search box on
the Knowledge page; the SDK calls it context.search.
When to use search
- At the start of a support, ops, or assistant workflow.
- Before creating a new knowledge request.
- After previous requests have already produced reusable answers.
Basic search
Inspect the returned project-scoped items and decide whether your app already has enough evidence to answer.
```python
results = client.context.search("deployment process")
for item in results.items:
    print(f"{item.title} ({item.confidence})")
    print(item.content_md)
```

```typescript
const results = await valmar.context.search({ query: "deployment process" });
for (const item of results.items) {
  console.log(`${item.title} (${item.confidence})`);
  console.log(item.contentMd);
}
```

With optional parameters
Tighten the result set when you need fewer hits, only certain knowledge types, or answers that came from specific people.
```python
results = client.context.search(
    "how do we roll back a failed migration",
    limit=3,
    types=["runbook", "incident"],
    related_member_ids=["b3f4...", "c1a8..."],
)
print(f"{results.total_count} match(es)")
for item in results.items:
    print(f"- {item.title} (confidence={item.confidence})")
```

```typescript
const results = await valmar.context.search({
  query: "how do we roll back a failed migration",
  limit: 3,
});
console.log(`${results.totalCount} match(es)`);
for (const item of results.items) {
  console.log(`- ${item.title} (confidence=${item.confidence})`);
}
```

| Parameter | Default | What it does |
|---|---|---|
| limit | 10 | Maximum number of items returned. Lower it when you only want the top hit. |
| types (Python) | all | Filter by knowledge type tag (e.g. runbook, policy, incident). |
| related_member_ids (Python) | none | Only return items associated with these people. |
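After a filtered search, the remaining decision is whether the hits are strong enough to answer from, or whether to escalate. The helper below is a minimal sketch of that gate; the 0.7 threshold and the `KnowledgeItem` stand-in are assumptions for illustration, not part of the SDK — in real code you would pass `results.items` from `client.context.search`.

```python
from dataclasses import dataclass


# Stand-in for items returned by client.context.search; only the
# fields this helper reads are modeled. (Assumed shape, not the SDK class.)
@dataclass
class KnowledgeItem:
    title: str
    confidence: float


def has_enough_evidence(items, threshold=0.7, min_hits=1):
    """Return True when at least min_hits items clear the confidence bar.

    The threshold is an application-level choice, not an SDK default.
    """
    strong = [item for item in items if item.confidence >= threshold]
    return len(strong) >= min_hits


items = [
    KnowledgeItem("Rollback runbook", 0.91),
    KnowledgeItem("Old incident", 0.42),
]
print(has_enough_evidence(items))              # only the runbook clears 0.7
print(has_enough_evidence(items, min_hits=2))  # one strong hit is not two
```

Tune `threshold` and `min_hits` per workflow: an automated responder might demand two high-confidence hits, while a human-in-the-loop flow can accept one.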
Use Valmar as a tool in your agent
Search is well-suited to be exposed as an agent tool: the model decides when context is missing and calls Valmar before continuing. There are two ways to do this.
As an MCP server
If your agent runtime already speaks MCP, point it at the Valmar MCP endpoint. See MCP Integration. Nothing else to write.
As a custom tool (LangChain example)
When you want explicit control over how the model sees the tool — its
description, schema, return formatting — wrap the SDK call yourself. The
example below uses LangChain's @tool decorator
and create_agent, but the same shape works for any agent library
(LlamaIndex FunctionTool, OpenAI's function-calling API, the Anthropic
SDK's tools=[...] parameter, CrewAI tools, etc.) — wrap client.context.search
in whatever the library calls a tool.
```python
import os

from langchain.agents import create_agent
from langchain.tools import tool

from valmar import ValmarClient

valmar = ValmarClient(
    api_key=os.environ["VALMAR_API_KEY"],
    organization_id=os.environ["VALMAR_ORGANIZATION_ID"],
    project_id=os.environ["VALMAR_PROJECT_ID"],
    base_url=os.environ["VALMAR_BASE_URL"],
)


@tool
def search_company_knowledge(query: str, limit: int = 5) -> str:
    """Search the company's reusable knowledge for an existing answer.

    Use this BEFORE asking the user clarifying questions or before
    creating a new knowledge request. Returns a short list of titles
    and answers with a confidence score.

    Args:
        query: Natural-language question to look up.
        limit: Maximum number of results (default 5).
    """
    results = valmar.context.search(query, limit=limit)
    if not results.items:
        return "No matching knowledge found."
    return "\n\n".join(
        f"## {item.title} (confidence={item.confidence})\n{item.content_md}"
        for item in results.items
    )


agent = create_agent(
    "openai:gpt-5",
    tools=[search_company_knowledge],
)
```

Pair with gather
This is half of the loop. Pair search_company_knowledge with a
create_knowledge_request tool from
Create Knowledge Requests so the agent can fall back to
a real person when search comes up empty.
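The control flow of that loop can be sketched without any agent framework. Here `search_fn` and `create_request_fn` are stand-ins for `valmar.context.search` and the knowledge-request creation call; the names, the `req-...` return value, and the item tuples are illustrative assumptions, not the SDK's real shapes (see Create Knowledge Requests for those).

```python
def answer_or_escalate(search_fn, create_request_fn, question):
    """Try search first; fall back to asking a real person.

    search_fn returns a list of (title, content) pairs (a simplified
    stand-in for search results); create_request_fn files a knowledge
    request and returns its id.
    """
    items = search_fn(question)
    if items:
        # Enough existing knowledge: format it for the model/user.
        return "\n\n".join(f"## {title}\n{content}" for title, content in items)
    # Nothing reusable: escalate to a human via a knowledge request.
    request_id = create_request_fn(question)
    return f"No existing answer; opened knowledge request {request_id}."


# Stub wiring to show both branches of the loop:
answers = {"deploy": [("Deploy runbook", "Use the pipeline.")]}
found = answer_or_escalate(lambda q: answers.get(q, []), lambda q: "req-123", "deploy")
escalated = answer_or_escalate(lambda q: answers.get(q, []), lambda q: "req-123", "rollback")
print(found)
print(escalated)
```

In an agent, each branch would instead live behind its own tool (`search_company_knowledge` and a create-request tool), with the model choosing when to fall back.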