# Create Knowledge Requests
Open a knowledge request when the answer does not exist yet.
This is the handoff path for genuinely missing knowledge. Your app stays in
charge of the workflow, but Valmar handles routing the question to the right
people. In the Admin UI these appear as Knowledge Requests; the SDK
calls the entity `ContextRequest`.
## What to include
- A concrete question the assigned person can answer directly.
- Enough background context for the human to understand the situation.
- The narrowest workflow scope you can start with.
## Basic create-and-poll
Start with polling while you prove the request loop end to end.
```python
handle = client.context.gather(
    "How do we handle database migrations in production?",
    background_context="Planning a schema change for the orders table",
)

print(f"Request created: {handle.context_request_id}")
print(f"Status: {handle.status}")

request = client.context.get(handle.context_request_id)
if request.status == "completed":
    print(request.result_summary)
```

```typescript
const handle = await valmar.context.gather({
  question: "How do we handle database migrations in production?",
  backgroundContext: "Planning a schema change for the orders table",
});

console.log(`Request created: ${handle.contextRequestId}`);
console.log(`Status: ${handle.status}`);

const request = await valmar.context.get(handle.contextRequestId);
if (request.status === "completed") {
  console.log(request.resultSummary);
}
```

## With optional parameters
Give the assigned person more to work with: what your app already tried, which agent inside your product is asking, and how the request should be attributed.
```python
handle = client.context.gather(
    question="What's the policy on issuing refunds above $500?",
    background_context=(
        "Customer #4821 is requesting a $750 refund for a duplicate charge. "
        "The duplicate charge is confirmed in Stripe."
    ),
    already_tried=(
        "Searched the knowledge base for 'refund policy' and 'high value refund'; "
        "only found the standard <$200 policy."
    ),
    requesting_application="support-copilot",
    source_agent_config_id="0d7e...e3b1",
)

# Status values: pending, deferred, waiting_for_reply, completed, timed_out, failed
print(f"Status: {handle.status}")
```

```typescript
const handle = await valmar.context.gather({
  question: "What's the policy on issuing refunds above $500?",
  backgroundContext:
    "Customer #4821 is requesting a $750 refund for a duplicate charge. " +
    "The duplicate charge is confirmed in Stripe.",
  alreadyTried:
    "Searched the knowledge base for 'refund policy' and 'high value refund'; " +
    "only found the standard <$200 policy.",
  requestingApplication: "support-copilot",
});

// Status values: pending, deferred, waiting_for_reply, completed, timed_out, failed
console.log(`Status: ${handle.status}`);
```

| Parameter | What it does |
|---|---|
| `background_context` / `backgroundContext` | Free-form context the human reviewer needs to make sense of the question. |
| `already_tried` / `alreadyTried` | What your system has already attempted. Stops the assignee from suggesting the same things. |
| `requesting_application` / `requestingApplication` | A label for which of your apps or agents asked. Surfaces in the Admin UI. |
| `source_agent_config_id` (Python) | Link the request to a specific configured agent for analytics and routing. |
## Status values
A knowledge request moves through `pending` → `waiting_for_reply` →
`completed`. Other terminal states are `deferred`, `timed_out`, and
`failed`. Read `result_summary` and `answer` once the status is
`completed`.
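The create-and-poll flow can be wrapped in a small helper that polls until the request reaches one of these terminal states. This is a sketch, not part of the SDK: it assumes only the `client.context.get` call shown earlier, and the `poll_interval` and `timeout` parameters are hypothetical names for the helper itself.

```python
import time

# Terminal statuses per the list above; anything else means "keep polling".
TERMINAL_STATUSES = {"completed", "deferred", "timed_out", "failed"}


def wait_for_answer(client, context_request_id, poll_interval=5.0, timeout=3600.0):
    """Poll a knowledge request until it reaches a terminal status.

    Raises TimeoutError if the request is still open after `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        request = client.context.get(context_request_id)
        if request.status in TERMINAL_STATUSES:
            return request
        time.sleep(poll_interval)
    raise TimeoutError(
        f"Request {context_request_id} still open after {timeout}s"
    )
```

For production use you would likely replace this loop with webhooks or a background job, but a blocking helper like this is enough to prove the request loop end to end.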
## Use Valmar as a tool in your agent
Creating knowledge requests works well as an agent tool: the model decides when it has exhausted its options and hands off to a real person. There are two ways to wire this up.
### As an MCP server
If your agent runtime already speaks MCP, point it at the Valmar MCP endpoint. See MCP Integration. Nothing else to write.
### As a custom tool (LangChain example)
When you want explicit control over how the model sees the tool (its
description, schema, and return formatting), wrap the SDK call yourself. The
example below uses LangChain's `@tool` decorator
and `create_agent`, but the same shape works for any agent library
(LlamaIndex `FunctionTool`, OpenAI's function-calling API, the Anthropic
SDK's `tools=[...]` parameter, CrewAI tools, etc.): wrap
`client.context.gather` in whatever the library calls a tool.
```python
import os

from langchain.agents import create_agent
from langchain.tools import tool

from valmar import ValmarClient

valmar = ValmarClient(
    api_key=os.environ["VALMAR_API_KEY"],
    organization_id=os.environ["VALMAR_ORGANIZATION_ID"],
    project_id=os.environ["VALMAR_PROJECT_ID"],
    base_url=os.environ["VALMAR_BASE_URL"],
)


@tool
def create_knowledge_request(
    question: str,
    background_context: str,
    already_tried: str = "",
) -> str:
    """Hand off a question to a real person at the company.

    Use this ONLY after `search_company_knowledge` returns no useful
    result. Be specific in the question and include enough background
    that a human can answer without further clarification.

    Args:
        question: A concrete question the assignee can answer directly.
        background_context: What you already know about the situation.
        already_tried: What you've already attempted (so the assignee doesn't repeat it).
    """
    handle = valmar.context.gather(
        question=question,
        background_context=background_context,
        already_tried=already_tried or None,
        requesting_application="my-agent",
    )
    return (
        f"Knowledge request created (id={handle.context_request_id}, "
        f"status={handle.status}). A teammate will respond; you can stop here."
    )


agent = create_agent(
    "openai:gpt-5",
    tools=[create_knowledge_request],
)
```

## The full loop
For the canonical pattern, give the agent both
`search_company_knowledge` and
`create_knowledge_request` as tools. The model will search first and only
escalate when it has to.
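The behavior you want from the agent can be sketched as a plain function, independent of any framework. The `search_fn` and `escalate_fn` callables here are hypothetical stand-ins for the two tools, not SDK names:

```python
def answer_or_escalate(question, background_context, search_fn, escalate_fn):
    """Search first; hand off to a human only when the search comes up empty."""
    results = search_fn(question)
    if results:
        # The knowledge base had an answer; no human needed.
        return {"source": "knowledge_base", "answer": results[0]}
    # Nothing found: open a knowledge request and report the handoff.
    return {"source": "human", "answer": escalate_fn(question, background_context)}
```

With both tools registered, the model applies this policy on its own; the function above is just the decision logic made explicit.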