What the API is for.
Each Ontoic user has a knowledge graph — a set of interconnected nodes representing what they've read, researched, and saved. The API exposes that graph as a programmable context layer for AI agents.
Instead of building RAG pipelines from scratch, you point your agent at the Ontoic API and it gets a curated, maintained, user-specific knowledge base that already exists and grows continuously.
Research agents
Query the graph before answering, surface gaps, trigger web research to fill them, and save the results back.
Context-aware assistants
Give your AI assistant access to a user's domain knowledge without managing vector stores or chunking pipelines.
Knowledge pipelines
Automate ingestion — pipe articles, papers, transcripts through /cmd/create and they're in the graph immediately.
Multi-agent networks
Use agent user accounts so each agent in a system has its own graph context, own keys, own scope.
Base URL: https://api.ontoic.com

API keys.
All requests require a Bearer token in the Authorization header. Keys are prefixed ak_ and stored as SHA-256 hashes — the plaintext is shown once on creation.
Authorization: Bearer ak_YOUR_KEY
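In a Python client, for instance, the header can be attached when each request is built. This is a minimal sketch using only the standard library; the /cmd path is just an illustration, and ak_YOUR_KEY is a placeholder for your real key.

```python
import urllib.request

API_KEY = "ak_YOUR_KEY"  # plaintext is shown once on creation; store it securely
BASE_URL = "https://api.ontoic.com"

def make_request(path: str, body: bytes = b"") -> urllib.request.Request:
    """Build a POST request carrying the required Bearer token."""
    return urllib.request.Request(
        BASE_URL + path,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = make_request("/cmd")
```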
Key types
User keys. Scoped to the authenticated user's graph. They carry per-operation permissions — can_read, can_write, can_research — so grant only what the integration needs.
Agent keys. Authenticate as a named agent user, a separate graph context for autonomous pipelines. Agent keys have full access (read, write, research) within the agent's scope.
Permission model
/cmd, /search: Query and search the graph. Required for all read operations.
/cmd/create: Analyse content and persist nodes. Not included by default.
/research/fire: Trigger external web research via Exa. Incurs token costs.

Reference.
All endpoints accept JSON and return JSON (or NDJSON for streaming). Agent keys always have full access.
Consuming NDJSON from /cmd.
/cmd streams newline-delimited JSON. Each line is a discrete event with a type field. Always read line-by-line — don't buffer and parse the whole response as one object.
sources: First event. Contains the nodes retrieved from the graph before generation begins. Use it to show citations or debug retrieval.
token: One event per token. Stream these to a text buffer for real-time output. Each has a single "token" string field.
done: Final event. Contains the complete answer, any identified gaps in the graph, and the session_id for follow-up turns.
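A minimal line-by-line consumer of those three event types might look like this in Python. It is a sketch: the event shapes follow the sample stream below, and the HTTP client that produces the lines is left out.

```python
import json
from typing import Iterable

def consume_cmd_stream(lines: Iterable[str]) -> dict:
    """Read /cmd NDJSON events line by line; collect sources, tokens, and the final event."""
    answer_parts = []
    sources, done = None, None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)  # each line is one complete JSON event
        if event["type"] == "sources":
            sources = event["referenced_nodes"]  # available before generation starts
        elif event["type"] == "token":
            answer_parts.append(event["token"])  # stream to the UI in real time
        elif event["type"] == "done":
            done = event  # answer, gaps, session_id
    return {"sources": sources, "streamed": "".join(answer_parts), "done": done}
```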
{"type":"sources","referenced_nodes":[
{"id":"abc","node_type":"thesis","content":"…"}
],"session_id":"sess_xyz"}
{"type":"token","token":"Based on your graph, "}
{"type":"token","token":"retrieval-augmented generation appears in three nodes…"}
{"type":"done","answer":"Full synthesised answer…","gaps":["sparse attention"],"session_id":"sess_xyz"}

Agent user accounts.
For autonomous pipelines, create a named agent user from the Ontoic canvas. Each agent gets its own graph context — queries, writes, and research are scoped to that agent rather than the human user's graph.
Issue an agent key for each agent account. In multi-agent systems, each agent can have independent context while still operating within the same workspace.
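One way to wire that up is to keep one key per agent next to a small helper that builds its headers. A sketch only — the agent names and keys here are placeholders.

```python
# Each agent authenticates with its own key, so its queries, writes, and
# research stay scoped to that agent's graph context.
AGENT_KEYS = {
    "ingest-bot": "ak_INGEST_KEY",      # placeholder keys from the Developer Portal
    "research-bot": "ak_RESEARCH_KEY",
}

def headers_for(agent: str) -> dict:
    """Request headers for a given agent's key."""
    return {
        "Authorization": f"Bearer {AGENT_KEYS[agent]}",
        "Content-Type": "application/json",
    }
```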
Code.
Replace ak_YOUR_KEY with a key from the Developer Portal.
Query the graph
curl -X POST https://api.ontoic.com/cmd \
-H "Authorization: Bearer ak_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"query": "what do I know about RAG pipelines?"}'

Semantic search
curl -X POST "https://api.ontoic.com/search?contextual=true" \
-H "Authorization: Bearer ak_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"query": "attention and long-range dependencies"}'

Ingest content
curl -X POST https://api.ontoic.com/cmd/create \
-H "Authorization: Bearer ak_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"mode": "analyse",
"content": "Sparse attention reduces transformer compute from O(n²) to O(n√n).",
"url": "https://example.com/paper"
}'

Ready to build?
The Developer Portal has a live playground, key management, usage stats, and full troubleshooting reference — all in one place.