# Agent Context API
Manage your Agent Context server programmatically. Requires a Pro subscription and an API key.
## Base URL

```
https://api.rebyte.ai/v1/context-lake
```

## Authentication

Same as the Agent Computer API — use the `API_KEY` header:

```shell
curl https://api.rebyte.ai/v1/context-lake/datasets \
  -H "API_KEY: rbk_your_key_here"
```

## Endpoints
| Method | Path | Description |
|---|---|---|
| GET | /config | Get full YAML config |
| PATCH | /config | Partial config update |
| GET | /datasets | List datasets |
| POST | /datasets | Add a dataset |
| PUT | /datasets/:name | Update a dataset |
| DELETE | /datasets/:name | Remove a dataset |
| GET | /views | List views |
| POST | /views | Add a view |
| PUT | /views/:name | Update a view |
| DELETE | /views/:name | Remove a view |
| POST | /sql | Run a SQL query |
| GET | /status | Get VM and dataset status |
| POST | /start | Start the VM |
| POST | /stop | Hibernate the VM |
| POST | /redeploy | Redeploy configuration |
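The endpoints above can be wrapped in a small helper. A minimal sketch using only the Python standard library; the base URL and `API_KEY` header come from this page, while `build_request` and the example key are illustrative:

```python
import json
import urllib.request

BASE_URL = "https://api.rebyte.ai/v1/context-lake"

def build_request(method, path, api_key, body=None):
    """Build an authenticated request for any endpoint in the table above."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE_URL + path, data=data, method=method)
    req.add_header("API_KEY", api_key)  # auth header, as documented above
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req

req = build_request("GET", "/datasets", "rbk_your_key_here")
print(req.full_url)  # https://api.rebyte.ai/v1/context-lake/datasets
# urllib.request.urlopen(req) would perform the call (needs a real key)
```

Sending the request is left to `urllib.request.urlopen` so the sketch stays runnable without a live key.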
## Config

### Get Config

`GET /config`

Returns the full YAML configuration (SpiceD spicepod format with extensions).

```json
{
  "yaml": "version: v1\nkind: Spicepod\n...",
  "config": {
    "version": "v1",
    "kind": "Spicepod",
    "datasets": [...],
    "views": [...]
  }
}
```

### Partial Config Update
`PATCH /config`

Add, remove, or update datasets and views atomically. All operations are applied in order (removes first, then adds, then updates). If validation fails, nothing is applied.

```json
{
  "addDatasets": [{
    "from": "s3://bucket/data.parquet",
    "name": "sales",
    "params": {...},
    "notifications": true
  }],
  "removeDatasets": ["old_data"],
  "updateDatasets": {
    "sales": { "params": { "file_format": "csv" } }
  },
  "addViews": [{
    "name": "summary",
    "sql": "SELECT region, SUM(amount) FROM sales GROUP BY region"
  }],
  "removeViews": ["old_view"]
}
```

All fields are optional. `updateDatasets` replaces the specified fields entirely (not merged).
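Because removes are applied before adds within one atomic call, a single `PATCH /config` body can replace a dataset in one round trip. A sketch; the `swap_dataset` helper and its arguments are illustrative, not part of the API:

```python
import json

def swap_dataset(old_name, new_from, new_name):
    """Build a PATCH /config body that removes one dataset and adds another
    in a single atomic call; removes run before adds, so the new dataset
    could even reuse the old name."""
    return {
        "removeDatasets": [old_name],
        "addDatasets": [{
            "from": new_from,
            "name": new_name,
            "notifications": False,
        }],
    }

print(json.dumps(swap_dataset("old_data", "s3://bucket/data.parquet", "sales"), indent=2))
```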
## Datasets

- `GET /datasets` — List all datasets
- `POST /datasets` — Add a dataset
- `PUT /datasets/:name` — Update a dataset
- `DELETE /datasets/:name` — Remove a dataset

POST/PUT body:

```json
{
  "from": "s3://bucket/path/data.parquet",
  "name": "sales",
  "params": {
    "s3_auth": "key",
    "s3_key": "AKIA...",
    "s3_secret": "...",
    "s3_region": "us-east-1",
    "path": "bucket/path/data.parquet",
    "file_format": "parquet"
  },
  "notifications": true
}
```

All parameters are validated per connector type. Invalid params return 400 with specific error messages:

```json
{
  "error": "Validation failed",
  "errors": [
    { "path": "s3_key", "message": "s3_key is required when auth mode is key" }
  ]
}
```

## Views

- `GET /views` — List all views
- `POST /views` — Add a view
- `PUT /views/:name` — Update a view
- `DELETE /views/:name` — Remove a view

POST/PUT body:

```json
{
  "name": "monthly_summary",
  "sql": "SELECT month, SUM(amount) FROM sales GROUP BY month"
}
```
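Registering a dataset or view can fail validation with the 400 payload shown earlier. A sketch of surfacing those errors to a user; this is pure parsing with no network call, and the function name is illustrative:

```python
def format_validation_errors(response_body):
    """Flatten the API's 400 validation payload into readable lines."""
    if response_body.get("error") != "Validation failed":
        return []
    return [f'{e["path"]}: {e["message"]}' for e in response_body.get("errors", [])]

resp = {
    "error": "Validation failed",
    "errors": [{"path": "s3_key", "message": "s3_key is required when auth mode is key"}],
}
print(format_validation_errors(resp)[0])  # s3_key: s3_key is required when auth mode is key
```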
## SQL Query

`POST /sql`

Run a SQL query against your datasets and views. Auto-starts the VM if it's paused or not provisioned — the call blocks until the VM is ready and the query completes (up to 3 minutes).

Request:

```json
{ "query": "SELECT * FROM sales WHERE region = 'us-east' LIMIT 10" }
```

Response:

```json
{ "rows": [{ "id": 1, "name": "Alice", "amount": 100.5, "region": "us-east" }, ...] }
```

If the VM is cold-starting, this call may take 1-3 minutes. Returns 504 if the timeout is exceeded.
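Since the call can block for up to 3 minutes on a cold start, a client should set a timeout above that worst case and treat 504 as retryable. A stdlib sketch; `build_sql_request` and the example key are illustrative:

```python
import json
import urllib.request

def build_sql_request(query, api_key):
    """Build the POST /sql request; execute it with a client timeout above
    the documented 3-minute worst case, e.g. urlopen(req, timeout=200)."""
    req = urllib.request.Request(
        "https://api.rebyte.ai/v1/context-lake/sql",
        data=json.dumps({"query": query}).encode(),
        method="POST",
    )
    req.add_header("API_KEY", api_key)
    req.add_header("Content-Type", "application/json")
    return req

req = build_sql_request("SELECT * FROM sales LIMIT 10", "rbk_your_key_here")
# with urllib.request.urlopen(req, timeout=200) as resp:  # may block 1-3 min on cold start
#     rows = json.load(resp)["rows"]                      # HTTP 504 = server timeout exceeded
```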
## Status

`GET /status`

```json
{
  "vmStatus": "running",
  "datasets": {
    "sales": {
      "ok": true,
      "status": "ready",
      "connectorId": "s3",
      "notifications": true,
      "s3LastEventAt": "2026-04-01T08:30:00Z",
      "s3LastRefreshAt": "2026-04-01T08:31:00Z",
      "s3EventCount": 5
    }
  }
}
```

`vmStatus` values: `running`, `paused`, `provisioning`, `error`, `not_provisioned`.

When the VM is paused, dataset status shows `"status": "unknown"`.
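A status payload like the one above can be checked for readiness before querying. A sketch of that check on an already-fetched response; `all_ready` is an illustrative helper, not part of the API:

```python
def all_ready(status):
    """True once the VM is running and every dataset reports status 'ready'.
    While the VM is paused, datasets report 'unknown', so this stays False."""
    if status.get("vmStatus") != "running":
        return False
    return all(d.get("status") == "ready" for d in status.get("datasets", {}).values())

status = {
    "vmStatus": "running",
    "datasets": {"sales": {"ok": True, "status": "ready", "connectorId": "s3"}},
}
print(all_ready(status))  # True
```

Polling this in a loop after `POST /start` gives a simple wait-until-ready primitive.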
## VM Lifecycle

- `POST /start` — Start or provision the VM
- `POST /stop` — Hibernate the VM
- `POST /redeploy` — Redeploy configuration to the VM

## S3 Change Notifications

Enable automatic data refresh when files change in S3. Set `"notifications": true` on any S3 dataset.

When enabled, the API response includes the SQS queue ARN:

```json
{
  "ok": true,
  "notifications": {
    "queueArn": "arn:aws:sqs:us-east-1:...",
    "region": "us-east-1",
    "instructions": [
      "Go to S3 → your bucket → Properties → Event notifications",
      "Create notification with events: ObjectCreated:*, ObjectRemoved:*",
      "Destination = SQS queue, paste the ARN above"
    ]
  }
}
```

After configuring the S3 notification, file changes are detected automatically. Track status via `GET /status`.
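The console steps in `instructions` can also be scripted on the AWS side. A sketch that builds the matching S3 notification configuration from the returned `queueArn`; the ARN, bucket name, and helper name are illustrative, and the boto3 call is left commented out since it needs AWS credentials:

```python
def build_s3_notification(queue_arn):
    """S3 event-notification configuration matching the instructions above:
    ObjectCreated:* and ObjectRemoved:* events delivered to the SQS queue."""
    return {
        "QueueConfigurations": [{
            "QueueArn": queue_arn,
            "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
        }]
    }

config = build_s3_notification("arn:aws:sqs:us-east-1:123456789012:example-queue")
# import boto3  # applying it needs AWS credentials:
# boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="your-bucket", NotificationConfiguration=config)
```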