# UserClient

How product clients read the model directory and call text, stream, image, tts, and other services.

UserClient is the runtime client for product-facing calls. It binds one user inside one product:

- `base_url`
- `product_id`
- `user_token`

You can use it inside browsers, extensions, mobile apps, desktop apps, or your own backend acting on behalf of a user.
## Minimal example

```ts
import { UserClient } from "@visiblebase/client";

const client = new UserClient({
  base_url: "https://base.example.com",
  product_id: "prod_xxx",
  user_token: "ub_xxx",
});

const models = await client.models();
const result = await client.text({
  model: models.primary(),
  prompt: "Write a welcome message",
});
```

## Why models() comes first
`client.models()` returns a callable directory, not just a plain array:

```ts
const models = await client.models();

models("gpt-5.4");
models.primary();
models.all();
```

Recommended usage:

```ts
const models = await client.models();
const model = models("gpt-5.4") ?? models.primary();
```

This keeps raw model IDs from scattering across your product code.
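The lookup-with-fallback pattern above can be wrapped in a single helper so product code never handles raw IDs directly. A minimal sketch: the `ModelDirectory` type and `pickModel` helper are hypothetical stand-ins, assuming the directory resolves known IDs and returns `undefined` for unknown ones.

```ts
// Hypothetical shape of the callable directory returned by client.models().
type ModelDirectory = {
  (id: string): string | undefined;
  primary(): string;
  all(): string[];
};

// Resolve a preferred model ID, falling back to the product's primary model
// when the preferred one is absent from the directory.
function pickModel(models: ModelDirectory, preferred?: string): string {
  return (preferred && models(preferred)) || models.primary();
}
```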
## text()

```ts
const result = await client.text({
  model: models.primary(),
  prompt: "Write a welcome message",
});
```

text() returns an AI SDK `UIMessage`: a complete message that UI code can store and render directly.

The input object is intentionally open:

- `model` is optional
- other fields are defined by your own `base.text()` handler
- the handler result should be a `UIMessage`

If no model is provided, Base tries to fill the final query with the service-level default() fallback.

If you call a custom service with a non-UIMessage result shape, use `invoke<T>()` instead.
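One `invoke<T>()` wrapper per custom service gives callers a typed result without repeating the service name. A sketch, assuming a custom "sentiment" service you registered yourself; the `Invoker` interface is a reduced, illustrative slice of the client surface, not the real UserClient type.

```ts
// Minimal slice of the client surface used here (illustrative).
interface Invoker {
  invoke<T>(service: string, input: Record<string, unknown>): Promise<T>;
}

// Result shape a hypothetical custom "sentiment" service might return.
interface SentimentResult {
  label: "positive" | "negative" | "neutral";
  score: number;
}

// Wrap the generic call once so product code gets a typed result.
async function classifySentiment(client: Invoker, text: string): Promise<SentimentResult> {
  return client.invoke<SentimentResult>("sentiment", { text });
}
```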
## stream()

```ts
const body = await client.stream({
  model: models("gpt-5.4"),
  prompt: "Stream a short paragraph",
});
```

stream() returns an AI SDK `UIMessageChunk` stream:

```ts
import type { UIMessageChunk } from "ai";

const stream: ReadableStream<UIMessageChunk> = await client.stream({
  prompt: "Stream a short paragraph",
});
```

It is not the raw HTTP byte stream. The SDK parses the AI SDK UIMessage SSE body returned by Base into chunk objects.

You can consume it chunk by chunk:

```ts
const reader = stream.getReader();
const first = await reader.read();
```

The Base-side stream handler should return the result of the AI SDK's createUIMessageStreamResponse() or streamText().toUIMessageStreamResponse().

If you want a single JSON result, use text() instead of stream().
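Reading one chunk is rarely enough; the same reader loop can drain the whole stream and concatenate text deltas. A sketch using a deliberately simplified chunk shape — the real AI SDK `UIMessageChunk` union carries many more variants and fields than the `delta` field assumed here.

```ts
// Simplified chunk shape for illustration; the real AI SDK UIMessageChunk
// union has many more variants (start, finish, tool parts, ...).
type Chunk = { type: string; delta?: string };

// Drain a chunk stream and concatenate the text deltas into one string.
async function collectText(stream: ReadableStream<Chunk>): Promise<string> {
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value.type === "text-delta" && value.delta) text += value.delta;
  }
  return text;
}
```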
## image() / video()

image() and video() return an AI SDK `UIMessage`. Generated image or video files are represented as file parts inside `parts`:

```ts
const imageMessage = await client.image({
  prompt: "A fox standing in the snow",
  model: models("image-basic"),
});
```

The Base-side base.image() / base.video() handlers are also typed to return `UIMessage`.
## tts() / asr()

tts() and asr() keep open return types because audio input and output transport shapes vary more across products:

```ts
await client.tts({
  text: "Hello",
  voice: "alloy",
});
```

If you need a stricter result shape, wrap a custom service with `invoke<T>()`.
## invoke()

If the service name is dynamic, call the generic method:

```ts
await client.invoke("rewrite", {
  prompt: "Rewrite this in a more professional tone",
  tone: "formal",
});
```

This is useful when:

- the frontend picks a service from configuration
- you added custom services and do not want to wrap each of them manually
## service()

client.service() has two forms.

Read the registered service list:

```ts
const services = await client.service();
```

Create a service-scoped invoker:

```ts
await client.service("text").invoke({
  prompt: "Hello",
});
```

That is equivalent to:

```ts
await client.invoke("text", {
  prompt: "Hello",
});
```

## Common errors
When UserClient receives a non-2xx HTTP response, it throws an Error with two extra fields:

- `status`: the HTTP status code.
- `body`: the raw response body from Base, usually `{"error":"..."}`.
```ts
try {
  await client.text({
    model: "gpt-5.4",
    prompt: "Hello",
  });
} catch (error) {
  const status = error instanceof Error && "status" in error ? error.status : undefined;
  const body = error instanceof Error && "body" in error ? error.body : undefined;
  console.log(status, body);
}
```

client.stream() can fail in two stages. When HTTP returns a non-2xx status, it throws the same status/body error. When HTTP succeeds but the body is empty or is not an AI SDK UIMessage stream, the stream parser throws a normal parsing error.
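The `instanceof` and `in` checks above can be factored into one narrowing helper so every catch block stays short. A sketch; `asHttpError` is a hypothetical helper whose output shape matches the `status`/`body` fields described here.

```ts
// Narrow an unknown caught value to the extra fields UserClient attaches.
function asHttpError(error: unknown): { status?: number; body?: string } {
  if (!(error instanceof Error)) return {};
  const e = error as Error & { status?: number; body?: string };
  return { status: e.status, body: e.body };
}
```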
### 401 / 403

Usually one of these:

- `user_token` is missing
- the token expired
- the token signature is invalid
- `product_id` does not match the token
### 422

Usually one of these:

- the final `query.model` is empty
- the request references a model that does not exist
- the current service did not hit any `match()` handler
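The two buckets above can be mapped to a user-facing hint in one place. A sketch; the messages are illustrative wording, not part of the SDK.

```ts
// Map the status codes documented above to a short user-facing hint.
function describeFailure(status?: number): string {
  switch (status) {
    case 401:
    case 403:
      return "Authentication problem: user_token missing, expired, invalid, or product mismatch.";
    case 422:
      return "Request problem: empty query.model, unknown model, or no matching match() handler.";
    default:
      return "Unexpected error from Base.";
  }
}
```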
## When not to use UserClient

Do not use UserClient for:

- creating products
- issuing `user_token`
- modifying runtime env

Those are trusted-side actions and belong in AdminClient or your own backend.