OpenAI / AI

Tools can use an internal AI engine ("OpenAI Engine") to analyze content and power platform features — without ever exposing API keys in the frontend.

This user manual describes:

  • The OpenAI Engine admin UI
  • The URL analysis API endpoint
  • Permissions and authentication

Web: OpenAI Engine (admin)

URL:

  • /admin/openai

Requirements:

  • Logged-in user (web session)
  • Permission: openai.manage

In the web UI you can:

  • Enable/disable the engine (Enabled)
  • Set global policy (default model, allowlist, token caps, rate limits)
  • Create/update Prompt profiles
  • Run Test prompt to verify that the provider + config works (server-side)

Dynamic model dropdowns

The model dropdowns in /admin/openai are no longer hardcoded.

  • Tools fetches the provider catalog from OpenAI's GET /v1/models server-side
  • The result is filtered down to chat-usable model IDs
  • If allowed_models is configured, the dropdown is intersected with that allowlist
  • If live discovery fails, Tools falls back to the configured/default models so the UI stays usable

This means the admin model picker is based on what the current provider key can actually use, without exposing any key in the browser.
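The discovery flow above can be sketched as a small function (function and parameter names are illustrative, not the actual implementation):

```python
def build_model_dropdown(discovered, allowed_models, fallback_models):
    """Sketch of the dropdown logic described above.

    discovered      -- model IDs from live GET /v1/models discovery, already
                       filtered to chat-usable models; None if discovery failed
    allowed_models  -- optional allowlist from the global policy
    fallback_models -- configured/default models used when discovery fails
    """
    if discovered is None:
        # Live discovery failed: fall back so the admin UI stays usable.
        models = list(fallback_models)
    else:
        models = list(discovered)
    if allowed_models:
        # Intersect with the allowlist, preserving discovery order.
        allowed = set(allowed_models)
        models = [m for m in models if m in allowed]
    return models
```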

What does “Enabled” mean?

  • Enabled = off: AI feature endpoints should be unavailable (often 503, depending on endpoint).
  • Enabled = on: the engine can be used (as long as a global provider key exists).

Note: provider keys are managed under API Keys and are never shown in plain text.

Personal Bearer Token (Tools AI)

Users who are allowed to use Tools AI can create a personal bearer token:

  • Go to My API Keys: /keys/mine
  • Use Tools AI Bearer token to generate/rotate a token
  • The token is shown once and stored server-side

Use it like this:

Authorization: Bearer <token>

This token works for AI feature endpoints (e.g. /api/ai/url/analyze) and is tied to your user account + permissions.
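The same header can be attached from Python's standard library; the endpoint path and fields below are from this manual, while the helper itself is only a sketch:

```python
import json
import urllib.request

def analyze_url_request(token, url, question=None):
    """Build a POST request for /api/ai/url/analyze using a personal bearer token.
    The helper name and structure are illustrative."""
    payload = {"url": url}
    if question:
        payload["question"] = question
    return urllib.request.Request(
        "https://tools.tornevall.net/api/ai/url/analyze",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# To actually send it:
# with urllib.request.urlopen(analyze_url_request(token, "https://example.com")) as resp:
#     result = json.loads(resp.read())
```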

API: URL Analyze

Endpoint:

  • POST /api/ai/url/analyze

Purpose:

  • You send a URL (and optionally a question)
  • Tools fetches and sanitizes the content server-side
  • OpenAI Engine analyzes the text using a selected prompt profile

Request

Form data or JSON:

  • url (required) — URL to analyze
  • question (optional) — analysis focus/question
  • profile (optional) — prompt profile name (default: URL Analyzer if it exists, otherwise the engine falls back to a minimal default profile)

Response

JSON:

  • ok — true/false
  • request_id — internal request id
  • latency_ms — approximate latency
  • model — model used
  • response — model output (if ok)
  • error — error message (if ok=false)
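A minimal way to consume a response with these fields (field names are from the list above; the helper itself is illustrative):

```python
def summarize_analysis(resp):
    """Interpret a JSON response shaped like the fields documented above."""
    if resp.get("ok"):
        # Successful analysis: model output is in `response`.
        return f"[{resp.get('model')}] {resp.get('response')}"
    # Failure: `error` carries the message, `request_id` helps with support.
    return f"error ({resp.get('request_id')}): {resp.get('error')}"
```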

Auth & Permissions

To use the endpoint you need:

  • Authentication: requests without a valid user return 401 Unauthenticated
  • Admins (is_admin=1) are always allowed
  • Non-admins require the permission: provider_openai

If the OpenAI provider isn't configured (missing global provider_openai API key), the endpoint typically returns 503.
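The rules above can be sketched as a decision function. The user schema is illustrative, and the 403 for a missing permission is an assumption (the docs only state the permission requirement):

```python
def authorize(user, provider_configured=True):
    """Sketch of the access rules above.
    `user` is None or a dict with `is_admin` and `permissions` (illustrative schema)."""
    if user is None:
        return 401  # Unauthenticated
    if not user.get("is_admin") and "provider_openai" not in user.get("permissions", ()):
        return 403  # Permission missing (assumed status; not specified in the docs)
    if not provider_configured:
        return 503  # No global provider_openai API key configured
    return 200
```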

Example

curl -X POST "https://tools.tornevall.net/api/ai/url/analyze" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <TOKEN>" \
  -d '{
    "url": "https://example.com",
    "question": "What is this page about?",
    "profile": "URL Analyzer"
  }'

Security

  • No keys are exposed in the frontend.
  • URL content is fetched server-side with SSRF protections (private ranges blocked, size limits, timeouts, redirect limits).
  • System/developer instructions are defined by prompt profiles, not by the client.
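The private-range part of the SSRF protection can be illustrated with Python's ipaddress module. This is a simplified sketch operating on already-resolved addresses; the real server-side checks also cover size limits, timeouts and redirect limits:

```python
import ipaddress

def blocked_by_ssrf_policy(resolved_ips):
    """Return True if any resolved address falls in a blocked range."""
    for ip in resolved_ips:
        addr = ipaddress.ip_address(ip)
        # Private, loopback and link-local ranges must never be fetched.
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return True
    return False
```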

API: Social Media extension model catalog

Endpoint:

  • GET /api/social-media-tools/extension/models

Purpose:

  • Returns the backend-discovered model list for the authenticated Tools bearer token
  • Gives the Chrome extension a dynamic model dropdown without calling OpenAI directly from the extension
  • Reuses the same provider-key resolution rules as the rest of Tools (personal OpenAI key if present, otherwise the global key)

Response fields include:

  • models — array of available model options
  • default_model — effective default model for this user/context
  • source — whether the list came from live provider discovery or a configured fallback
  • warning — optional fallback/discovery message
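A consumer such as the extension might pick an effective model from this response like so (assuming models is a list of model-ID strings; the docs do not specify the element shape, and the selection logic is illustrative):

```python
def pick_extension_model(catalog, preferred=None):
    """Choose a model from a GET /api/social-media-tools/extension/models response."""
    models = catalog.get("models", [])
    if preferred and preferred in models:
        # Honor the user's choice when the backend actually offers it.
        return preferred
    # Otherwise use the effective default for this user/context.
    return catalog.get("default_model")
```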