Vision-style requests send a user message whose content mixes text with image parts (URL or base64). SUPA follows the same message shapes as OpenAI Chat Completions.
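Because the message shapes match Chat Completions, you can also call the endpoint with plain fetch and no SDK. A minimal sketch, assuming the chat completions path sits at /openai/v1/chat/completions under SUPA's host; buildVisionRequest is a hypothetical helper, not part of any SDK:

```typescript
// Hypothetical helper: build a Chat Completions body with one text
// part and one image_url part, in the OpenAI message shape.
function buildVisionRequest(prompt: string, imageUrl: string) {
  return {
    model: 'google/gemma-3-27b-it',
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: prompt },
          { type: 'image_url', image_url: { url: imageUrl } },
        ],
      },
    ],
  };
}

// Raw HTTP call (endpoint path assumed from the OpenAI-compatible base).
async function describeImage(prompt: string, imageUrl: string) {
  const res = await fetch('https://api.supa.works/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.SUPA_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(buildVisionRequest(prompt, imageUrl)),
  });
  const json = await res.json();
  return json.choices[0].message.content as string;
}
```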

OpenAI SDK

Use content as an array of parts. For a remote image, use image_url with an https URL. For a local file, read bytes and send a data:image/...;base64,... URL.
import OpenAI from 'openai';
import fs from 'node:fs';

const openai = new OpenAI({
  apiKey: process.env.SUPA_API_KEY,
  baseURL: 'https://api.supa.works/openai/v1',
});

const imagePath = './photo.png';
const base64 = fs.readFileSync(imagePath).toString('base64');
const dataUrl = `data:image/png;base64,${base64}`;

const response = await openai.chat.completions.create({
  model: 'google/gemma-3-27b-it',
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Describe this image in one sentence.',
        },
        {
          type: 'image_url',
          image_url: { url: dataUrl },
        },
      ],
    },
  ],
});

console.log(response.choices[0]?.message?.content);
You can use a public URL instead of base64:
{
  type: 'image_url',
  image_url: { url: 'https://example.com/photo.png' },
}

Vercel AI SDK

Pass multimodal UserModelMessage content as an array of text and image parts. The image part accepts a URL, base64 data, or a Uint8Array, depending on the AI SDK version; a URL is the simplest.
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const supa = createOpenAI({
  apiKey: process.env.SUPA_API_KEY,
  baseURL: 'https://api.supa.works/openai/v1',
});

const { text } = await generateText({
  model: supa('google/gemma-3-27b-it'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        {
          type: 'image',
          image: 'https://example.com/photo.png',
        },
      ],
    },
  ],
});

console.log(text);
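To send a local file instead of a URL, the image part also accepts raw bytes in recent AI SDK versions, so no data URL is needed. A sketch, where localImageMessage is a hypothetical helper that builds the user message:

```typescript
import fs from 'node:fs';

// Hypothetical helper: build an AI SDK user message whose image part
// carries raw bytes read from disk; the SDK encodes them for the provider.
function localImageMessage(prompt: string, imagePath: string) {
  return {
    role: 'user' as const,
    content: [
      { type: 'text' as const, text: prompt },
      { type: 'image' as const, image: fs.readFileSync(imagePath) },
    ],
  };
}

// Usage with the client configured above:
// const { text } = await generateText({
//   model: supa('google/gemma-3-27b-it'),
//   messages: [localImageMessage('What is in this image?', './photo.png')],
// });
```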

TanStack AI

Use ImagePart with a url or data source alongside TextPart entries.
import { chat } from '@tanstack/ai';
import { createOpenaiChat } from '@tanstack/ai-openai';
import type { OpenAIChatModel } from '@tanstack/ai-openai';

const adapter = createOpenaiChat(
  'google/gemma-3-27b-it' as OpenAIChatModel,
  process.env.SUPA_API_KEY!,
  { baseURL: 'https://api.supa.works/openai/v1' },
);

const text = await chat({
  adapter,
  stream: false,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', content: 'What is in this image?' },
        {
          type: 'image',
          source: { type: 'url', value: 'https://example.com/photo.png' },
        },
      ],
    },
  ],
});

console.log(text);
For inline data, use a data source with base64 and mimeType:
{
  type: 'image',
  source: {
    type: 'data',
    value: base64String,
    mimeType: 'image/png',
  },
}
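Putting the two pieces together, a small hypothetical helper can wrap raw bytes as an image part with an inline data source:

```typescript
// Hypothetical helper: base64-encode raw bytes into a TanStack AI
// image part with a data source and explicit MIME type.
function inlineImagePart(bytes: Uint8Array, mimeType: string) {
  return {
    type: 'image' as const,
    source: {
      type: 'data' as const,
      value: Buffer.from(bytes).toString('base64'),
      mimeType,
    },
  };
}

// Usage: drop it into the content array next to a text part, e.g.
// content: [{ type: 'text', content: 'Describe this.' },
//           inlineImagePart(fs.readFileSync('./photo.png'), 'image/png')]
```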