The ad layer for GenAI apps

Every free prompt costs you money. wavebird turns that compute cost into ad revenue with one SDK.

You're building an app, not an ad platform.

You're running a GenAI app. Every prompt costs money, and conversion to paid is in the single digits. You need revenue that scales with usage without turning your product into a traffic source for someone else.

Built for teams shipping GenAI products with meaningful free-tier usage: chat apps, coding assistants, vertical copilots, and consumer AI surfaces that need revenue before paid conversion catches up.

One integration.
Programmatic demand.
wavebird connects your app to the ad market through established programmatic standards. Billing, proof, and rollout controls sit in the infrastructure layer. You keep the product surface. wavebird handles the ad path.
Rollout stays with you.
Start with one surface, allow only banner or video, and block industries that do not fit your product. You go live step by step instead of rebuilding ad tech.
Model stays separate.
The ad path runs separately from the model path, in parallel to your app. That lets you monetize usage without rewiring answer logic or UX.
Your UX, your setup.
Run ads before, during, or after inference. Choose the slot, the format, and the behavior. Monetize usage without hijacking the session, compromising trust, or pulling users away from the product they came for.

AN OPEN STANDARD

Compute Sponsoring v1.0 keeps the rules explicit.

Compute Sponsoring v1.0 is the open standard behind wavebird. It defines how sponsored placements can fund AI compute without touching model I/O, and keeps consent, proof, and delivery behavior legible across implementations.

Current operating profile: situational relevance, no persistence

wavebird's default operating profile uses situational relevance from the active request, with no persistence across requests. Semantic targeting only runs with explicit user consent, and raw prompt text stays inside your system.

We don't touch your model.

Here's what happens when your app sends a prompt.

Your user sends a prompt
Your app handles the API call as usual.
Data is filtered before it leaves
You decide which signals may leave your system for ad matching (for example topic category and language). Everything else is blocked by default.
You control egress
An ad is matched while the model thinks
Decisioning runs during the existing inference wait window. Timing sources are published as engineering evidence.
15.28 ms until an ad can be placed (source)
The ad is delivered and proven
Your app renders the placement. wavebird emits proof events so delivery is independently auditable.
Automated proof per impression (source)
Your user sees the response
Your UI renders the model response and the placement based on the rules you configure.
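The steps above can be sketched in code. This is a minimal illustration of the parallel ad path, assuming hypothetical stand-in functions (`callModel`, `matchAd`) rather than the actual wavebird SDK API: the ad match starts at the same time as the model call, so it runs inside the existing inference wait window.

```typescript
// Sketch of the parallel ad path. `callModel` and `matchAd` are
// illustrative stand-ins, not the real wavebird SDK surface.

type AdPlacement = { format: "banner" | "clip"; creativeUrl: string };

// Stand-in for your existing model call; unchanged by the ad path.
async function callModel(prompt: string): Promise<string> {
  return `response to: ${prompt}`;
}

// Stand-in for the ad match. Only pre-filtered signals (e.g. topic
// category and language) are passed in; raw prompt text never leaves.
async function matchAd(
  signals: { topic: string; language: string }
): Promise<AdPlacement | null> {
  return { format: "banner", creativeUrl: "https://example.com/ad" };
}

async function handlePrompt(prompt: string) {
  // Both start together: the ad is matched while the model thinks,
  // so no extra roundtrip is added to the user's wait.
  const [response, placement] = await Promise.all([
    callModel(prompt),
    matchAd({ topic: "travel", language: "en" }),
  ]);
  // Your UI renders the response and, per your rules, the placement.
  return { response, placement };
}
```

The key design point is that the two promises are independent: a slow or empty ad match never delays or alters the model response.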

Built on Compute Sponsoring v1.0. Measured timings and proof sources are linked in engineering evidence.

Turn every free prompt into revenue.

Example: what a free tier at 500,000 prompts per month could earn.

YOUR COST: -$2,500 API cost per month (at $0.005 per prompt avg.)
ONE AD BRINGS: +$4,750 ad revenue per month (at CPM $9.50, programmatic avg.)
YOUR PROFIT: +$2,250 estimated net gain after compute

Your free tier can pay for itself.

API cost assumes roughly 900 tokens per prompt and uses the public pricing pages from OpenAI and Anthropic.
CPM uses the Programmatic Transparency Benchmark Q3 2025.
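The arithmetic behind the calculator is simple enough to verify by hand; here it is as a small worked script using the stated assumptions (500,000 free prompts per month, $0.005 API cost per prompt, $9.50 CPM with one ad per prompt):

```typescript
// Worked version of the free-tier calculator above.
const prompts = 500_000;          // free prompts per month
const costPerPrompt = 0.005;      // USD, API cost per prompt
const cpm = 9.5;                  // USD per 1,000 impressions

const apiCost = prompts * costPerPrompt;   // monthly compute cost
const adRevenue = (prompts / 1000) * cpm;  // one ad per prompt
const netGain = adRevenue - apiCost;       // what's left after compute

console.log({ apiCost, adRevenue, netGain });
// apiCost: 2500, adRevenue: 4750, netGain: 2250
```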

Status: The SDK is released and fully functional. The first live SSP connection is still in progress, so this revenue math reflects current programmatic market rates rather than live settlement data.

How it works in your app.

A small config surface. You render placements in the formats you allow.
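To make "a small config surface" concrete, here is an illustrative shape such a configuration could take. All field names here are assumptions for illustration; the real config surface lives in the SDK documentation.

```typescript
// Illustrative app-side configuration sketch (field names assumed,
// not the documented wavebird config schema).
const wavebirdConfig = {
  dataPolicy: {
    // Only these signals may leave your system for ad matching;
    // everything else is blocked by default.
    allowedSignals: ["topicCategory", "language"],
    semanticTargeting: false, // requires explicit user consent
  },
  formats: ["banner"],                 // e.g. start with banner only
  blockedIndustries: ["gambling", "tobacco"],
  slot: "during-inference",            // before | during | after
  noFill: "render-nothing",            // behavior when no ad matches
};
```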

See it live. Right now.

This is chat.wavebird.ai - a live GenAI app running wavebird's ad layer. It shows the end-user experience after a wavebird integration. Real formats, real delivery.

Try it live

Ad Banner

Appears above the chat, clearly labeled as an ad. In the default inference-time setup, it clears when the response arrives.

Ad Clip

A short clip plays in the configured ad slot. Non-intrusive, skippable, and fully verified by wavebird.

Three steps. Any app.

wavebird works with any app that runs AI inference. Chat app, coding assistant, vertical copilot, or consumer AI app.

// Connect wavebird to your GenAI app

Step 1

Connect

One SDK and a small configuration surface. Start with the direct Node integration in the SDK documentation; a tested browser entry point and proxy compatibility follow after that.

Step 2

Configure

Set data policy, ad formats, blocked industries. wavebird enforces your rules on every prompt.

Step 3

Earn

wavebird handles delivery, proof, and settlement. You get paid per verified impression.

The team behind wavebird.

wavebird combines infrastructure engineering and commercial rollout for GenAI products that need a viable free tier before paid conversion can carry the business.

Mario von Bassen and Constantin Keller, founders of wavebird

Mario von Bassen

CEO & Technical Lead

LinkedIn

TU Wien (Visual Computing, M.Sc.) and B.Sc. in E-Commerce. Years of hands-on work in software security, decentralized systems, and IT infrastructure. He then built the entire wavebird stack single-handedly: privacy firewall, proof engine, SSP connector, and SDK.

Constantin Keller

Commercial Lead

LinkedIn

TU Darmstadt (Industrial Engineering, M.Sc.). Sales & strategy at Bosch, then an early member of the commercial team at Tvarit (AI/manufacturing). Bridges enterprise ad buyers and app teams - he knows how both sides think.

TRUST AND HANDOFFS

External references for developers, and a path for advertisers.

Use the SDK as the main evaluation path, open the public chat to inspect one live end-user surface, and hand advertisers to a dedicated sponsoring path when the campaign side becomes relevant.

Frequently asked questions

What does a free prompt earn?
Roughly $0.0095 per prompt at current ad market rates. Here we keep it to the short version; the full calculation lives in the free tier economics guide.

Does wavebird add latency?
No extra roundtrip is added in the recommended path. Your model call and wavebird run in parallel, and internal measurements put ad matching under 20 ms.

Does wavebird change the model's responses?
Not in the default integration. The model request runs as usual and the sponsored placement is rendered separately by your app. If you choose a tighter product integration, that is an explicit app decision rather than a hidden side effect of the ad system.

What data leaves my app?
That depends on the profile you configure, but the standard path shares only broad topic category and language for matching. Raw prompt text, conversation history, and personal data do not need to leave the app in the default setup.

Can I control formats and blocked industries?
Yes. Formats, blocked industries, relevance mode, and no-fill behavior are app-side configuration decisions. wavebird handles market access while product-facing rules stay with your team.

Who handles what?
On your side, you integrate the SDK and define rules for data, formats, and blocking. wavebird translates that into OpenRTB, connects to ad exchanges, runs the auction, and returns the winning ad to your app.

What is Compute Sponsoring?
Compute Sponsoring turns the model wait window into a clearly labeled revenue moment. A brand can sponsor the compute behind the response while your app keeps control over timing, formats, and disclosure.
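To illustrate what "translates that into OpenRTB" means in practice, here is a heavily trimmed, OpenRTB-2.x-style bid request shape. This is a sketch of the standard's general structure, not wavebird's actual payload; note that only coarse signals (content category, language) appear, and no prompt text is included.

```typescript
// Illustrative, minimal OpenRTB-style bid request (not the real
// wavebird payload). Values are placeholders.
const bidRequest = {
  id: "req-123",                      // unique request id
  imp: [
    {
      id: "1",
      banner: { w: 728, h: 90 },      // a format your config allows
      bidfloor: 1.0,                  // floor price, CPM
    },
  ],
  app: {
    id: "your-app-id",
    cat: ["IAB20"],                   // coarse IAB content category code
  },
  device: { language: "en" },         // language signal only
  bcat: ["IAB23"],                    // blocked categories from your rules
};
```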