Deploy event-driven serverless functions for stream orchestration. Multi-language support with JavaScript, Python, Go, and Rust. Sub-50ms cold starts, stateful processing, and seamless integration with EDGE, MESH, and VAULT.
Complete these steps before deploying your first function:
```bash
npm install -g @wave/cli
wave auth:login
wave runtime:init my-functions
wave runtime:whoami
```

RUNTIME supports four languages, each optimized for different use cases. Select based on your performance requirements and team expertise.
```javascript
// handler.js - Content moderation with AI
import { WaveRuntime } from '@wave/runtime';

export const config = {
  trigger: 'stream.started',
  timeout: 30000,
  memory: '256MB',
  retries: 3
};

export async function handler(event, ctx) {
  const { streamId, creatorId, settings } = event.data;
  ctx.logger.info(`Processing stream ${streamId} for creator ${creatorId}`);

  // Get creator preferences from VAULT
  const preferences = await ctx.vault.get(`creator:${creatorId}:preferences`);

  // Enable AI moderation if configured
  if (preferences?.autoModeration) {
    await ctx.edge.process({
      streamId,
      effects: ['content-detection', 'profanity-filter'],
      sensitivity: preferences.moderationLevel || 'medium',
      webhook: `${process.env.APP_URL}/api/webhooks/moderation`
    });
  }

  // Initialize viewer analytics
  await ctx.state.set(`stream:${streamId}:stats`, {
    startedAt: new Date().toISOString(),
    peakViewers: 0,
    totalViews: 0,
    chatMessages: 0,
    donations: 0
  });

  // Record to PULSE analytics
  await ctx.analytics.record({
    event: 'stream_initialized',
    streamId,
    creatorId,
    settings,
    timestamp: Date.now()
  });

  return {
    success: true,
    moderationEnabled: preferences?.autoModeration ?? false,
    streamId
  };
}
```

```bash
# Deploy JavaScript function
wave runtime:deploy handler.js \
  --name content-moderator \
  --trigger stream.started \
  --memory 256MB \
  --timeout 30s \
  --env-file .env.production

# View deployment status
wave runtime:status content-moderator

# Test function locally
wave runtime:dev handler.js --port 3001
```
Functions are triggered by events from the WAVE platform. Here are the available event types:
| Event | Description | Payload |
|---|---|---|
| `stream.started` | Stream goes live | `streamId`, `creatorId`, `settings` |
| `stream.ended` | Stream ends | `streamId`, `duration`, `stats` |
| `viewer.joined` | Viewer connects | `viewerId`, `streamId`, `region` |
| `viewer.left` | Viewer disconnects | `viewerId`, `watchTime`, `engagement` |
| `chat.message` | Chat message sent | `messageId`, `userId`, `content` |
| `donation.received` | Donation processed | `amount`, `currency`, `donorId` |
| `moderation.flagged` | Content flagged | `streamId`, `reason`, `severity` |
| `milestone.reached` | Viewer milestone | `streamId`, `milestone`, `type` |
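For example, a `stream.ended` function could close out the per-stream stats that the JavaScript example above initializes. This is a sketch: `ctx.state.get` and `ctx.state.delete` are assumed by analogy with the `ctx.state.set` call shown earlier, so check the RUNTIME API reference before relying on them.

```javascript
// stream-ended.js - sketch: finalize the stats written by stream.started
export const config = {
  trigger: 'stream.ended',
  timeout: 15000
};

export async function handler(event, ctx) {
  const { streamId, duration, stats } = event.data;

  // Counters initialized by the stream.started handler
  const live = await ctx.state.get(`stream:${streamId}:stats`);

  const summary = {
    ...live,
    endedAt: new Date().toISOString(),
    durationSeconds: duration,
    finalStats: stats
  };

  // Persist a final record, then drop the hot state entry
  await ctx.analytics.record({ event: 'stream_finalized', streamId, ...summary });
  await ctx.state.delete(`stream:${streamId}:stats`);

  return { success: true, streamId };
}
```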
Configure secrets and environment variables securely. Sensitive values are stored in VAULT and injected at runtime.
```bash
# .env.runtime - Environment configuration

# Required
WAVE_API_KEY=wave_live_xxxxxxxxxxxx
WAVE_PROJECT_ID=proj_xxxxxxxxxxxx

# Database connections (auto-injected)
DATABASE_URL=${WAVE_DATABASE_URL}
REDIS_URL=${WAVE_REDIS_URL}

# Custom secrets (stored in VAULT)
STRIPE_SECRET_KEY=${vault:stripe_secret}
OPENAI_API_KEY=${vault:openai_key}
TWILIO_AUTH_TOKEN=${vault:twilio_token}

# Feature flags
ENABLE_ML_MODERATION=true
MODERATION_THRESHOLD=0.85
MAX_CONCURRENT_STREAMS=100

# Monitoring
SENTRY_DSN=${vault:sentry_dsn}
ENABLE_TRACING=true
```

Run `wave vault:set SECRET_NAME` to store sensitive values and reference them with `${vault:secret_name}`.

Test functions locally before deploying. The development server simulates the RUNTIME environment with hot reloading.
```bash
# Local development server
wave runtime:dev handler.js --port 3001 --watch

# Send test events
curl -X POST http://localhost:3001/invoke \
  -H "Content-Type: application/json" \
  -d '{
    "event": "stream.started",
    "data": {
      "streamId": "test-stream-123",
      "creatorId": "creator-456",
      "settings": { "quality": "1080p" }
    }
  }'

# Run unit tests
wave runtime:test handler.test.js

# Generate test coverage
wave runtime:test --coverage --threshold 80

# Mock external services
wave runtime:dev handler.js \
  --mock-edge \
  --mock-vault \
  --mock-analytics
```

Build complex workflows with these battle-tested patterns for distributed function orchestration.
Sequential execution with data passing
```javascript
// Chain multiple functions
const result = await ctx.runtime.chain([
  { fn: 'validate-input', data: event.data },
  { fn: 'process-payment', map: r => r.validated },
  { fn: 'send-notification', map: r => r.paymentId }
]);
```

Parallel execution with aggregation
```javascript
// Fan-out to multiple functions
const results = await ctx.runtime.fanOut([
  { fn: 'analyze-audio', data: { streamId } },
  { fn: 'analyze-video', data: { streamId } },
  { fn: 'analyze-chat', data: { streamId } }
]);

// Aggregate results
const combined = aggregateAnalysis(results);
```

Distributed transactions with rollback
```javascript
// Saga with automatic rollback
const saga = ctx.runtime.saga([
  {
    execute: 'reserve-inventory',
    compensate: 'release-inventory'
  },
  {
    execute: 'charge-payment',
    compensate: 'refund-payment'
  },
  {
    execute: 'fulfill-order',
    compensate: 'cancel-fulfillment'
  }
]);

await saga.execute(orderData);
```

Automatic failure isolation
```javascript
// Configure circuit breaker
export const config = {
  circuitBreaker: {
    failureThreshold: 5,
    resetTimeout: 30000,
    halfOpenRequests: 3
  }
};

// The function is automatically protected:
// the circuit opens after 5 consecutive failures
// and resets after 30 seconds.
```
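The breaker's behavior can be pictured with a minimal state machine. This is an illustration only (RUNTIME manages the breaker for you, and `halfOpenRequests` is not modeled here):

```javascript
// Minimal circuit-breaker state machine mirroring the config above:
// closed -> open after `failureThreshold` consecutive failures,
// open -> half-open after `resetTimeout` ms, half-open -> closed on success.
function createBreaker({ failureThreshold, resetTimeout }, now = Date.now) {
  let state = 'closed';
  let failures = 0;
  let openedAt = 0;

  return {
    get state() {
      if (state === 'open' && now() - openedAt >= resetTimeout) state = 'half-open';
      return state;
    },
    recordSuccess() {
      failures = 0;
      state = 'closed';
    },
    recordFailure() {
      failures += 1;
      if (state === 'half-open' || failures >= failureThreshold) {
        state = 'open';
        openedAt = now();
      }
    }
  };
}

// Five consecutive failures open the circuit
const breaker = createBreaker({ failureThreshold: 5, resetTimeout: 30000 });
for (let i = 0; i < 5; i++) breaker.recordFailure();
console.log(breaker.state); // → open
```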
Processing 50,000+ chat messages per second across 2,000 concurrent streams with <100ms latency
Deployed Rust-based moderation functions at the edge with ML model inference
"RUNTIME's edge execution reduced our moderation latency from 200ms to 23ms. We can now catch toxic content before it reaches viewers.
Inserting personalized mid-roll ads in live streams without buffering or quality degradation
Go functions for sub-50ms ad decision making integrated with EDGE transcoding
"The combination of RUNTIME and EDGE gave us seamless ad insertion. Viewers don't even notice the transition.
Processing 100M+ viewer events daily with real-time aggregation and alerting
Python functions for ML-based viewer segmentation with fan-out aggregation pattern
"We replaced our entire Spark cluster with RUNTIME functions. Same capabilities, 80% cost reduction.
| Tier | Invocations | Compute (GB-s) | Concurrency | Support |
|---|---|---|---|---|
| Free | 100K/month | 400K GB-s | 10 | Community |
| Pro | 5M/month | 10M GB-s | 100 | |
| Business | 50M/month | 100M GB-s | 1,000 | Priority |
| Enterprise | Unlimited | Unlimited | 10,000 | 24/7 Dedicated |
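As a back-of-envelope check on these limits (assuming the usual GB-second model, memory in GB times execution time in seconds; WAVE's exact metering may differ):

```javascript
// How far does the Free tier's 400K GB-s of compute go?
// Assumes GB-second billing: memory (GB) x duration (s) per invocation.
const memoryGb = 256 / 1024;   // a 256MB function
const avgDurationS = 0.2;      // 200ms average invocation
const includedGbS = 400_000;   // Free tier compute allowance

const gbSPerInvocation = memoryGb * avgDurationS;          // 0.05 GB-s
const invocationsCovered = includedGbS / gbSPerInvocation;

console.log(Math.round(invocationsCovered)); // → 8000000

// Compute would cover ~8M invocations at this profile, so the Free
// tier's 100K/month invocation cap is the limit you hit first.
```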
- View API docs: complete RUNTIME API documentation with all methods and types
- EDGE quickstart: combine RUNTIME with EDGE for AI processing at the edge
- Analytics guide: monitor function performance and business metrics
- VAULT docs: secure secrets management and encryption