By WAVE Engineering Team · 8 min read

How to Stream Live Events with Sub-Second Latency: Complete Guide

Sub-second latency isn't just a technical achievement—it's a competitive necessity. Learn how to implement WebRTC, SRT, and OMT protocols to deliver real-time experiences that keep viewers engaged.

Key Takeaways

  • WebRTC achieves <500ms latency but requires complex infrastructure
  • SRT provides reliable low-latency streaming over unpredictable networks
  • OMT (WAVE's protocol) delivers <16ms latency for mission-critical applications
  • Protocol choice depends on your use case, infrastructure, and viewer scale

Why Sub-Second Latency Matters

In today's digital landscape, viewers expect instant gratification. Traditional streaming protocols like HLS introduce 20-30 seconds of latency, making real-time interaction impossible. This delay kills engagement for:

  • Sports betting platforms - where odds change in milliseconds
  • Live auctions - where timing determines winners
  • Esports tournaments - where audience engagement drives revenue
  • Telemedicine - where patient outcomes depend on real-time communication
  • Remote collaboration - where latency disrupts natural conversation flow

The difference between 30-second latency and sub-second latency isn't incremental—it's transformational. It shifts streaming from broadcast to true real-time interaction.

Protocol Comparison: Choosing Your Weapon

WebRTC: The Industry Standard

Latency: 200-500ms
Best For: Video conferencing, interactive live streaming, real-time collaboration
Pros:
  • Native browser support (no plugins required)
  • Peer-to-peer capable (reduces server costs)
  • Built-in encryption (DTLS/SRTP)
  • Adaptive bitrate streaming
Cons:
  • Complex NAT traversal (requires TURN servers)
  • Challenging to scale beyond 10K viewers per stream
  • Browser inconsistencies require careful testing
  • Higher delivery costs than HLS/DASH (can't leverage commodity CDN caching)

SRT: Reliable Low-Latency Transport

Latency: 1-3 seconds
Best For: Contribution feeds, first-mile ingest, unpredictable networks
Pros:
  • Excellent packet loss recovery (handles 15%+ loss)
  • Works over challenging network conditions
  • Open-source implementation available
  • Growing encoder/decoder support
Cons:
  • Not browser-native (requires player plugins)
  • Higher latency than WebRTC for viewer playback
  • Less mature ecosystem than RTMP/HLS

OMT (WAVE): Ultra-Low Latency Champion

Latency: <16ms
Best For: Mission-critical applications requiring glass-to-glass sub-frame latency
Pros:
  • Industry-leading <16ms latency
  • Scales to 100M+ concurrent viewers
  • Built-in multi-region failover
  • Adaptive bitrate without latency spikes
  • Enterprise SLA with 99.99% uptime guarantee
Cons:
  • Proprietary protocol (WAVE platform only)
  • Higher cost than commodity CDN streaming

Implementation Guide: Getting Started with WebRTC

WebRTC is the most accessible low-latency protocol for teams getting started. Here's a minimal end-to-end implementation to build on:

Step 1: Client-Side Setup

// Initialize WebRTC peer connection
const peerConnection = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:your-turn-server.com:3478',
      username: 'user',
      credential: 'pass'
    }
  ],
  iceTransportPolicy: 'all',
  bundlePolicy: 'max-bundle',
  rtcpMuxPolicy: 'require'
});

// Capture camera/mic and add tracks to the connection
navigator.mediaDevices.getUserMedia({
  video: {
    width: { ideal: 1920 },
    height: { ideal: 1080 },
    frameRate: { ideal: 30 }
  },
  audio: true
}).then(stream => {
  stream.getTracks().forEach(track => {
    peerConnection.addTrack(track, stream);
  });
}).catch(err => {
  // Permission denied or no device available
  console.error('getUserMedia failed:', err);
});

Step 2: Signaling Server

// WebSocket signaling (Node.js)
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  ws.on('message', (message) => {
    let data;
    try {
      data = JSON.parse(message);
    } catch (err) {
      return; // Ignore malformed messages
    }

    // Relay the SDP offer/answer or ICE candidate to all other clients
    wss.clients.forEach((client) => {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify(data));
      }
    });
  });
});
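Broadcasting every message to every connected client works for a two-peer demo, but production signaling usually scopes messages to a room or session. A minimal sketch of that routing logic (the `rooms` map, client shape, and message shape here are hypothetical, not part of the server above):

```javascript
// Route a signaling message to every other member of the sender's room.
// rooms: Map<string, Set<client>>, where each client exposes a send() method.
function routeMessage(rooms, roomId, sender, message) {
  const members = rooms.get(roomId);
  if (!members) return 0;

  let delivered = 0;
  for (const client of members) {
    if (client !== sender) {
      client.send(JSON.stringify(message));
      delivered++;
    }
  }
  return delivered;
}

// Demo: three clients in one room; a message from `a` reaches b and c only.
const sent = [];
const makeClient = () => ({ send: (msg) => sent.push(msg) });
const a = makeClient(), b = makeClient(), c = makeClient();
const rooms = new Map([['room-1', new Set([a, b, c])]]);
const delivered = routeMessage(rooms, 'room-1', a, { type: 'offer' });
console.log(delivered); // 2
```

In a real server you'd populate the room map from a join message on each WebSocket connection and remove clients on close.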

Optimization Strategies: Achieving Production Quality

1. Network Optimization

  • Use TURN servers strategically: Deploy TURN servers in multiple regions to minimize relay distances
  • Enable UDP prioritization: Configure QoS rules to prioritize UDP traffic on your network
  • Implement connection quality monitoring: Track RTT, jitter, and packet loss to trigger adaptive bitrate changes
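The monitoring bullet above can be driven by WebRTC's `getStats()`: sample `inbound-rtp` stats periodically and compute loss over the interval rather than cumulatively. A minimal sketch of the delta computation (the snapshot fields mirror real `RTCInboundRtpStreamStats` names; the step-down thresholds are illustrative assumptions, not recommendations):

```javascript
// Compute interval packet loss (%) between two inbound-rtp stat snapshots.
function intervalLossPercent(prev, curr) {
  const lost = curr.packetsLost - prev.packetsLost;
  const received = curr.packetsReceived - prev.packetsReceived;
  const total = lost + received;
  return total > 0 ? (lost / total) * 100 : 0;
}

// Illustrative policy: step down bitrate when loss or jitter gets too high.
function shouldStepDown(lossPercent, jitterSeconds) {
  return lossPercent > 5 || jitterSeconds > 0.03; // assumed thresholds
}

// Two snapshots taken a few seconds apart: 30 lost out of 500 packets = 6%.
const prev = { packetsLost: 10, packetsReceived: 990 };
const curr = { packetsLost: 40, packetsReceived: 1460 };
const loss = intervalLossPercent(prev, curr);
console.log(loss, shouldStepDown(loss, 0.012)); // 6 true
```

Interval-based loss reacts to current conditions; cumulative counters average away short congestion events.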

2. Encoding Configuration

  • GOP size: Set keyframe interval to 2 seconds maximum (lower for ultra-low latency)
  • B-frames: Disable B-frames entirely—they add latency
  • Hardware acceleration: Use GPU encoding (NVENC, QuickSync, VideoToolbox) to reduce CPU load
  • Rate control: Use CBR (Constant Bitrate) for predictable bandwidth usage
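As a concrete reference point, the settings above map to encoder flags like these (an illustrative x264/ffmpeg sketch; the bitrates, input file, and ingest URL are placeholders, and exact flags vary by encoder):

```shell
# Low-latency x264 encode: 2s GOP at 30 fps, no B-frames, CBR-style rate control
ffmpeg -i input.mp4 \
  -c:v libx264 -preset veryfast -tune zerolatency \
  -g 60 -bf 0 \
  -b:v 4500k -maxrate 4500k -bufsize 4500k \
  -c:a aac -b:a 128k \
  -f flv rtmp://ingest.example.com/live/stream-key
```

`-g 60` is the keyframe interval in frames (2 seconds at 30 fps), `-bf 0` disables B-frames, and matching `-b:v`/`-maxrate` with a small `-bufsize` approximates CBR. For hardware encoding, swap `libx264` for `h264_nvenc`, `h264_qsv`, or `h264_videotoolbox`.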

3. Buffering Strategy

  • Minimize player buffer: Set buffer to 0.5-1 second for WebRTC
  • Implement buffer-based ABR: Adjust quality based on buffer health, not bandwidth estimates
  • Preload strategy: Preload only the next segment to minimize memory usage
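Buffer-based ABR from the list above reduces to a simple policy: pick the highest rendition the current buffer can safely sustain. A minimal sketch (the bitrate ladder and thresholds are illustrative assumptions, tuned here to the 0.5-1 second WebRTC buffer target):

```javascript
// Pick a rendition index from buffer health (seconds of buffered media).
// ladder: renditions sorted low -> high bitrate.
function pickRendition(ladder, bufferSeconds, currentIndex) {
  if (bufferSeconds < 0.5) {
    // Buffer nearly empty: step down to protect playback.
    return Math.max(0, currentIndex - 1);
  }
  if (bufferSeconds > 1.0 && currentIndex < ladder.length - 1) {
    // Healthy buffer: step up one rung at a time to avoid oscillation.
    return currentIndex + 1;
  }
  return currentIndex; // Hold steady inside the 0.5-1.0s target band.
}

const ladder = [800, 1500, 3000, 6000]; // kbps
console.log(pickRendition(ladder, 0.3, 2)); // 1 (step down)
console.log(pickRendition(ladder, 1.4, 2)); // 3 (step up)
console.log(pickRendition(ladder, 0.8, 2)); // 2 (hold)
```

Deciding on buffer health rather than throughput estimates avoids reacting to short bandwidth spikes that the buffer would have absorbed anyway.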

When to Use WAVE's OMT Protocol

OMT (Optimized Media Transport) is WAVE's proprietary protocol designed for scenarios where WebRTC and SRT fall short:

  • Sports betting: Where every millisecond affects odds accuracy and user trust
  • Financial trading: Where latency directly impacts transaction success
  • Remote surgery: Where sub-frame latency is life-critical
  • Massive-scale events: Where 1M+ concurrent viewers need consistent sub-second latency

OMT achieves <16ms glass-to-glass latency while maintaining 99.99% uptime at scale. This is accomplished through:

  • Custom UDP-based transport with optimized congestion control
  • Zero-copy video processing pipeline
  • Hardware-accelerated encoding/decoding on edge nodes
  • Intelligent multi-region routing with sub-50ms failover

Measuring Success: Key Metrics

Track these metrics to ensure your low-latency streaming is performing optimally:

Glass-to-Glass Latency

Total time from camera capture to viewer display

Target: <1 second

Rebuffer Rate

Percentage of playback time spent rebuffering

Target: <1%

Video Start Time

Time from player initialization to first frame

Target: <2 seconds

Error Rate

Percentage of sessions with playback failures

Target: <0.5%
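The targets above are only useful if every team computes the metrics the same way. A minimal sketch of the aggregation over per-session telemetry records (the record field names are hypothetical):

```javascript
// Aggregate session records into headline streaming-quality metrics.
// Each session: { playMs, rebufferMs, startTimeMs, failed }
function summarize(sessions) {
  const total = sessions.length;
  const playMs = sessions.reduce((sum, s) => sum + s.playMs, 0);
  const rebufferMs = sessions.reduce((sum, s) => sum + s.rebufferMs, 0);
  const failures = sessions.filter((s) => s.failed).length;
  return {
    // Share of watch time spent rebuffering (target: <1%)
    rebufferRatePct: playMs > 0 ? (rebufferMs / playMs) * 100 : 0,
    // Mean time to first frame (target: <2000ms)
    avgStartTimeMs: total ? sessions.reduce((sum, s) => sum + s.startTimeMs, 0) / total : 0,
    // Share of sessions with playback failures (target: <0.5%)
    errorRatePct: total ? (failures / total) * 100 : 0
  };
}

const m = summarize([
  { playMs: 60000, rebufferMs: 300, startTimeMs: 1200, failed: false },
  { playMs: 40000, rebufferMs: 200, startTimeMs: 1800, failed: true }
]);
console.log(m); // rebuffer ≈ 0.5%, avg start 1500ms, error rate 50% (tiny demo sample)
```

Glass-to-glass latency needs measurement at the edges rather than aggregation: a common approach is embedding a capture timestamp (e.g., a burned-in clock or timed metadata) and comparing it against display time on a reference viewer.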

Conclusion

Sub-second latency transforms streaming from passive viewing to interactive experiences. Whether you choose WebRTC for its browser support, SRT for its reliability, or OMT for its unmatched performance, the key is understanding your requirements:

  • Viewer scale: How many concurrent viewers do you need to support?
  • Latency requirements: Is 500ms acceptable or do you need <100ms?
  • Infrastructure complexity: Can you manage WebRTC's complexity or do you need a managed solution?
  • Budget: What's the cost per viewer you can sustain?

For most applications, WebRTC provides an excellent balance of latency, cost, and browser compatibility. For mission-critical applications where latency is non-negotiable, WAVE's OMT protocol delivers unmatched performance at scale.

About the Author

The WAVE Engineering Team consists of streaming infrastructure specialists with decades of combined experience building low-latency video platforms. We've designed systems serving 100M+ concurrent users with 99.99% uptime.

Learn more about WAVE

Ready to Implement Sub-Second Streaming?

WAVE's OMT protocol delivers <16ms latency with 99.99% uptime. Start streaming in 5 minutes with our free tier.
