Inserting dynamic ads and graphics into live streams is one of the most powerful opportunities for driving viewer engagement and maximizing monetization during live sporting events. It enables live sports broadcasters to deliver hyper-relevant, personalized ad experiences and region-specific graphics to a captive audience, in real time.
However, many live sports broadcasters are missing out due to the complexity, costs, and latency typically associated with executing dynamic ads and graphic overlays in a live stream. In fact, for typical broadcasts, latency can reach 30-45 seconds beyond the live broadcast, even with the most robust cloud and on-premises solutions on the market.
With emerging hybrid cloud architectures that utilize edge computing at the video source, you can recapture this opportunity. Today, these new architectures make it easier, faster, and less expensive to insert personalized ads and graphics at speed and scale by augmenting the cloud and reducing the need for costly equipment on-premises.
Here’s a brief overview of how dynamic ad and graphics insertion works with and without the power of a hybrid cloud architecture.
Inserting ad markers typically requires expensive broadcast equipment and signaling support. Ad markers, represented by cue tones, designate where ads will be spliced into the feed. From there, insertion and packaging occur in the cloud, increasing latency, cost, and complexity and hindering the ability to deliver an efficient, personalized ad experience.
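To make the signaling concrete, here is a minimal, hypothetical sketch of how an ad marker can surface downstream: a CUE-OUT/CUE-IN pair bracketing the break in an HLS media playlist, so a server-side ad inserter knows where to splice. The segment names, durations, and break length are illustrative placeholders, not values from any specific workflow.

```python
# Sketch only: an HLS media playlist with an ad break signaled by
# CUE-OUT / CUE-IN tags. All segment names and durations are hypothetical.

def playlist_with_break(pre_roll, break_segments, post_roll, break_len_s):
    lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:6"]
    for name, dur in pre_roll:
        lines += [f"#EXTINF:{dur:.1f},", name]
    lines.append(f"#EXT-X-CUE-OUT:{break_len_s:.1f}")   # splice point: ad break begins
    for name, dur in break_segments:                    # segments an ad server may replace
        lines += [f"#EXTINF:{dur:.1f},", name]
    lines.append("#EXT-X-CUE-IN")                       # splice point: return to program
    for name, dur in post_roll:
        lines += [f"#EXTINF:{dur:.1f},", name]
    return "\n".join(lines)

print(playlist_with_break(
    pre_roll=[("seg_0001.ts", 6.0)],
    break_segments=[("slate_0001.ts", 6.0)],
    post_roll=[("seg_0002.ts", 6.0)],
    break_len_s=30.0,
))
```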
Dynamic graphic insertion generally requires multiple devices and steps, creating inefficient operations, latency, and cost. There are two ways this is currently done: with a dedicated graphics device on-premises, placed ahead of the contribution encoder, or in the cloud, where the encoded feed must be decoded, overlaid with graphics, and re-encoded.
The challenge with both of the above scenarios is that video production workflows always start with video that originates from a camera. The camera's video output is routed to a contribution encoder, which typically sends an RTMP or SRT stream to the cloud.
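As a rough illustration of that encoder-to-cloud leg, the sketch below uses ffmpeg (assumed to be installed) to push an RTMP contribution stream to a hypothetical ingest endpoint. The input file, bitrate, and URL are placeholders, not recommendations for any particular workflow.

```python
# Minimal sketch of the encoder-to-cloud contribution leg described above.
import subprocess

INGEST_URL = "rtmp://ingest.example.com/live/STREAM_KEY"  # hypothetical endpoint

cmd = [
    "ffmpeg",
    "-re", "-i", "camera_feed.mp4",          # stand-in for the camera capture
    "-c:v", "libx264", "-preset", "veryfast",
    "-tune", "zerolatency", "-b:v", "6M",    # illustrative contribution bitrate
    "-c:a", "aac", "-b:a", "128k",
    "-f", "flv", INGEST_URL,                 # RTMP uses the FLV container
]
subprocess.run(cmd, check=True)
```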
At this point, there are three disconnected systems: the camera interfaces only with the contribution encoder, typically via baseband or lightly compressed video, and the encoder interfaces only with the cloud via a compressed representation of the video.
As a result of these disconnects, many additional steps must be performed in the cloud to fill in the gaps. The result is a hodgepodge of devices, interfaces, wiring, and homegrown software that creates complexity, reliability issues, and cost from both implementation and operational perspectives.
This architecture shifts critical functions from the cloud to the Videon Compute Platform, which resides at the point of video origination. This increases control; significantly reduces latency, complexity, and cost; and enables personalized advertising and localized or targeted, interactivity-driven graphics to be delivered into live streams.
When the graphics insertion function is moved to the Videon Compute Platform, overlays are added to the feed before it is encoded. The platform can also output both "clean" (no graphics) and "dirty" (graphics included) feeds. This approach reduces cloud and device costs as well as complexity and latency.
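As a conceptual sketch only, not the Videon Compute Platform's actual interface, the ffmpeg invocation below shows what "overlay before encode" plus simultaneous clean and dirty outputs looks like: the same source yields one output with a score-bug graphic burned in and one output with no graphics. File names and the overlay position are hypothetical.

```python
# Sketch: produce "dirty" (graphics included) and "clean" (no graphics)
# outputs from the same source in a single ffmpeg run.
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "program_feed.mp4",    # placeholder for the camera/program source
    "-i", "scorebug.png",        # placeholder graphic to overlay
    "-filter_complex", "[0:v][1:v]overlay=main_w-overlay_w-40:40[dirty]",
    # Dirty output: graphics composited onto the video before it is encoded
    "-map", "[dirty]", "-map", "0:a?", "-c:v", "libx264", "-c:a", "aac", "dirty_feed.mp4",
    # Clean output: the same source encoded with no graphics
    "-map", "0:v", "-map", "0:a?", "-c:v", "libx264", "-c:a", "aac", "clean_feed.mp4",
]
subprocess.run(cmd, check=True)
```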
Instead of inserting ad markers in the cloud, that function happens in the Videon Compute Platform, leveraging its encode feature. Ad marker insertion can be triggered by an API command from the cloud or by an operator at the event hitting the proverbial "big red button," enabling operational cost efficiency.
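To show what a cloud-triggered insertion could look like, here is a hedged Python sketch. The endpoint, payload, and auth scheme are hypothetical placeholders, not Videon's actual API; a real integration would follow the platform's documented interface.

```python
# Hypothetical example of triggering an ad marker on an edge encoder.
import requests

DEVICE_API = "https://edge-device.example.com/api/ad-markers"  # hypothetical URL

def trigger_ad_break(duration_s: float, api_token: str) -> None:
    """Ask the edge encoder to insert an ad marker (cue tone) into the feed."""
    resp = requests.post(
        DEVICE_API,
        headers={"Authorization": f"Bearer {api_token}"},
        json={"action": "splice_out", "duration_seconds": duration_s},  # illustrative payload
        timeout=5,
    )
    resp.raise_for_status()

# The venue's "big red button" could call the same function:
# trigger_ad_break(30.0, api_token="REPLACE_ME")
```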
With graphics overlay performed at the video source, simultaneously with contribution encoding, the cost of a dedicated device on-premises for graphics overlay is eliminated. Plus, in the cloud, multiple steps are eliminated, which can drive significant cost savings.
The number of devices, vendors, and functional steps is dramatically reduced by inserting graphics and ad markers at the point of production. This results in far simpler workflow deployment, improved reliability, and reduced support costs.
By inserting graphics ahead of encoding, a second decode/encode step that typically takes 5 seconds or more is removed. With the Videon Compute Platform, the combined encode and package step takes only 150 ms, enabling HTTP-based end-to-end latency of under 3 seconds.
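A quick back-of-the-envelope budget, using the figures above plus assumed values for the remaining stages, shows why removing that second decode/encode matters. The non-cited numbers are assumptions for illustration only, and real latency depends on the specific workflow.

```python
# Illustrative latency budget; only the 5 s and 150 ms figures come from the
# text above, all other stage estimates are assumptions.
cloud_overlay_path = {
    "contribution encode": 0.5,               # assumed
    "cloud decode/overlay/re-encode": 5.0,    # "5 seconds or more" per the text
    "package + CDN + player buffer": 2.5,     # assumed
}
edge_overlay_path = {
    "overlay + encode + package at source": 0.15,  # 150 ms per the text
    "CDN + player buffer": 2.5,                    # assumed
}
for name, path in [("cloud overlay", cloud_overlay_path),
                   ("edge overlay", edge_overlay_path)]:
    print(f"{name}: ~{sum(path.values()):.2f} s end to end")
```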
The Videon Compute Platform runs on Qualcomm Snapdragon ARM processors, which deliver performance per watt and per square inch an order of magnitude better than the Intel or AMD CPUs that power traditional video servers.
This architecture is rapidly becoming the go-to method for fast, cost-effective, and scalable live video production. Contact us today to see just how easy it can be to deliver powerful ad experiences in your live sports broadcasts.