re:Invent 2019 Recap: The Year of AI/ML
re:Invent may have happened over a month ago, but what better time to touch on Amazon Web Services’ flagship annual conference and our key takeaways than at the start of the new year… Or halfway through the second month of the new year…
re:Invent was an amazing way to see just how far-reaching AWS is, not only in the services it offers but also in the huge variety of users it serves. Odds are, if you’re doing anything on the internet, AWS is somewhere either at the forefront or behind the scenes.
I went into re:Invent intentionally seeking out AI/ML sessions. I wanted to understand how AI/ML can be useful to live video streaming, as well as to other applications such as satellite imaging and utility surveys. I also wanted to learn how AWS views live video productions and the approaches it takes to ensure a successful live event. What I learned has greatly improved my understanding of how to help our customers succeed when setting up a live event, and how our product can support that.
- AWS Elemental MediaConnect enables the high-quality distribution of mezzanine-level content. MediaConnect is a great ingest point: it connects easily to other AWS services while remaining flexible enough to feed a wide variety of workflows.
- Resiliency in live video workflows
- Resiliency = Redundancy + Failover. Resiliency gives the viewer uninterrupted playback even when components in your workflow fail.
- Simple Resiliency = Duplication + Manual Failover.
- Better Resiliency = Cloud-Native Redundancy with autoscale and Self-Healing + Auto-Failover.
- Performing audio transcription with Machine Learning is up and coming, if not already in use at a number of institutions (think of the medical field, where it reduces the charting and data-entry workload on doctors). In support of Machine Learning, Amazon SageMaker Studio was released, giving anyone a full set of pre-made, easy-to-use tools for developing Machine Learning applications.
- Users should think about their live streaming workflow from the ground up and do it right from the start. The five pillars of the AWS Well-Architected Framework are Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.
- Amazon Rekognition: AI/ML is used for image and video analysis, letting AWS customers apply their own custom label sets to a huge variety of applications.
- The main demonstration used custom labels to find specific moments in video content (e.g., interviews where a golden record was in the frame). Letting users train the AI/ML systems on their own custom label sets greatly reduces the manual effort needed to find and process video content.
- Because AI/ML can process content far faster than manual review, valuable data is available in near real time.
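The resiliency formulas above can be sketched as a small auto-failover loop. This is a minimal illustration in Python, assuming hypothetical health-check callables for a primary and a backup leg; it is not an AWS API, just the Redundancy + Failover idea in code.

```python
from typing import Callable


class AutoFailover:
    """Minimal auto-failover: prefer the primary source, and switch to
    the backup when the primary's health check fails."""

    def __init__(self, primary_healthy: Callable[[], bool],
                 backup_healthy: Callable[[], bool]):
        self.primary_healthy = primary_healthy
        self.backup_healthy = backup_healthy
        self.active = "primary"

    def tick(self) -> str:
        """Run one health-check cycle and return the active source."""
        if self.primary_healthy():
            self.active = "primary"   # recover back to primary when it heals
        elif self.backup_healthy():
            self.active = "backup"    # automatic failover to the redundant leg
        else:
            self.active = "none"      # both legs down: viewer-visible outage
        return self.active
```

In a true cloud-native workflow, the health checks would come from the streaming infrastructure itself, and "self-healing" would mean automatically replacing the failed leg (autoscaling a new instance) rather than merely switching away from it.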
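The custom-labels demo described above essentially boils down to filtering detection results by label name and confidence. Here is a sketch of that post-processing step in Python, using a simplified response shape (a list of dicts with `Timestamp`, `Name`, and `Confidence` fields, loosely modeled on Rekognition-style output); the field names and sample data are illustrative assumptions, not the exact API.

```python
def find_moments(detections, label, min_confidence=80.0):
    """Return timestamps (ms) where `label` was detected with sufficient
    confidence, e.g. every moment a golden record appears in frame."""
    return [d["Timestamp"]
            for d in detections
            if d["Name"] == label and d["Confidence"] >= min_confidence]


# Example: simplified per-frame detections from a video analysis job.
detections = [
    {"Timestamp": 0,    "Name": "golden-record", "Confidence": 55.0},
    {"Timestamp": 1000, "Name": "golden-record", "Confidence": 91.2},
    {"Timestamp": 2000, "Name": "microphone",    "Confidence": 97.5},
    {"Timestamp": 3000, "Name": "golden-record", "Confidence": 88.4},
]
print(find_moments(detections, "golden-record"))  # [1000, 3000]
```

The real-time value mentioned above comes from running this kind of filter continuously as detections stream in, rather than reviewing footage by hand after the fact.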
Biggest Takeaway: AI/ML is huge, and AWS is driving it full-throttle. The number of AI/ML tools and services AWS provides is going to grow drastically over the next two to three years.
Tip for Future Attendees of re:Invent: Take the time to plan your day in advance. I ended up running from place to place trying to see everything rather than planning an efficient route between sessions. There is just so much to see.
About the Author
Tristan Avelis is Videon’s Product Manager responsible for translating input from the live streaming market, individual users, and tech industry leaders into implemented product features that meet the needs of a wide variety of live streaming applications.