After we looked at how VOD works, let's look at a live workflow, for example the one broadcasters use when they are streaming a live video channel 24/7.
So in this case, the pipeline can have different input sources. You can have a camera recording the event. You can also have files, for example on a drive, which can be made available as a live channel (VOD-to-live). Or you could have streaming software like OBS as the input source. The next step is to transmit this content, as a file or as a live stream from the software, to the encoding stage. Let's say we have static files and we want to turn them into a live channel: we can use an SDI stream and pass it to the encoding box at a very high bitrate. For that live channel, the uncompressed input going into the encoders is in the range of 1+ Gbit/s.
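To make that figure concrete, here is a minimal back-of-the-envelope sketch in Python; the numbers are standard HD-SDI assumptions (10-bit 4:2:2 video at 1080 lines), not values taken from this specific workflow:

```python
# Rough sanity check of why uncompressed SDI input sits above 1 Gbit/s.
# HD-SDI carries 10-bit 4:2:2 video; the nominal link rate is 1.485 Gbit/s.

width, height = 1920, 1080
fps = 30                 # ~30 full frames per second (rounded)
bits_per_pixel = 20      # 4:2:2 chroma subsampling at 10 bits per sample

active_video = width * height * bits_per_pixel * fps  # bits per second
print(f"Active video alone: {active_video / 1e9:.2f} Gbit/s")
# -> ~1.24 Gbit/s; with blanking and embedded audio the link runs at 1.485 Gbit/s
```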
So the next step, once we have this source configured, is to perform the same operation we do for VOD workflows: maximize the video quality while minimizing the file size. For this we use hardware or cloud encoders. Commercial cloud solutions like Bitmovin and Mux can run on Kubernetes clusters, which makes them scalable and allows for parallel processing of many videos and channels. Or you can use hardware encoders, for example from Harmonic, which can ingest SDI, such as SMPTE 2022-6 (SDI over RTP), at a very high bitrate. We take the input from the previous step and hand it to a hardware solution that can handle this data rate. We also need backup encoders, which work so that if one encoder goes down, the other one can substitute it. In the cloud case, this would be another Kubernetes cluster doing the same job. It increases complexity, but it allows for scalability in the cloud. The next step is to apply encoding profiles, like we saw in the VOD case, with different resolutions and bitrates depending on where the user will consume the media.
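As a rough illustration of what such an encoding profile can look like, here is a hypothetical ladder definition; the renditions and bitrates below are placeholders for illustration, not the actual profile of any encoder named above:

```python
# A hypothetical ABR encoding ladder for a live channel. Each rung is one
# rendition the encoder produces in parallel; real profiles are tuned per
# content type and audience, so treat these numbers as placeholders.
LADDER = [
    # (name,   width, height, video_kbps)
    ("1080p",  1920,  1080,   5000),
    ("720p",   1280,   720,   3000),
    ("480p",    854,   480,   1500),
    ("360p",    640,   360,    640),  # the light rendition for phones
]

for name, w, h, kbps in LADDER:
    print(f"{name}: {w}x{h} @ {kbps} kbit/s")
```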
So, for example, as we said before, we're not going to send a 5-megabit video to a cell phone. We might send a 640-kilobit one instead, because it's much lighter and can be consumed on a smartphone with a small screen and probably not such a stable connection. We create these versions for DASH or HLS, with multiple bitrates to serve all the different clients. The video gets split into chunks a couple of seconds long, and these segments allow for quality switching at the player level based on the playback context. For example, if the user has high connectivity, they initially get a higher-resolution, higher-bitrate file; if they then move with their cell phone to an area of lower connectivity, the player automatically switches to a version at a lower bitrate and resolution so the video keeps playing. These lower renditions might also be picked first so that the video starts up quickly and then switches to a higher quality. Once we have the different versions of the video file, or of the channel we're transcoding live, we place the resulting manifest, video, and audio files on a Content Delivery Network, or CDN; for example, we use Akamai MSL4, which copies the files across a number of servers around the globe. This way, users accessing the live stream from different parts of the world on different devices can get the files faster. For example, if a user is connecting to the channel from Italy on a cell phone, it is faster to deliver the video if the server holding the files is in Italy than if it's in another, farther location.
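To show the switching idea, here is a minimal sketch of player-side rendition selection, assuming the player measures throughput per downloaded segment; real players use more elaborate heuristics, but the core rule is the same: pick the highest rendition that fits the measured bandwidth.

```python
# Minimal player-side ABR selection sketch (not any particular player's
# algorithm). Renditions match the hypothetical ladder above, high to low.
RENDITIONS_KBPS = [5000, 3000, 1500, 640]

def pick_rendition(measured_kbps: float, safety: float = 0.8) -> int:
    """Return the highest bitrate that fits within a safety margin."""
    budget = measured_kbps * safety
    for kbps in RENDITIONS_KBPS:       # sorted high to low
        if kbps <= budget:
            return kbps
    return RENDITIONS_KBPS[-1]         # worst case: lowest rung, keep playing

print(pick_rendition(7000))   # strong connection -> 5000
print(pick_rendition(1000))   # low-coverage area -> 640, playback continues
```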
So how does this all come together? In the workflow there are different tuning options that feed back into the different parts of the pipeline, for example automated testing and visual quality analysis on the encoding, to make sure the resulting video files are good enough to give to users. There are also KPIs and analytics at the CDN level to see how the files are consumed, and analytics at the player and user level to see which files and resolutions are consumed most, whether users had any playback problems, and how long the video stream took to start. All of this information together allows the video developer to adjust the encoding profiles and the visual quality analysis, reduce costs, and improve the user experience by always providing the best uninterrupted quality. As you can see, a lot of roles are needed for this, software engineers, DevOps engineers, and architects, because it is a very complex workflow. But in the end, it makes sure that users can get to their videos quickly, without buffering, and enjoy them.
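As a small illustration of the player-level metrics mentioned here, the sketch below computes average startup time and rebuffer ratio from a hypothetical session log; the field names are made up for illustration and are not the schema of any particular analytics SDK:

```python
# Hypothetical per-session playback metrics and two common QoE KPIs.
from dataclasses import dataclass

@dataclass
class Session:
    startup_ms: int   # time from play request to first rendered frame
    played_ms: int    # total time spent playing
    stalled_ms: int   # total time spent rebuffering

def rebuffer_ratio(s: Session) -> float:
    """Share of the watch time lost to buffering."""
    return s.stalled_ms / (s.played_ms + s.stalled_ms)

sessions = [Session(900, 600_000, 2_000), Session(2500, 300_000, 15_000)]
avg_startup = sum(s.startup_ms for s in sessions) / len(sessions)
print(f"avg startup: {avg_startup:.0f} ms")          # 1700 ms
for s in sessions:
    print(f"rebuffer ratio: {rebuffer_ratio(s):.2%}")  # 0.33%, 4.76%
```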