I've written a few posts lately about multi-host live streams with Amazon Interactive Video Service (Amazon IVS). It's an exciting feature that opens up worlds of possibilities that simply weren't available until recently. We first looked at how to create a multi-host live chat application.
When we looked at adding chat participants to the broadcast client in that last post, you probably noticed that I cheated a bit and hardcoded the VideoComposition values that tell the broadcast client the size and position of the participant's video on the client.
In the last post, we listened for an event called STAGE_PARTICIPANT_STREAMS_ADDED. In the event handler for that event, we added our participants to the DOM and rendered the audio and video to the IVSBroadcastClient instance.
In order to render a dynamic layout, we'll need to track how many participants are currently in the session, so we'll add an array called participantIds as a global variable. Let's modify the event handler to push the current participant id to that array.
stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {
  // add participant id to array
  participantIds.push(participant.id);
  renderParticipant(participant, streams);
  renderVideosToClient(participant, streams.find(s => s.streamType === StreamType.VIDEO));
  renderAudioToClient(participant, streams.find(s => s.streamType === StreamType.AUDIO));
  updateVideoCompositions();
});
In the last post, I mentioned that the updateVideoCompositions() method was not shown because the implementation would vary. We'll talk about one possible implementation in just a bit.
One way to obtain a dynamic size and position is to loop over the participant array and calculate them based on the number of participants, the size of the <canvas>, and the desired number of rows, columns, and padding. But why? That sounds like a lot of difficult code and unnecessary work once you realize that, for a given participant count, these values never change. If you have one participant, the video will be a fixed size and centered in the <canvas>.
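For the curious, that dynamic approach might look something like the sketch below. This is my own illustration, not code from the demo; it assumes a 1920x1080 canvas, 16:9 tiles, and a simple centered grid, and the function name and parameters are invented for this example.

```javascript
// Sketch: compute centered grid compositions for n participants.
// Assumes a 1920x1080 canvas; helper name and parameters are hypothetical.
const calcCompositions = (n, canvasWidth = 1920, canvasHeight = 1080, padding = 20) => {
  const cols = Math.ceil(Math.sqrt(n));
  const rows = Math.ceil(n / cols);
  const width = Math.floor((canvasWidth - padding * (cols + 1)) / cols);
  const height = Math.floor(width * 9 / 16); // keep a 16:9 aspect ratio
  return Array.from({ length: n }, (_, i) => {
    const row = Math.floor(i / cols);
    const col = i % cols;
    // the last row may hold fewer tiles, so center it independently
    const itemsInRow = row === rows - 1 ? n - row * cols : cols;
    const rowWidth = itemsInRow * width + (itemsInRow - 1) * padding;
    const x = Math.floor((canvasWidth - rowWidth) / 2) + col * (width + padding);
    const rowsHeight = rows * height + (rows - 1) * padding;
    const y = Math.floor((canvasHeight - rowsHeight) / 2) + row * (height + padding);
    return { height, width, x, y };
  });
};
```

It works, but it's more code to test and debug than a lookup table - which is exactly the point of the next section.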
For my demo, I spent an intensive 30 minutes with a pen, paper, and calculator determining the composition values for each possible layout. Please note: I was not a maths or art major, as evidenced by the following sketch.
const layouts = [
  // 1 participant: single centered video
  [{ height: 720, width: 1280, x: 320, y: 180 }],
  // 2 participants: side by side
  [{ height: 450, width: 800, x: 80, y: 315 }, { height: 450, width: 800, x: 1040, y: 315 }],
  // 3 participants: two on top, one centered below
  [{ height: 450, width: 800, x: 80, y: 45 }, { height: 450, width: 800, x: 1040, y: 45 }, { height: 450, width: 800, x: 560, y: 585 }],
  // 4 participants: 2x2 grid
  [{ height: 450, width: 800, x: 80, y: 45 }, { height: 450, width: 800, x: 1040, y: 45 }, { height: 450, width: 800, x: 80, y: 585 }, { height: 450, width: 800, x: 1040, y: 585 }],
  // 5 participants: three on top, two centered below
  [{ height: 337, width: 600, x: 20, y: 100 }, { height: 337, width: 600, x: 650, y: 100 }, { height: 337, width: 600, x: 1280, y: 100 }, { height: 337, width: 600, x: 340, y: 640 }, { height: 337, width: 600, x: 980, y: 640 }],
  // 6 participants: 3x2 grid
  [{ height: 337, width: 600, x: 20, y: 100 }, { height: 337, width: 600, x: 650, y: 100 }, { height: 337, width: 600, x: 1280, y: 100 }, { height: 337, width: 600, x: 20, y: 640 }, { height: 337, width: 600, x: 650, y: 640 }, { height: 337, width: 600, x: 1280, y: 640 }]
];
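Hand-computed values like these are easy to fat-finger, so a quick sanity check that every composition stays on the broadcast canvas can save a debugging session. This helper is my own addition, not part of the demo, and assumes the default 1920x1080 canvas:

```javascript
// Verify that every composition in every layout fits within the canvas.
// Illustrative helper; the 1920x1080 default assumes a full HD broadcast canvas.
const layoutsFitCanvas = (layouts, canvas = { width: 1920, height: 1080 }) =>
  layouts.every((layout) =>
    layout.every(({ x, y, width, height }) =>
      x >= 0 && y >= 0 && x + width <= canvas.width && y + height <= canvas.height
    )
  );
```

Running layoutsFitCanvas(layouts) during development should return true; if it ever returns false, one of the hand-calculated values is off.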
If there are 3 participants, we can reference the third element of the outer array, and the position of the participant id in the participantIds array determines which composition applies to that video.
We can modify our renderVideosToClient() function to grab the proper composition and use those values when we add the video to the broadcast client.
const renderVideosToClient = async (participant, stream) => {
  const participantId = participant.id;
  const videoId = `video-${participantId}`;
  // get the index of this participantId
  const pIdx = participantIds.indexOf(participantId);
  // look up the composition for the current number of participants
  const composition = layouts[participantIds.length - 1][pIdx];
  // render this video above the background layer
  composition.index = 2;
  const mediaStream = new MediaStream();
  mediaStream.addTrack(stream.mediaStreamTrack);
  broadcastClient.addVideoInputDevice(mediaStream, videoId, composition);
};
But remember - if we only do this when a participant is added, the previously added videos will still reflect the composition that was applied when they were added. That is where the updateVideoCompositions() function comes into the picture.
Here we loop over the participantIds array, grab the proper composition from layouts, and use the updateVideoDeviceComposition() method on the broadcastClient instance.
const updateVideoCompositions = async () => {
  let idx = 0;
  for (const p of participantIds) {
    const videoId = `video-${p}`;
    // grab the composition for the current number of participants
    const config = layouts[participantIds.length - 1][idx];
    config.index = 2;
    broadcastClient.updateVideoDeviceComposition(videoId, config);
    idx = idx + 1;
  }
};
stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_REMOVED, (participant, streams) => {
  const participantId = participant.id;
  // remove participant id from array
  const pIdx = participantIds.findIndex(id => id === participantId);
  participantIds.splice(pIdx, 1);
  const videoTrackId = `video-${participantId}`;
  const audioTrackId = `audio-${participantId}`;
  if (broadcastClient.getVideoInputDevice(videoTrackId)) broadcastClient.removeVideoInputDevice(videoTrackId);
  if (broadcastClient.getAudioInputDevice(audioTrackId)) broadcastClient.removeAudioInputDevice(audioTrackId);
  // remove the participant's video element from the DOM
  const videoId = `${participantId}-video`;
  document.getElementById(videoId).closest('.participant-col').remove();
  updateVideoCompositions();
});