In fact, while building the service for taking screenshots, we have already done most of the work needed for screencast recording. We already have the MediaStream object delivered by webkitGetUserMedia; we just need a way to mark the start and end of a recording and to save the collected frames to a video file. That is where the MediaStream Recording API helps: it captures the data produced by a MediaStream or an HTMLMediaElement (for example, <video>) so that we can save it. So, we modify the service again:
./js/Service/Capturer.js
//...
const toBuffer = require( "blob-to-buffer" );
//...
start( desktopStreamId ){
  navigator.webkitGetUserMedia( /* constraints */, ( stream ) => {
    let chunks = [];
    this.dom.video.srcObject ...
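Before continuing with the service, it may help to see the MediaStream Recording API in isolation. The following is a minimal sketch, not the book's implementation: the helper name `createRecorder` and the `video/webm` MIME type are assumptions, and the `stream` argument stands for the MediaStream obtained from webkitGetUserMedia.

```javascript
// Sketch only (assumption: runs in a browser-like environment where
// MediaRecorder and Blob are available; `stream` is a MediaStream
// delivered elsewhere, e.g. by navigator.webkitGetUserMedia).
function createRecorder( stream, onComplete ) {
  const chunks = [];
  // MediaRecorder captures the data produced by the MediaStream
  const recorder = new MediaRecorder( stream, { mimeType: "video/webm" } );
  // Collect every non-empty chunk the recorder emits
  recorder.ondataavailable = ( e ) => {
    if ( e.data && e.data.size > 0 ) {
      chunks.push( e.data );
    }
  };
  // On stop, merge the chunks into a single Blob ready to be saved
  recorder.onstop = () => {
    onComplete( new Blob( chunks, { type: "video/webm" } ) );
  };
  return recorder;
}
```

Calling `recorder.start( 1000 )` makes the recorder emit a chunk every second via `ondataavailable`; `recorder.stop()` fires `onstop`, delivering the final Blob, which can then be converted with a module such as blob-to-buffer and written to disk.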