Chapter 18. Media
Introduction
Modern browsers have rich APIs for working with video and audio streams. The WebRTC API supports creating these streams from devices such as cameras and microphones.
A video stream can be played live inside a <video> element, and from there you can capture a frame of the video to save as an image or upload to an API. A <video> element can also play back video that was recorded from a stream.
Before these APIs were available, you needed a browser plug-in to access the user's camera. Today, you can use the Media Capture and Streams API to start reading data from the camera and microphone with just a few lines of code.
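As a quick illustration of that pattern, here is a minimal sketch of a live camera preview plus frame capture. The element IDs (#preview, #snapshot) and function names (startPreview, captureFrame) are placeholder assumptions, not part of this recipe.

// Minimal sketch: request a camera/microphone stream and preview it live.
// Assumes the page contains <video id="preview"> and <canvas id="snapshot">.
async function startPreview() {
  // Prompts the user for permission to use the camera and microphone.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const video = document.querySelector('#preview');
  video.srcObject = stream; // Live streams use srcObject, not src.
  await video.play();
}

// Copies the current video frame onto a canvas and returns it as a data URL,
// which you could save as an image or upload to an API.
function captureFrame() {
  const video = document.querySelector('#preview');
  const canvas = document.querySelector('#snapshot');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas.toDataURL('image/png');
}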
Recording the Screen
Problem
You want to capture a video of the user’s screen.
Solution
Use the Screen Capture API to capture a video of the screen, then set it as the source of a <video> element (see Example 18-1).
Example 18-1. Capturing a video of the screen
async function captureScreen() {
  // Prompt the user to choose a screen, window, or tab to share.
  const stream = await navigator.mediaDevices.getDisplayMedia();

  // Record the shared screen as WebM video.
  const mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm' });

  mediaRecorder.addEventListener('dataavailable', event => {
    // Wrap the recorded data in a Blob and create an object URL for it.
    const blob = new Blob([event.data], {
      type: 'video/webm',
    });
    const url = URL.createObjectURL(blob);

    // Assumes a <video> element in the page for playing back the recording.
    const video = document.querySelector('video');
    video.src = url;
  });

  mediaRecorder.start();
}
Note
The screen contents are not streamed live to the <video> element. Rather, the screen share is captured into memory. Once you've finished capturing the screen, the recorded video is loaded into the <video> element, where it can be played back.
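One detail Example 18-1 leaves open is how the recording ends. As a sketch, the lines below could be added inside captureScreen(), after the recorder is created; this wiring is an assumption, not part of the original example. They stop the recorder when the user ends the screen share from the browser's own UI, which fires the ended event on the captured video track.

  // Sketch (assumed addition): stop recording when the user ends the
  // screen share via the browser UI. Uses the stream and mediaRecorder
  // variables from Example 18-1.
  const [track] = stream.getVideoTracks();
  track.addEventListener('ended', () => mediaRecorder.stop());

Calling mediaRecorder.stop() is what triggers the dataavailable event, at which point the handler in Example 18-1 builds the Blob and loads the recording into the <video> element. You could just as well call stop() from a button's click handler instead.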