Accessing Local Devices through the Browser

Chrome – WebRTC media-devices

// Query media devices of a given kind (e.g. 'videoinput', 'audioinput')
async function getConnectedDevices(type) {
    const devices = await navigator.mediaDevices.enumerateDevices();
    return devices.filter(device => device.kind === type);
}

// getConnectedDevices is async, so wait for the promise before logging
getConnectedDevices('videoinput')
    .then(videoCameras => console.log('Cameras found:', videoCameras));
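
Once a camera's deviceId is known, it can be passed to getUserMedia as a constraint to open that specific device. A minimal sketch (the openCamera helper and the exact-deviceId constraint shape are illustrative, not part of the original sample):

// Open a specific camera by deviceId, or the browser's default camera if none is given
async function openCamera(cameraId) {
    const constraints = {
        audio: false,
        video: cameraId ? { deviceId: { exact: cameraId } } : true
    };
    return navigator.mediaDevices.getUserMedia(constraints);
}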

Listening for device changes:

// Update the select element with the provided set of cameras
function updateCameraList(cameras) {
    const listElement = document.querySelector('select#availableCameras');
    listElement.innerHTML = '';
    cameras.map(camera => {
        const cameraOption = document.createElement('option');
        cameraOption.label = camera.label;
        cameraOption.value = camera.deviceId;
        return cameraOption;
    }).forEach(cameraOption => listElement.add(cameraOption));
}

// Fetch an array of devices of a certain type
async function getConnectedDevices(type) {
    const devices = await navigator.mediaDevices.enumerateDevices();
    return devices.filter(device => device.kind === type);
}

// Get the initial set of connected cameras
getConnectedDevices('videoinput').then(updateCameraList);

// Listen for changes to media devices and update the list accordingly
navigator.mediaDevices.addEventListener('devicechange', async () => {
    const newCameraList = await getConnectedDevices('videoinput');
    updateCameraList(newCameraList);
});
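
One caveat not shown above: enumerateDevices() returns empty label strings until the page has an active media stream or persistent permission. A common pattern, sketched below (assuming it runs inside an async function), is to request access once before building the list so the labels are populated:

// Prompt for camera access once so device labels are readable, then build the list
await navigator.mediaDevices.getUserMedia({ video: true });
updateCameraList(await getConnectedDevices('videoinput'));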

WebRTC samples site

Stopping a stream

function stopStreamedVideo(videoElem) {
  const stream = videoElem.srcObject;
  const tracks = stream.getTracks();

  tracks.forEach((track) => {
    track.stop();
  });
  // To stop just the first track instead:
  // tracks[0].stop();

  videoElem.srcObject = null;
}
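
For example, assuming a <video> element with a hypothetical id of localVideo, the helper above could be called as:

stopStreamedVideo(document.querySelector('video#localVideo'));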

Establish a stream

const constraints = window.constraints = {
  audio: false,
  video: true
};
async function init(e) {
  try {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    handleSuccess(stream);
    e.target.disabled = true;
  } catch (error) {
    handleError(error);
  }
}
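
The handleSuccess callback is not shown in the snippet above. A minimal sketch, assuming the captured stream should be attached to a <video> element with a hypothetical id of localVideo:

function handleSuccess(stream) {
  // Keep a global reference so the stream can be stopped later (see stopStreamedVideo above)
  window.stream = stream;
  // 'localVideo' is a placeholder id; use whatever <video> element the page provides
  document.querySelector('video#localVideo').srcObject = stream;
}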

WebRTC Status Report

Monitor using chrome://webrtc-internals/

The Chrome WebRTC internals tool lets you view real-time information about the media streams in a WebRTC call, including details about the video and audio tracks, the codecs in use, and the overall quality of the stream. This information is very helpful when troubleshooting poor audio and video quality. For example, a getUserMedia call that fails with a permission error shows up as an entry like the following:

Caller origin: https://flex.twilio.com
Caller process id: 35408
getUserMedia call
Time: 11:00:11 GMT-0700 (Mountain Standard Time)
Audio constraints: {deviceId: {exact: ["default"]}}
Error
Time: 11:00:11 GMT-0700 (Mountain Standard Time)
Error: NotAllowedError
Error message: Permission denied
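
NotAllowedError is what getUserMedia rejects with when permission is denied. A sketch of a handleError callback (matching the one referenced in the earlier snippet; the branching is illustrative) that distinguishes the common failure cases by error.name:

function handleError(error) {
  if (error.name === 'NotAllowedError') {
    // The user or a permissions policy denied access
    console.error('Permission denied:', error.message);
  } else if (error.name === 'NotFoundError') {
    // No device matched the requested constraints
    console.error('No matching device found:', error.message);
  } else if (error.name === 'OverconstrainedError') {
    // A constraint (e.g. an exact deviceId) could not be satisfied
    console.error('Cannot satisfy constraint:', error.constraint);
  } else {
    console.error('getUserMedia error:', error);
  }
}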

A diagnostic packet and event recording can be used to analyze issues such as thread starvation, jitter buffer behavior, and bandwidth estimation. Two types of data are logged. First, incoming and outgoing RTP headers and RTCP packets are logged; these do not include any audio or video content, nor any other personally identifiable information (no IP addresses or URLs).

Checking the recording box in webrtc-internals enables recording for ongoing and future WebRTC calls. When the box is unchecked or the page is closed, all ongoing recordings are stopped and recording is disabled for future calls. Recording in multiple tabs, or multiple recordings in the same tab, will create multiple log files. When enabling recording, a filename can be entered; it is used as a base to which suffixes are appended.

How do you find the current active connection in webrtc-internals?

Simple WebRTC example – a Node.js server that communicates with clients via WebSockets.
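
A minimal sketch of such a signaling server, assuming the ws npm package and port 8080 (both are assumptions; the example on the linked page may be structured differently). It simply relays SDP offers/answers and ICE candidates between connected clients:

// signaling-server.js – relay signaling messages between WebRTC peers
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (message) => {
    // Broadcast every signaling message to all other connected clients
    wss.clients.forEach((client) => {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(message.toString());
      }
    });
  });
});

console.log('Signaling server listening on ws://localhost:8080');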

GitHub WebRTC page