
IMPORTANT NOTE

This is not the latest version of our SDKs. 3.X versions will be deprecated, and our recommendation is to update to a 4.X version. Bear in mind that 4.X versions represent a big change from the previous major version, providing more capabilities and potential. At the same time, they break backward compatibility, which must be managed using the provided migration guide. Please make sure to perform proper tests in development environments to ensure that everything works as expected.

Introduction

VDAlive captures selfie images of users. VDAlive takes into account the type of device being used and attempts to launch the camera that will provide the user with the best experience:

  • Front camera is used on mobile devices (smartphones, tablets…)
  • Front camera is used on computers (desktops, laptops…)

The following permission is required for the framework to work:

  • Camera.
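Requesting camera access before mounting the SDK lets permission errors surface early. The sketch below uses the standard getUserMedia API and is an illustration, not part of VDAlive itself; the `cameraConstraints` and `ensureCameraPermission` names are chosen here.

```javascript
// Constraint object asking for the front ("user") camera, no audio.
function cameraConstraints() {
  return { video: { facingMode: 'user' }, audio: false };
}

// Request camera access; resolves to true if permission was granted.
// Guarded so it is a no-op outside a browser context.
async function ensureCameraPermission() {
  if (typeof navigator === 'undefined' || !navigator.mediaDevices) {
    return false;
  }
  try {
    const stream = await navigator.mediaDevices.getUserMedia(cameraConstraints());
    // Stop the tracks immediately; VDAlive opens its own stream later.
    stream.getTracks().forEach((track) => track.stop());
    return true;
  } catch (err) {
    console.error('Camera permission denied or unavailable:', err);
    return false;
  }
}
```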

To improve the user experience, VDAlive attempts to capture a frontal face image automatically. VDAlive uses several algorithms to detect a human face in the camera's field of vision and asks the user for a proof-of-life verification. This proof of life consists of a mask that moves in different directions, requiring the user to rotate his or her head in random directions. The video challenge size is about 600 KB - 1 MB.

Specifications

The SDK supports the following devices and browsers:

Desktop Browsers

Browser Name    Minimum version    Current version
Chrome          57                 114
Firefox         52                 111
Safari          11.2               16.4
Opera           44                 95
Edge            16                 111

Mobile and Tablets Browsers

Browser Name        Platform    Minimum version    Current version
Chrome              Android     57                 111
Firefox             Android     52                 110
Edge                Android     42                 112
Opera Mobile        Android     46                 73
Samsung Internet    Android     7.2                20
Safari              iOS         11.2               16.4

Current versions are listed as of 2023/03/31. Supported browsers must provide getUserMedia and WebAssembly support.
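Since the SDK relies on getUserMedia and WebAssembly, a quick feature check before loading it can route unsupported browsers to a fallback. This is a generic sketch; the `browserSupported` name and the injectable `env` argument are choices made here for testability.

```javascript
// Returns true when the given environment provides both capabilities
// VDAlive needs: getUserMedia (camera capture) and WebAssembly.
function browserSupported(env) {
  const hasGetUserMedia =
    !!env.navigator &&
    !!env.navigator.mediaDevices &&
    typeof env.navigator.mediaDevices.getUserMedia === 'function';
  const hasWasm =
    typeof env.WebAssembly === 'object' &&
    typeof env.WebAssembly.instantiate === 'function';
  return hasGetUserMedia && hasWasm;
}

// In a browser you would call it with the global object:
// if (!browserSupported(window)) { /* show a fallback message */ }
```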

The size of the SDK is 3.1 MB.

Additionally, some dependencies will be needed. These are:

  • cv_alive.js: 7 MB
  • gifs: 700KB
  • tensorflow: 630KB
  • cuantized_model: 380KB
  • workers: 3KB

Integration

VDAlive is built as a stand-alone module:

  • VDAlive.js file contains all functionality.
  • opencv folder contains the OpenCV library.
  • workers folder contains the workers necessary for the detection process.
  • models folder contains the TensorFlow wasm files and the Veridas quantized model (cuantized_model).
  • gifs folder contains the head-movement gifs.

Place VDAlive.js with the other static assets of your website.

In your HTML, right before the closing tag of the body element, place a script tag:

<script
  type='application/javascript'
  src='path_to_static_assets/VDAlive.js'
  charset='UTF-8'
></script>

At the location in your HTML where VDAlive should mount, place a div element with a unique id or class name. The id "target" is used here as an example:

<div id='target'></div>

The target must have defined width and height values; here is an example taking relative values from the parent node:

#target {
  width: 100%;
  height: 100%;
}

VDAlive can be launched at any point in your JavaScript code; the only requirement is that the HTML document has fully loaded. The following code demonstrates how to mount the SDK with the required targetSelector setting and some recommended configuration:

function sdkStart() {
  const VDAlive = makeVDAliveWidget();
  VDAlive({
    targetSelector: '#target',
    pathModels: '/public/models/',
    infoAlertShow: true,
    reviewImage: true,
    logEventsToConsole: true,
    aliveChallenge: 'challenge token',
    ngas_images_path: '/public/gifs/',
    infoUserAliveHeader: '',
    infoUserAliveHeaderColor: '#000D44',
    infoUserAliveTitleColor: '#000D44',
    infoUserAliveSubTitleColor: '#000D44',
    infoUserAliveColorButton: '#000D44',
    infoUserAliveNextButtonColor: '#000D44',
    infoUserAlivePrevButtonColor: '#000D44',
    stepsChallengeColor: '#000D44',
    buttonBackgroundColorDark: '#000D44',
    buttonBackgroundColorLight: '#000D44',
    buttonBackgroundColorDarkRepeat: 'transparent',
    buttonBackgroundColorLightRepeat: 'transparent',
    repeatButtonColor: '#000D44',
    sdkBackgroundColorInactive: '#FFFFFF',
    borderColorCenteringAidDetecting: '#078B3C',
    outerGlowCenteringAidDefault: '#000D44',
    detectionMessageBackgroundColor: '#000D44',
    detectionMessageTextColor: '#000D44',
    detectionMessageTextColorSelfie: '#000D44',
    confirmationDialogTextColor: '#000D44',
    confirmationColorTick: '#078B3C',
    errorDisplayBackgroundColor: '#FFFFFF',
    errorDisplayTextColor: '#000D44',
    errorActionButtonBackgroundColor: '#FFFFFF',
    errorActionButtonTextColor: '#000D44',
    errorDisplayIconColor: '#000D44',
  });
}

window.onload = () => sdkStart();

Token generation

There are two selfie alive modes: Selfie Alive and Selfie Alive Pro. The goal of both is to verify the user's identity and detect that the user is alive by means of an active challenge.

Selfie Alive Pro will not work without a token provided by the Veridas endpoint /challenges/generation. If the token is not passed in the configuration, only Selfie Alive will be enabled.

To activate the Selfie Alive Pro process, it is necessary to configure "aliveChallenge" with a challenge token. This token must be retrieved from the backend (a call to /challenges/generation returns the string to use as the "aliveChallenge" value) and passed to the SDK before it is started.
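A common pattern is to fetch the challenge token from your own backend (which in turn calls /challenges/generation) and only then start the SDK. The endpoint URL `/api/alive-challenge` and the `buildAliveConfig` helper below are hypothetical names for illustration; only the `aliveChallenge` key itself comes from the SDK configuration.

```javascript
// Build the SDK configuration, enabling Selfie Alive Pro only when a
// challenge token is available; without it, plain Selfie Alive runs.
function buildAliveConfig(targetSelector, challengeToken) {
  const config = {
    targetSelector,
    pathModels: '/public/models/',
    ngas_images_path: '/public/gifs/',
  };
  if (challengeToken) {
    config.aliveChallenge = challengeToken; // activates Selfie Alive Pro
  }
  return config;
}

// Hypothetical backend proxy that forwards /challenges/generation.
async function startWithChallenge() {
  const response = await fetch('/api/alive-challenge');
  const token = await response.text();
  const VDAlive = makeVDAliveWidget();
  VDAlive(buildAliveConfig('#target', token));
}
```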

Hierarchy

In the public opencv folder, place the following file:

  • cv_alive.js

In the public workers folder, place:

  • cv.worker.js
  • tfjs-backend-wasm-threaded-simd.worker.js

The hierarchy must be:

  • VDAlive.js
  • opencv
    • cv_alive.js
  • workers
    • cv.worker.js
    • tfjs-backend-wasm-threaded-simd.worker.js
  • models
    • veridas
      • cuantized_model.json
    • tfjs-backend-wasm-simd.wasm
    • tfjs-backend-wasm-threaded-simd.wasm
    • tfjs-backend-wasm.wasm
  • gifs

If the model files need to be stored in a different path, specify that path in the required configuration option pathModels.

If the gif files need to be stored in a different path, specify that path in the required configuration option ngas_images_path.

The workers and opencv folders must be in the same directory as the VDAlive.js file.
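If the models and gifs folders are served from a non-default location (for example a CDN), the two path options can point there explicitly. The host `https://cdn.example.com` below is a placeholder, not a real asset location:

```javascript
// Example configuration fragment with custom asset locations.
// 'https://cdn.example.com/...' is a placeholder host.
const assetPaths = {
  pathModels: 'https://cdn.example.com/vdalive/models/',
  ngas_images_path: 'https://cdn.example.com/vdalive/gifs/',
};
```

These keys would be merged into the configuration object passed to VDAlive at startup.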

Output Files

For the Selfie Alive process, the output files are:

  • image: base64 selfie image taken before the facial gesture.
  • image_alive: base64 image taken after the smile or serious gesture.
  • webVTT: null.

For the Selfie Alive Pro process, the output files are:

  • image: base64 selfie image taken before the challenge movements.
  • image_alive: contains the data of the captured video.
  • webVTT: contains the data of the WebVTT file.

image_alive parameter:

  • Standard Output: video → Blob, containing the recorded video in webm format, except on Safari versions >= 14.8, where it is in mp4 format.
  • Alternative Output: video → array of recorded frames, including base64 images and metadata that allow creating the video through the Videoconverter. id → unique identifier for every video to be created; required by the Videoconverter.
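When processing the result, the shape of image_alive tells the two outputs apart: a Blob for the standard output, an object with a frames array and an id for the alternative one. The helper below is a sketch with a name chosen here; how the output object reaches your code depends on how the SDK's events are wired, which is not shown.

```javascript
// Classify the Selfie Alive Pro video output and extract what is
// needed for each case. 'imageAlive' is the image_alive value.
function describeAliveVideo(imageAlive) {
  if (typeof Blob !== 'undefined' && imageAlive instanceof Blob) {
    // Standard output: a webm (or mp4 on Safari >= 14.8) video Blob.
    return { kind: 'blob', size: imageAlive.size };
  }
  if (imageAlive && Array.isArray(imageAlive.video)) {
    // Alternative output: frames plus an id for the Videoconverter.
    return { kind: 'frames', frames: imageAlive.video.length, id: imageAlive.id };
  }
  return { kind: 'unknown' };
}
```

For the Blob case, `URL.createObjectURL(imageAlive)` is the usual way to preview the video or attach it to an upload.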

Use

The SDK exposes one global method called getSDKversion, which returns the version of the SDK when called.

Once VDAlive mounts, it can be unmounted programmatically by invoking the global function destroyVDAliveWidget.

VDAlive unmounts itself after completing a successful detection of the face, or after detection times out. You can control, to a certain extent, how long it takes for the process to time out; refer to the configuration section.
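The global helpers can be wrapped with guards so that version logging and teardown degrade gracefully when the SDK script has not loaded. `logSdkVersion` and `safeTeardown` below are names chosen for this sketch; only getSDKversion and destroyVDAliveWidget come from the SDK.

```javascript
// Log the SDK version if the global getSDKversion method is present.
function logSdkVersion(globalObj) {
  if (typeof globalObj.getSDKversion === 'function') {
    console.log('VDAlive version:', globalObj.getSDKversion());
    return true;
  }
  return false;
}

// Unmount the widget if the global destroyVDAliveWidget is present.
function safeTeardown(globalObj) {
  if (typeof globalObj.destroyVDAliveWidget === 'function') {
    globalObj.destroyVDAliveWidget();
    return true;
  }
  return false;
}

// In a browser: logSdkVersion(window); later, safeTeardown(window);
```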

Other events available from the SDK are listed in the Type Definitions section.

To change the configuration, refer to the customization documentation.