
Onpremise Installation

System components

The das-FaceBond service requires a set of cooperating services in order to run all the features of the system:

  • das-Face docker version >= 2.0.0.
    • It is recommended to run it with at least 3 workers with a GPU available. This will require ~5GB per worker.
  • Redis docker version >= 4.0.
    • No special configuration is required.
  • PostgreSQL docker version >= 12.
    • It is recommended to use an SSD disk for the database.
  • das-FaceBond docker version 1.15.0.
    • It can run with as little as 500MB of memory per worker.
  • Celery running the image of das-FaceBond.
    • Depending on your parallelism requirements, more than one Celery instance may be required.
    • They should be executed with at least 1GB of memory per Celery worker.
  • Nginx running the image of das-FaceBond.
    • It is used as a gateway between the Gunicorn process in das-FaceBond and the outside world.
    • It serves static data of das-FaceBond.
    • Required to access the "admin" dashboard of Django.
  • das-FaceBondUI docker version 1.0.2.
    • It runs a web UI for operating the system.
  • Nginx running the image das-FaceBondUI.
    • It is used as a gateway between the Gunicorn process in das-FaceBondUI and the outside world.
    • It serves static data for das-FaceBondUI.

At the end of this document you may find a docker-compose YAML description which puts all these containers up on a single server. Use it as an example for your own deployments.

Base requirements

  • Ubuntu (18.04 LTS or higher). This is the OS which Veridas supports and validates with each release, but since the product is docker-based it should work in any Linux environment, especially Debian-based distros. We also support distros based on Red Hat Enterprise Linux (7.9 or higher).
  • Docker and Logrotate applications installed
  • A domain name (optional, but recommended)
  • An SSL certificate associated with the domain, which is used in the docker container deployment (optional, but recommended)
  • A PostgreSQL database with UTF-8 encoding (it may be deployed as a docker container, or an existing database may be used)
  • Appropriate disk space depending on system usage, considering around 30MB of disk per validation. Also take into account that when validations are exported from the boi-Das dashboard, the zip files containing them are stored in the corresponding mounted volume; if these files are not deleted periodically, they require additional disk space.
  • An ad-hoc boi-Das data backup service is out of the scope of this document, but its implementation is highly recommended

Hardware requirements

The minimum requirements for a production machine running the das-FaceBond system are:

  • CPU with at least 4 CPU cores (real, not virtual), at least 2.3GHz and 25M of cache.
  • GPU 1080 Ti with 12 GB of memory, or superior.
  • At least 21GB of RAM.
  • A disk with at least 40GB of space per gallery of 1,000,000 images; each additional 1M-image gallery requires another 40GB of space. These numbers depend on the client's images; we take a mean estimate based on the MegaFace benchmark images.
  • Recommended 10Gbps Ethernet connection.

The database requires a PostgreSQL server running on hardware equivalent to a "db.m5.xlarge" Amazon AWS instance:

  • CPU with 4 cores of 2.3GHz and 25M of cache.
  • At least 8GB of RAM per gallery of 1M images, requiring an additional 8GB for each additional 1M-image gallery. If AES block ciphering is used, 12GB of RAM is needed per 1M-image gallery.
  • SSD storage for high performance, with at least 120GB of free space per gallery of 1M images, adding 120GB for each additional 1M-image gallery. An SSD unit of 300GB is recommended in order to accommodate scaling over time.
  • Recommended 10Gbps Ethernet connection.
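As a rough aid, the per-gallery figures above can be combined into a quick capacity estimate. The following is only a sketch using the estimates from this section; actual numbers depend on your images:

```shell
# Capacity sketch: ~40 GB app-server disk, ~120 GB database SSD, and
# 8 GB (or 12 GB with AES block ciphering) of database RAM per gallery
# of 1M images, as stated in the requirements above.
galleries=2          # example: two galleries of 1M images each
aes_enabled=yes      # whether AES block ciphering of embeddings is in use

app_disk_gb=$((galleries * 40))
db_disk_gb=$((galleries * 120))
if [ "$aes_enabled" = yes ]; then ram_per_gallery=12; else ram_per_gallery=8; fi
db_ram_gb=$((galleries * ram_per_gallery))

echo "app server disk: ${app_disk_gb} GB"
echo "database SSD:    ${db_disk_gb} GB"
echo "database RAM:    ${db_ram_gb} GB"
```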

Both machines should be placed in a dedicated network, in order to reduce latency and to ensure high bandwidth. The database may be deployed on the same machine as das-FaceBond, but in that case the machine should be scaled up to the sum of the requirements of both machines (memory, CPU, storage, …); we do not recommend this setup.

Log configuration

  • ACTIVITY_ID_HTTP_HEADER=X-Request-Id: Indicates a header which may be used to trace requests coming from another system. This header may be logged using JSON format.
  • LOG_LEVEL=INFO: Default logging level, it can be CRITICAL, ERROR, WARNING, INFO, DEBUG.
  • LOG_FORMAT=console-simple: Configures the way logging lines will be structured before being written to the corresponding handler. It accepts the values plain, console-simple, console, json.
  • LOG_HANDLER=stdout: Indicates where logging lines will be written. It can be stdout or file. When stdout is used, the following log variables are ignored.
  • LOG_FOLDER=/tmp: The folder where log file will be created.
  • LOG_FILENAME=dasfacebond.log: The name of the log file where logging lines will be written.
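For illustration, a file-based JSON logging setup could look like this (the values are examples, not defaults):

```shell
# Example log configuration: JSON-formatted lines written to a file.
# LOG_FOLDER and LOG_FILENAME only take effect because LOG_HANDLER=file;
# with LOG_HANDLER=stdout they would be ignored, as noted above.
export LOG_LEVEL=DEBUG
export LOG_FORMAT=json
export LOG_HANDLER=file
export LOG_FOLDER=/var/log/dasfacebond
export LOG_FILENAME=dasfacebond.log
```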

Databases connections

  • DB_NAME=foo: Name of the database where all tables and namespaces will be created.
  • DB_USER=foo: Name of the user with granted privileges for creating tables and reading/writing data.
  • DB_PASS=foo: Password for authentication of the user.
  • DB_HOST=localhost: Host where the database server is located.
  • DB_SSLMODE=prefer: Indicates how the SSL connection to the database will be handled. It accepts disable, allow, prefer, require, verify-ca, verify-full. (For more information, see table 32-1 at https://www.postgresql.org/docs/9.6/static/libpq-ssl.html)
  • DB_SSLROOTCERT: Location of the root certificate for SSL verification procedure.
  • LSH_DB_NAME=$DB_NAME: Database name used for face embedding vector indices. By default it is the same as DB_NAME. Notice that LSH_DB_USER has to be granted schema creation permission, because each gallery index requires a new schema in the database.
  • LSH_DB_USER=$DB_USER: Similar to DB_USER but for embedding vector indices.
  • LSH_DB_PASS=$DB_PASS: Similar to DB_PASS but for embedding vector indices.
  • LSH_DB_HOST=$DB_HOST: Similar to DB_HOST but for embedding vector indices.
  • LSH_DB_SSLMODE=$DB_SSLMODE: Similar to DB_SSLMODE. Currently, this option exists but it is not fully implemented.
  • LSH_DB_SSLROOTCERT=$DB_SSLROOTCERT: Similar to DB_SSLROOTCERT. Currently, this option exists but it is not fully implemented.
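An illustrative combination of these variables for a TLS-verified connection. Hostnames, credentials and the certificate path below are placeholders, not delivery defaults:

```shell
# Example database configuration with full server certificate verification.
export DB_NAME=dasfacebond_database
export DB_USER=dasfacebond
export DB_PASS='change-me'
export DB_HOST=db.internal.example.com
export DB_SSLMODE=verify-full
export DB_SSLROOTCERT=/etc/ssl/certs/db-root-ca.pem
# The LSH_* variables fall back to the DB_* values when unset; override
# them only if the embedding vector indices live on a separate server.
```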

Email registration

  • SMTP_SERVER: SMTP server for self-registration of new users using an email address.
  • SMTP_PORT: Port for the connection with the SMTP server.
  • SMTP_FROM: SMTP user name used for authentication and as sender of the email.
  • SMTP_PWD: Password for authentication of SMTP_FROM user.

Redis connection

  • BROKER_HOST=localhost: Host name of the broker used to handle Celery tasks. It should be the host of a Redis server.
  • BROKER_PORT=6379: Port for connection with the Celery broker.
  • BROKER_SSL=False: Indicates whether to use SSL for secure connections.
  • BROKER_SSLROOTCERT: Path where SSL root certificate is located.
  • REDIS_HOST=localhost: Host name of the Redis server used to cache service operations.
  • REDIS_PORT=6379: Port for the connection with Redis server.
  • REDIS_DB=0: Redis database number used to store the service cache.
  • REDIS_SSL=False: Indicates whether to use SSL for secure connections.
  • REDIS_SSLROOTCERT: Path where SSL root certificate is located.

Identification parameters

  • TOP_MATCHES=10: Number of match candidates to be persisted on the database for each identification.
  • IDENTIFICATION_ANNOTATIONS_LABOR_TIME=60: Number of seconds a human agent requires to annotate an identification operation.

das-Face connection

  • FACE_BIOMETRICS_URL=http://localhost:5031: Base URL to the server where das-Face API is available.
  • FACE_BIOMETRICS_MODE=SelfieMode: Default mode to communicate with das-Face. It can be SelfieMode, DocumentMode.
  • FACE_BIOMETRICS_LOCATE_MAX_SHAPE=1000: Any image with an edge greater than this value will be shrunk so that its larger edge equals this size. This parameter allows the user to control performance when adding new faces to the server, but depending on its value, the system may fail to detect very small faces.
  • FACE_BIOMETRICS_LOCATE_ROTATIONS=False: Indicates whether to rotate the image when the system does not locate a face in it. Activating this option makes the system slower when no face is located in the image, but tolerant to rotated images.
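As a sketch of the resizing rule described for FACE_BIOMETRICS_LOCATE_MAX_SHAPE (the larger edge is reduced to the configured value while preserving the aspect ratio; the exact implementation inside das-Face may differ):

```shell
max_shape=1000     # FACE_BIOMETRICS_LOCATE_MAX_SHAPE
width=4000; height=3000

# Shrink the larger edge down to max_shape, keeping the aspect ratio.
if [ "$width" -ge "$height" ]; then
  new_width=$max_shape
  new_height=$((height * max_shape / width))
else
  new_height=$max_shape
  new_width=$((width * max_shape / height))
fi
echo "${width}x${height} -> ${new_width}x${new_height}"
```

A 4000x3000 image would thus be processed at 1000x750, which is why faces occupying very few pixels in large originals may become too small to detect.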

Proxy configuration

If you are required to deploy the server behind a proxy, and connections from the server to the Veri-SaaS cloud are needed, you must add the following environment variables:

  • HTTP_PROXY=http://127.0.0.1:3001
  • HTTPS_PROXY=https://127.0.0.1:3001
  • NO_PROXY=

Notice that such environment variables are necessary on-premises for das-Face product communication with Veri-SaaS cloud.
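When the backend containers reach each other by service name, those internal hosts should typically be excluded from proxying. An illustrative configuration; the proxy address is the example value from above, and the NO_PROXY list assumes the internal compose service names used later in this document, so adapt it to your deployment:

```shell
export HTTP_PROXY=http://127.0.0.1:3001
export HTTPS_PROXY=https://127.0.0.1:3001
# Exclude loopback and the internal compose service names so that
# inter-container traffic is not routed through the proxy.
export NO_PROXY=localhost,127.0.0.1,dasface,dasfacebond,redis,pgsql,pgsql_ui
```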

Images Installation

In order to install the das-FaceBond system, a machine with Ubuntu 18.04 LTS or RHEL 7.6 installed is required. To simplify the process, an Ansible playbook is provided with the software delivery. Installation of Ansible and all the required dependencies is performed by means of a shell script (install-dependencies.sh).

This installation is run in two steps: installation of dependencies, and importation of the docker images.

Dependencies are installed by running the install-dependencies.sh -o $OS -g $IS_GPU script, where $OS can be ubuntu or rhel7.6 and $IS_GPU can be yes or no, depending on whether the host used to deploy the product has a compatible GPU and its use is desired. This script installs Ansible with a few additional dependencies, and executes an Ansible playbook which installs docker, nvidia-docker, docker-compose, the nvidia-driver and all the required dependencies.

The second installation step is to import all docker images into the target machine, by running the import-docker-images.sh script.

The following is an execution example of both scripts.

$ ./install-dependencies.sh -o rhel7.6 -g yes
$ ./import-docker-images.sh

The whole system is delivered as a set of docker images:

  • dasfacebond:1.8.1.tgz
  • dasface:3.12.4-onpremises-gpu.tgz
  • facebondui:1.0.2.tgz
  • postgres:12.tgz
  • redis:4.0.tgz

Deployment

The following is a description of how the deployment should be performed. A shell script (run.sh) is delivered with the purpose of configuring and running the whole system. The configuration step blocks at the end, so it is required to press ctrl+C in order to continue the execution. In order to understand the process, this section explains all the steps involved in the script.

This section is structured in two subsections: the first one explains the steps required to initialize the service the first time it runs (it migrates the database and creates a new user and a new application), and the second one indicates how to deploy the system once everything has been configured.

Containers Deployment Diagram

Configuration

The following docker-compose script configures the docker containers, relying on another docker container for the database. You may omit the database container if you have one properly deployed on a server with SSD disks. The initial configuration is executed by a small script included in the das-FaceBond and FaceBondUI docker images. This script has the following considerations:

  • The script waits 10 seconds for the PostgreSQL database to start up.
  • /code/media should be a volume where media files (images and TAR/ZIP packages) will be stored for persistence. The script requires this volume in order to change its ownership and group to www-data (uid=33, gid=33). Doing so, once the service is started (next subsection), it will have permissions to write incoming images and packages.
    • Root privileges are necessary inside the docker container to successfully execute the chown command.
    • The media volume must be stored on a UNIX-like filesystem, so that uid and gid can be set correctly. The system won’t work on SMB shared folders (or similar shared volumes where UNIX permissions and owner settings are not available).
  • The script creates an admin user without login credentials, with name and email equal to "admin@nodomain.com".
  • The script performs the first database migration, which includes the creation of tables and the population of initial values in a few tables.
  • After running this configuration process, the dasfacebond container should exit with code 0. Any other value means that something went wrong during the configuration. Similarly, the facebondui container should exit with 0.
    • After both containers have exited properly, press ctrl+C to bring down the configuration docker composition.
version: '2.3'
services:

  dasfacebond:
    image: registry.gitlab.com/dasgroup/veridas/back-team/dasfacebond/dasfacebond:1.6.3
    entrypoint: /code/initializer.sh
    container_name: dasfacebond
    environment:
      CLIENT_ID: client_id
      CLIENT_SECRET: client_secret
      APP_NAME: app_name
      DEBUG: False
      SECRET_KEY: secret_key
      LOG_LEVEL: INFO
      TZ: Europe/Madrid
      DB_NAME: dasfacebond_database
      DB_USER: dasfacebond
      DB_PASS: dasfacebond
      DB_HOST: pgsql
      DB_SSLMODE: prefer
      WORKERS: 6
      BROKER_HOST: redis
      BROKER_PORT: 6379
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_DB: 0
      FACE_BIOMETRICS_URL: http://dasface:8000
      FACE_BIOMETRICS_MODE: SelfieMode
      TOP_MATCHES: 13
      DJANGO_SETTINGS_MODULE: app.settings
      ENABLE_SSL: TRUE
      FACE_EMBEDDINGS_CIPHER_KEY: 0123456789ABCDEF
    volumes:
      - ./media:/code/media
      - ./path_to_certs_folder:/etc/VERIDASsecurity/security/certs/

  facebondui:
    image: registry.gitlab.com/dasgroup/veridas/back-team/dasfacebond/facebondui:1.0.2
    entrypoint: /code/facebondui-initializer.sh
    volumes:
      - ./:/work:ro
      - ./path_to_certs_folder:/etc/VERIDASsecurity/security/certs/
    environment:
      CLIENT_ID: client_id
      CLIENT_SECRET: client_secret
      APP_NAME: app_name
      DEBUG: False
      SECRET_KEY: secret_key
      LOG_LEVEL: INFO
      TZ: Europe/Madrid
      OAUTH_CLIENT_ID: client_id
      OAUTH_CLIENT_SECRET: client_secret
      DB_NAME: dasfacebond_ui_database
      DB_USER: dasfacebond
      DB_PASS: dasfacebond
      DB_HOST: pgsql_ui
      DB_SSLMODE: prefer
      ENABLE_FACES: True
      SERVER_FACEBOND_API: http://dasfacebond:8820
      MAX_BATCH_FILE_SIZE: 1000 # 1GB zip/tar file
      ENABLE_PROBE_FACES: True
      ENABLE_LOBBY_PAGINATION: True #enable gallery pagination - will be default to true in future versions
      WORKERS: 2
      DJANGO_SETTINGS_MODULE: app.settings
      ENABLE_SSL: TRUE

  pgsql:
    image: postgres:12
    container_name: pgsql
    environment:
      TZ: Europe/Madrid
      POSTGRES_DB: dasfacebond_database
      POSTGRES_USER: dasfacebond
      POSTGRES_PASSWORD: dasfacebond
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./pgdata:/var/lib/postgresql/data/pgdata

  pgsql_ui:
    image: postgres:12
    container_name: pgsql_ui
    environment:
      TZ: Europe/Madrid
      POSTGRES_DB: dasfacebond_ui_database
      POSTGRES_USER: dasfacebond
      POSTGRES_PASSWORD: dasfacebond
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./pgdata-ui:/var/lib/postgresql/data/pgdata
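A possible way to run this configuration composition and then verify the exit codes described above. The file name docker-compose-config.yml is an assumption (use the name under which this composition is delivered), and docker inspect by name only works for services with an explicit container_name:

```shell
# Bring up the configuration composition; press ctrl+C once both
# initializer containers have exited.
docker-compose -f docker-compose-config.yml up

# Verify that the initializer exited with code 0 (a non-zero code means
# the configuration failed). For services without container_name, check
# the exit codes shown by `docker-compose -f docker-compose-config.yml ps`.
docker inspect --format '{{.State.ExitCode}}' dasfacebond
```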

Service start

Once everything is configured, the service is ready to be started. The following docker-compose script shows how to run the service side-by-side with das-Face, Redis, Celery and the database containers. The das-Face container of this example is configured to use the nvidia driver version by using a docker runtime installed by nvidia-docker v2.

version: '2.3'
services:

  facebondui:
    image: registry.gitlab.com/dasgroup/veridas/back-team/dasfacebond/facebondui:1.0.2
    container_name: facebondui
    environment:
      TZ: Europe/Madrid
      DEBUG: False
      SECRET_KEY: secret_key
      LOG_LEVEL: INFO
      OAUTH_CLIENT_ID: client_id
      OAUTH_CLIENT_SECRET: client_secret
      DB_NAME: dasfacebond_ui_database
      DB_USER: dasfacebond
      DB_PASS: dasfacebond
      DB_HOST: pgsql_ui
      DB_SSLMODE: prefer
      ENABLE_FACES: True
      SERVER_FACEBOND_API: http://dasfacebond:8820
      MAX_BATCH_FILE_SIZE: 1000 # 1GB zip/tar file
      ENABLE_PROBE_FACES: True
      ENABLE_LOBBY_PAGINATION: True #enable gallery pagination - will be default to true in future versions
      WORKERS: 2
      DJANGO_SETTINGS_MODULE: app.settings
      ENABLE_SSL: TRUE
    ports:
      - 10000:8850
    restart: always
    runtime: nvidia
    volumes:
      - ./path_to_certs_folder:/etc/VERIDASsecurity/security/certs/

  nginx_facebondui:
    image: registry.gitlab.com/dasgroup/veridas/back-team/dasfacebond/facebondui:1.0.2
    container_name: nginx_facebondui
    environment:
      TZ: Europe/Madrid
      DEBUG: False
      SECRET_KEY: secret_key
      LOG_LEVEL: INFO
      OAUTH_CLIENT_ID: client_id
      OAUTH_CLIENT_SECRET: client_secret
      DB_NAME: dasfacebond_ui_database
      DB_USER: dasfacebond
      DB_PASS: dasfacebond
      DB_HOST: pgsql_ui
      DB_SSLMODE: prefer
      ENABLE_FACES: True
      SERVER_FACEBOND_API: http://dasfacebond:8820
      MAX_BATCH_FILE_SIZE: 1000 # 1GB zip/tar file
      ENABLE_PROBE_FACES: True
      ENABLE_LOBBY_PAGINATION: True #enable gallery pagination - will be default to true in future versions
      WORKERS: 2
      DJANGO_SETTINGS_MODULE: app.settings
      SERVER_NAME: server
      NGINX_UPSTREAM: facebondui
    command: /code/run_nginx.sh
    depends_on:
      - facebondui
    ports:
      - 8080:80
      - 8443:443
    restart: always
    runtime: nvidia
    volumes:
      - ./path_to_certs_folder:/etc/VERIDASsecurity/security/certs

  dasfacebond:
    image: registry.gitlab.com/dasgroup/veridas/back-team/dasfacebond/dasfacebond:1.8.1
    container_name: dasfacebond
    environment:
      TZ: Europe/Madrid
      DEBUG: False
      SECRET_KEY: secret_key
      LOG_LEVEL: INFO
      DB_NAME: dasfacebond_database
      DB_USER: dasfacebond
      DB_PASS: dasfacebond
      DB_HOST: pgsql
      DB_SSLMODE: prefer
      WORKERS: 6
      BROKER_HOST: redis
      BROKER_PORT: 6379
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_DB: 0
      FACE_BIOMETRICS_URL: http://dasface:8000
      FACE_BIOMETRICS_MODE: SelfieMode
      TOP_MATCHES: 13
      DJANGO_SETTINGS_MODULE: app.settings
      ENABLE_SSL: TRUE
      FACE_EMBEDDINGS_CIPHER_KEY: 0123456789ABCDEF
    ports:
      - 8820:8820
    volumes:
      - ./media:/code/media
      - ./path_to_certs_folder:/etc/VERIDASsecurity/security/certs/
    restart: always
    runtime: nvidia

  nginx_dasfacebond:
    image: registry.gitlab.com/dasgroup/veridas/back-team/dasfacebond/dasfacebond:1.8.1
    container_name: nginx_dasfacebond
    environment:
    TZ: Europe/Madrid
    DEBUG: False
    SECRET_KEY: secret_key
    LOG_LEVEL: INFO
    DB_NAME: dasfacebond_database
    DB_USER: dasfacebond
    DB_PASS: dasfacebond
    DB_HOST: pgsql
    DB_SSLMODE: prefer
    WORKERS: 6
    BROKER_HOST: redis
    BROKER_PORT: 6379
    REDIS_HOST: redis
    REDIS_PORT: 6379
    REDIS_DB: 0
    FACE_BIOMETRICS_URL: http://dasface:8000
    FACE_BIOMETRICS_MODE: SelfieMode
    TOP_MATCHES: 13
    DJANGO_SETTINGS_MODULE: app.settings
    SERVER_NAME: server
    NGINX_UPSTREAM: dasfacebond
    command: /code/run_nginx.sh
    depends_on:
      - dasfacebond
    ports:
      - 9999:80
      - 9443:443
    restart: always
    runtime: nvidia
    volumes:
      - ./path_to_certs_folder:/etc/VERIDASsecurity/security/certs

  dasface:
    image: registry.gitlab.com/dasgroup/veridas/face-team/products/dasface:3.12.4-onpremises-gpu
    container_name: dasface
    environment:
      TZ: Europe/Madrid
      WORKERS: 3
      PORT: 8000
      DEBUG: no
      FLASK_DEBUG: 0
      LOG_LEVEL: INFO
      DASFACES_DEFAULT_GPU_MEMORY_FRACTION: 0.2
      LD_LIBRARY_PATH: /usr/local/nvidia/lib64:/usr/local/cuda/lib64
      USAGE_TRACKER_DEPLOY_ENV: production
      USAGE_TRACKER_API_KEY: APIKEY
      OMP_NUM_THREADS: 1
    ports:
      - 2000:8000
    restart: always
    runtime: nvidia

  redis:
    image: redis:4.0
    container_name: redis
    command: redis-server
    environment:
      TZ: Europe/Madrid
    restart: always

  celery:
    image: registry.gitlab.com/dasgroup/veridas/back-team/dasfacebond/dasfacebond:1.8.1
    command: ./run-celery-worker.sh
    container_name: celery
    user: "www-data"
    environment:
      TZ: Europe/Madrid
      DEBUG: False
      SECRET_KEY: secret_key
      LOG_LEVEL: INFO
      DB_NAME: dasfacebond_database
      DB_USER: dasfacebond
      DB_PASS: dasfacebond
      DB_HOST: pgsql
      DB_SSLMODE: prefer
      WORKERS: 6
      BROKER_HOST: redis
      BROKER_PORT: 6379
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_DB: 0
      FACE_BIOMETRICS_URL: http://dasface:8000
      FACE_BIOMETRICS_MODE: SelfieMode
      TOP_MATCHES: 13
      DJANGO_SETTINGS_MODULE: app.settings
    volumes:
      - ./media:/code/media
    depends_on:
      - redis
    restart: always
    runtime: nvidia

  pgsql:
    image: postgres:12
    container_name: pgsql
    environment:
      TZ: Europe/Madrid
      POSTGRES_DB: dasfacebond_database
      POSTGRES_USER: dasfacebond
      POSTGRES_PASSWORD: dasfacebond
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./pgdata:/var/lib/postgresql/data/pgdata

  pgsql_ui:
    image: postgres:12
    container_name: pgsql_ui
    environment:
      TZ: Europe/Madrid
      POSTGRES_DB: dasfacebond_ui_database
      POSTGRES_USER: dasfacebond
      POSTGRES_PASSWORD: dasfacebond
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./pgdata-ui:/var/lib/postgresql/data/pgdata

Users creation

Once the service is up and running, you may want to create new users, staff and/or superusers (admins). You may use the manage.py command in the dasfacebond container to do so. Something like the following will create a new superuser:

docker exec -t -i dasfacebond python manage.py createsuperuser

You can display the help for this command by running:

docker exec -t -i dasfacebond python manage.py createsuperuser --help

Once you have an account, it is possible to log into the system to create accounts for new users (admin or staff) and to register users in the agent group, which is used to operate the manual review procedures incorporated in das-FaceBond, as explained in the appendix.

PostgreSQL upgrade

This section explains how to upgrade the databases' data from one PostgreSQL release to a newer one.

For minor releases, the internal storage format never changes, so data is always compatible with earlier and later minor releases of the same major version number; there is no risk in upgrading.

For major releases, specifically to v12.0, a sample upgrade script (upgrade-postgre.sh) is provided in the folder deploy-with-local-pgsql. Be aware that this script will only work if das-FaceBond has been deployed with a local database, which is not the recommended approach. However, the sample script can also be used as a baseline to perform the upgrade.
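Where upgrade-postgre.sh does not apply (for example, a remote database), a dump-and-restore upgrade with standard PostgreSQL tooling is a common baseline. The following is only a hedged sketch; container names, credentials and paths are the example values from the compose files above and must be adapted:

```shell
# Stop the services that write to the database before dumping.
docker stop dasfacebond celery

# Dump the database from the running container of the old major version,
# using the custom format so it can be restored with pg_restore.
docker exec pgsql pg_dump -U dasfacebond -Fc dasfacebond_database \
  > dasfacebond_database.dump

# Start a fresh PostgreSQL 12 container with a new data directory,
# then restore the dump into it.
docker run -d --name pgsql12 -e POSTGRES_DB=dasfacebond_database \
  -e POSTGRES_USER=dasfacebond -e POSTGRES_PASSWORD=dasfacebond \
  -v "$PWD/pgdata12:/var/lib/postgresql/data" postgres:12
docker exec -i pgsql12 pg_restore -U dasfacebond -d dasfacebond_database \
  < dasfacebond_database.dump
```

After verifying the restored data, point DB_HOST at the new server and remove the old data directory.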