
Onpremise Instructions

Docker Deployment Instructions

As mentioned before, the boi-Das interfaces are delivered as configurable Docker solutions which can be deployed easily using standard Docker commands. This also allows the product to be upgraded with no or minimal loss of service. This delivery format also allows the deployment of the product on almost any kind of platform and OS, from AWS ECS to Kubernetes, Linux, VMs and more.

First of all, to load the docker image from the provided tar file, the "docker load" command has to be run:

docker load -i boiDas-DockerImage-X.Y.Z.tar
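To confirm that the image has been loaded, the local images can be listed; the exact repository name and tag depend on the delivered image, so the filter below is only an assumption:

docker image ls | grep boidas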

Part of the boi-Das service configuration is hardcoded, while the rest can be configured by means of environment variables and logical data volumes. The boi-Das service runs on a Docker container which is built on top of:

  • Ubuntu 20.04 LTS.
  • Nginx proxy (with or without SSL).
  • PostgreSQL database.

The first time this container runs, it assumes the database access given in the environment variables is valid, uses these credentials to build and prepare the database, and populates it with an initial dump. For this purpose, the image mounts a few data volumes from the host machine and uses them as persistent storage for configuration files. Others, such as output logs, should be kept isolated. This way, the next time the container is created it will reuse the configuration of the previous one, unless the user has erased all files in the data volumes.

In order to achieve a proper deployment, it is important to start from a clean environment the first time this container is created. This way, we ensure the database and the other files and directories are synchronized.

boi-Das Host (server) Requirements

  • For a common usage with 2 workers, a server with a 2-core processor of at least 2.5 GHz and 4 GB of RAM is required, adding an extra 1 GB of RAM per additional worker or per 20,000 validations. That means a 2-worker deployment with a large number of validations (100,000) would require 9 GB of RAM.
  • Ubuntu (20.04 LTS or higher). This is the OS which Veridas supports and validates with each release, but as the product is Docker-based, it should work in any Linux environment, especially Debian-based distros. Distros based on Red Hat Enterprise Linux (7.9 or higher) are also supported.
  • Docker and Logrotate applications installed
  • A domain name (optional, but recommended)
  • An SSL certificate associated to the domain which is used on the docker container deployment (optional, but recommended)
  • PSQL database with UTF-8 encoding (it may be deployed as a docker container or use an existing database)
  • Appropriate disk space depending on the system usage, considering a disk usage of around 30 MB per validation. It is also convenient to take into account that when validations are exported from the boi-Das dashboard, the zip files containing the validations are stored in the corresponding mounted volume, so if they are not deleted periodically, additional disk space will be required.
  • An ad-hoc boi-Das data backup service is out of the scope of this document, but its implementation is highly recommended

boi-Das docker connectivity requirements

This chapter describes the required network configuration for boi-Das docker for proper operation. boi-Das service requires connectivity to VeriSaaS APIs for retrieving the validations processed once they are confirmed.

Some of these network flows shall be secured by using regular firewall rules (IP-based), but for some others a URL-based rule device might be required (i.e. a WAF).

Also, the destination IPs/URLs differ depending on the Veridas region/environment targeted by boi-Das. If you are not fully aware of which region or environment your contract covers, please reach out to Veridas Support or your Veridas sales representative for clarification.

Network requirements for EU-Sandbox (AWS-Ireland):

  Rule type    From             Allow to
  URL-based    boi-Das docker   https://api-work.eu.veri-das.com/*

If URL filtering cannot be implemented for the network flows above, the following firewall rules can be used as an alternative:

  Rule type    From             Allow to
  IP-based     boi-Das docker   any IP on TCP/443

Network requirements for EU-LIVE (AWS-Ireland and AWS-Frankfurt as DR):

  Rule type    From             Allow to
  URL-based    boi-Das docker   https://api.eu.veri-das.com/*

If URL filtering cannot be implemented for the network flows above, the following firewall rules can be used as an alternative:

  Rule type    From             Allow to
  IP-based     boi-Das docker   any IP on TCP/443

Network requirements for US-LIVE (AWS-NorthVirginia and AWS-Oregon as DR):

  Rule type    From             Allow to
  URL-based    boi-Das docker   https://api.us.veri-das.com/*

If URL filtering cannot be implemented for the network flows above, the following firewall rules can be used as an alternative:

  Rule type    From             Allow to
  IP-based     boi-Das docker   any IP on TCP/443

boi-Das retrieves validations from VeriSaaS by using the vali-Das API. This API is authenticated with both an API key and the source IP (the latter is mandatory for LIVE environments). Please ensure the public IPs that boi-Das uses to reach the internet are included in the vali-Das service allow list in the cloud. Should you have any questions, please feel free to reach out to Veridas Customer Support for clarification.

boi-Das users connectivity requirements

This chapter describes the required network configuration for boi-Das users (i.e. agents, supervisors, etc.) to access the boi-Das UI or API.

Some of these network flows shall be secured by using regular firewall rules (IP-based), but for some others a URL-based rule device might be required (i.e. a WAF).

If you are not fully aware of which region or environment your contract covers, please reach out to Veridas Support or your Veridas sales representative for clarification.

Network requirements for EU-Sandbox (AWS-Ireland):

  Rule type    From           Allow to
  URL-based    boidas user    https://BOIDAS_URL/*

If URL filtering cannot be implemented for the network flows above, the following firewall rules can be used as an alternative:

  Rule type    From           Allow to
  IP-based     boidas user    BOIDAS_HOST on TCP/BOIDAS_PORT

Network requirements for EU-LIVE (AWS-Ireland and AWS-Frankfurt as DR):

  Rule type    From           Allow to
  URL-based    boidas user    https://BOIDAS_URL/*

If URL filtering cannot be implemented for the network flows above, the following firewall rules can be used as an alternative:

  Rule type    From           Allow to
  IP-based     boidas user    BOIDAS_HOST on TCP/BOIDAS_PORT

Network requirements for US-LIVE (AWS-NorthVirginia and AWS-Oregon as DR):

  Rule type    From           Allow to
  URL-based    boidas user    https://BOIDAS_URL/*

If URL filtering cannot be implemented for the network flows above, the following firewall rules can be used as an alternative:

  Rule type    From           Allow to
  IP-based     boidas user    BOIDAS_HOST on TCP/BOIDAS_PORT

Some of the referred IPs belong to the VeriSaaS DR sites and should be included in the firewall and WAF rules, to ensure proper operation in the unlikely event of a disaster that requires Veridas to switch the service over to the DR sites of the affected region.

Database Host (server) Requirements for a Dedicated Machine Case

  • A server with a 2-core processor of at least 2.5 GHz and 4 GB of RAM is recommended for regular usage.
  • Extensions will be installed in PostgreSQL deployments, so the database user (DB_USER) requires superuser or database owner privileges. PostgreSQL Documentation Create Extension
  • For PostgreSQL deployments some configurations need to be tweaked and they should follow the recommendations given in PostgreSQL Configuration Builder
  • Appropriate disk space depending on the usage estimations and considering a growth of around 30 MB per validation process
  • The minimum supported Postgres version is 12.12, with 15.5 being the recommended one. Postgres version 12.12 will reach end of life in November 2024. Veridas recommends upgrading to the recommended version, or at least to one higher than 12.12, before that date, because support will not be available for this version after that date.
  • The minimum supported Oracle version is 19c, which is also the recommended one. Oracle version 19c will reach end of life in April 2027.
  • There is no constraint on the type of disks. SATA, SSD, SAS and others are valid; there may be small performance differences depending on their features, but with little impact on the overall system performance
  • At least RAID 1 is recommended

boi-Das end users Host (client) Requirements

Docker Container Configuration

boi-Das docker

General Configuration

The container creation procedure is configured by using a number of environment variables, which can be given to the docker run command or in a docker-compose.yml file. The default value of each variable is given after the equals sign.

  • VALIDAS_POLLING_FREQUENCY: 60 => This is the frequency in seconds at which the vali-Das polling process runs, 60 by default.
  • VALIDAS_URL: e.g. https://api.eu.veri-das.com/validas/v1 => This URL has to be set to the vali-Das URL which is deployed on the Veridas SaaS. If this is not correctly set, boi-Das will not be able to retrieve the validations. [Note: The URL will be different depending on whether the environment is Production or Sandbox, and will be provided by Veridas]
  • DISABLE_POLLING: "yes" or "no" => This variable allows disabling the polling process. If this variable is set to "yes", the polling process is disabled. Default value is set to "no".
  • API_KEY: 'API_KEY' => This has to be set to the API KEY provided by Veridas when the vali-Das documentation was given.
  • TZ=Europe/Madrid => Desired Timezone.
  • ENABLE_SSL: TRUE => Indicates if the system requires SSL or not.
  • PORT: 5080 => the port that the service container is going to expose. If this variable is not set, the exposed PORT is 8850 by default.
  • BASE_URL => The URL formed by protocol + hostname + port (https://HOSTNAME:PORT, e.g. https://myawesomehostname:5081) where the API is deployed. This allows the service to know where it is being deployed, which is used for path creation in API responses.
  • MIGRATIONS: "yes" or "no" => If set to "yes", the container performs the database migrations needed to prepare the database with the latest models and configurations. It is required the first time the container runs and when upgrading to a new version.
  • ENABLE_ACTIVITY_REGISTRY: "yes" or "no" => If this variable is set to "yes", the validation related events involving user actions are saved and also shown in the activity tab on the boidas UI. Default value is set to "yes".
  • MANDATORY_SEPBLAC_QUESTIONS: "yes" or "no" => If this variable is set to "yes", a set of questions related to both the document and the biometry verifications are shown in the boidas validation detail screens, and must be answered to allow a validation to be approved. Answering these questions allows complying with the Sepblac regulation regarding digital onboarding processes for bank account opening. Default value is set to "no".
  • MANDATORY_SEPBLAC_QUESTIONS_REJECTION: "yes" or "no" => If this variable is set to "yes", a set of questions related to both the document and the biometry verifications are shown in boidas validation detail screens, and must be answered to allow a validation to be rejected. Default value is set to "no".
  • SECONDS_TIMEOUT_SPA: "600" => This is the period of time in seconds used to automatically log out a user if there is no activity. So, if this amount of seconds passes without any user activity on the interface, the boi-Das service automatically logs out the user. The default value is set to "600".
  • SESSION_TIMEOUT: "43200" => This is the period of time in seconds used to automatically expire the session. So, if this amount of seconds passes, the boi-Das service automatically expires the user session and the log in will be required again. The default value is set to "43200" (12 hours).
  • PDF_GENERATION: “yes” or “no” => If this variable is set to "yes", a new option is available which consists of generating a pdf file for each validation with all its detailed information and data. The button to allow exporting the data as pdf appears when a validation is selected in the validation list dashboard. Default value is set to “no”.
  • MAX_VALIDATION_LOCKS: "10" => This is the maximum number of validations which a user can be viewing, so, can have locked, at a given time. The default value is set to "10" and the maximum value it can take is "50".
  • VALIDATION_LOCKS_LIMIT: "yes" => If this variable is set to "yes", a maximum number of validation locks per user is enforced. The default value is "yes".
  • CCN_REQUIREMENTS: "no" => Clients that are qualified trust service providers (QTSPs) must set this variable to "yes" to comply with regulations. If this variable is set to "yes", the application must be configured to preserve log integrity. Instructions can be found here.
  • CONTEXTUAL_DATA_FILTER_DEFAULT_CONDITION: "AND" or "OR" (Default: AND) => This variable configures how contextual filters are concatenated for users: a user can have different filters applied, and this variable controls whether they are combined with an AND or an OR. The default value is set to "AND".
  • VERISAAS_CHECK_MAX_FAILED_ATTEMPTS: 3 => Number of failed VeriSaaS connection status check attempts to confirm connection is not available. The default value is set to "3".
  • VERISAAS_EMAILS_RECIPIENTS: None => Email addresses where notifications about the VeriSaaS connection status are sent. If this variable is not set, no email is sent. The email addresses must be separated by commas. For example: "example@mycompany.com, anotherexample@mycompany.com". To send notification emails, it is mandatory to configure the email sender configuration. Instructions can be found here.
  • VERISAAS_CHECK_INTERVAL: 5 => Interval in seconds between VeriSaaS connection status check attempts. The default value is set to "60".
  • MAX_REJECT_REASONS: 10 => Maximum number of reject reasons that can be configured. The default value is set to "10".
  • USE_TZ: "yes" or "no" => This variable only needs to be set to "yes" during the database dump process used to migrate data to Oracle. Otherwise, it should not be configured, or should be set to "no".
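As a minimal illustration, and because the docker-compose example later in this document reads API_KEY and VALIDAS_URL through variable substitution, these two values can be kept in a .env file placed next to docker-compose.yml (the values below are placeholders, not real credentials):

# .env (read automatically by docker-compose for variable substitution)
VALIDAS_URL=https://api.eu.veri-das.com/validas/v1
API_KEY=THE_API_KEY_PROVIDED_BY_VERIDAS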
Database configuration

Boidas uses a database to store all the application data. The database configuration is done by setting up some environment variables and depends on the database engine used by the application:

PostgreSQL database configuration
  • DB_HOST: pgsql-boidas => The host where the DB is deployed. It can be the container name or the IP of the host where the DB is deployed.
  • DB_NAME: boidas => Name of the database for boi-Das.
  • DB_USER: boidas => Name of the user granted to DB_NAME with DB_PASS.
  • DB_PASS: boidas => A dummy default password.
  • DB_PORT: 5432 => The port where the DB is exposed. It can be the container port (when DB_HOST is the container name or container IP) or the host-mapped port (when DB_HOST is the host IP).
  • DB_SCHEMA: public => Name of the database schema. When a schema other than "public" is selected, the schema must exist in the database before running the application.
  • DB_ENGINE: "postgresql" => The database engine used by the application. If postgresql is selected, the application will use the PostgreSQL database. Otherwise, if oracle is selected, the application will use the Oracle database. The default value is set to "postgresql".

To upgrade boidas database PostgreSQL version from 12 to 15, instructions can be found here

Oracle Database configuration
  • Using Oracle SID:

    • DB_HOST: oracle-boidas => The host where the DB is deployed. It can be the container name or the IP of the host where the DB is deployed.
    • DB_NAME: boidas => Name of the database for boi-Das.
    • DB_USER: boidas => Name of the user granted to DB_NAME with DB_PASS.
    • DB_PASS: boidas => A dummy default password.
    • DB_PORT: 5432 => The port where the DB is exposed. It can be the container port (when DB_HOST is the container name or container IP) or the host-mapped port (when DB_HOST is the host IP).
    • DB_ENGINE: "oracle"=> The database engine used by the application. If postgresql is selected, the application will use the PostgreSQL database. Otherwise, if oracle is selected, the application will use the Oracle database. The default value is set to "postgresql".
  • Using Oracle Easy Connect or Oracle Net connect descriptor:

    Notice that for these connection strings, variables DB_PORT and DB_HOST are not set. Instead, the entire connection string is set in the DB_NAME variable.

    • DB_NAME:
      • Example of Easy Connect: <host>:<port>/<service_name>
      • Example of Net connect descriptor: (DESCRIPTION=(FAILOVER=on)(ADDRESS=(PROTOCOL=tcp)(HOST=host)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=service_name)))
    • DB_USER: boidas => Name of the user granted to DB_NAME with DB_PASS.
    • DB_PASS: boidas => A dummy default password.
    • DB_ENGINE: "oracle"=> The database engine used by the application. If postgresql is selected, the application will use the PostgreSQL database. Otherwise, if oracle is selected, the application will use the Oracle database. The default value is set to "postgresql".

To configure boidas to use an Oracle database, instructions can be found here
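As an illustration only, a boi-Das container pointing at an Oracle database through an Easy Connect string could be configured as follows; the image tag, host, service name and credentials are placeholders, and the remaining variables are omitted for brevity:

docker run -d --name boidas-service \
  -e DB_ENGINE=oracle \
  -e DB_NAME=oracle-host.mycompany.com:1521/boidas_service \
  -e DB_USER=boidas \
  -e DB_PASS=boidas \
  boidas:1.28.X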

Email sender configuration

Boidas uses an email server to send emails to users in different situations, such as the forgotten password process and the VeriSaaS connection status notifications. The email server configuration is done by using the following environment variables:

  • EMAIL_HOST_NAME: "smtp.mycompany.com" => SMTP server used to send emails, for example when agents forget their password
  • EMAIL_HOST_PORT: "587" => SMTP port
  • EMAIL_HOST_USER: "myloginaccount@mycompany.com" => SMTP login account
  • EMAIL_HOST_PASSWORD: "xxx" => SMTP password
  • DEFAULT_FROM_EMAIL: "no-reply@mycompany.com" => email account configured as FROM for sending emails
Password policy

This policy includes the following aspects:

For blocking due to login attempts the following configuration is needed:

  • BLOCK_LOGIN_ENABLED: "yes" or "no" => If this variable is set to "yes", user blocking after a number (BLOCK_LOGIN_LIMIT) of unsuccessful login attempts is activated for the service instance. Default value is set to "yes".
  • BLOCK_LOGIN_LIMIT: "3" => This is the number of allowed login attempts on boi-Das before an account access is locked. The default value is set to "3".
  • BLOCK_LOGIN_TIME: "600" => This is the period of time in seconds during which a user account is blocked once the maximum number of login attempts has been exhausted. After this amount of seconds passes, the user can try to log in again. The default value is set to "600".

boi-Das enforces a password generation policy, so the passwords associated with a user must comply with it. The following validators are applied:

  • UserAttributeSimilarityValidator, which checks the similarity between the password and a set of attributes of the user.
  • MinimumLengthValidator, which checks whether the password meets a minimum length of eight characters. The length limit can be configured.
  • CommonPasswordValidator, which checks whether the password occurs in a list of common passwords. It compares to an included list of 20,000 common passwords.
  • NumericPasswordValidator, which checks whether the password isn’t entirely numeric.
  • SpecialCharactersValidator which checks whether the password contains the configured number of special characters
  • LowerCaseCharactersValidator which checks whether the password contains the configured number of lower case characters
  • UpperCaseCharactersValidator which checks whether the password contains the configured number of upper case characters
  • NumericCharactersValidator which checks whether the password contains the configured number of numeric characters (digits)
  • RepeatedPasswordValidator which checks whether the password was already used. To use this policy, the number of previous passwords to compare against must be configured.
  • BlockPasswordChangeValidator which checks that the password is not changed within a specific time period after the last change. The policy must be configured with the number of days the password must be kept before it can be changed.
  • ExpiredPasswordValidator which checks whether the password is expired. To use this policy, the number of days the password remains active must be configured.

The configurable password generation policies use the following configuration:

  • PASSWORD_MIN_LENGTH: 12 => It sets the minimum length required for a valid password.
  • PASSWORD_EXPIRATION_DAYS: 60 => It enables ExpiredPasswordValidator. It sets the number of days the password remains active.
  • PASSWORD_PREVIOUS_REUSE: 5 => It enables RepeatedPasswordValidator. It sets the number of previous passwords to compare against.
  • PASSWORD_SPECIAL_CHARACTERS: 1 => It enables SpecialCharactersValidator. It sets the number of special characters required in the password.
  • PASSWORD_UPPERCASE_CHARACTERS: 1 => It enables UpperCaseCharactersValidator. It sets the number of upper case characters required in the password.
  • PASSWORD_LOWERCASE_CHARACTERS: 1 => It enables LowerCaseCharactersValidator. It sets the number of lower case characters required in the password.
  • PASSWORD_NUMERIC_CHARACTERS: 1 => It enables NumericCharactersValidator. It sets the number of digits required in the password.
Logging Configuration

The format, behaviour and storage of logs can be configured by using these environment variables:

  • LOG_FORMAT=console-simple => Format of the printed log messages. Possible values are:
    • console: Colorized output meant for development/debugging using the console on a local environment.
    • console-simple: Same as the ‘console’ format but less verbose.
    • plain: Same as the ‘console’ format but without colors (plain text output).
    • json: JSON-formatted log messages (each line is a JSON structure). Recommended for production environments.
    • semaas: SEMaaS-formatted log messages. Recommended for production environments.
  • LOG_HANDLER=stdout => Handler used to collect log messages:
    • stdout: Output log messages to the standard output.
    • file: Print log messages to a rotating log file.
    • dailyfile: Print log messages to a daily-rotating log file.
  • LOG_TIMESTAMP_UTC=yes => Defines whether the log timestamps are in UTC time (yes) or in the docker container timezone (no).
  • LOG_BACKUP_COUNT=100 => Defines the maximum number of rotated backup logs. Must be an integer.
  • LOG_MAX_BYTES=1048576 => Defines the maximum size in bytes of the log file. Once the log file reaches this amount it will rotate. Only works with LOG_HANDLER=file.
  • LOG_FOLDER=/var/log/boidas => The folder to hold the log files. If you want the files in this folder to persist, please mount it as a volume.
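For example, to produce JSON-formatted logs into rotating files kept on the host, the boi-Das container could be started with something like the following; the image tag, local path and limits are illustrative only:

docker run -d --name boidas-service \
  -e LOG_FORMAT=json \
  -e LOG_HANDLER=file \
  -e LOG_MAX_BYTES=1048576 \
  -e LOG_BACKUP_COUNT=100 \
  -e LOG_FOLDER=/var/log/boidas \
  -v $(pwd)/vols/logs:/var/log/boidas \
  boidas:1.28.X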

Logs related to polling services can also be configured by using these environment variables:

  • LOG_FORMAT_POLLING=console-simple => Format of the printed log messages. Possible values are:
    • console: Colorized output meant for development/debugging using the console on a local environment.
    • console-simple: Same as the ‘console’ format but less verbose.
    • plain: Same as the ‘console’ format but without colors (plain text output).
    • json: JSON-formatted log messages (each line is a JSON structure). Recommended for production environments.
    • semaas: SEMaaS-formatted log messages. Recommended for production environments.
  • LOG_HANDLER_POLLING=stdout => Handler used to collect log messages:
    • stdout: Output log messages to the standard output.
    • file: Print log messages to a rotating log file.
    • dailyfile: Print log messages to a daily-rotating log file.
  • LOG_TIMESTAMP_UTC_POLLING=yes => Defines whether the log timestamps are in UTC time (yes) or in the docker container timezone (no).
  • LOG_BACKUP_COUNT_POLLING=100 => Defines the maximum number of rotated backup logs. Must be an integer.
  • LOG_MAX_BYTES_POLLING=1048576 => Defines the maximum size in bytes of the log file. Once the log file reaches this amount it will rotate. Only works with LOG_HANDLER_POLLING=file.
  • LOG_FOLDER_POLLING=/var/log/boidas => The folder to hold the log files. If you want the files in this folder to persist, please mount it as a volume.

If log integrity needs to be enabled, it must be configured as explained below:

In order to preserve log integrity, each log event includes a field containing the SHA256 hash of its content.

To achieve this, it is mandatory to provide a public key that will be in charge of performing this task. A private key will be needed to verify the integrity of the logs.

The following environment variable must be defined:

  • LOG_INTEGRITY_KEY=None => Public key that will be used for the integrity logs. The application will need this file to run, so the folder that contains this file will need to be mounted as a volume as it is explained here.
Volumes Configuration

Besides these variables, a few data volumes are necessary for a more convenient use of the boi-Das container. These volumes require permissions for the www-data user (uid=33):

All volumes must be properly configured in the deployment tool scripts, as it is indicated in the section boi-Das dashboard deployment of the current document, to allow the service docker containers to write and read from them.
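For instance, if the volumes are mapped to local folders under ./vols, as in the docker-compose example later in this document, they could be prepared before the first run with something like this (the paths are an assumption taken from that example):

mkdir -p ./vols/media ./vols/logs ./vols/keys ./vols/certs
sudo chown -R 33:33 ./vols/media ./vols/logs ./vols/keys ./vols/certs   # www-data user (uid=33) inside the container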

  • /var/keys: This volume (or the folder that contains the LOG_INTEGRITY_KEY file) is used to store the log integrity key pair. In order to run the application, the public key file must be declared in the LOG_INTEGRITY_KEY env variable. The volume must contain the following keys in .pem format:

    • public_key.pem
    • private_key.pem

    You must use RSA keys of 2048 bits in size for certificates. Anything under that length (e.g. 1024) is deprecated and considered insecure.

    Why 2048 bits?

    NIST recommends the use of keys with a minimum strength of 112 bits of security to protect data until 2030. 2048-bit RSA keys provide 112-bit of security, so they should be safe for the remainder of the current decade. PCI DSS also recommends keys of 2048 bits or higher.

    key pair generation

    openssl genrsa -out private_key.pem 2048
    openssl rsa -in private_key.pem -pubout > public_key.pem
    
  • /var/log/boidas: This volume persists log files generated by boi-Das service. Each container generates its logs. This folder requires www-data user permissions and it must be created before running boi-Das service. Several log files are generated as a result of boi-Das operation. The files and the paths where they are generated are the following:

    • /var/log/boidas/boidas.log: boi-Das output logs.
    • /var/log/boidas/boidasPolling.log: boi-Das polling logs.
  • /opt/boidas_backend/media: This volume is used by the service to keep selfie images, selfie videos, ID document images and the rest of the resources used on a validation. It should be shared among containers sharing the same database. This folder requires www-data user permissions and it must be created before running boi-Das service.

    It is required for boi-Das data persistence. This volume stores all the media data gathered from user validations, as document photos, image crops, selfie photos, videos and other resources which were used on vali-Das validations.

    Although these media resources can be retrieved by using the boi-Das API and also viewed in the boi-Das GUI, boi-Das users may need to work with these files directly. For this reason, it is convenient for them to know the naming conventions of these resources, allowing them to develop custom scripts and applications for ad-hoc processes.

    The folders contained inside this "media data" volume are the following:

    validation_export

    Contains temporarily the ZIP files created when a validation or a group of validations are exported from the boi-Das dashboard.

    validation_data

    boi-Das media data is stored in a folder structure hierarchy which gives context about the creation date and allows an easier search and organization.

    The media folder structure is the following one:

    media
        validation_data
            YYYY
                MM
                    DD
                        VALIDATION_ID

    With this folder structure, all the media resources (images, videos, xml, etc.) regarding the validation with id equal to VALIDATION_ID are stored under the VALIDATION_ID directory.

    The naming conventions of the media resources are the following.

    "ImageType_ValidationId_TOKEN(_ImageSubtype).Extension". The name of the video is build as follows: "Video_ValidationId_HASH.Extension". The image and video extensions are the same than the images and video sent to the server on the requests with the exception of the cases indicated in the section below. The values that can take each part of the image name are the following.

    • ImageType:

      • obverse
      • obverseFlash
      • reverse
      • selfie
      • selfie_alive
      • nfc_face
      • video
    • ValidationId example: 6cfb586f4cx343ec205ddf5e28638863
    • Token example: 116b8c4f33d3171febe2
    • ImageSubtype:

      • cut: image with the cut of the I.D. document which appears in the photo sent to the server for being analysed. Its file extension is always "jpg".
      • cut_face: image with the cut of the face which is obtained from the I.D. document photo sent to the server. Its extension is always "png".
      • cut_signature: image with the cut of the signature which is obtained from the I.D. document. Its extension is always "png".
      • cut_fingerprint: image with the cut of the fingerprint which is obtained from the I.D. document. Its extension is always "png".
    • Image and video names examples:

      • reverse_21ed9fh055ah425g90b5v6c14c2b5854_21gaff7dd302ef2b2fc1_cut.jpg
      • nfc_face_819d9fb085ae4256s0b526carcbbg854_1g7ddfc265fd8b7ab2a0.png
      • obverse_21e69f6055ae4d569sb5edca44bb5454_49g259dbe3d7489b9bbc_cut_signature.png
      • video_63c563896e424dddae470ae24ce4s9s1_cd2er6b4512311d79f3f.mp4

Second-factor authentication Configuration

By default, boi-Das service authentication is based on user and password verification.

In addition, it is possible to configure "second-factor" authentication functionality that provides more security to the service.

The following variables are necessary when "second-factor" authentication feature is required:

  • OTP_ENABLED: "yes" or "no" => If this variable is set to "yes", the Google Authenticator 2FA is activated for the service instance, requiring all the users to activate and use it. Otherwise, this 2FA is not activated.
  • OTP_PERIOD_ENABLED: "yes" or "no" => If this variable is set to "yes", users will not be asked for the 2FA during a period of time after their last login. This period of time can be configured by using the env. var. "OTP_PERIOD_HOURS", and its default value is 24 hours. Default value is set to "no".
  • OTP_PERIOD_HOURS: "24" => how many hours will need to pass until the next OTP authentication requirement. Default value is set to "24" hours.
  • GOOGLE_AUTH_ISSUER: "Veridas-Boidas" => There are two fields which identify the 2FA provider in the Google Authenticator QR code. These fields are called "issuer" and "user ID". These values are displayed by the Google Authenticator app just above the 6-digit OTP code. To customize the "issuer" field, this environment variable can be defined. If this variable is not defined, it will default to "Veridas-Boidas". The "user ID" field will always be filled with the username of the user for whom the QR code was created.

Proxy Configuration

In certain environments, like corporate networks, it is mandatory to route all the traffic through a proxy server, so every connected device needs to send its traffic through it.

boi-Das can be configured to redirect its network traffic through a proxy. This is done by configuring the following docker environment variables with the indicated values:

  • http_proxy: http://SERVER:PORT/ or http://USERNAME:PASSWORD@SERVER:PORT/ or http://DOMAIN\\USERNAME:PASSWORD@SERVER:PORT/
  • https_proxy: http://SERVER:PORT/ or http://USERNAME:PASSWORD@SERVER:PORT/ or http://DOMAIN\\USERNAME:PASSWORD@SERVER:PORT/

If some requests should be excluded from the proxy routing, this can be indicated by configuring the following variable:

  • no_proxy: hostname:port
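For instance, to route the outbound traffic of the container through a corporate proxy while keeping the connection to a local database direct, the variables could be set like this (the proxy server, port and excluded host are placeholders):

docker run -d --name boidas-service \
  -e http_proxy=http://proxy.mycompany.com:3128/ \
  -e https_proxy=http://proxy.mycompany.com:3128/ \
  -e no_proxy=pgsql-boidas:5432 \
  boidas:1.28.X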

Azure AD Single Sign-On Configuration

Single Sign-On (SSO) is an authentication method which allows users to log in to many applications with the same unique credentials.

boi-Das allows delegating user authentication to the Azure AD service, so customers which already use this service as a credentials manager can avoid creating users in boi-Das.

To properly configure the Azure AD connection with Boidas, the following steps are required:

1. Register an app by using the Azure portal
  1. Sign in to the Azure portal.
  2. If you have access to multiple tenants, use the Directories + subscriptions filter in the top menu to switch to the tenant in which you want to register the application.
  3. Search for and select Azure Active Directory.
  4. Under Manage, select App registrations > New registration.
    1. When the Register an application page appears, enter your application's registration information:
      1. Enter a Name for your application, for example Boidas. Users of your app might see this name, and you can change it later.
      2. Change Supported account types to Accounts in this organizational directory only (Company Group only).
      3. In the Redirect URI (optional) section, select Web in the combo box and enter the following redirect URI: https://[BOIDAS_URL]/login.
      4. Select Register to create the application.
    2. On the app's Overview page, find the Application (client) ID value (=CLIENT_ID) and the Directory (tenant) ID (=TENANT_ID) and record both for later. You'll need them to set up the boi-Das instance.
    3. Under Manage, select Certificates & secrets.
    4. In the Client Secrets section, select New client secret, and then:
      1. Enter a key description.
      2. Select a key duration of In 1 year.
      3. Select Add.
      4. When the key value appears, copy it, because it is only shown at creation time (=CLIENT_SECRET). You'll need it later to set up the boi-Das instance.
2. Restrict your Azure AD app to a set of users in an Azure AD tenant

Applications registered in an Azure Active Directory (Azure AD) tenant are, by default, available to all users of the tenant who authenticate successfully.

If the application must be restricted to a certain set of users, update the app:

  1. Search for and select Enterprise Applications
  2. Under Manage, select All applications.
  3. Select the application you want to configure to require assignment, for example Boidas.
  4. On the application's Overview page, under Manage, select Properties.
  5. Locate the setting User assignment required? and set it to Yes. When this option is set to Yes, users and services attempting to access the application or services must first be assigned for this application, or they won't be able to sign-in or obtain an access token.
  6. Select Save.
  7. If you have configured user assignment, it is necessary to assign the app to users:
    1. Under Manage, select the Users and groups > Add user/group.
    2. Select the Users selector. A list of users and security groups will be shown along with a textbox to search and locate a certain user or group. This screen allows you to select multiple users and groups in one go.
    3. Once you're done selecting the users and groups, select Select.
    4. (Optional) If you have defined app roles in your application, you can use the Select role option to assign the app role to the selected users and groups.
    5. Select Assign to complete the assignments of the app to the users and groups.
    6. Confirm that the users and groups you added are showing up in the updated Users and groups list.
3. Configure platform settings
  1. In the Azure portal, in App registrations, select your application.
  2. Under Manage, select Authentication.
  3. Under Platform configurations, select Add a platform.
  4. Under Configure platforms, select Web and enter a Redirect URI: https://[BOIDAS_URL]/login
  5. Select Configure to complete the platform configuration.
4. Add permissions to access your web API
  1. In the Azure portal, in App registrations, select your application.
  2. Under Manage, select API permissions > Add a permission > Microsoft Graph.
  3. Select Delegated permissions. Microsoft Graph exposes many permissions, with the most commonly used shown at the top of the list.
  4. Under Select permissions, select the following permission:
    • User.Read: Sign users in and View users' basic profile
  5. Select Add permissions to complete the process.

The integration of Azure AD with boi-Das consists of configuring some environment variables in the deployment. To do this, use the values recorded during the Azure configuration:

  • CLIENT_ID
  • CLIENT_SECRET
  • TENANT_ID

The environment variables needed in boi-Das are:

  • AD_CLIENT_ID: with the value of CLIENT_ID.
  • AD_CLIENT_SECRET: with the value of CLIENT_SECRET.
  • AD_AUTHORITY: with the value of https://login.microsoftonline.com/[TENANT_ID]
  • AD_URL_BOIDAS_LOGIN: with the value of https://[BOIDAS_URL]/login.
  • AD_DEFAULT_ROLE: role (agent/supervisor) to assign to users. Default: supervisor
  • AD_FIELD_CONTEXTUAL_DATA: Name of the contextual data field to filter by username.

When a user is created, if the environment variable AD_FIELD_CONTEXTUAL_DATA is set, the system creates a contextual data filter (key-value) with the value of AD_FIELD_CONTEXTUAL_DATA as the key and the username as the value.

The username value to create the user and the context data filter is obtained from the userPrincipalName field returned by Azure AD (More info)
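Putting it together, the boi-Das container could be started with the values taken from the Azure app registration; all the identifiers and the hostname below are placeholders, and the rest of the variables are omitted for brevity:

docker run -d --name boidas-service \
  -e AD_CLIENT_ID=00000000-0000-0000-0000-000000000000 \
  -e AD_CLIENT_SECRET=THE_CLIENT_SECRET_COPIED_FROM_AZURE \
  -e AD_AUTHORITY=https://login.microsoftonline.com/11111111-1111-1111-1111-111111111111 \
  -e AD_URL_BOIDAS_LOGIN=https://boidas.mycompany.com/login \
  -e AD_DEFAULT_ROLE=agent \
  boidas:1.28.X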

NGINX docker

General Configuration

Apart from the boi-Das container, an NGINX proxy has to be deployed alongside the dashboard. It is used as a static files server and is also the boi-Das gateway, so the NGINX container host and the port exposed by the NGINX container have to be used for accessing boi-Das.

The boi-Das Docker image is prepared for deploying the NGINX server configured and ready to work. As with the boi-Das Web App container, the NGINX container creation procedure is configured by means of some environment variables, which can be given to the docker run command or in a docker-compose.yml file. The default value of each variable is given after the equals sign.

  • SERVER_NAME: localhost => The name of the NGINX server used just for internal configuration purposes, so a name like "localhost" is valid.
  • NGINX_UPSTREAM: boi-Das Host (i.e. boi-Das container name).
  • PORT: 8850 => the port that boidas has configured, 8850 by default.

Also, it is important to note that the ports exposed by the NGINX server are 8080 (for HTTP) and 8443 (for HTTPS). At least one of these ports must be bound to a host port to allow accessing the boi-Das service through NGINX. This binding can be done as shown in the docker run and docker-compose examples below.

Volumes Configuration for SSL implementation

Besides these variables, a data volume is necessary for a more convenient use of the NGINX container. This volume requires permissions for the www-data user (uid=33):

  • /etc/boidas/security/certs/: This volume is necessary for the deployment of boi-Das using SSL (ENABLE_SSL=TRUE). boi-Das service includes default certificates valid for testing and sandboxing purposes, but for production-ready environments these default certificates must be replaced with valid ones issued by a trusted CA. This folder requires www-data user permissions and it must be created before running the boi-Das service. We recommend mounting this volume with read-only permissions. It should contain the following SSL keys and certs:

    • server.crt
    • server.key

This volume must be properly configured in the deployment tool scripts, as indicated in the section boi-Das dashboard deployment of the current document, to allow the service docker containers to read from and write to it.

You must use RSA keys of 2048 bits in size for certificates. Anything under that length (e.g. 1024) is deprecated and considered insecure.

Why 2048 bits?

NIST recommends the use of keys with a minimum strength of 112 bits of security to protect data until 2030. 2048-bit RSA keys provide 112-bit of security, so they should be safe for the remainder of the current decade. PCI DSS also recommends keys of 2048 bits or higher.

Deploy on a subdirectory Configuration

By default, the boi-Das service is deployed under the root path, e.g. https://my-domain/.

However, it may be necessary to deploy it under a subdirectory or different base path. To do that, the following configuration is required:

  • LOCATION: /subdirectory => This variable allows you to configure the subdirectory used at deploy time. boi-Das will serve requests under the subdirectory indicated by LOCATION (e.g. /boidas).

To complete this configuration, it is mandatory to modify the BASE_URL variable which is configured in boi-Das docker (General Configuration). The value of BASE_URL must be consistent with the value of the LOCATION var.

  • BASE_URL => The URL formed by protocol + hostname + port + subdirectory, i.e. https://HOSTNAME:PORT/LOCATION (e.g. https://my-domain/boidas).
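For example, to serve boi-Das under /boidas, LOCATION (on the NGINX container) and BASE_URL (on the boi-Das container) could be kept consistent as follows; the hostname and image tag are placeholders, and the other variables are omitted for brevity:

# NGINX container
docker run -d --name boidas-nginx \
  -e LOCATION=/boidas \
  -e NGINX_UPSTREAM=boidas-service \
  boidas:1.28.X /opt/boidas_backend/run_nginx.sh

# boi-Das container
docker run -d --name boidas-service \
  -e BASE_URL=https://my-domain/boidas \
  boidas:1.28.X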

Docker Container Creation

Boidas On Premises comprises the containers deployed by using the provided boidas:BOIDAS-SERVICE_VERSION Docker image of boi-Das, where the current version of the image is the following:

BOIDAS-SERVICE_VERSION = 1.28.X

boi-Das service deployment

If you want to use the docker tool (https://docs.docker.com/get-started/part2/) for the container deployment, using a database deployed somewhere else, you can use instructions like the ones sketched below on the terminal. This assumes that the database has been initialized properly with access granted to DB_USER with DB_PASS. It is important to take into account that the services have to be deployed on the same docker network so that each container can reach the others.
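The exact commands depend on your environment; the following is only a rough sketch of such a docker-based deployment on a user-defined network, reusing the image, ports, volumes and environment variables of the docker-compose example shown later in this section (the database host, API key and local paths are placeholders):

docker network create boidas-net

docker run -d --name boidas-service --network boidas-net \
  -e TZ=Europe/Madrid \
  -e DB_HOST=mydbhost -e DB_NAME=boidas -e DB_USER=boidas -e DB_PASS=boidas \
  -e ENABLE_SSL=TRUE -e MIGRATIONS=yes \
  -e VALIDAS_URL=https://api.eu.veri-das.com/validas/v1 -e API_KEY=YOUR_API_KEY \
  -e BASE_URL=https://localhost:8443 \
  -v $(pwd)/vols/media:/opt/boidas_backend/media \
  -v $(pwd)/vols/logs:/var/log/boidas \
  boidas:1.28.X

docker run -d --name boidas-nginx --network boidas-net \
  -e SERVER_NAME=localhost -e NGINX_UPSTREAM=boidas-service -e PORT=8850 \
  -p 8443:8443 \
  -v $(pwd)/vols/certs:/etc/boidas/security/certs/ \
  boidas:1.28.X /opt/boidas_backend/run_nginx.sh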

The deployment can also be done by using docker-compose tool (https://docs.docker.com/compose).

We recommend using the docker-compose tool because the integration is easier and only one step is needed to bring up the boidas service.

For a complete containerized solution which uses a PSQL docker container as a database system, the following steps must be followed.

boi-Das dashboard deployment

If boi-Das is being deployed for the first time, the container and the database have to be initialized. Similarly, if it is being upgraded from a previous version, the database has to be upgraded also before deploying the new version.

To achieve this, the MIGRATIONS environment variable should be set to "yes".

Please, ensure the VALIDAS_URL and API_KEY are the ones provided by Veridas. The ones shown above are examples.

During boidas-service container start-up, an administrator superuser (with username "admin") is created, as well as the client verification credentials to connect to boidas via API. The password for this user and the client verification credentials are printed in the console log as in the following example (this is just a format example, not a valid password):

boidas-service   | BOIDAS SUPERUSER PASSWORD: TexQ2N20xh1zQt3RaOwGvob3NKdEdGi0mR4pw3c1
boidas-service   | BOI-DAS API OAUTH2 CLIENT_ID: uYjiqNPlYZIR3X3VZ6dNPUJi
boidas-service   | BOI-DAS API OAUTH2 CLIENT_SECRET: XUClkf4wMdLnLS2RvHo9r

Superadmin password and client secret will only be printed in the console log when the container is deployed for the first time. Save this information in a secure place as it will not be shown again.

boi-Das service should be exposed at https://localhost:8443/.

In addition, the superadmin section (for DB management) is also reachable at: https://localhost:8443/boidas_admin_configuration. From here, the admin user password can be changed. Also, additional supervisor and/or agent users can be created (this action is also possible via API, View API).

#docker-compose.yml
version: '2.1'
services:
  pgsql-boidas:
    image: postgres:15.5
    container_name: pgsql-boidas
    environment:
      POSTGRES_DB: boidas
      POSTGRES_USER: boidas
      POSTGRES_PASSWORD: boidas
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./vols/pgdata:/var/lib/postgresql/data/pgdata
  boidas-service:
    image: "${DOCKER_REGISTRY_IP:PORT}/boidas:${BOIDAS-SERVICE_VERSION}"
    container_name: boidas-service
    restart: always
    depends_on:
      - pgsql-boidas
    environment:
      TZ: Europe/Madrid
      DB_HOST: pgsql-boidas
      DB_USER: boidas
      DB_PASS: boidas
      DB_NAME: boidas
      WORKERS: 1
      ENABLE_SSL: "TRUE"
      MIGRATIONS: "yes"
      VALIDAS_URL: ${VALIDAS_URL:-https://api.eu.veri-das.com/validas/v1}
      API_KEY: ${API_KEY}
      BASE_URL: 'https://localhost:8443'
      EMAIL_HOST_NAME: 'smtp.mycompany.com'
      EMAIL_HOST_PORT: '587'
      EMAIL_HOST_USER: "noreply@reply.com"
      EMAIL_HOST_PASSWORD: "1341"
      DEFAULT_FROM_EMAIL: "info-noreply@veridas.com"
    volumes:
      - ./vols/media:/opt/boidas_backend/media
      - ./vols/logs:/var/log/boidas
  nginx:
    image: "${DOCKER_REGISTRY_IP:PORT}/boidas:${BOIDAS-SERVICE_VERSION}"
    environment:
      SERVER_NAME: localhost
      NGINX_UPSTREAM: boidas-service
      PORT: 8850
    command: /opt/boidas_backend/run_nginx.sh
    depends_on:
      - boidas-service
    ports:
      - 8443:8443
    volumes:
      - ./vols/certs:/etc/boidas/security/certs/

To run this docker-compose script, the following commands have to be executed, assuming that the docker-compose file shown above is called "docker-compose.yml".

docker-compose pull # Just if the images are in a docker registry
docker-compose -f docker-compose.yml up --abort-on-container-exit  --force-recreate

Container Cleaning

In case of failure during these steps the first time they are executed, it is required to clean all volumes and recreate them as empty volumes. Similarly, any container left on the machine must be removed. This can be done as follows:

sudo docker rm CONTAINER_ID
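For example, assuming the deployment and volume paths of the docker-compose example above, a clean state could be restored with commands like these (adapt the paths and container IDs to your deployment):

docker-compose down                                    # stop and remove the containers defined in docker-compose.yml
docker ps -a                                           # list any leftover containers and their IDs
sudo docker rm CONTAINER_ID                            # remove a leftover container by its ID
sudo rm -rf ./vols/pgdata ./vols/media ./vols/logs     # empty the data volumes
mkdir -p ./vols/pgdata ./vols/media ./vols/logs        # recreate them empty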

Additional Considerations

Boi-Das Inner Decisions Logic

Although boi-Das downloads the vali-Das service confirmed validations as they are, there are certain decisions which are made by boi-Das based on the information contained in these validations.

Validation state based decisions

Once the correctness of a validation has been assured by boi-Das, the validation is downloaded and is assigned to one of the validation state queues based on the following elements:

  1. If there is a contextual data with the key "stats_onboarding_state" and the value "rejected" the validation is considered as rejected, and will appear in the rejected section in the boi-Das GUI.
  2. If there is a contextual data with the key "stats_onboarding_state" and the value "approved" the validation is considered as approved, and will appear in the approved section in the boi-Das GUI.
  3. If there is a contextual data with the key "stats_onboarding_state" and the value "inconclusive" the validation is considered as inconclusive, and will appear in the inconclusive section in the boi-Das GUI.
  4. If there is no contextual data with the key "stats_onboarding_state" the validation is considered as pending, and will appear in the pending section in the boi-Das GUI.

Upgrading PostgreSQL DB engine from version 12 to version 15

If a PSQL docker container is used as a database system, the following steps must be followed if you want to migrate from the older recommended version 12 to the new recommended version 15.5.

It is important to note that November 14, 2024 is the end-of-life date of PSQL v12, so if you want to have support from Veridas (and from PostgreSQL) after that date, it is mandatory to upgrade.

The first boi-Das version which supports PSQL v15 is boi-Das v1.27, so if you have any version prior to this, please update first to boi-Das v1.27 or higher before doing the PSQL upgrade.

This is a disruptive process in which there is an availability loss.

The steps to do the PSQL docker container upgrade from v12 to v15.5 are the following:

  1. Ensure that boi-Das version is v1.27 or higher

  2. It is optional, but highly recommended, to do a backup or a snapshot of the database, to be used as a fast remediation if something goes wrong

  3. Keep the pgsql-boidas container running and stop the boidas-service container. If more than one boi-Das application instance (container) is deployed, all of them must be stopped:

    docker stop boidas-service
    
  4. Run the following command:

    docker exec -it pgsql-boidas pg_dumpall -U boidas > pgdump.out
    
  5. Stop postgresql container:

    docker stop pgsql-boidas
    
  6. Delete the postgresql volumes (pgdata directory) or create a new one (this option is safer) and assign the corresponding permissions (more information here)

  7. Change to desired postgresql (pgsql-boidas) image version in docker-compose.yml file.
  8. Change boidas-service volumes in docker-compose.yml file (if you did create new ones).
  9. Run only the pgsql-boidas container:

    docker-compose up -d pgsql-boidas
    
  10. Run the following command:

    docker cp pgdump.out pgsql-boidas:/
    
  11. Run the following command:

    docker exec -it pgsql-boidas psql -f pgdump.out -U boidas
    
  12. Stop postgresql container:

    docker stop pgsql-boidas
    
  13. In postgresql version 14, the password hashing method changed from MD5 to SCRAM-SHA-256. This implies that all database user passwords must be reset (in this example there is only one user, called "boidas"):

    docker exec -it pgsql-boidas psql -U boidas
    \password
    
  14. If the new password is different from the previous one, don't forget to set it in the docker-compose environment variables.

  15. Run the boidas-service deployment as usual

For more information you can check https://www.postgresql.org/docs/15/upgrading.html

Deploy boi-Das on a High Availability (HA) environment

The boi-Das service supports deployment in a high availability environment, with all its instances running active-active and acting as a cluster, enabling horizontal scalability of the service as the load grows while keeping the service fault-tolerant. For most users, a two-node cluster will be sufficient for this purpose.

The following diagram depicts the HA architecture:

[Diagram: boi-Das high availability architecture]

In order to conduct this deployment, consider the following assumptions:

  • The container must be configured through the available mechanisms (environment variables) without overriding anything via volumes, new layers, changing the ENTRYPOINT/CMD etc.
  • At least 2 instances of nginx+boidas will be deployed. Each nginx will proxy requests to its corresponding API (NOT load balance calls to the API)
  • All boidas instances will share a single PostgreSQL database
  • All boidas instances will share a volume mount point for media assets (images, etc.) typically using NFS on a linux environment, or EFS on AWS deployments
  • Each instance will mount a dedicated volume for logs, or map different folders of the same volume to each boidas container
  • boi-Das exposes an HTTPS service under a TCP port for both the UI and the API services (see 4.2 NGINX docker). API calls are stateless, and UI calls rely on a token cached in the user's browser for session persistence. Both services are served by the NGINX service and, for HA purposes, the load balancer does not require sticky sessions, so a round-robin or leastConnections balancing mechanism should work.
  • All instances will run their own polling processes (as is the default on the docker image) with 2 threads each (4 total)

Deploy boi-Das with Oracle Database

boi-Das supports the use of Oracle Database as a database system. Version 19c or higher is recommended.

To deploy boi-Das with Oracle Database, the database environment variables must be configured in the docker-compose.yml in order to let the application know where the database is located. The DB_ENGINE variable must be set to "oracle". Please check the database configuration section to see the environment variables that must be configured.

To migrate data from PostgreSQL to Oracle, please check the Migrate from PostgreSQL to Oracle section.

Migrate from PostgreSQL to Oracle

In case of a migration from PostgreSQL to Oracle, data can be migrated from one database to the other by using the following steps:

  1. Ensure that boi-Das version is v1.28 or higher.

  2. It is optional, but highly recommended, to do a backup or a snapshot of the database, to be used as a fast remediation if something goes wrong

  3. Deploy boi-Das with PostgreSQL configuring USE_TZ environment variable to yes.

  4. Dump the data from the PostgreSQL database.

    docker exec -it boidas-service python manage.py dumpdata -o boidas_data.json
    
  5. Download the file boidas_data.json to your local machine.

    docker cp boidas-service:/opt/boidas_backend/boidas_data.json .
    
  6. Stop boi-Das deployment.

    docker-compose down
    
  7. Remove USE_TZ environment variable from the boidas configuration.

  8. Change the database configuration in the docker-compose.yml file to use Oracle Database.

  9. Deploy boi-Das with Oracle Database.

  10. Copy the file boidas_data.json to the boi-Das container.

    docker cp boidas_data.json boidas-service:/opt/boidas_backend/boidas_data.json
    
  11. Import the database content into the Oracle Database:

    docker exec -it boidas-service python manage.py import_database boidas_data.json
    

After these steps, the data should have been migrated from PostgreSQL to Oracle successfully.

Product Monitoring and Control

Monitoring is important to determine if the services are working correctly and to detect potential issues early.

Apart from logging files, you could prepare scripts or use monitoring tools to automate the control.

The boi-Das service is composed of different parts which are recommended to be checked.

Check availability of API service

If you want to check if the service is up, you can create a script using this shell command and verify that the result is 204.

curl -I --location --request GET 'https://<base_url>/api/v1/alive'
HTTP/1.1 204 No Content

Also, to check that the service connects to the database properly, you can use this other command and expect a 200 response.

curl -I --location --request POST 'https://<base_url>/api/v1/oauth/token/' \
     --header 'Authorization: your credentials oauth' \
     --form 'grant_type="password"' \
     --form 'username="your username here"' \
     --form 'password="your password here"'
HTTP/1.1 200 OK

Check availability of User Interface

Another very important component is the User Interface, which you can check by running this command. The result must be 200.

curl -I --location --request GET 'https://<base_url>'
HTTP/1.1 200 OK

Check polling for validations service

This service is responsible for connecting to VeriSaaS and downloading the available validations.

On the one hand, it is necessary to verify that the polling runs every X seconds, 60 by default. There is a log file called "boidasPolling" that registers an event each time the polling is launched. You can check it by looking for this event in the "boidasPolling.log" file, where it should appear every X seconds:

{"event": "Running job \"get_validations (trigger: interval[0:01:00], next run at: 2021-08-03 10:45:56 CEST)\" (scheduled at 2021-08-03 10:44:56.237152+02:00)"...}

On the other hand, it is very important to check that the polling process connects correctly to VeriSaaS. It may be that the polling works correctly but the connection or download fails. We recommend configuring an alert that is raised when this event appears in the "boidas.log" file:

{"event": "Problems to obtain validations from Validas"... }

Additionally, this polling checks the VeriSaaS connection state and sends an email to the list configured in the VERISAAS_EMAILS_RECIPIENTS environment variable when the VeriSaaS connection is not available and when the connection is restored.
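As an example of how such an alert could be wired up, the following cron-friendly sketch greps the boi-Das log for the error event shown above; the log path assumes the /var/log/boidas volume is mounted under ./vols/logs as in the docker-compose example:

#!/bin/sh
# alert_validas.sh - exit with a non-zero code if polling errors are found in the recent log lines
LOG_FILE=./vols/logs/boidas.log
if tail -n 1000 "$LOG_FILE" | grep -q "Problems to obtain validations from Validas"; then
  echo "boi-Das: errors obtaining validations from VeriSaaS detected in $LOG_FILE" >&2
  exit 1
fi
exit 0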

Backup

The following steps must be followed if you want to do a consistent backup of all the boi-Das information (a command sketch is shown after the list):

  1. Stop the boi-Das service by stopping its docker container, or all the containers of the boidas cluster in case of a distributed implementation

    docker stop boidas-service
    
  2. Perform a backup of the postgreSQL database. For more information about Postgresql Backup and Restore options you can check https://www.postgresql.org/docs/12/backup.html

  3. Backup the volume mount points for media assets and logs. See Volumes Configuration for more information
  4. Start the boi-Das docker container again, or all the containers of the boidas cluster in case of a distributed implementation
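Assuming the docker-compose deployment described earlier in this document (pgsql-boidas container, volumes under ./vols), the backup could be scripted roughly as follows; adapt the commands to your own database and storage layout:

docker stop boidas-service                                              # 1. stop the application container(s)
docker exec pgsql-boidas pg_dumpall -U boidas > boidas_db_backup.sql    # 2. dump the database
tar czf boidas_volumes_backup.tar.gz ./vols/media ./vols/logs           # 3. archive the media and log volumes
docker start boidas-service                                             # 4. resume the service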

If the volume mount points are sitting on a storage cabinet you might want to leverage the snapshot capabilities of the cabinet to reduce the downtime due to the backup window. Same applies for the database datastores.

It is highly recommended to perform this backup operation during periods of low user activity, to minimize the number of processes pending to be gathered from the cloud when the service resumes its operation.

Important: For security reasons, Veridas has an autocleaner process continuously running at VeriSaaS to remove every validation process left in the cloud after a specific time, regardless of whether or not it was downloaded into boidas. By doing this, we avoid long-term persistence of sensitive data in the cloud. To avoid losing processes, you should carefully measure the downtime of your backup process to ensure it is lower than the autocleaner time period. The autocleaner time can be adjusted by Veridas Customer Support by simply raising a ticket specifying the appropriate time.

Restore

The following steps must be followed for restoring a backup of all the boi-Das information:

  1. Prepare a new deployment following boi-Das dashboard deployment steps.
  2. Restore the data of the volume mount points for media assets and logs in the new path and configure the new volume mount point correctly.
  3. Run only the pgsql-boidas container:

    docker-compose up -d pgsql-boidas
    

  4. Perform a restore of the postgreSQL database. For more information about Postgresql Backup and Restore options you can check https://www.postgresql.org/docs/9.1/backup.html

  5. Stop postgresql container:

    docker stop pgsql-boidas
    
  6. Run the boidas-service deployment as usual

Data Export

Under some circumstances, the administrator of the system may need to export or migrate all or part of the data stored in boidas.

This task can be easily performed using the export feature, via UI or API.

The selected validations will be exported as a single ZIP file, containing an individual folder per validation process, named with its validation ID. Inside each folder, a media folder will contain all the image and video pieces of evidence as they were originally uploaded to VeriSaaS, along with their respective cuts. A json file is also included containing all the validation data (OCR, scores, etc.).

This way an administrator can easily migrate/move the validation processes to another storage/archival system.

Please have a look at section Get validation by validation_id for further information