
SelphID SDK API

1. Introduction

This document describes the API of the libraries provided in the FacePhi SelphID SDK product.

2. Description of SelphID API (Front-end)

SelphID SDK is a set of server libraries that uses information generated by FacePhi Widgets for native or web applications. As described in the following sections, components called Widgets for face capture (Selphi) and ID document capture (SelphID) are provided and can be integrated into the front end of any application.

2.1. Selphi Widget

By means of the Selphi widget, which incorporates the mechanisms for face detection, extraction and user liveness, the following information can be retrieved using the properties listed below:

  • Image property: It represents the user’s image with the most frontal facial pose detected.

  • TemplateRaw property: It represents the user’s tokenized biometric template with the most frontal facial pose detected. This template is used to perform biometric authentication with SelphID SDK.

Note

Additionally, a generateTemplateRaw tokenization functionality is provided (see API) that allows converting an image or data buffer into a tokenized data buffer, as a templateRaw. This templateRaw can be used in several functionalities of SelphID SDK.

2.2. SelphID Widget

By means of the SelphID widget, which incorporates the automatic document detection and capture mechanism, the following information can be retrieved using the properties listed below:

  • TokenOCR property: Represents a token (time stamp + AES256 encryption) containing the data detected in the document through the performed OCR.

  • TokenFrontDocument property: Represents the tokenized image (time stamp + AES256 encryption) of the front of the document adjusted to the edges of the document.

  • TokenBackDocument property: Represents the tokenized image (time stamp + AES256 encryption) of the back of the document adjusted to the edges of the document.

  • TokenFaceImage property: Represents the tokenized image (time stamp + AES256 encryption) of the document user's photograph. This token is used to perform biometric authentication with the SDK.

  • TokenRawFrontDocument property: Represents the tokenized image (time stamp + AES256 encryption) of the front side of the document without being trimmed to the edges of the document, i.e. as it was captured by the camera.

  • TokenRawBackDocument property: Represents the tokenized image (time stamp + AES256 encryption) of the back side of the document without being trimmed to the edges of the document, i.e. as it was captured by the camera.

3. Description of SelphID API (Back-end)

The following describes the API of the libraries provided in SelphID, detailing the methods that the integrator can use to incorporate the functionalities of facial recognition, information extraction from identity documents and document validation.

3.1. Libraries Initialization

SelphIDVerifier represents the main class of the libraries, which contains all the methods available for each of the functionalities.

Library initialization can be done in three different ways depending on the environment variables set:

3.1.1. Initialization by the loadWithConfigPath() method

Previous steps

Configure environment variables and the config.cfg file (on-premise installation SDK configuration):

  • FACEPHI_SELPHID_INSTALL_PATH
  • FACEPHI_SELPHID_INSTALL_BIN
  • LD_LIBRARY_PATH
  • PATH

public static void main(String[] args) {
  // Instantiate a SelphIDVerifier object.
  SelphIDVerifier verifier = new SelphIDVerifier();

  // Specifies the path where the configuration file is located.
  String configurationFilePath =
    "C:/Program Files/FacePhi/Sdk/SelphId/x.x.x.x/config/config.cfg";

  // Load the library indicating the path where to look for the config file.
  verifier.loadWithConfigPath(configurationFilePath);

  // Make use of the library.

  // Unload the library when you have finished.
  verifier.unload();
}

3.1.2. Initialization by the load() method

Previous steps

Configure environment variables and the config.cfg file (on-premise installation SDK configuration):

  • FACEPHI_SELPHID_INSTALL_PATH
  • FACEPHI_SELPHID_INSTALL_BIN
  • LD_LIBRARY_PATH
  • PATH

It is not necessary to specify the path of the configuration file; it will automatically be looked for in the following path: FACEPHI_SELPHID_INSTALL_PATH/config/selphid.cfg

public static void main(String[] args) {
  // Instantiate a SelphIDVerifier object.
  SelphIDVerifier verifier = new SelphIDVerifier();

  // Load the library.
  verifier.load();

  // Make use of the library.

  // Unload the library when you have finished.
  verifier.unload();
}

3.1.3. Initialization by the loadFromEnvVars() method

Previous steps

Configure environment variables (on-premise installation SDK configuration):

  • FACEPHI_SELPHID_INSTALL_PATH
  • FACEPHI_SELPHID_INSTALL_BIN
  • LD_LIBRARY_PATH
  • PATH
  • FACEPHI_SELPHID_DEBUGPATH_KEY
  • FACEPHI_SELPHID_USAGEPATH_KEY
  • FACEPHI_SELPHID_FACIALLIVENESS_PATH_KEY
  • FACEPHI_SELPHID_FACIAL_LICPATH_KEY

Instead of being set in the config.cfg file, these variables are set directly as environment variables.

public static void main(String[] args) {
  // Instantiate a SelphIDVerifier object.
  SelphIDVerifier verifier = new SelphIDVerifier();

  // Load the library.
  verifier.loadFromEnvVars();

  // Make use of the library.

  // Unload the library when you have finished.
  verifier.unload();
}

Important

Initialization of libraries by means of the load(), loadWithConfigPath() or loadFromEnvVars() method should only be carried out once in the life span of your application.

Once all the processes that involve the use of these libraries are completed, it is important to free the resources associated with them; do not forget to unload the library before closing the app.

Unloading the libraries using the unload() method should only be done once in the life span of your application. Once the unload() method is called, it is no longer possible to perform a load().

If a condition occurs that does not allow the libraries to be loaded properly, a SelphIDException will be thrown. For exception types, see the corresponding section 3.10. Description of SelphIDException in the provided API.

3.2. Facial extraction methods

To perform the extraction of a user's facial data, the integrator has different methods available in the SelphIDVerifier class. The integrator shall make use of one method or another depending on the data generated on the client. Each of the possible situations is described below.

Note

The result of these methods will always be a SelphIDFacialExtractionResult object, explained in 3.9.1. SelphIDFacialExtractionResult.

3.2.1. Facial extraction using an image

The method to be used is as follows:

SelphIDFacialExtractionResult r = extractFacialWithImageBuffer(
  byte[] imageBuffer,
  SelphIDVerifierOptions options
);

The steps required to perform a facial extraction using an image are as follows (example method):

Previous steps

  • Obtain the image (base64 string) through the Image property using the Selphi widget.

  • Send the base64 string image to the server.

String extractBiometricFacialTemplate(String imageBase64) {
  // Decode base64 to get the byte array corresponding to the image on the server.
  byte[] imageBuffer = Base64.getDecoder().decode(
    imageBase64.getBytes());

  // Create the SelphIDVerifierOptions and configure it if needed.
  SelphIDVerifierOptions options = new SelphIDVerifierOptions();

  // Extraction.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDFacialExtractionResult r = verifier.extractFacialWithImageBuffer(imageBuffer, options);

  // Explore the information obtained in SelphIDFacialExtractionResult, such as the face position.
  Rectangle facePosition = r.getFaceRectangle();

  // Return the biometric facial template, encoded back to base64.
  return Base64.getEncoder().encodeToString(r.getFacialTemplate());
}

3.2.2. Facial extraction using a template

The method to be used is as follows:

SelphIDFacialExtractionResult r = extractFacialWithRawTemplate(
  byte[] rawTemplateBuffer,
  SelphIDVerifierOptions options
);

The steps required to perform a facial extraction using a biometric template are as follows (example method):

Previous steps

  • Obtain the biometric template (base64 string) through the TemplateRaw property using the Selphi widget.

  • Send the base64 string biometric template to the server.

String extractBiometricFacialTemplate(String templateRawBase64) {
  // Decode base64 to get the byte array corresponding to the template.
  byte[] rawTemplateBuffer = Base64.getDecoder().decode(
    templateRawBase64.getBytes());

  // Create the SelphIDVerifierOptions and configure it if needed.
  SelphIDVerifierOptions options = new SelphIDVerifierOptions();

  // Extraction.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDFacialExtractionResult r = verifier.extractFacialWithRawTemplate(rawTemplateBuffer, options);

  // Explore the information obtained in SelphIDFacialExtractionResult, such as the face position.
  Rectangle facePosition = r.getFaceRectangle();

  // Return the biometric facial template, encoded back to base64.
  return Base64.getEncoder().encodeToString(r.getFacialTemplate());
}

3.3. Facial authentication methods

To perform facial authentication of a user, the integrator has different methods available in the SelphIDVerifier class. The integrator shall make use of one method or another depending on the data generated on the client. Each of the possible situations is described in the next subsections.

Note

The result of these methods will always be a SelphIDFacialAuthenticationResult object, explained in 3.9.2. SelphIDFacialAuthenticationResult.

3.3.1. Facial authentication using images

The method to be used is as follows:

SelphIDFacialAuthenticationResult r = authenticateFacialWithImageBuffers(
  byte[] imageQuery,
  byte[] imageTarget,
  SelphIDVerifierOptions options
);

The steps required to perform a facial authentication using two images are as follows (example method):

Previous steps

  • Obtain the first image (base64 string) through the Image property using the Selphi widget.

  • Obtain the second image (base64 string) through the Image property using the Selphi widget.

  • Send both base64 strings to the server.

boolean isMatch(String firstImageBase64, String secondImageBase64) {
  // Decode base64 to get the byte array corresponding to each image on the server.
  byte[] imageQuery = Base64.getDecoder().decode(
    firstImageBase64.getBytes());
  byte[] imageTarget = Base64.getDecoder().decode(
    secondImageBase64.getBytes());

  // Create the SelphIDVerifierOptions and configure it if needed.
  SelphIDVerifierOptions options = new SelphIDVerifierOptions();

  // Authentication.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDFacialAuthenticationResult r = verifier.authenticateFacialWithImageBuffers(
    imageQuery, imageTarget, options);

  // Explore the information obtained in SelphIDFacialAuthenticationResult, such as the similarity.
  float similarity = r.getSimilarity();

  // Return whether the images match.
  return r.getFacialAuthenticationStatus() == FacialAuthenticationStatus.Positive;
}

3.3.2. Facial authentication using biometric templates

The method to be used is as follows:

SelphIDFacialAuthenticationResult r = authenticateFacialWithRawTemplates(
  byte[] templateQuery,
  byte[] templateTarget,
  SelphIDVerifierOptions options
);

The steps required to perform a facial authentication using two biometric templates are as follows (example method):

Previous steps

  • Obtain the first biometric template (base64 string) through the TemplateRaw property using the Selphi widget.

  • Obtain the second biometric template (base64 string) through the TemplateRaw property using the Selphi widget.

  • Send both base64 strings to the server.

boolean isMatch(String firstTemplateRawBase64, String secondTemplateRawBase64) {
  // Decode base64 to get the byte array corresponding to each template on the server.
  byte[] templateQuery = Base64.getDecoder().decode(
    firstTemplateRawBase64.getBytes());
  byte[] templateTarget = Base64.getDecoder().decode(
    secondTemplateRawBase64.getBytes());

  // Create the SelphIDVerifierOptions and configure it if needed.
  SelphIDVerifierOptions options = new SelphIDVerifierOptions();

  // Authentication.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDFacialAuthenticationResult r = verifier.authenticateFacialWithRawTemplates(
    templateQuery, templateTarget, options);

  // Explore the information obtained in SelphIDFacialAuthenticationResult, such as the similarity.
  float similarity = r.getSimilarity();

  // Return whether the templates match.
  return r.getFacialAuthenticationStatus() == FacialAuthenticationStatus.Positive;
}

3.3.3. Facial authentication using an image and biometric template

The method to be used is as follows:

SelphIDFacialAuthenticationResult r = authenticateFacialWithImageRawTemplate(
  byte[] imageQuery,
  byte[] templateTarget,
  SelphIDVerifierOptions options
);

The steps required to perform a facial authentication using an image and a biometric template are as follows (example method):

Previous steps

  • Obtain the image (base64 string) through the Image property using the Selphi widget.

  • Obtain the biometric template (base64 string) through the TemplateRaw property using the Selphi widget.

  • Send both base64 strings to the server.

boolean isMatch(String imageBase64, String templateRawBase64) {
  // Decode base64 to get the byte arrays corresponding to the image and the template.
  byte[] imageQuery = Base64.getDecoder().decode(
    imageBase64.getBytes());
  byte[] templateTarget = Base64.getDecoder().decode(
    templateRawBase64.getBytes());

  // Create the SelphIDVerifierOptions and configure it if needed.
  SelphIDVerifierOptions options = new SelphIDVerifierOptions();

  // Authentication.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDFacialAuthenticationResult r = verifier.authenticateFacialWithImageRawTemplate(
    imageQuery, templateTarget, options);

  // Explore the information obtained in SelphIDFacialAuthenticationResult, such as the similarity.
  float similarity = r.getSimilarity();

  // Return whether the image and the template match.
  return r.getFacialAuthenticationStatus() == FacialAuthenticationStatus.Positive;
}

3.3.4. Facial authentication using the document photo and an image

The method to be used is as follows:

SelphIDFacialAuthenticationResult r = authenticateFacialWithRawDocumentImage(
  byte[] rawDocument,
  byte[] imageTarget,
  SelphIDVerifierOptions options
);

The steps required to perform a facial authentication using the document photo and an image are as follows (example method):

Previous steps

  • Obtain the document photo token (base64 string) through the TokenFaceImage property using the SelphID widget.

  • Obtain the image (base64 string) through the Image property using the Selphi widget.

  • Send both base64 strings to the server.

boolean isMatch(String tokenFaceImageBase64, String imageBase64) {
  // Decode base64 to get the byte arrays corresponding to the document token and the image.
  byte[] rawDocument = Base64.getDecoder().decode(
    tokenFaceImageBase64.getBytes());
  byte[] imageTarget = Base64.getDecoder().decode(
    imageBase64.getBytes());

  // Create the SelphIDVerifierOptions and configure it if needed.
  SelphIDVerifierOptions options = new SelphIDVerifierOptions();

  // Authentication.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDFacialAuthenticationResult r = verifier.authenticateFacialWithRawDocumentImage(
    rawDocument, imageTarget, options);

  // Explore the information obtained in SelphIDFacialAuthenticationResult, such as the similarity.
  float similarity = r.getSimilarity();

  // Return whether the document photo and the image match.
  return r.getFacialAuthenticationStatus() == FacialAuthenticationStatus.Positive;
}

3.3.5. Facial authentication using the document photo and a biometric template

The method to be used is as follows:

SelphIDFacialAuthenticationResult r = authenticateFacialWithRawDocumentRawTemplate(
  byte[] rawDocument,
  byte[] templateTarget,
  SelphIDVerifierOptions options
);
The steps required to perform facial authentication using the document photo and the user's biometric template are as follows (example method):

Previous steps

  • Obtain the document photo token (base64 string) through the TokenFaceImage property using the SelphID widget.

  • Obtain the biometric template (base64 string) through the TemplateRaw property using the Selphi widget.

  • Send both base64 strings to the server.

boolean isMatch(String rawDocumentBase64, String templateTargetBase64) {
  // Decode base64 to get the byte arrays corresponding to the document token and the template.
  byte[] rawDocument = Base64.getDecoder().decode(
    rawDocumentBase64.getBytes());
  byte[] templateTarget = Base64.getDecoder().decode(
    templateTargetBase64.getBytes());

  // Create the SelphIDVerifierOptions and configure it if needed.
  SelphIDVerifierOptions options = new SelphIDVerifierOptions();

  // Authentication.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDFacialAuthenticationResult r = verifier.authenticateFacialWithRawDocumentRawTemplate(
    rawDocument, templateTarget, options);

  // Explore the information obtained in SelphIDFacialAuthenticationResult, such as the similarity.
  float similarity = r.getSimilarity();

  // Return whether the document photo and the template match.
  return r.getFacialAuthenticationStatus() == FacialAuthenticationStatus.Positive;
}

3.4. Document data extraction method

To obtain the document data required for digital onboarding processes, the integrator has one method available in the SelphIDVerifier class.

Note

The result of this method will be a SelphIDDocumentResult object that contains all detected data from the document. For more information check 3.9.3. SelphIDDocumentResult section.

3.4.1. Obtaining the detected data in a document

The method to be used is as follows:

SelphIDDocumentResult extractDocumentWithRawDocument(
  byte[] rawDocumentBuffer,
  SelphIDVerifierOptions selphIDVerifierOptions
);

The steps required to obtain the data from a document are as follows (example method):

Previous steps

  • Obtain the value of the TokenOCR property (base64 string) using the SelphID widget.

  • Send the base64 string to the server.

void printDocumentData(String tokenOCRBase64) {
  // Decode base64 to obtain the byte array corresponding to the token on the server.
  byte[] rawDocumentBuffer = Base64.getDecoder().decode(
    tokenOCRBase64.getBytes());

  // Create the SelphIDVerifierOptions and configure it if needed.
  SelphIDVerifierOptions options = new SelphIDVerifierOptions();

  // Extraction.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDDocumentResult r = verifier.extractDocumentWithRawDocument(
    rawDocumentBuffer, options);

  // Explore the information obtained in SelphIDDocumentResult.

  // Get document keys.
  String[] keys = r.listDocumentKeys();

  // Print document data.
  for (int x = 0; x < keys.length; x++) {
    System.out.println(keys[x] + ": " + r.getDocumentValue(keys[x]));
  }
}

3.5. Liveness evaluation methods

To evaluate the user's liveness on the server, a functionality needed in the digital onboarding process to prevent photo or video fraud, the integrator has methods available for this purpose in the SelphIDVerifier class.

Note

The result of these methods will always be a SelphIDFacialLivenessResult object, explained in 3.9.4. SelphIDFacialLivenessResult.

3.5.1. Liveness evaluation from an image

The method to be used is as follows:

SelphIDFacialLivenessResult r = evaluatePassiveLivenesWithImageBuffer(
  byte[] imageBuffer
);

The steps required to perform a liveness evaluation from an image are as follows (example method):

Previous steps

  • Obtain an image from the user (base64 string) using the Selphi widget.

  • Send the base64 string image to the server.

boolean isAlive(String imageBase64) {
  // Decode the base64 image to get the byte array.
  byte[] imageBuffer = Base64.getDecoder().decode(
    imageBase64.getBytes());

  // Evaluate.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDFacialLivenessResult r = verifier.evaluatePassiveLivenesWithImageBuffer(
    imageBuffer);

  // Return whether the user passes the liveness check.
  return r.getFacialLivenessDiagnostic() == FacialLivenessDiagnostic.Live;
}

3.5.2. Liveness evaluation from a tokenized image

The method to be used is as follows:

SelphIDFacialLivenessResult r = evaluatePassiveLivenessWithTokenBuffer(
  byte[] tokenBuffer
);

The steps required to perform a liveness evaluation from a tokenized image are as follows (example method):

Previous steps

  • Obtain the tokenized image of the user (base64 string) using the Selphi widget.

  • Send the base64 string image to the server.

boolean isAlive(String tokenBase64) {
  // Decode the base64 token to get the byte array.
  byte[] tokenBuffer = Base64.getDecoder().decode(
    tokenBase64.getBytes());

  // Evaluate.
  // Check section 3.1. to initialize SelphIDVerifier.
  SelphIDFacialLivenessResult r = verifier.evaluatePassiveLivenessWithTokenBuffer(
    tokenBuffer);

  // Return whether the user passes the liveness check.
  return r.getFacialLivenessDiagnostic() == FacialLivenessDiagnostic.Live;
}

3.6. Methods for identification 1:N

In order to carry out 1:N searches to identify a certain biometric pattern against a database, and thus obtain the set of candidates with the highest similarity percentages, the integrator has different methods available in the SelphIDIdentifier class.

public class SelphIDIdentifier {
  // True if gallery has been created correctly.
  boolean createGallery(String galleryID) {}

  // True if gallery has been created correctly.
  boolean createGalleryWithPath(String galleryID, String galleryFilePath) {}

  // True if gallery has been removed correctly.
  boolean clearGallery(String galleryID) {}

  // Get all galleries for identification.
  String[] getAllGalleries() {}

  // True if all galleries have been removed correctly.
  boolean clearAllGalleries() {}

  // Feed a gallery with a new template using SelphIDFacialExtractionResult
  int enrollWithExtractionResult(String galleryID, String templateID, SelphIDFacialExtractionResult extractionResult) {}

  // Feed a gallery with a new template using byte[] template
  int enrollWithFacialTemplate(String galleryID, String templateID, byte[] facialTemplateBuffer) {}

  // Check if a person exists in the gallery via SelphIDFacialExtractionResult.
  SelphIDIdentifierResult identifyWithExtractionResult(String galleryID, SelphIDFacialExtractionResult extractionResult, SelphIDIdentifierOptions identifierOptions) {}

  // Check if a person exists in the gallery via byte[] template.
  SelphIDIdentifierResult identifyWithFacialTemplate(String galleryID, byte[] facialTemplateBuffer, SelphIDIdentifierOptions identifierOptions) {}

  SelphIDFacialGalleryInfo getGalleryInfo(String galleryID) {}

  boolean removeWithGalleryIndex(String galleryID, int templateIndex) {}
}

3.6.1. Enrollment

As a preliminary step to performing identification operations, a gallery should be created and the set of biometric patterns on which the search will be performed should be enrolled in it. To enroll templates into the gallery you can use the following methods:

public class SelphIDIdentifier {
  int enrollWithExtractionResult(
    String galleryID,
    String templateID,
    SelphIDFacialExtractionResult extractionResult
  ) {}

  int enrollWithFacialTemplate(
    String galleryID,
    String templateID,
    byte[] facialTemplateBuffer
  ) {}
}

In both methods, the gallery identifier to which the template is to be added and a logical identifier tied to the application's business logic are specified, so that the candidates obtained as a result of a gallery search can be linked back to the application.

// 1. Create the gallery.
String galleryID = "my-new-id-or-uuid-1";
identifier.createGallery(galleryID);

// 2. Generate the facial templates from the images if you do not have them.
byte[] image1 = ...;
byte[] image2 = ...;
SelphIDVerifierOptions options = new SelphIDVerifierOptions();

SelphIDFacialExtractionResult extractionResult1 = verifier.extractFacialWithImageBuffer(
  image1, options);

SelphIDFacialExtractionResult extractionResult2 = verifier.extractFacialWithImageBuffer(
  image2, options);

// 3. The obtained facial templates will be added to the gallery by using any of the following methods of the SelphIDIdentifier class (you need to initialize it).

//    3.1. Create / Generate the template ID that each template will have in the gallery.
String templateID1 = "my-new-id-or-uuid-2";
String templateID2 = "my-new-id-or-uuid-3";

//    3.2. Insert the templates into the gallery with enrollWithExtractionResult.
identifier.enrollWithExtractionResult(galleryID, templateID1, extractionResult1);

//    3.3. Insert the templates into the gallery with enrollWithFacialTemplate.
byte[] facialTemplateBuffer = extractionResult2.getFacialTemplate();

identifier.enrollWithFacialTemplate(galleryID, templateID2, facialTemplateBuffer);

Note

In order to register a biometric template in a gallery, it is necessary to generate the equivalent facial pattern or FacialTemplate by using any of the following methods of the SelphIDVerifier class: extractFacialWithRawTemplate or extractFacialWithImageBuffer.

Check 3.2.1. Facial extraction using an image section.

3.6.2. Identification

The search process will be performed using either of the following two methods of the SelphIDIdentifier class:

public class SelphIDIdentifier {
  SelphIDIdentifierResult identifyWithExtractionResult(
    String galleryID,
    SelphIDFacialExtractionResult extractionResult,
    SelphIDIdentifierOptions identifierOptions
  ) {}

  SelphIDIdentifierResult identifyWithFacialTemplate(
    String galleryID,
    byte[] facialTemplateBuffer,
    SelphIDIdentifierOptions identifierOptions
  ) {}
}

The gallery identifier and a SelphIDIdentifierOptions object will be specified with the following search options:

  • MaxIdentificationCandidates, to indicate the maximum number of returned candidates, ordered from highest to lowest similarity percentage.

  • MinIdentificationSimilarity, to indicate the minimum similarity threshold in the comparison for a candidate to be included in the result set.

Note

By default, MaxIdentificationCandidates will be 20 and MinIdentificationSimilarity will be 0f.

Now we will see an example with both methods:

// 1. For an existing gallery we only need its ID and the image to search for that person in the gallery.
String galleryID = "galleryID";
byte[] imageToSearch = ...;
SelphIDIdentifierOptions identifierOptions = new SelphIDIdentifierOptions();

// 2. Configure the SelphIDIdentifierOptions.
identifierOptions.setMaxIdentificationCandidates(5);
identifierOptions.setMinIdentificationSimilarity(0.6f);

// 3. Extract the facial template.
SelphIDVerifierOptions verifierOptions = new SelphIDVerifierOptions();

SelphIDFacialExtractionResult extractionResult = verifier.extractFacialWithImageBuffer(
  imageToSearch, verifierOptions);

// 4. Search with identifyWithExtractionResult, or with identifyWithFacialTemplate if you already have the facial template.
SelphIDIdentifierResult result = identifier.identifyWithExtractionResult(
  galleryID, extractionResult, identifierOptions);

// Alternatively, starting from the facial template:
byte[] facialTemplateBuffer = extractionResult.getFacialTemplate();
SelphIDIdentifierResult resultFromTemplate = identifier.identifyWithFacialTemplate(
  galleryID, facialTemplateBuffer, identifierOptions);

// 5. View the results. We obtain an array with all candidate matches; for example, to get the first candidate:
if (result.size() > 0) {
  FacialAuthenticationStatus authStatus = result.getFacialAuthenticationStatus(0);
  float similarity = result.getSimilarity(0);
  String templateId = result.getTemplateID(0);
}

Note

To know more about SelphIDIdentifierResult check 3.9.5. SelphIDIdentifierResult section.

3.6.3. Removal

The remove process consists of blocking a biometric template from a specific gallery so that it is not taken into account in identification processes. This removal neither reduces the size of the gallery nor changes the indexing of the associated biometric templates.

public class SelphIDIdentifier {
  boolean removeWithGalleryIndex(
    String galleryID,
    int templateIndex
  ) {}
}

The following example shows the remove process:

// As an example, we are going to remove a specific user.
String galleryID = "galleryID";
byte[] facialTemplateBuffer = ...;
SelphIDIdentifierOptions identifierOptions = new SelphIDIdentifierOptions();
identifierOptions.setMaxIdentificationCandidates(1);
identifierOptions.setMinIdentificationSimilarity(0.5f);

// Search for the matching template in the gallery.
SelphIDIdentifierResult identifierResult = identifier.identifyWithFacialTemplate(
  galleryID, facialTemplateBuffer, identifierOptions);

// If a candidate has been found and it is a match, it will be removed.
if (identifierResult.size() > 0 &&
  identifierResult.getFacialAuthenticationStatus(0) == FacialAuthenticationStatus.Positive) {
  // Get the index of the template.
  int templateIndex = identifierResult.getGalleryIndex(0);

  boolean removed = identifier.removeWithGalleryIndex(galleryID, templateIndex);

  if (removed) {
    System.out.println("Template correctly removed.");
  }
}

3.6.4. Gallery query

Querying a gallery serves as a preliminary step for other gallery operations.

The process is done with the following method of the SelphIDIdentifier class:

public class SelphIDIdentifier {
  SelphIDFacialGalleryInfo getGalleryInfo(
    String galleryID
  ) {}
}

A gallery identifier must be specified to query the corresponding gallery. The SelphIDFacialGalleryInfo object returned is as follows:

public class SelphIDFacialGalleryInfo {
  // Gets whether the gallery is valid.
  boolean getValidGallery();

  // Gets the gallery ID.
  String getGalleryID();

  // Gets the gallery size.
  int getGallerySize();

  // Obtains the template ID at the given index.
  String getTemplateID(int index);

  // Obtains all indices from the gallery that match an exact templateID value.
  int[] getIndicesWithTemplateID(String templateID);
}
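
For illustration, a minimal sketch of a gallery query, assuming identifier is an initialized SelphIDIdentifier instance as in the previous examples:

// Query an existing gallery and print basic information about it.
SelphIDFacialGalleryInfo info = identifier.getGalleryInfo("galleryID");

if (info.getValidGallery()) {
  System.out.println("Gallery " + info.getGalleryID()
    + " contains " + info.getGallerySize() + " templates.");
}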

3.6.5. Query indices that match a specific templateID

The process of querying all indices from a gallery that match an exact templateID value is done with the following method of SelphIDFacialGalleryInfo, seen in the previous section:

public class SelphIDFacialGalleryInfo {
  int[] getIndicesWithTemplateID(
    String templateID
  );
}

The templateID to be searched for in the gallery must be specified.

To achieve this, a SelphIDFacialGalleryInfo instance with valid data is needed:

String galleryID = "galleryID";
String templateID = "templateID";

SelphIDFacialGalleryInfo galleryInfo = identifier.getGalleryInfo(galleryID);

int[] indicesTemplateID = galleryInfo.getIndicesWithTemplateID(templateID);

3.7. Orchestrator

The orchestrator performs, in a single call, both the facial authentication and the liveness check of the person. The check can be performed with an image and a template, or with two templates.

Important

Proof of life will only be performed after successful authentication.

SelphIDVerifierResult r = verifySelphIDWithImageRawTemplate(
  byte[] imageBuffer,
  byte[] templateTarget,
  SelphIDVerifierOptions options
);

SelphIDVerifierResult r = verifySelphIDWithRawTemplates(
  byte[] templateQuery,
  byte[] templateTarget,
  SelphIDVerifierOptions options
);
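
For illustration, a minimal sketch following the pattern of the previous sections; it assumes verifier is an initialized SelphIDVerifier (see 3.1.) and that both templates arrive from the front end as base64 strings:

boolean isAliveAndMatching(String templateQueryBase64, String templateTargetBase64) {
  // Decode base64 to get the byte array corresponding to each template.
  byte[] templateQuery = Base64.getDecoder().decode(
    templateQueryBase64.getBytes());
  byte[] templateTarget = Base64.getDecoder().decode(
    templateTargetBase64.getBytes());

  // Create the SelphIDVerifierOptions and configure it if needed.
  SelphIDVerifierOptions options = new SelphIDVerifierOptions();

  // Authentication and liveness in a single call.
  SelphIDVerifierResult r = verifier.verifySelphIDWithRawTemplates(
    templateQuery, templateTarget, options);

  // Remember that liveness is only evaluated after a successful authentication.
  return r.getSelphIDFacialAuthenticationResult().getFacialAuthenticationStatus()
      == FacialAuthenticationStatus.Positive
    && r.getSelphIDFacialLivenessResult().getFacialLivenessDiagnostic()
      == FacialLivenessDiagnostic.Live;
}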

Note

For more information about SelphIDVerifierResult check 3.9.6. SelphIDVerifierResult section.

3.8. API Tracking

As of version 4.1.0, SelphID-SDK incorporates the event tracking functionality that allows API activity to be monitored and viewed through a Web interface. The SelphID license will include all the data necessary for access to the platform (in single-tenant version).

All methods from previous versions of the API remain in effect. Duplicate methods have been added that receive a new parameter, called extraData, containing tokenized data with the essential information needed to communicate with the tracking server.

3.8.1. Multi-tenant version

As of version 4.3.0, the SelphID-SDK allows two modes of operation regarding the recording of events through API-Tracking:

  • Single-tenant: The connection data to the API Tracking service is encrypted within the SelphID-SDK license and will be used in all calls to the service.

  • Multi-tenant: The connection data will be received in each call through the mobile application. This will allow the SDK to log events to different servers depending on the calling client.

Important

To enable multi-tenant mode, tracking data must not be included in the SelphID-SDK license. Otherwise, single-tenant mode will be activated when the backend is started.

As of version 5.0.0, it is possible to switch between Single-tenant and Multi-tenant modes at runtime, using these methods of SelphIDVerifier:

void setMultitenantMode(boolean multiTenant);

boolean isMultitenantEnabled();
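
For illustration, a minimal sketch, assuming verifier is an initialized SelphIDVerifier:

// Switch to multi-tenant mode at runtime.
verifier.setMultitenantMode(true);

if (verifier.isMultitenantEnabled()) {
  System.out.println("API Tracking is running in multi-tenant mode.");
}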

3.8.2. Register

The following method decrypts the tokenized data contained in rawDocumentBuffer, such as the OCR data, document images, etc. The data necessary for the tracking service must be sent in the extraData parameter.

SelphIDDocumentResult r = extractDocumentWithRawDocument(
  byte[] rawDocumentBuffer,
  byte[] extraData,
  SelphIDVerifierOptions options
);
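
For illustration, a minimal sketch of this call; the variable names tokenOCRBase64 and extraDataBase64 are hypothetical and stand for tokens received from the front end as base64 strings:

// Decode the document token and the tracking token.
byte[] rawDocumentBuffer = Base64.getDecoder().decode(
  tokenOCRBase64.getBytes());
byte[] extraData = Base64.getDecoder().decode(
  extraDataBase64.getBytes());

// Create the SelphIDVerifierOptions and configure it if needed.
SelphIDVerifierOptions options = new SelphIDVerifierOptions();

// Same extraction as in 3.4.1., now also reporting to the tracking service.
SelphIDDocumentResult r = verifier.extractDocumentWithRawDocument(
  rawDocumentBuffer, extraData, options);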

The following methods receive an encrypted image rawTemplateBufferTarget or an unencrypted imageBufferTarget, to match against the image extracted from the rawDocumentBufferQuery document. In both cases, the data necessary for the tracking service must be sent in the extraData parameter.

SelphIDFacialAuthenticationResult r = authenticateFacialWithRawDocumentRawTemplate(
  byte[] rawDocumentBufferQuery,
  byte[] rawTemplateBufferTarget,
  byte[] extraData,
  SelphIDVerifierOptions options
);

SelphIDFacialAuthenticationResult r = authenticateFacialWithRawDocumentImage(
  byte[] rawDocumentBufferQuery,
  byte[] imageBufferTarget,
  byte[] extraData,
  SelphIDVerifierOptions options
);

3.8.3. SelphIDVerifierOptions

The following methods, added to the SelphIDVerifierOptions class, allow the client to attach information to be represented in the tracking servers in the registration use case.

public class SelphIDVerifierOptions {
  void setElapsedTimeAllowed(int elapsedTimeAllowed);
  void setFacialDetectionType(int facialDetectionType);
  void setMinFaceAbs(int minFaceAbs);
  void setMinFaceRel(float minFaceRel);
  void setMinIODThreshold(int minIODThreshold);
  void setMaxPoseThreshold(int maxPoseThreshold);
  void setMinimumFacialQuality(int minimumFacialQuality);
  void setAnalyticsDetection(boolean analyticsDetection);
  void setFacialTemplateRawExtraction(boolean facialTemplateRawExtraction);
  void setOptionalDataClientInformation(String json);
}

The input parameter for setOptionalDataClientInformation method must be a well-formed JSON that will accept the following keys:

{
  "address": "String",
  "birthDate": "String",
  "birthPlace": "String",
  "city": "String",
  "documentNumber": "String",
  "name": "String",
  "nationality": "String",
  "surname": "String"
}

Note

Any other key will be ignored. More support for different keys will be provided in the future.
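
For illustration, a hypothetical call could look like this (all values are illustrative):

SelphIDVerifierOptions options = new SelphIDVerifierOptions();

// Attach client information to be shown on the tracking servers.
options.setOptionalDataClientInformation(
  "{ \"name\": \"John\", \"surname\": \"Doe\", \"documentNumber\": \"12345678Z\" }");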

3.8.4. Authentication

The following methods are available to perform authentication. The rawTemplateBuffer parameters receive the encrypted image, and the imageBufferQuery parameter receives the unencrypted image. In both cases, the data necessary for the tracking service must be sent in the extraData parameter.

SelphIDFacialAuthenticationResult r = authenticateFacialWithRawTemplates(
  byte[] rawTemplateBufferQuery,
  byte[] rawTemplateBufferTarget,
  byte[] extraData,
  SelphIDVerifierOptions options
);

SelphIDFacialAuthenticationResult r = authenticateFacialWithImageRawTemplate(
  byte[] imageBufferQuery,
  byte[] rawTemplateBufferTarget,
  byte[] extraData,
  SelphIDVerifierOptions options
);

3.8.5. Passive liveness

The following methods are available for performing the passive liveness test. The tokenBuffer parameter receives the encrypted image and imageBuffer the unencrypted one. In both cases, the data necessary for the tracking service must be sent in the extraData parameter.

SelphIDFacialLivenessResult r = evaluatePassiveLivenessWithTokenBuffer(
  byte[] tokenBuffer,
  byte[] extraData
);

SelphIDFacialLivenessResult r = evaluatePassiveLivenesWithImageBuffer(
  byte[] imageBuffer,
  byte[] extraData
);

3.8.6. Custom events

As of version 4.5.0, SelphID makes it possible to send custom events to the Tracking API that are not linked to any internal operation of the SDK. All custom event operations return a status code and message inside a SelphIDApiTrackingResult object.

The following method allows you to send a facial authentication event to the API Tracking service, using the authStatus and similarity parameters. The image(s) involved in the authentication may also be recorded in the API Tracking service. Both images are optional, accepting null or empty buffers.

SelphIDApiTrackingResult r = authenticateFacialTrackingEvent(
  TrackingFamily family,
  FacialAuthenticationStatus authStatus,
  float similarity,
  String source,
  byte[] imageBufferQuery,
  byte[] imageBufferTarget,
  byte[] extraData
);

Possible values of TrackingFamily:

public enum TrackingFamily {
  OnBoarding,
  Authentication
}
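
For illustration, a minimal sketch of recording an externally performed facial authentication; it assumes the method is exposed by the initialized SelphIDVerifier and that extraData is a token received from the front end:

SelphIDApiTrackingResult r = verifier.authenticateFacialTrackingEvent(
  TrackingFamily.Authentication,
  FacialAuthenticationStatus.Positive,
  0.87f,                  // illustrative similarity value
  "my-backend-service",   // hypothetical source name
  null,                   // imageBufferQuery is optional
  null,                   // imageBufferTarget is optional
  extraData);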

The following method allows you to send a custom OCR event to the API Tracking server.

SelphIDApiTrackingResult r = ocrTrackingEvent(
  String ocrDataJson,
  String source,
  byte[] extraData
);

ocrDataJson is a key-value dictionary in well-formed JSON. Any key name with any string value is accepted:

{
  "OCR_Key1": "OCR_Value1",
  "OCR_Key2": "OCR_Value2",
  "OCR_Key3": "OCR_Value3",
  "OCR_Key4": "OCR_Value4",
  "OCR_Key5": "OCR_Value5"
}
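
For illustration, a minimal sketch with illustrative keys and values, assuming the method is exposed by the initialized SelphIDVerifier and that extraData was received from the front end:

String ocrDataJson = "{ \"Name\": \"John\", \"DocumentNumber\": \"12345678Z\" }";

SelphIDApiTrackingResult r = verifier.ocrTrackingEvent(
  ocrDataJson, "my-ocr-service", extraData);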

The following method allows you to send a custom SECURITY_INFO_DATA event to the API Tracking server:

SelphIDApiTrackingResult r = securityInfoTrackingEvent(
  String securityDataJson,
  boolean succeed,
  String source,
  byte[] extraData
);

securityDataJson contains the security data in JSON format:

{
  "Security_Key1": "Security_Value1",
  "Security_Key2": "Security_Value2",
  "Security_Key3": "Security_Value3",
  "Security_Key4": "Security_Value4",
  "Security_Key5": "Security_Value5"
}

succeed is a boolean indicating whether the security data was obtained successfully, and source is the name of the service or source of the security data.

For all custom event operations, we can modify the encrypted eventSource field inside the extraData token.

byte[] extraData2 = setTrackingEventSource(
  String eventSource,
  byte[] extraData
);

Within the custom events, the operation can be closed by means of the following method:

SelphIDApiTrackingResult r = finishTrackingEvent(
  TrackingFamily family,
  OperationResultStatus status,
  OperationResultReason reason,
  byte[] extraData
);

This method will register the Operation result and Step change finish events in the API Tracking service, which close the operation.
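
For illustration, a minimal sketch closing an onboarding operation that completed successfully, assuming the method is exposed by the initialized SelphIDVerifier and that extraData is a token received from the front end:

SelphIDApiTrackingResult r = verifier.finishTrackingEvent(
  TrackingFamily.OnBoarding,
  OperationResultStatus.Succeeded,
  OperationResultReason.None,
  extraData);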

Important

The meaning of these attributes, including the enums, is defined by the user.

Possible values of OperationResultStatus:

public enum OperationResultStatus {
  Succeeded,
  Denied,
  Error,
  Cancelled
}

Possible values of OperationResultReason:

public enum OperationResultReason {
  // We do not specify a concrete reason.
  None,

  // Some error using the SDK.
  InternalError,

  // The operation was canceled by the user.
  CancelledByUser,

  // The set timeout has expired.
  Timeout,

  // Document validation failed.
  DocumentValidationNotPassed,

  // Error during document validation.
  DocumentValidationError,

  // Authentication failed.
  AuthenticationNotPassed,

  // Error during authentication.
  AuthenticationError,

  // Liveness failed.
  LivenessNotPassed,

  // Error during liveness.
  LivenessError
}

3.8.7. Proxy server

Since SelphID-SDK version 4.5.5, it is possible to send requests to the Tracking API through a proxy server. To configure the proxy parameters, use the following method:

setTrackingProxy(
  String proxyHost,
  int proxyPort,
  String proxyUser,
  String proxyPass
);

To disable the proxy, an empty string must be passed as the proxyHost parameter.
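
For illustration, a minimal sketch with hypothetical proxy values, assuming the method is exposed by the initialized SelphIDVerifier:

// Route Tracking API requests through a proxy.
verifier.setTrackingProxy("proxy.example.com", 8080, "proxyUser", "proxyPass");

// Disable the proxy again by passing an empty host.
verifier.setTrackingProxy("", 0, "", "");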

3.9. Description of API Results

The following properties, in the result classes, allow you to evaluate the results of each of the above-mentioned methods.

3.9.1. SelphIDFacialExtractionResult

It presents different properties to assess the facial extraction result:

public class SelphIDFacialExtractionResult {
  // Indicates whether the extraction was able to process successfully.
  boolean getExtractionOK() {}

  // Biometric pattern of the detected person.
  byte[] getFacialTemplate() {}

  // Value (0, 1) that indicates the degree of reliability of the extraction.
  float getFaceConfidence() {}

  // Location of landmark points detected in the image.
  Point getLeftEye() {}
  Point getRightEye() {}
  Point getChin() {}
  Point getNose() {}
  Point getLeftMouth() {}
  Point getRigthMouth() {}

  // Interocular distance.
  int getIOD() {}

  // Orientation of the face with respect to the camera. Possible values are as follows:
  FacialPose getFacialPose() {}

  // Face Orientation Angles.
  float getYaw() {}
  float getPitch() {}
  float getRoll() {}

  // Value (0, 1) that indicates the quality of the input image.
  float getImageQuality() {}

  // Indicates the quality of the detected face. Possible values are as follows:
  FacialQuality getFacialQuality() {}

  // Indicates if the person wears glasses. Possible values are as follows:
  FacialGlasses getGlasses() {}

  // Indicates the position of the person's lips. Possible values are as follows:
  FacialLips getLips() {}

  // Numerical value indicating the approximate age of the person.
  int getAge() {}

  // Gender of the person. Possible values are as follows:
  FacialGender getGender() {}

  // Emotion perceived in the person. Possible values are as follows:
  FacialEmotion getEmotion() {}

  // Value (0, 100) that indicates the facial hair quantity.
  float getFacialHair() {}

  // Facial geographic origin summary. Possible values are as follows:
  FacialGeographicOrigin getFacialGeographicOrigin() {}

  // Art work summary. Possible values are as follows:
  FacialArtwork getArtwork() {}

  // Value (0, 1) that indicates the probability that the person wears a mask.
  float getFacialMask() {}
}

Possible values of FacialPose:

public enum FacialPose {
  // Unknown or not computed.
  None,

  // Looking straight ahead.
  Frontal,

  // Person looking to the right.
  RightAngled,

  // Person looking to the left.
  LeftAngled
}

Possible values of FacialQuality:

public enum FacialQuality {
  None,
  Bad,
  Regular,
  Good
}

Possible values of FacialGlasses:

public enum FacialGlasses {
  None,
  Eyes,
  Sun
}

Possible values of FacialLips:

public enum FacialLips {
  None,
  Together,
  Apart
}

Possible values of FacialGender:

public enum FacialGender {
  None,
  Male,
  Female
}

Possible values of FacialEmotion:

public enum FacialEmotion {
  None,
  Anger,
  Disgust,
  Fear,
  Joy,
  Neutral,
  Sadness,
  Surprise
}

Possible values of FacialGeographicOrigin:

public enum FacialGeographicOrigin {
  None,
  African,
  European,
  EastAsian,
  SouthAsian,
  LatinAmerican,
  MiddleEastern,
  SoutheastAsian
}

Possible values of FacialArtwork:

public enum FacialArtwork {
  None,
  Human,
  Cartoon,
  Painting
}

3.9.2. SelphIDFacialAuthenticationResult

It presents different properties to evaluate the biometric authentication result:

public class SelphIDFacialAuthenticationResult {
  // Results of a facial authentication process.
  FacialAuthenticationStatus getFacialAuthenticationStatus() {}

  // Similarity value between 0 and 1 representing the similarity between the faces of the two images.
  float getSimilarity() {}
}

Important

To evaluate the result you should use the FacialAuthenticationStatus property; the similarity value is only used for statistical purposes.

Possible values of FacialAuthenticationStatus:

public enum FacialAuthenticationStatus {
  // Biometric authentication could not be performed.
  None,

  // Negative Authentication result.
  Negative,

  // DEPRECATED.
  Uncertain,

  // Positive Authentication result.
  Positive,

  // Biometric authentication could not be performed because the maximum allowed angle between faces of each image provided was exceeded.
  NoneBecausePoseExceeded,

  // Biometric authentication could not be performed because biometric feature extraction could not be performed on any of the images provided.
  NoneBecauseInvalidExtractions
}

3.9.3. SelphIDDocumentResult

It presents different methods to recover the images used in the process and the data read from the document. The methods are as follows:

public class SelphIDDocumentResult {
  // Obtains all the keys associated with each of the read data from the document.
  String[] listDocumentKeys() {}

  // Obtains a certain piece of data read from the document by means of its associated key.
  String getDocumentValue(String key) {}

  // Obtains all the keys associated with each of the read images from the document.
  String[] listImageKeys() {}

  // Obtains a certain image of the document by means of its associated key.
  byte[] getImage(String key) {}

  // Gets the ExtraData key list.
  String[] listExtraDataKeys() {}

  // Gets the ExtraData value for the key provided.
  String getExtraDataValue(String key) {}
}
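
For illustration, a minimal sketch iterating over the images contained in a result r; the output path and file extension are assumptions:

// Save every image contained in the document result to disk.
for (String key : r.listImageKeys()) {
  byte[] imageBytes = r.getImage(key);
  try {
    java.nio.file.Files.write(java.nio.file.Paths.get(key + ".jpg"), imageBytes);
  } catch (java.io.IOException e) {
    System.err.println("Could not save " + key + ": " + e.getMessage());
  }
}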

3.9.4. SelphIDFacialLivenessResult

It presents the FacialLivenessDiagnostic property to evaluate the result of the passive liveness diagnostic.

public class SelphIDFacialLivenessResult {
  // Gets facial liveness diagnostic.
  FacialLivenessDiagnostic getFacialLivenessDiagnostic() {}

  // Gets whether the timestamp is valid.
  boolean getValidTimeStamp() {}
}

Possible values of FacialLivenessDiagnostic:

public enum FacialLivenessDiagnostic {
  // Liveness Diagnostic could not be evaluated.
  None,

  // A fraud condition has been detected, a video or a photograph.
  // DEPRECATED > use "NoLive".
  Spoof,

  // DEPRECATED.
  Uncertain,

  // The subject passes the liveness diagnostic.
  Live,

  // Liveness could not be evaluated because of the low quality of the images used.
  NoneBecauseBadQuality,

  // Liveness could not be evaluated as the face is too close to the camera.
  NoneBecauseFaceTooClose,

  // Liveness could not be evaluated as no faces were detected in the images used.
  NoneBecauseFaceNotFound,

  // Liveness could not be evaluated as very small faces were detected in the images used.
  NoneBecauseFaceTooSmall,

  // Liveness could not be evaluated because the angle between faces was exceeded.
  NoneBecauseAngleTooLarge,

  // Liveness could not be evaluated because of the format of the images used.
  NoneBecauseImageDataError,

  // Liveness could not be evaluated due to an internal error.
  NoneBecauseInternalError,

  // Liveness could not be evaluated due to an error in the processing of the images used.
  NoneBecauseImagePreprocessError,

  // Liveness could not be evaluated because there are too many people at the scene.
  NoneBecauseTooManyFaces,

  // Liveness could not be evaluated as the face is too close to the image borders.
  NoneBecauseFaceTooCloseToBorder,

  // Liveness could not be evaluated as the image used is cropped.
  NoneBecauseFaceCropped,

  // The proof of life could not be evaluated due to a licensing error.
  NoneBecauseLicenseError,

  // Liveness could not be evaluated because the person's face is occluded.
  NoneBecauseFaceOccluded,

  // The image does not correspond to a real person.
  NoLive,

  // Liveness could not be evaluated because the person's eyes are closed.
  NoneBecauseEyesClosed
}

3.9.5. SelphIDIdentifierResult

It presents different methods to retrieve the comparison information of each of the returned candidates as a result of a 1:N search. They are as follows:

public class SelphIDIdentifierResult {
  // Gets the number of candidates obtained in the search.
  int size() {}

  // Gets similarity from result index.
  float getSimilarity(int index) {}

  // Gets gallery index from result index.
  int getGalleryIndex(int index) {}

  // Gets templateID from result index.
  String getTemplateID(int index) {}

  // Gets the status code from the comparison made between the search template and the indicated candidate.
  FacialAuthenticationStatus getFacialAuthenticationStatus(int index) {}

  // Obtains additional information in case the biometric comparison between the search template and the indicated candidate was positive.
  FacialAuthenticationDetail getFacialAuthenticationDetail(int index) {}
}

Possible values of FacialAuthenticationStatus:

public enum FacialAuthenticationStatus {
  // The biometric comparison could not be performed.
  None,

  // The facial patterns do not match.
  Negative,

  // DEPRECATED.
  Uncertain,

  // Facial patterns do match.
  Positive,

  // The biometric comparison could not be performed because the face in some of the captures is located at too high a rotation angle with respect to the camera.
  NoneBecausePoseExceeded,

  // The biometric comparison could not be performed because no face could be detected in any of the captures made.
  NoneBecauseInvalidExtractions
}

Possible values of FacialAuthenticationDetail:

public enum FacialAuthenticationDetail {
  // The comparison was not successful.
  None,

  // The facial patterns do match with a low percentage of similarity.
  PositiveLowSecurityLevel,

  // The facial patterns do match with an average percentage of similarity.
  PositiveMediumSecurityLevel,

  // Facial patterns do match with a high percentage of similarity.
  PositiveHighSecurityLevel
}

3.9.6. SelphIDVerifierResult

It presents different methods to retrieve information about the orchestrator result. They are as follows:

public class SelphIDVerifierResult {
  // Obtains information about the document.
  SelphIDDocumentResult getSelphIDDocumentResult() {}

  // Obtains information about the authentication process.
  SelphIDFacialAuthenticationResult getSelphIDFacialAuthenticationResult() {}

  // Obtains information about the liveness process.
  SelphIDFacialLivenessResult getSelphIDFacialLivenessResult() {}
}

Note

For more information, check sections 3.9.3. SelphIDDocumentResult, 3.9.2. SelphIDFacialAuthenticationResult and 3.9.4. SelphIDFacialLivenessResult.

3.9.7. SelphIDApiTrackingResult

It presents different methods to retrieve information about the API Tracking operation. They are as follows:

public class SelphIDApiTrackingResult {
  // Obtains the HTTP status of the tracking request.
  int getTrackingStatus() {}

  // Obtains the tracking request message.
  String getTrackingMessage() {}
}

3.10. Description of SelphIDException

If an error occurs within SelphID-SDK, a SelphIDException will be thrown.

public class SelphIDException extends Exception {
  // Get the type of exception that occurred.
  SelphIDExceptionType getExceptionType() {}
}
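
For illustration, a minimal sketch of handling initialization errors, following the pattern of section 3.1.:

SelphIDVerifier verifier = new SelphIDVerifier();

try {
  verifier.load();
} catch (SelphIDException e) {
  // Inspect the exception type to react accordingly.
  if (e.getExceptionType() == SelphIDExceptionType.ErrorLicenseExpired) {
    System.err.println("The SelphID license has expired.");
  } else {
    System.err.println("SelphID error: " + e.getExceptionType());
  }
}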

Possible values of SelphIDExceptionType:

public enum SelphIDExceptionType {
  // Error in license content.
  ErrorLicenseContent,

  // License has expired.
  ErrorLicenseExpired,

  // Error in license HostID.
  ErrorLicenseHostID,

  // Error in facial template.
  ErrorFacialTemplate,

  // Error loading library.
  // DEPRECATED.
  ErrorLoadingLibrary,

  // Error in license usage logging.
  ErrorLicenseUsageLogging,

  // Facial image with error or invalid.
  ErrorFacialImage,

  // Error in document data.
  ErrorDocumentData,

  // Error in network license.
  ErrorLicenseNetwork,

  // Error because network license connections have been exceeded.
  ErrorLicenseNetworkConnectionsExceeded,

  // Gallery size reached.
  ErrorLicenseGallerySizeReached,

  // Invalid options.
  ErrorInvalidOptions,

  // Error license is too old.
  ErrorLicenseTooOld,

  // Error because a feature is unavailable.
  ErrorUnavailableFeature,

  // Error because of an incompatible facial template.
  ErrorIncompatibleFacialTemplate,

  // Error when accessing the configuration file.
  ErrorConfigurationFileAccess,

  // Error when accessing API Tracking data.
  ErrorTrackingFileAccess,

  // Error when loading API Tracking configuration file.
  // DEPRECATED.
  ErrorLoadingTrackingFile,

  // SelphID configuration variables are empty (config file or environment variables).
  ErrorConfigVarsEmpty,

  // `FACEPHI_SELPHID_INSTALL_PATH` key is empty.
  ErrorInstallPathEmpty,

  // `FACEPHI_SELPHID_FACIALLIVENESS_PATH_KEY` key is empty.
  ErrorLivenessDataPathEmpty,

  // Error when loading the facial authentication library.
  ErrorLoadingFacialLibrary,

  // Error when loading the liveness library.
  ErrorLoadingLivenessLibrary,

  // Internal error when executing the facial extraction.
  ErrorProcessingFacial,

  // Internal error when executing the liveness check.
  ErrorProcessingLiveness,

  // Error when accessing a gallery on disk.
  ErrorGalleryFile
}