Facial Dysmorphology Novel Analysis
Facial Dysmorphology Novel Analysis (FDNA) is a technology that detects and evaluates dysmorphic facial features, as well as disease-specific recognizable patterns of human malformation, from ordinary facial photographs. It is used by geneticists and pediatricians as a search and reference tool during standard phenotyping. The technology was developed by the company FDNA Inc. into a software and mobile application called Face2Gene.[1]
Background
Rare genetic disorders are collectively not rare at all; they affect almost 8% of the population. To date, more than 1,500 disorders involving craniofacial dysmorphology have been described, and the number is growing. Only a minority of these patients receive a definitive diagnosis, which is crucial for medical intervention and reproductive planning. For those who do reach a definitive genetic diagnosis, the process entails multiple clinical evaluations (starting with a dysmorphic evaluation) and a series of tests to confirm or rule out the clinician's hypothesis, making the entire process long and expensive.
Research with the FDNA technology
Several studies have been conducted to measure and validate the ability of a computerized system incorporating the FDNA technology to aid genetics professionals in the research and investigation of rare genetic diseases, as well as in the study of phenotype–genotype correlations. For example, one study examined the recognition accuracy of the Cornelia de Lange syndrome (CdLS) phenotype with the automated Facial Dysmorphology Novel Analysis technology. In the first experiment, 2D facial images of CdLS patients carrying either an NIPBL or an SMC1A gene mutation, as well as of non-CdLS patients, all previously assessed by dysmorphologists, were evaluated by the FDNA technology; the average detection rate of the experts was 77%, while the system's detection rate was 87%. In the second experiment, when a new set of NIPBL, SMC1A, and non-CdLS patient photos was evaluated, the detection rate increased to 94%. The results of both experiments indicated that the system's detection rate was comparable to that of dysmorphology experts, suggesting that such technologies may be useful tools in a clinical setting.[2] The accuracy of the technology has been studied several times and presented at several conferences.[3][4]
Image analysis and training
The images analyzed by the FDNA technology are 2D, and the analysis consists of five steps. First, the frame of the face is located using a Haar-based cascade face detection algorithm.[5] Then, within the detection frame, 130 fiducial facial markers are located using similar local image detectors, trained individually for each point. From the located anatomical points, various local properties, such as ratios of distances, and local image descriptors of the face are computed. These measurements are used in combination with statistical models called Bayesian networks to indicate the presence of dysmorphic features and to evaluate the extent of similarity to the gestalt associated with each of the many genetic syndromes the system is trained to identify. In addition, local image information is integrated to provide the appearance or "gestalt" description of the face; specifically, vectors of local binary patterns are used to capture the appearance of the entire face. Lastly, a mask depicting the characteristic appearance of each syndrome is created. To train the system, images of patients diagnosed with relevant genetic syndromes are collected and analyzed, and the de-identified visual information extracted from the images is stored in a database. Training images are obtained through collaborations with healthcare providers and from public scientific resources. Once the system has been sufficiently trained to recognize the visual markers of a syndrome, a new syndrome classifier is deployed and added to the set of existing classifiers. Any new image is analyzed by all existing classifiers, and similarity to each syndrome phenotype is evaluated by each deployed classifier.[5]
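The following Python sketch illustrates the general kind of pipeline described above, using off-the-shelf tools (an OpenCV Haar cascade for face detection and scikit-image local binary patterns for the appearance vector). It is not FDNA's implementation; the image size, grid layout, and LBP parameters are illustrative assumptions.

```python
# Minimal sketch of the kind of pipeline described above, NOT FDNA's actual
# implementation: Haar-cascade face detection followed by a grid of local
# binary pattern (LBP) histograms as a crude "gestalt" descriptor.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def face_gestalt_descriptor(image_path, grid=(8, 8), lbp_points=8, lbp_radius=1):
    """Detect the largest face in a 2D photo and return a concatenated
    per-cell LBP histogram describing its overall appearance."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")

    # Keep the largest detection frame and normalize its size (assumed 128x128).
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))

    # Compute uniform LBP codes, build one histogram per grid cell, and
    # concatenate them into a single appearance ("gestalt") vector.
    lbp = local_binary_pattern(face, lbp_points, lbp_radius, method="uniform")
    n_bins = lbp_points + 2  # number of distinct uniform LBP codes
    ch, cw = 128 // grid[0], 128 // grid[1]
    cells = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = lbp[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins))
            cells.append(hist / max(hist.sum(), 1))
    return np.concatenate(cells)
```

In a system of the kind described, one classifier per syndrome would then be trained on such vectors (together with the fiducial-point measurements), and every new image would be scored independently by each deployed classifier.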
References
- ↑ Manousaki D, Allanson J, Wolf L, Deal C. Characterization of facial phenotypes of children with congenital hypopituitarism and their parents: A matched case-control study. Am J Med Genet Part A. 2015;167A:1525–1533.
- ↑ Basel-Vanagaite L, Wolf L, Orin M, Larizza L, Gervasini C, Krantz ID, Deardorff MA. Clinical Genetics. 2015. doi:10.1111/cge.12716.
- ↑ Haldeman-Englert C, Wolf L. Evaluating the benefit and efficiency of using computer-aided facial dysmorphology novel analysis in the clinical setting. ACMG 2015.
- ↑ Akalin I, Yilmaz S. Efficiency of computer-aided facial dysmorphology analysis in the medical genetics clinic. ESHG 2015.
- ↑ Basel-Vanagaite L, Wolf L, Orin M, Larizza L, Gervasini C, Krantz ID, Deardorff MA. Clinical Genetics. 2015. doi:10.1111/cge.12716.