Patents by Inventor Hamid Hekmatian

Hamid Hekmatian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11398013
    Abstract: A novel GAN is trained to predict high-fidelity synthetic images from low-quality input dental images. The GAN further takes anatomic masks as inputs with each image, the masks labeling pixels of the image corresponding to dental features. The GAN includes an encoder-decoder generator with semantically aware normalization between stages of the decoder according to the masks. The predicted synthetic dental image and an unpaired dental image are evaluated by a first discriminator of the GAN to obtain a realism estimate. The synthetic image and an unpaired dental image may be processed using a pretrained dental encoder to obtain a perceptual loss. The GAN is trained with the realism estimate, perceptual loss, and L1 loss. Utilization may include inputting noisy, low-contrast, low-resolution, blurry, or degraded dental images and outputting high-resolution, denoised, high-contrast, deobfuscated, and sharp dental images.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: July 26, 2022
    Assignee: Retrace Labs
    Inventors: Vasant Kearney, Hamid Hekmatian, Ali Sadat
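    The training objective in this abstract combines a realism estimate, a perceptual loss, and an L1 loss. Below is a minimal, hypothetical sketch of how such a composite generator loss might be assembled, assuming PyTorch; the generator, discriminator, dental_encoder, target image, and loss weights are placeholders for illustration, not the patented implementation.
    ```python
    import torch.nn.functional as F

    def generator_loss(generator, discriminator, dental_encoder,
                       low_quality, masks, target, unpaired_real,
                       w_adv=1.0, w_perc=10.0, w_l1=100.0):
        """Combine realism estimate, perceptual loss, and L1 loss (weights are assumed)."""
        synthetic = generator(low_quality, masks)      # mask-conditioned generator

        # Realism estimate: discriminator score for the synthetic image.
        adv_loss = -discriminator(synthetic, masks).mean()

        # Perceptual loss: feature distance under the pretrained dental encoder.
        perc_loss = F.l1_loss(dental_encoder(synthetic), dental_encoder(unpaired_real))

        # Pixel-wise L1 loss against a high-quality target image.
        l1_loss = F.l1_loss(synthetic, target)

        return w_adv * adv_loss + w_perc * perc_loss + w_l1 * l1_loss
    ```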
  • Patent number: 11366985
    Abstract: In medicine and dentistry, image quality affects computer vision accuracy. However, some problems are more tolerant of noise depending on disease severity and radiographic obviousness. There is a need to have a noise estimation model that adapts to each specific domain. A noise estimation model is trained to output a set of domain noise estimates for an input image, each estimate indicating an impact of noise present in the input image on a particular domain, e.g. labeling of a dental feature such as a dental anatomy, pathology, or treatment. The noise estimation model is trained by processing image pairs with a set of machine learning models for a plurality of domains, the image pairs including a raw image and a modified image obtained by adding noise to the raw image. Outputs of the set of machine learning models for the raw and modified images are compared to obtain measured noise metrics. The noise estimation model processes the modified image and is trained to estimate noise metrics.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: June 21, 2022
    Assignee: Retrace Labs
    Inventors: Vasant Kearney, Hamid Hekmatian, Ali Sadat
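    As a rough illustration of the training-data construction described in this abstract, the sketch below runs a set of per-domain models on a raw image and a noise-corrupted copy and uses the output disagreement as the measured noise metric. PyTorch, additive Gaussian noise, and an L1 disagreement measure are all assumptions.
    ```python
    import torch
    import torch.nn.functional as F

    def measured_noise_metrics(domain_models, raw_image, noise_std=0.05):
        """Compare each domain model's output on a raw image vs. a noisy copy."""
        modified = raw_image + noise_std * torch.randn_like(raw_image)
        metrics = []
        for model in domain_models:      # one model per domain (anatomy, pathology, treatment)
            with torch.no_grad():
                out_raw = model(raw_image)
                out_mod = model(modified)
            # Output disagreement is the measured impact of noise on this domain.
            metrics.append(F.l1_loss(out_mod, out_raw).item())
        # The modified image is the input, and the metrics are the regression target,
        # when training the noise estimation model.
        return modified, torch.tensor(metrics)
    ```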
  • Patent number: 11367188
    Abstract: A GAN is trained to process input images and produce a synthetic dental image. The GAN further takes masks as inputs with each image, the masks labeling pixels of the image corresponding to dental features (anatomy and/or treatments). The GAN includes an encoder-decoder with normalization between stages of the decoder according to the masks. A synthetic image and an unpaired dental image are evaluated by a first discriminator of the GAN to obtain a realism estimate. The synthetic image and an unpaired dental image may be processed using a pretrained dental encoder to obtain a perceptual loss. The GAN is trained with the realism estimate and perceptual loss. Utilization may include modifying a mask for an input image to include or exclude a shape of a feature such that the synthetic image includes or excludes a dental feature.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: June 21, 2022
    Assignee: Retrace Labs
    Inventors: Vasant Kearney, Hamid Hekmatian, Stephen Chan, Ali Sadat
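    The mask-editing step at the end of this abstract, adding or removing a feature shape so the synthetic image includes or excludes that feature, can be pictured with a small hypothetical example; the integer label values and NumPy representation are assumptions.
    ```python
    import numpy as np

    def exclude_feature(mask, feature_label, background_label=0):
        """Remove a labeled feature (e.g., a restoration) from the anatomic mask."""
        edited = mask.copy()
        edited[edited == feature_label] = background_label
        return edited

    def include_feature(mask, feature_shape, feature_label):
        """Paint a boolean feature shape into the mask."""
        edited = mask.copy()
        edited[feature_shape] = feature_label
        return edited

    # The edited mask and the input image would then be passed to the trained
    # generator, e.g. synthetic = generator(image, edited_mask).
    ```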
  • Patent number: 11357604
    Abstract: A comprehensive dental readiness platform is presented. Dental patient data including an image, proposed treatments, and a dental form are received and processed by first machine learning models to obtain clinical findings and predicted values for fields of the dental form. The clinical findings and other results are processed by a second machine learning model to obtain predictions of a future dental condition of a patient. The second machine learning model utilizes an ensemble of Transformer Neural Networks, Long-Short-Term-Memory Networks, Convolutional Neural Networks, and Tree-Based Algorithms to predict the dental readiness classification, dental readiness durability, dental readiness error, dental emergency likelihood, prognosis, and alternative treatment options.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: June 14, 2022
    Assignee: Retrace Labs
    Inventors: Vasant Kearney, Hamid Hekmatian, Wenxiang Deng, Ming Ted Wong, Ali Sadat
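    The abstract names an ensemble of Transformer, LSTM, CNN, and tree-based models but does not state how their outputs are combined; simple probability averaging, shown below, is one hypothetical combination, with the member models and their input types assumed for illustration.
    ```python
    import numpy as np

    def ensemble_predict(transformer_model, lstm_model, cnn_model, tree_model,
                         sequence_features, image, tabular_features):
        """Average per-model class probabilities for a dental readiness prediction."""
        predictions = [
            transformer_model.predict_proba(sequence_features),
            lstm_model.predict_proba(sequence_features),
            cnn_model.predict_proba(image),
            tree_model.predict_proba(tabular_features),   # e.g., gradient-boosted trees
        ]
        return np.mean(predictions, axis=0)               # ensemble class probabilities
    ```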
  • Publication number: 20220180447
    Abstract: Patient meta information, narratives, charts, and images are processed according to a first machine learning model to determine hidden features relating to the adjudication outcome of a proposed claim packet. Images are concatenated and processed using a second machine learning model to label anatomy including periodontal, endodontic, restorative, orthodontic, decay, and other general clinical findings. The meta information, anatomy labels, and image are concatenated and processed using a third machine learning model to obtain feature measurements, such as decay quantifications and periodontal measurements. The feature measurements, anatomy labels, teeth labels, and image information may be concatenated and input to a fourth machine learning model to obtain a diagnosis for a periodontal, decay, endodontic, orthodontic, or restorative condition.
    Type: Application
    Filed: February 2, 2022
    Publication date: June 9, 2022
    Inventors: Vasant Kearney, Hamid Hekmatian, Ali Sadat
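    The staged flow in this abstract, where each model's outputs are combined with earlier inputs before the next model runs, reduces to straightforward chaining. The sketch below is only a structural outline with hypothetical callables, not the patented models.
    ```python
    def adjudication_pipeline(model_1, model_2, model_3, model_4,
                              meta_info, narratives, charts, images):
        """Chain the four stages, feeding earlier outputs forward."""
        hidden_features = model_1(meta_info, narratives, charts, images)
        anatomy_labels = model_2(images)                            # concatenated images -> anatomy labels
        measurements = model_3(meta_info, anatomy_labels, images)   # e.g., decay quantifications
        diagnosis = model_4(measurements, anatomy_labels, images)   # periodontal/decay/endodontic/...
        return hidden_features, anatomy_labels, measurements, diagnosis
    ```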
  • Patent number: 11276151
    Abstract: Dental images are processed according to a first machine learning model to determine teeth labels. The teeth labels and image are processed using a second machine learning model to label anatomy. The anatomy labels, teeth labels, and image are processed using a third machine learning model to obtain feature measurements, such as pocket depth and clinical attachment level. The feature measurements, labels, and image may be input to a fourth machine learning model to obtain a diagnosis for a periodontal condition. Machine learning models may further be used to reorient, decontaminate, and restore the image prior to processing. A machine learning model may be trained with images and randomly generated masks in order to perform inpainting of dental images with missing information.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: March 15, 2022
    Assignee: Retrace Labs
    Inventors: Vasant Kearney, Hamid Hekmatian, Ali Sadat
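    The last sentence of this abstract mentions training with randomly generated masks so the model can inpaint dental images with missing information. A minimal, hypothetical mask generator is sketched below; the rectangular mask geometry and NumPy representation are assumptions.
    ```python
    import numpy as np

    def random_rect_mask(height, width, max_frac=0.3, rng=None):
        """Return a binary mask with one random rectangle of 'missing' pixels."""
        rng = rng or np.random.default_rng()
        mh = int(rng.integers(1, int(height * max_frac) + 1))
        mw = int(rng.integers(1, int(width * max_frac) + 1))
        top = int(rng.integers(0, height - mh + 1))
        left = int(rng.integers(0, width - mw + 1))
        mask = np.zeros((height, width), dtype=np.float32)
        mask[top:top + mh, left:left + mw] = 1.0
        return mask

    # During training the model sees image * (1 - mask) and learns to reconstruct
    # the masked region, so it can later inpaint images with missing information.
    ```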
  • Publication number: 20220012815
    Abstract: A dental procedure, one or more dental images, and documentation are processed to extract data and label and/or measure dental anatomy or pathologies using a first stage. The extracted data and labels are processed with a second stage to obtain predictions of deficiencies of the dental images and documentation. The predictions may include tasks to remedy the deficiencies, adjudication likelihood, instant payment amount, patient fee, and average time to payment. The first stage and second stage may each include a plurality of machine learning models. The second stage may include a plurality of machine learning models coupled to a concatenation layer. Inputs to the concatenation layer may include outputs of hidden layers of the plurality of machine learning models. The concatenation layer may take the extracted data and labels as inputs.
    Type: Application
    Filed: September 27, 2021
    Publication date: January 13, 2022
    Inventors: Vasant Kearney, Hamid Hekmatian, Wenxiang Deng, Kevin Yang, Ali Sadat
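    The second stage described here feeds hidden-layer outputs of several models into a concatenation layer. A compact, hypothetical PyTorch sketch of that wiring is shown below; the backbone models, feature dimensions, and prediction head are assumptions.
    ```python
    import torch
    import torch.nn as nn

    class ConcatHead(nn.Module):
        """Concatenate hidden features from several backbones and predict deficiencies."""
        def __init__(self, backbones, hidden_dims, num_outputs):
            super().__init__()
            self.backbones = nn.ModuleList(backbones)    # each returns a hidden feature vector
            self.head = nn.Linear(sum(hidden_dims), num_outputs)

        def forward(self, inputs):                       # one input per backbone
            hidden = [backbone(x) for backbone, x in zip(self.backbones, inputs)]
            return self.head(torch.cat(hidden, dim=-1))  # concatenation layer + prediction
    ```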
  • Patent number: 11189028
    Abstract: A machine learning model is trained to predict pixel spacing, distance, and volumetric measurements. Training images are obtained by inpainting around an original image and scaling the inpainted image to obtain the training image having a different pixel spacing than the original image. The machine learning model may include an encoder, a transformer, a first TC layer, and a second TC layer. During training, loss may be obtained from a comparison of the output of the first TC layer to a coarse pixel spacing matrix and a comparison of the output of the second TC layer to a fine pixel spacing matrix. During utilization, the pixel spacing of an image may be obtained using the machine learning model and used to correct the image or measurements obtained from the image.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: November 30, 2021
    Assignee: Retrace Labs
    Inventors: Vasant Kearney, Ashwini Jha, Wenxiang Deng, Hamid Hekmatian, Ali Sadat
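    The two-level training loss described here (coarse and fine pixel spacing matrices) can be expressed compactly, assuming the model returns one output per head; the PyTorch form, the MSE criterion, and the weighting are illustrative assumptions rather than the patented loss.
    ```python
    import torch.nn.functional as F

    def pixel_spacing_loss(model, image, coarse_target, fine_target, w_coarse=0.5):
        """Compare the two head outputs to coarse and fine pixel spacing matrices."""
        coarse_pred, fine_pred = model(image)   # outputs following the first and second TC layers
        loss_coarse = F.mse_loss(coarse_pred, coarse_target)
        loss_fine = F.mse_loss(fine_pred, fine_target)
        return w_coarse * loss_coarse + loss_fine
    ```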
  • Publication number: 20210365736
    Abstract: In medicine and dentistry, image quality affects computer vision accuracy. However, some problems are more tolerant of noise depending on disease severity and radiographic obviousness. There is a need to have a noise estimation model that adapts to each specific domain. A noise estimation model is trained to output a set of domain noise estimates for an input image, each estimate indicating an impact of noise present in the input image on a particular domain, e.g. labeling of a dental feature such as a dental anatomy, pathology, or treatment. The noise estimation model is trained by processing image pairs with a set of machine learning models for a plurality of domains, the image pairs including a raw image and a modified image obtained by adding noise to the raw image. Outputs of the set of machine learning models for the raw and modified images are compared to obtain measured noise metrics. The noise estimation model processes the modified image and is trained to estimate noise metrics.
    Type: Application
    Filed: August 4, 2021
    Publication date: November 25, 2021
    Inventors: Vasant Kearney, Hamid Hekmatian, Ali Sadat
  • Publication number: 20210358604
    Abstract: An interface enables a user to select a block type, place an instance of that block type in a schematic, and connect the instance to other instances. Each block type defines processing of dental data, such as dental images according to any of a plurality of modalities, and defines logic, such as if statements, to determine an output (positive/negative) for instances of that block type. Logic may include Boolean expressions relating to results of the if statements. The logic may operate with respect to data derived from patient data using a machine learning model trained to measure dental anatomy, measure dental pathologies, or diagnose dental conditions. A workflow may be created with instances to determine the appropriateness of a dental treatment.
    Type: Application
    Filed: March 26, 2021
    Publication date: November 18, 2021
    Inventors: Vasant Kearney, Stephen Chan, Jiahong Weng, Hamid Hekmatian, Wenxiang Deng, Ashwini Jha, Ali Sadat
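    A block type that evaluates if-statement logic over machine-learning-derived patient data to a positive/negative output can be pictured with a small, hypothetical data structure; the field names and the example rule below are invented for illustration.
    ```python
    class BlockType:
        """A block type whose logic maps derived patient data to positive/negative."""
        def __init__(self, name, logic):
            self.name = name
            self.logic = logic                  # callable over model-derived data

        def evaluate(self, derived_data):
            return bool(self.logic(derived_data))

    # Example: a block deeming a treatment appropriate when measured pocket depths
    # and detected bone loss satisfy the block's logic.
    srp_block = BlockType(
        "SRP appropriateness",
        lambda d: sum(depth >= 5 for depth in d["pocket_depths_mm"]) >= 4
                  and d["bone_loss_detected"],
    )
    result = srp_block.evaluate(
        {"pocket_depths_mm": [3, 5, 6, 5, 4, 5], "bone_loss_detected": True}
    )  # -> True (positive output)
    ```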
  • Publication number: 20210353393
    Abstract: A comprehensive dental readiness platform is presented. Dental patient data including an image, proposed treatments, and a dental form are received and processed by first machine learning models to obtain clinical findings and predicted values for fields of the dental form. The clinical findings and other results are processed by a second machine learning model to obtain predictions of a future dental condition of a patient. The second machine learning model utilizes an ensemble of Transformer Neural Networks, Long-Short-Term-Memory Networks, Convolutional Neural Networks, and Tree-Based Algorithms to predict the dental readiness classification, dental readiness durability, dental readiness error, dental emergency likelihood, prognosis, and alternative treatment options.
    Type: Application
    Filed: June 15, 2021
    Publication date: November 18, 2021
    Inventors: Vasant Kearney, Hamid Hekmatian, Wenxiang Deng, Ming Ted Wong, Ali Sadat
  • Publication number: 20210358123
    Abstract: A machine learning model is trained to predict pixel spacing, distance, and volumetric measurements. Training images are obtained by inpainting around an original image and scaling the inpainted image to obtain the training image having a different pixel spacing than the original image. The machine learning model may include an encoder, a transformer, a first TC layer, and a second TC layer. During training, loss may be obtained from a comparison of the output of the first TC layer to a coarse pixel spacing matrix and a comparison of the output of the second TC layer to a fine pixel spacing matrix. During utilization, the pixel spacing of an image may be obtained using the machine learning model and used to correct the image or measurements obtained from the image.
    Type: Application
    Filed: April 14, 2021
    Publication date: November 18, 2021
    Inventors: Vasant Kearney, Ashwini Jha, Wenxiang Deng, Hamid Hekmatian, Ali Sadat
  • Publication number: 20210357688
    Abstract: A dental form image may be processed with a segmentation network to identify point labels corresponding to reference point labels of a reference form. The image and the point labels along with a reference image and the reference point labels may be processed by a pair of encoders to obtain offsets. Text blobs may be identified from portions of the image corresponding to the reference point labels, such as with correction according to the offsets. Image portions and text blobs for each field of the dental form may be processed to extract text. Intermediate values of machine learning models used to extract text may be input to a machine learning model estimating a procedure code for the dental form. Machine learning models may be used to correctly identify a provider referenced by the dental form.
    Type: Application
    Filed: December 16, 2020
    Publication date: November 18, 2021
    Inventors: Vasant Kearney, Wenxiang Deng, Ashwini Jha, Hamid Hekmatian, Ali Sadat
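    One step in this abstract is identifying text blobs from portions of the image corresponding to reference point labels, corrected according to the predicted offsets. A small, hypothetical cropping helper is sketched below; the box layout and per-field offsets are assumptions.
    ```python
    import numpy as np

    def crop_fields(form_image, reference_boxes, offsets):
        """Crop each field after shifting its reference box by the predicted offset."""
        crops = {}
        for name, (top, left, height, width) in reference_boxes.items():
            dy, dx = offsets.get(name, (0, 0))        # per-field offset from the encoders
            t, l = top + dy, left + dx
            crops[name] = form_image[t:t + height, l:l + width]   # text blob for extraction
        return crops
    ```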
  • Publication number: 20210118099
    Abstract: A novel GAN is trained to predict high-fidelity synthetic images from low-quality input dental images. The GAN further takes anatomic masks as inputs with each image, the masks labeling pixels of the image corresponding to dental features. The GAN includes an encoder-decoder generator with semantically aware normalization between stages of the decoder according to the masks. The predicted synthetic dental image and an unpaired dental image are evaluated by a first discriminator of the GAN to obtain a realism estimate. The synthetic image and an unpaired dental image may be processed using a pretrained dental encoder to obtain a perceptual loss. The GAN is trained with the realism estimate, perceptual loss, and L1 loss. Utilization may include inputting noisy, low-contrast, low-resolution, blurry, or degraded dental images and outputting high-resolution, denoised, high-contrast, deobfuscated, and sharp dental images.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 22, 2021
    Inventors: Vasant Kearney, Hamid Hekmatian, Ali Sadat
  • Publication number: 20210118129
    Abstract: A GAN is trained to process input images and produce a synthetic dental image. The GAN further takes masks as inputs with each image, the masks labeling pixels of the image corresponding to dental features (anatomy and/or treatments). The GAN includes an encoder-decoder with normalization between stages of the decoder according to the masks. A synthetic image and an unpaired dental image are evaluated by a first discriminator of the GAN to obtain a realism estimate. The synthetic image and an unpaired dental image may be processed using a pretrained dental encoder to obtain a perceptual loss. The GAN is trained with the realism estimate and perceptual loss. Utilization may include modifying a mask for an input image to include or exclude a shape of a feature such that the synthetic image includes or excludes a dental feature.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 22, 2021
    Inventors: Vasant Kearney, Hamid Hekmatian, Stephen Chan, Ali Sadat
  • Patent number: 10929995
    Abstract: Methods and systems may be used for obtaining a high-confidence point-cloud. The method includes obtaining three-dimensional sensor data. The three-dimensional sensor data may be raw data. The method includes projecting the raw three-dimensional sensor data to a two-dimensional image space. The method includes obtaining sparse depth data of the two-dimensional image. The method includes obtaining a predicted depth map. The predicted depth map may be based on the sparse depth data. The method includes obtaining a predicted error map. The predicted error map may be based on the sparse depth data. The method includes outputting a high-confidence point-cloud. The high-confidence point-cloud may be based on the predicted depth map and the predicted error map.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: February 23, 2021
    Assignee: Great Wall Motor Company Limited
    Inventors: Hamid Hekmatian, Samir Al-Stouhi, Jingfu Jin
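    The final step described here, producing a high-confidence point-cloud from the predicted depth map and error map, can be illustrated by keeping only pixels whose predicted error is low and back-projecting them with the camera intrinsics. The threshold and pinhole-camera model below are illustrative assumptions, not the patented method.
    ```python
    import numpy as np

    def high_confidence_point_cloud(depth_map, error_map, K, error_threshold=0.5):
        """Return an (N, 3) point cloud from pixels with low predicted error."""
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        v, u = np.nonzero(error_map < error_threshold)   # confident pixel coordinates
        z = depth_map[v, u]
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=1)
    ```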
  • Publication number: 20200411167
    Abstract: A first machine learning model is trained to output a patient ID, study ID, and/or image view ID. A final layer of the first model is removed to obtain an encoder that outputs feature vectors that may be used to characterize input images. Images with matching patient ID, study ID, and/or image view ID may be identified by comparing feature vectors. The first machine learning model may be a CNN with two fully connected layers, one of which is removed after training. The encoder may also be trained by evaluating triplet loss, comparing feature vectors for matching and non-matching images, or by training an encoder to reproduce a vector used to generate a synthetic image by a generator as part of an adversarial learning routine.
    Type: Application
    Filed: June 25, 2020
    Publication date: December 31, 2020
    Inventors: Vasant Kearney, Ashwini Jha, Hamid Hekmatian, Ali Sadat
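    This abstract describes removing a final layer from a trained CNN to obtain an encoder and then matching images by comparing feature vectors. A minimal PyTorch sketch is below; truncating via children() and using cosine similarity with a fixed threshold are assumptions about one possible realization.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_encoder(trained_cnn):
        """Drop the last layer so the model outputs feature vectors instead of logits."""
        return nn.Sequential(*list(trained_cnn.children())[:-1], nn.Flatten())

    def is_match(encoder, image_a, image_b, threshold=0.9):
        """Declare two images matching (same patient/study/view) if features align."""
        with torch.no_grad():
            feat_a, feat_b = encoder(image_a), encoder(image_b)
        return F.cosine_similarity(feat_a, feat_b, dim=-1).item() >= threshold
    ```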
  • Publication number: 20200410649
    Abstract: Dental images are processed according to a first machine learning model to determine teeth labels. The teeth labels and image are processed using a second machine learning model to label anatomy. The anatomy labels, teeth labels, and image are processed using a third machine learning model to obtain feature measurements, such as pocket depth and clinical attachment level. The feature measurements, labels, and image may be input to a fourth machine learning model to obtain a diagnosis for a periodontal condition. Machine learning models may further be used to reorient, decontaminate, and restore the image prior to processing. A machine learning model may be trained with images and randomly generated masks in order to perform inpainting of dental images with missing information.
    Type: Application
    Filed: June 12, 2020
    Publication date: December 31, 2020
    Inventors: Vasant Kearney, Hamid Hekmatian, Ali Sadat
  • Publication number: 20200402246
    Abstract: Methods and systems may be used for obtaining a high-confidence point-cloud. The method includes obtaining three-dimensional sensor data. The three-dimensional sensor data may be raw data. The method includes projecting the raw three-dimensional sensor data to a two-dimensional image space. The method includes obtaining sparse depth data of the two-dimensional image. The method includes obtaining a predicted depth map. The predicted depth map may be based on the sparse depth data. The method includes obtaining a predicted error map. The predicted error map may be based on the sparse depth data. The method includes outputting a high-confidence point-cloud. The high-confidence point-cloud may be based on the predicted depth map and the predicted error map.
    Type: Application
    Filed: June 24, 2019
    Publication date: December 24, 2020
    Applicant: Great Wall Motor Company Limited
    Inventors: Hamid Hekmatian, Samir Al-Stouhi, Jingfu Jin
  • Patent number: 10867409
    Abstract: Methods and systems for compensating for vehicle system errors. A virtual camera is added to the vehicle sensor configuration and to a coordinate transformation process that attempts to match multiple 3D points associated with a landmark to a detected landmark. The virtual camera is associated with the detected landmark. The 3D world coordinate points may be transformed to a real 3D camera coordinate system and then to a virtual 3D camera coordinate system. The 3D points in the real and virtual camera coordinate frames are projected onto the corresponding 2D image pixel coordinates, respectively. Inclusion of the virtual camera in the coordinate transformation process presents a 3D-to-2D point correspondence problem which may be resolved using camera pose estimation algorithms. An offset compensation transformation matrix may be determined which accounts for errors contributed by mis-calibrated vehicle sensors or systems and applied to all data prior to use by the vehicle control systems.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: December 15, 2020
    Assignee: Great Wall Motor Company Limited
    Inventors: Jingfu Jin, Samir Al-Stouhi, Hamid Hekmatian
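    The 3D-to-2D point correspondence problem mentioned in this abstract is the classic camera pose estimation setting. As a rough illustration only, the sketch below solves it with OpenCV's standard PnP solver and packs the result into a homogeneous transform; using solvePnP and a 4x4 matrix here is an assumption about one possible realization, not the patented pipeline.
    ```python
    import cv2
    import numpy as np

    def offset_compensation_matrix(landmark_points_3d, detected_points_2d, K):
        """Estimate camera pose from 3D-2D correspondences and return a 4x4 transform."""
        ok, rvec, tvec = cv2.solvePnP(
            landmark_points_3d.astype(np.float64),   # Nx3 world points on the landmark
            detected_points_2d.astype(np.float64),   # Nx2 detected pixel coordinates
            K, None,                                 # intrinsics, no lens distortion assumed
        )
        if not ok:
            raise RuntimeError("pose estimation failed")
        R, _ = cv2.Rodrigues(rvec)
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = tvec.ravel()
        return T   # candidate transform for compensating sensor mis-calibration offsets
    ```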