Patents by Inventor Parham Aarabi
Parham Aarabi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11908128
Abstract: Systems and methods process images to determine a skin condition severity analysis and to visualize a skin analysis, such as using a deep neural network (e.g. a convolutional neural network) where the problem is formulated as a regression task with integer-only labels. Auxiliary classification tasks (for example, comprising gender and ethnicity predictions) are introduced to improve performance. Scoring and other image processing techniques may be used (e.g. in association with the model) to visualize results, such as by highlighting the analyzed image. It is demonstrated that the visualization of results, which highlights skin condition affected areas, can also provide perspicuous explanations for the model. A plurality (k) of data augmentations may be made to a source image to yield k augmented images for processing. Activation masks (e.g. heatmaps) produced from processing the k augmented images are used to define a final map to visualize the skin analysis.
Type: Grant
Filed: August 18, 2020
Date of Patent: February 20, 2024
Assignee: L'Oreal
Inventors: Ruowei Jiang, Irina Kezele, Zhi Yu, Sophie Seite, Frederic Antoinin Raymond Serge Flament, Parham Aarabi, Mathieu Perrot, Julien Despois
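The final step described here, combining activation masks from the k augmented images into one visualization map, can be sketched as below. This is a minimal illustration, not the patented method; the function names, the flip-only augmentations, and the min-max normalization are assumptions.

```python
import numpy as np

def aggregate_activation_masks(masks, inverse_transforms):
    """Map each of the k activation masks (heatmaps) back to source-image
    coordinates, then average them into a single final map."""
    restored = [undo(m) for m, undo in zip(masks, inverse_transforms)]
    final = np.mean(restored, axis=0)
    # min-max normalize to [0, 1] so the map can highlight the analyzed image
    return (final - final.min()) / (final.max() - final.min() + 1e-8)

# toy example: k = 3 augmentations (identity, horizontal flip, identity)
base = np.arange(16, dtype=float).reshape(4, 4)
masks = [base, np.fliplr(base), base]             # masks in augmented coordinates
inverses = [lambda m: m, np.fliplr, lambda m: m]  # undo each augmentation
final_map = aggregate_activation_masks(masks, inverses)
```

In practice the restored masks come from running the model on each augmented image; averaging suppresses augmentation-specific noise before the map is overlaid on the source image.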
-
Publication number: 20240037870
Abstract: Methods, apparatus and techniques herein relate to determining directions in GAN latent space and obtaining disentangled controls over GAN output semantics, for example, to enable generating synthesized images for use in training another model or creating an augmented reality. The methods, apparatus and techniques herein, in accordance with embodiments, utilize the gradient directions of auxiliary networks to control semantics in GAN latent codes. It is shown that minimal amounts of labelled data, with sizes as small as 60 samples, can be used, which data can be obtained quickly with human supervision. It is also shown herein, in accordance with embodiments, how to select important latent code channels with masks during manipulation, resulting in more disentangled controls.
Type: Application
Filed: July 28, 2023
Publication date: February 1, 2024
Applicant: L'Oreal
Inventors: Zikun CHEN, Ruowei JIANG, Brendan DUKE, Parham AARABI
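The core idea, stepping a latent code along the gradient of an auxiliary network's score, optionally masked to the important channels, can be sketched as follows. This is a toy illustration under assumed names; the real method operates on trained GAN and auxiliary networks.

```python
import numpy as np

def semantic_direction(latent, aux_grad_fn, channel_mask=None):
    """Direction in GAN latent space from the gradient of an auxiliary
    network's attribute score; masking channels gives more disentangled
    control."""
    g = aux_grad_fn(latent)          # d(score)/d(latent)
    if channel_mask is not None:
        g = g * channel_mask         # keep only the important channels
    n = np.linalg.norm(g)
    return g / n if n > 0 else g

# toy auxiliary gradient: the attribute depends mostly on channels 0 and 1
grad_fn = lambda z: np.array([3.0, 4.0] + [0.1] * (z.size - 2))
z = np.zeros(8)
mask = np.array([1.0, 1.0] + [0.0] * 6)   # select the two dominant channels
direction = semantic_direction(z, grad_fn, mask)
z_edited = z + 0.5 * direction            # step the latent code along it
```

Feeding `z_edited` back through the GAN generator would then change only the targeted semantic, which is the disentanglement the channel mask is for.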
-
Patent number: 11861497
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values, such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
Type: Grant
Filed: December 30, 2021
Date of Patent: January 2, 2024
Assignee: L'OREAL
Inventors: Alex Levinshtein, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
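The mask-image gradient consistency loss mentioned here penalizes mask edges whose orientation disagrees with the image's edges. A minimal numpy sketch of one common formulation (the exact loss in the patent may differ):

```python
import numpy as np

def gradient_consistency_loss(image_gray, mask, eps=1e-8):
    """L = sum(Mmag * (1 - cos^2)) / sum(Mmag), where cos is the alignment
    between normalized image gradients and normalized mask gradients.
    Low when mask edges follow image edges."""
    iy, ix = np.gradient(image_gray)   # np.gradient returns per-axis derivatives
    my, mx = np.gradient(mask)
    imag = np.sqrt(ix**2 + iy**2) + eps
    mmag = np.sqrt(mx**2 + my**2) + eps
    cos = (ix * mx + iy * my) / (imag * mmag)
    return float(np.sum(mmag * (1.0 - cos**2)) / np.sum(mmag))

# a mask whose edge coincides with the image edge yields a lower loss
img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))       # left-to-right ramp
aligned = (img > 0.5).astype(float)                   # vertical mask edge
misaligned = (np.tile(np.linspace(0.0, 1.0, 8)[:, None], (1, 8)) > 0.5)
loss_a = gradient_consistency_loss(img, aligned)
loss_b = gradient_consistency_loss(img, misaligned.astype(float))
```

The aligned mask's edge runs parallel to the image's intensity edge, so `loss_a` is near zero, while the perpendicular edge in `misaligned` is penalized.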
-
Patent number: 11832958
Abstract: There is shown and described a deep learning based system and method for skin diagnostics, as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
Type: Grant
Filed: December 13, 2022
Date of Patent: December 5, 2023
Assignee: L'OREAL
Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
-
Patent number: 11775056
Abstract: This document relates to hybrid eye center localization using machine learning, namely cascaded regression combined with hand-crafted model fitting. There are proposed systems and methods of eye center (iris) detection using a cascaded regressor (a cascade of regression forests), as well as systems and methods for training a cascaded regressor. For detection, the eyes are first located using a facial feature alignment method. The robustness of localization is improved by using both advanced features and powerful regression machinery. Localization is made more accurate by adding a robust circle-fitting post-processing step. Finally, using a simple hand-crafted method for eye center localization, there is provided a method to train the cascaded regressor without the need for manually annotated training data. Evaluation of the approach shows that it achieves state-of-the-art performance.
Type: Grant
Filed: November 10, 2020
Date of Patent: October 3, 2023
Assignee: L'Oreal
Inventors: Alex Levinshtein, Edmund Phung, Parham Aarabi
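The circle-fitting post-processing step can be illustrated with an algebraic least-squares (Kasa) circle fit over candidate iris-boundary points. This is a generic fit, not necessarily the robust variant used in the patent:

```python
import numpy as np

def fit_circle(points):
    """Kasa least-squares circle fit: solve x^2 + y^2 + D*x + E*y + F = 0
    for (D, E, F), then recover center and radius."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# points sampled on a circle centered at (2, 1) with radius 3
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
pts = np.column_stack([2.0 + 3.0 * np.cos(t), 1.0 + 3.0 * np.sin(t)])
cx, cy, r = fit_circle(pts)
```

A robust version would wrap this fit in an outlier-rejection loop (e.g. RANSAC-style resampling) so stray edge points do not skew the iris estimate.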
-
Patent number: 11748888
Abstract: There are provided methods and computing devices using semi-supervised learning to perform end-to-end video object segmentation, tracking respective object(s) from a single-frame annotation of a reference frame through a video sequence of frames. A known deep learning model may be used to annotate the reference frame to provide ground truth locations and masks for each respective object. A current frame is processed to determine current frame object locations, defining object scoremaps as a normalized cross-correlation between encoded object features of the current frame and encoded object features of a previous frame. Scoremaps for each of more than one previous frame may be defined. An Intersection over Union (IoU) function, responsive to the scoremaps, ranks candidate object proposals defined from the reference frame annotation to associate the respective objects with respective locations in the current frame. Pixel-wise overlap may be removed using a merge function responsive to the scoremaps.
Type: Grant
Filed: November 12, 2020
Date of Patent: September 5, 2023
Assignee: L'Oreal
Inventors: Abdalla Ahmed, Irina Kezele, Parham Aarabi, Brendan Duke
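A scoremap defined as a normalized cross-correlation between an object's encoded features and a frame's per-location features reduces, at each location, to cosine similarity. A toy sketch with assumed shapes and names:

```python
import numpy as np

def object_scoremap(obj_feat, frame_feats):
    """Normalized cross-correlation (cosine similarity) between one object's
    feature vector and per-location frame features.
    obj_feat: (C,); frame_feats: (H, W, C); returns an (H, W) scoremap."""
    o = obj_feat / (np.linalg.norm(obj_feat) + 1e-8)
    f = frame_feats / (np.linalg.norm(frame_feats, axis=-1, keepdims=True) + 1e-8)
    return f @ o

# toy current frame whose location (1, 2) matches the object's features
obj = np.array([1.0, 0.0, 0.0, 0.0])
frame = np.zeros((3, 4, 4))
frame[1, 2] = 5.0 * obj                        # strong match (cosine ~1)
frame[0, 0] = np.array([0.5, 0.5, 0.5, 0.5])   # partial match (cosine 0.5)
score = object_scoremap(obj, frame)
best = np.unravel_index(score.argmax(), score.shape)
```

The peak of the scoremap gives the object's most likely location in the current frame; ranking candidate proposals by IoU against such maps is the association step the abstract describes.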
-
Publication number: 20230169571
Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model, such as for a target device, is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
Type: Application
Filed: January 27, 2023
Publication date: June 1, 2023
Applicant: L'OREAL
Inventors: Eric ELMOZNINO, Irina KEZELE, Parham AARABI
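The overall pipeline, in which an unsupervised-trained "teacher" synthesizes paired data and a small run-time "student" is then trained on those pairs with supervised learning, can be sketched with toy stand-ins for both models. All names, and the scalar-gain student fitted by gradient descent, are assumptions for illustration only:

```python
import numpy as np

def make_synthetic_pairs(unpaired_a, teacher_a_to_b):
    """Translate each domain-A image with the large teacher model, yielding
    (input, synthetic ground truth) pairs for supervised training."""
    return [(a, teacher_a_to_b(a)) for a in unpaired_a]

def train_runtime_model(pairs, lr=0.1, steps=200):
    """Tiny stand-in run-time model: a single scalar gain w fitted by
    gradient descent on the squared error over the synthetic pairs."""
    w = 0.0
    for _ in range(steps):
        grad = np.mean([2.0 * (w * a - b) * a for a, b in pairs], axis=0).mean()
        w -= lr * grad
    return w

# the teacher applies a fixed brightening (x1.5); the student recovers it
teacher = lambda img: 1.5 * img
images = [np.full((2, 2), v) for v in (0.2, 0.5, 0.8)]
pairs = make_synthetic_pairs(images, teacher)
w = train_runtime_model(pairs)
```

The point of the design is that only the small student ships to the target device; the expensive teacher is used offline to manufacture the ground truths.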
-
Patent number: 11645497
Abstract: Systems and methods relate to a network model to apply an effect to an image, such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space, where the effect is not applied, to a second continuous domain space, where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as are available as product swatches, as well as a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties.
Type: Grant
Filed: November 14, 2019
Date of Patent: May 9, 2023
Assignee: L'Oreal
Inventors: Eric Elmoznino, He Ma, Irina Kezele, Edmund Phung, Alex Levinshtein, Parham Aarabi
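Conditional batch normalization, which CCBN builds on, predicts the per-channel scale and shift from a condition vector (here, the swatch encoding) instead of learning them as fixed constants. A simplified numpy sketch; the convolutional aspect of CCBN is omitted, and the weight names are assumptions:

```python
import numpy as np

def conditional_batch_norm(x, cond, W_gamma, W_beta, eps=1e-5):
    """Normalize activations, then scale/shift with gamma and beta predicted
    from the condition vector. x: (N, H, W, C); cond: (D,)."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    gamma = cond @ W_gamma   # (D,) @ (D, C) -> (C,) predicted scale
    beta = cond @ W_beta     # (C,) predicted shift
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4, 4, 3))
cond = np.array([1.0, 0.0])                  # e.g. an encoded effect vector
W_gamma = np.array([[2.0, 2.0, 2.0],         # condition 0 -> scale of 2
                    [1.0, 1.0, 1.0]])
W_beta = np.zeros((2, 3))
out = conditional_batch_norm(x, cond, W_gamma, W_beta)
```

Because the scale and shift are functions of the condition vector, a single generator can render any swatch encoding, including ones unseen at training time.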
-
Publication number: 20230123037
Abstract: There is shown and described a deep learning based system and method for skin diagnostics, as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
Type: Application
Filed: December 13, 2022
Publication date: April 20, 2023
Applicant: L'OREAL
Inventors: Ruowei JIANG, Junwei MA, He MA, Eric ELMOZNINO, Irina KEZELE, Alex LEVINSHTEIN, Julien DESPOIS, Matthieu PERROT, Frederic Antoinin Raymond Serge FLAMENT, Parham AARABI
-
Patent number: 11615516
Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model, such as for a target device, is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
Type: Grant
Filed: November 12, 2020
Date of Patent: March 28, 2023
Assignee: L'OREAL
Inventors: Eric Elmoznino, Irina Kezele, Parham Aarabi
-
Patent number: 11553872
Abstract: There is shown and described a deep learning based system and method for skin diagnostics, as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
Type: Grant
Filed: December 4, 2019
Date of Patent: January 17, 2023
Assignee: L'OREAL
Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
-
Publication number: 20220351416
Abstract: Provided are systems and methods to perform colour extraction from swatch images and to define new images using extracted colours. Source images may be classified using a deep learning net (e.g. a CNN) to indicate colour representation strength and drive colour extraction. A clustering classifier is trained to use feature vectors extracted by the net. Separately, pixel clustering is useful when extracting the colour; the cluster count can vary according to classification. Alternatively, heuristics (with or without classification) are useful when extracting. Resultant clusters are evaluated against a set of (ordered) expected colours to determine a match. Instances of standardized swatch images may be defined from a template swatch image and respective extracted colours using image processing. The extracted colour may be presented in an augmented reality GUI, such as a virtual try-on application, and applied to a user image such as a selfie using image processing.
Type: Application
Filed: July 21, 2022
Publication date: November 3, 2022
Applicant: L'Oreal
Inventors: Eric ELMOZNINO, Parham AARABI, Yuze ZHANG
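The pixel-clustering route to colour extraction can be sketched with a plain k-means over RGB pixels, followed by matching the dominant centroid against an ordered list of expected colours. This is a toy illustration; the classifier-driven cluster counts and heuristics of the patented pipeline are not reproduced:

```python
import numpy as np

def extract_dominant_colour(pixels, k=3, iters=20, seed=0):
    """Plain k-means over RGB pixels; return the centroid of the largest
    cluster as the extracted swatch colour. pixels: (N, 3)."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):       # skip empty clusters
                centroids[j] = pixels[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centroids[counts.argmax()]

def match_expected(colour, expected):
    """Index of the nearest colour in an ordered list of expected colours."""
    return int(np.argmin([np.linalg.norm(colour - e) for e in expected]))

# toy swatch: 70% red-ish product pixels, 30% white background
red = np.tile([200.0, 20.0, 30.0], (70, 1))
white = np.tile([250.0, 250.0, 250.0], (30, 1))
pixels = np.vstack([red, white])
dom = extract_dominant_colour(pixels)
idx = match_expected(dom, [np.array([200.0, 20.0, 30.0]),
                           np.array([255.0, 255.0, 255.0])])
```

On a real swatch the background cluster is discarded by size or by the expected-colour match, and the winning centroid becomes the colour applied in the virtual try-on.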
-
Patent number: 11461931
Abstract: Provided are systems and methods to perform colour extraction from swatch images and to define new images using extracted colours. Source images may be classified using a deep learning net (e.g. a CNN) to indicate colour representation strength and drive colour extraction. A clustering classifier is trained to use feature vectors extracted by the net. Separately, pixel clustering is useful when extracting the colour; the cluster count can vary according to classification. Alternatively, heuristics (with or without classification) are useful when extracting. Resultant clusters are evaluated against a set of (ordered) expected colours to determine a match. Instances of standardized swatch images may be defined from a template swatch image and respective extracted colours using image processing. The extracted colour may be presented in an augmented reality GUI, such as a virtual try-on application, and applied to a user image such as a selfie using image processing.
Type: Grant
Filed: April 22, 2020
Date of Patent: October 4, 2022
Assignee: L'Oreal
Inventors: Eric Elmoznino, Parham Aarabi, Yuze Zhang
-
Publication number: 20220284688
Abstract: With convolutional neural networks (CNNs), facial alignment networks (FANs) have achieved significant accuracy on a wide range of public datasets, but this accuracy comes with larger model sizes and expensive computation costs, making such networks infeasible to adapt to real-time applications on edge devices. There is provided a model compression approach for FANs using One-Shot Neural Architecture Search to overcome this problem while preserving performance criteria. Methods and devices provide efficient training and searching (on a single GPU), and the resultant models can be deployed to run in real time in browser-based applications on edge devices, including tablets and smartphones. The compressed models provide comparable cutting-edge accuracy while having a 30 times smaller model size, and can run at 40.7 ms per frame in a popular browser on a popular smartphone and OS.
Type: Application
Filed: March 3, 2022
Publication date: September 8, 2022
Applicant: L'OREAL
Inventors: Zihao CHEN, Zhi Yu, Parham Aarabi
-
Publication number: 20220269947
Abstract: Methods and systems are provided for providing media to a user based on a feature extracted from an input of the user. A communication interface receives the input from the user. Memory is provided for storing a neural network model, media objects and training data, the training data including a first training dataset and a second training dataset. The neural network model is trained in a pre-training step with the first training dataset, followed by a fine-tuning step with the second training dataset, to obtain a multi-layer neural network. The input is provided to the multi-layer neural network to obtain a classification vector. Based on the classification vector, one or more media objects are selected for delivery to the user through the communication interface.
Type: Application
Filed: February 11, 2022
Publication date: August 25, 2022
Inventor: Parham Aarabi
-
Patent number: 11410314
Abstract: Presented is a convolutional neural network (CNN) model for fingernail tracking, and a method design for nail polish rendering. Using current software and hardware, the CNN model and the rendering method run in real time on both iOS and web platforms. The use of Loss Mean Pooling (LMP) coupled with a cascaded model architecture enables pixel-accurate fingernail predictions at up to 640×480 resolution. The proposed post-processing and rendering method takes advantage of the model's multiple output predictions to render gradients on individual fingernails, and to hide the light-colored distal edge when rendering on top of natural fingernails by stretching the nail mask in the direction of the fingernail tip. Teachings herein may be applied to track objects other than fingernails and to apply appearance effects other than color.
Type: Grant
Filed: April 29, 2020
Date of Patent: August 9, 2022
Assignee: L'Oreal
Inventors: Brendan Duke, Abdalla Ahmed, Edmund Phung, Irina Kezele, Parham Aarabi
-
Publication number: 20220198830
Abstract: There are provided methods, devices and techniques to process an image using a deep learning model to achieve continuous effect simulation by a unified network, where a simple (effect class) estimator is embedded into a regular encoder-decoder architecture. The estimator allows learning of model-estimated class embeddings for all effect classes (e.g. progressive degrees of the effect), thus representing the continuous effect information without manual effort in selecting proper anchor effect groups. In an embodiment, given a target age class, there is derived a personalized age embedding which considers two aspects of face aging: 1) a personalized residual age embedding at a model-estimated age of the subject, preserving the subject's aging information; and 2) an exemplar-face aging basis at the target age, encoding the shared aging patterns among the entire population.
Type: Application
Filed: December 22, 2021
Publication date: June 23, 2022
Applicant: L'Oreal
Inventors: Zeqi LI, Ruowei Jiang, Parham Aarabi
-
Publication number: 20220122299
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values, such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
Type: Application
Filed: December 30, 2021
Publication date: April 21, 2022
Applicant: L'OREAL
Inventors: Alex LEVINSHTEIN, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
-
Publication number: 20220108445
Abstract: Systems, methods and techniques provide for acne localization, counting and visualization. An image is processed using a trained model to identify objects. The model may be a deep learning (e.g. convolutional neural) network configured for object classification with a detection focus on small objects. The image may be a frontal or profile facial image, processed end to end. The model identifies and localizes different types of acne. Instances are counted and visualized, such as by annotating the source image. An example annotation is an overlay identifying the type and location of each instance. Counts by acne type assist with scoring. A product and/or service may be recommended in response to the identification of the acne (e.g. the type, localization, counting and/or a score).
Type: Application
Filed: October 1, 2021
Publication date: April 7, 2022
Applicant: L'Oreal
Inventors: Yuze ZHANG, Ruowei Jiang, Parham AARABI
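Counting instances per acne type from a detector's raw output, and building simple overlay annotations, reduces to a tally over the detections. A sketch with an assumed detection tuple format and illustrative type names:

```python
def count_acne_by_type(detections, score_threshold=0.5):
    """Tally detected instances per acne type and build overlay annotations.
    detections: list of (type_name, confidence, (x, y, w, h)) tuples."""
    counts, annotations = {}, []
    for name, conf, box in detections:
        if conf < score_threshold:
            continue                      # discard low-confidence detections
        counts[name] = counts.get(name, 0) + 1
        annotations.append({"type": name, "box": box})
    return counts, annotations

dets = [
    ("comedone", 0.9, (10, 12, 6, 6)),
    ("papule", 0.8, (40, 30, 8, 8)),
    ("comedone", 0.3, (55, 60, 5, 5)),    # below threshold, ignored
    ("comedone", 0.7, (70, 20, 7, 7)),
]
counts, annos = count_acne_by_type(dets)
```

The per-type counts feed a severity score, and the annotation dicts are what an overlay renderer would draw on the source image.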
-
Publication number: 20220075988
Abstract: There are provided systems and methods for facial landmark detection using a convolutional neural network (CNN). The CNN comprises a first stage and a second stage, where the first stage produces initial heat maps for the landmarks and initial respective locations for the landmarks. The second stage processes the heat maps and performs Region of Interest-based pooling while preserving feature alignment to produce cropped features. Finally, the second stage predicts from the cropped features a respective refinement offset for each initial location. Combining each respective initial location with its respective refinement offset provides a respective final coordinate (x, y) for each landmark in the image. The two-stage localization design helps to achieve fine-level alignment while remaining computationally efficient.
Type: Application
Filed: November 17, 2021
Publication date: March 10, 2022
Applicant: L'Oreal
Inventors: Tian Xing LI, Zhi YU, Irina KEZELE, Edmund PHUNG, Parham AARABI
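The two-stage scheme, taking initial locations from heatmap argmaxes and then adding per-landmark refinement offsets to produce the final (x, y) coordinates, can be sketched as follows. Shapes and names are assumptions:

```python
import numpy as np

def initial_locations(heatmaps):
    """Stage 1: each landmark's initial (x, y) is the argmax of its heatmap.
    heatmaps: (L, H, W); returns (L, 2) as (x, y) pairs."""
    flat = heatmaps.reshape(heatmaps.shape[0], -1)
    ys, xs = np.unravel_index(flat.argmax(axis=1), heatmaps.shape[1:])
    return np.stack([xs, ys], axis=1).astype(float)

def refine(initial, offsets):
    """Stage 2: add the predicted per-landmark refinement offset to the
    initial location to get the final coordinate."""
    return initial + offsets

# two 5x5 heatmaps with peaks at (x=3, y=1) and (x=0, y=4)
hm = np.zeros((2, 5, 5))
hm[0, 1, 3] = 1.0
hm[1, 4, 0] = 1.0
init = initial_locations(hm)
final = refine(init, np.array([[0.25, -0.5], [0.1, 0.0]]))
```

The argmax is cheap but quantized to the heatmap grid; the learned sub-pixel offsets are what give the fine-level alignment the abstract claims.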