Patents by Inventor Irina Kezele
Irina Kezele has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11908128
Abstract: Systems and methods process images to determine a skin condition severity analysis and to visualize a skin analysis, such as using a deep neural network (e.g. a convolutional neural network) where the problem was formulated as a regression task with integer-only labels. Auxiliary classification tasks (for example, comprising gender and ethnicity predictions) are introduced to improve performance. Scoring and other image processing techniques may be used (e.g. in association with the model) to visualize results, such as by highlighting the analyzed image. It is demonstrated that the visualization of results, which highlight skin condition affected areas, can also provide perspicuous explanations for the model. A plurality (k) of data augmentations may be made to a source image to yield k augmented images for processing. Activation masks (e.g. heatmaps) produced from processing the k augmented images are used to define a final map to visualize the skin analysis.
Type: Grant
Filed: August 18, 2020
Date of Patent: February 20, 2024
Assignee: L'Oreal
Inventors: Ruowei Jiang, Irina Kezele, Zhi Yu, Sophie Seite, Frederic Antoinin Raymond Serge Flament, Parham Aarabi, Mathieu Perrot, Julien Despois
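The k-augmentation aggregation described in patent 11908128 can be sketched in NumPy. The inverse-transform interface and the min-max normalization below are illustrative assumptions, not the patented method:

```python
import numpy as np

def aggregate_activation_masks(masks, inverses):
    """Map each activation mask back to source-image coordinates and average.

    masks:    list of k 2-D activation maps, one per augmented image.
    inverses: list of k callables that undo each augmentation
              (hypothetical interface; e.g. un-flip, un-crop).
    Returns a final [0, 1] map suitable for overlay as a heatmap.
    """
    restored = [inv(m) for m, inv in zip(masks, inverses)]
    final = np.mean(restored, axis=0)
    rng = final.max() - final.min()
    return (final - final.min()) / rng if rng > 0 else np.zeros_like(final)
```

For example, with a horizontal flip as the augmentation, `np.fliplr` is its own inverse, so the flipped mask is un-flipped before averaging.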
-
Patent number: 11861497
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
Type: Grant
Filed: December 30, 2021
Date of Patent: January 2, 2024
Assignee: L'OREAL
Inventors: Alex Levinshtein, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
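The mask-image gradient consistency loss named in patent 11861497 penalizes hair-mask edges whose orientation disagrees with the underlying image edges. A NumPy sketch of one assumed form of that loss (the patented formulation may differ in detail):

```python
import numpy as np

def gradient_consistency_loss(image_gray, mask, eps=1e-6):
    """Weighted disagreement between image and mask gradient directions.

    image_gray, mask: 2-D arrays of the same shape.
    Small when mask boundaries run along image edges; large when they cross.
    """
    iy, ix = np.gradient(image_gray)
    my, mx = np.gradient(mask)
    i_mag = np.sqrt(ix**2 + iy**2) + eps
    m_mag = np.sqrt(mx**2 + my**2) + eps
    # Cosine between image and mask gradient directions at each pixel.
    cos = (ix * mx + iy * my) / (i_mag * m_mag)
    # Weight by mask-edge strength so only mask boundaries are penalized.
    return float(np.sum(m_mag * (1.0 - cos**2)) / np.sum(m_mag))
```

A mask whose boundary coincides with an image edge scores near zero; a mask boundary perpendicular to the image edge scores near one.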
-
Patent number: 11832958
Abstract: There is shown and described a deep learning based system and method for skin diagnostics, as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
Type: Grant
Filed: December 13, 2022
Date of Patent: December 5, 2023
Assignee: L'OREAL
Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
-
Patent number: 11748888
Abstract: There are provided methods and computing devices using semi-supervised learning to perform end-to-end video object segmentation, tracking respective object(s) from a single-frame annotation of a reference frame through a video sequence of frames. A known deep learning model may be used to annotate the reference frame to provide ground truth locations and masks for each respective object. A current frame is processed to determine current frame object locations, defining object scoremaps as a normalized cross-correlation between encoded object features of the current frame and encoded object features of a previous frame. Scoremaps for each of more than one previous frame may be defined. An Intersection over Union (IoU) function, responsive to the scoremaps, ranks candidate object proposals defined from the reference frame annotation to associate the respective objects to respective locations in the current frame. Pixel-wise overlap may be removed using a merge function responsive to the scoremaps.
Type: Grant
Filed: November 12, 2020
Date of Patent: September 5, 2023
Assignee: L'Oreal
Inventors: Abdalla Ahmed, Irina Kezele, Parham Aarabi, Brendan Duke
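The scoremap in patent 11748888 is a normalized cross-correlation between encoded features. A minimal NumPy sketch, assuming the object's features are pooled into a single vector (a simplification of the patented per-feature correlation):

```python
import numpy as np

def object_scoremap(frame_features, object_vector, eps=1e-8):
    """Cosine similarity between one object's pooled feature vector and
    every spatial location of the current frame's feature map.

    frame_features: (H, W, C) encoded features of the current frame.
    object_vector:  (C,) encoded features of the object from a previous frame.
    Returns an (H, W) scoremap; the peak suggests the object's new location.
    """
    f = frame_features / (np.linalg.norm(frame_features, axis=-1, keepdims=True) + eps)
    v = object_vector / (np.linalg.norm(object_vector) + eps)
    return f @ v
```

The argmax of the scoremap gives a candidate location, which the IoU-based ranking described above would then use to score object proposals.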
-
Publication number: 20230169794
Abstract: Methods, devices and computer-readable media for processing a compressed video to perform an inference task are disclosed. Processing the compressed video may include selecting a subset of frame encodings of the compressed video, or zero or more modalities (RGB, motion vectors, residuals) of a frame encoding, for further processing to perform the inference task. Pre-existing motion vector and/or residual information in frame encodings of the compressed video is leveraged to adaptively and efficiently perform the inference task. In some embodiments, the inference task is an action recognition task, such as a human action recognition task.
Type: Application
Filed: November 30, 2021
Publication date: June 1, 2023
Inventors: Irina KEZELE, Mostafa SHAHABINEJAD, Seyed shahabeddin NABAVI, Wentao LIU, Yuanhao YU, Rui Xiang CHAI, Jin TANG, Yang WANG
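Application 20230169794 selects which modalities of a compressed frame to process. The gating heuristic below is purely illustrative (the thresholds and the energy criterion are assumptions, not the disclosed selection mechanism); it shows the shape of such an adaptive decision:

```python
import numpy as np

def select_modalities(motion_vectors, residual, mv_thresh=0.5, res_thresh=0.1):
    """Hypothetical per-frame gating: decide which modalities to decode
    and feed to the recognizer, based on how much signal each carries."""
    selected = []
    mv_energy = float(np.abs(motion_vectors).mean())
    res_energy = float(np.abs(residual).mean())
    if mv_energy > mv_thresh:
        selected.append("motion_vectors")  # noticeable motion: MV stream is informative
    if res_energy > res_thresh:
        selected.append("residuals")       # large residual: decode it too
    if not selected:
        selected.append("rgb")             # fall back to the full RGB decode
    return selected
```

The appeal of this scheme is that motion vectors and residuals already exist in the bitstream, so skipping the RGB decode when they suffice saves both decoding and inference cost.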
-
Publication number: 20230169571
Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model such as for a target device is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
Type: Application
Filed: January 27, 2023
Publication date: June 1, 2023
Applicant: L'OREAL
Inventors: Eric ELMOZNINO, Irina KEZELE, Parham AARABI
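The data-generation step of application 20230169571 can be sketched as follows, where `g_ab` and `g_ba` stand for the overfit unsupervised translators (hypothetical names); their outputs stand in as synthetic ground truths for the run-time model's supervised training:

```python
def build_synthetic_pairs(images_a, images_b, g_ab, g_ba):
    """Turn an unpaired dataset (images_a from domain A, images_b from
    domain B) into synthetic supervised pairs. This sketches only the
    pairing step; the small run-time model is then trained on the pairs."""
    pairs = [(x, g_ab(x)) for x in images_a]   # real A paired with fake B
    pairs += [(g_ba(y), y) for y in images_b]  # fake A paired with real B
    return pairs
```

Because the powerful model is deliberately overfit to the unpaired set, its translations are as faithful as possible, which raises the quality ceiling of the synthetic pairs the small model learns from.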
-
METHOD, APPARATUS AND SYSTEM FOR ADAPTING A MACHINE LEARNING MODEL FOR OPTICAL FLOW MAP PREDICTION
Publication number: 20230148384
Abstract: There is provided a method, apparatus and system for adapting a machine learning model for optical flow prediction. A machine learning model can be trained or adapted based on compressed video data, using motion vector information extracted from the compressed video data as ground-truth information for use in adapting the model to a motion vector prediction task. The model so adapted can accordingly be adapted for the similar task of optical flow prediction. Thus, the model can be adapted at test time to image data which is taken from an appropriate distribution. A meta-learning process can be performed prior to such model adaptation to potentially improve the model's performance.
Type: Application
Filed: November 11, 2021
Publication date: May 11, 2023
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Wentao LIU, Seyed Mehdi AYYOUBZADEH, Yuanhao YU, Irina KEZELE, Yang WANG, Xiaolin WU, Jin TANG
-
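The core idea of application 20230148384, using motion vectors from the compressed stream as free ground truth, reduces at its simplest to an end-point-error loss between the model's predicted flow and the extracted motion-vector field. A NumPy sketch of that loss only (the meta-learning and adaptation loop are omitted):

```python
import numpy as np

def mv_adaptation_loss(pred_flow, motion_vectors):
    """Mean end-point error between predicted flow and the motion-vector
    field extracted from the compressed bitstream, which serves as the
    adaptation target. Both arrays are (H, W, 2) displacement fields."""
    return float(np.mean(np.linalg.norm(pred_flow - motion_vectors, axis=-1)))
```

Minimizing this at test time nudges the model toward the motion statistics of the video actually being processed, without requiring any labeled optical flow.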
Patent number: 11645497
Abstract: Systems and methods relate to a network model to apply an effect to an image, such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space, where the effect is not applied, to a second continuous domain space, where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as are available as product swatches, as well as a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties.
Type: Grant
Filed: November 14, 2019
Date of Patent: May 9, 2023
Assignee: L'Oreal
Inventors: Eric Elmoznino, He Ma, Irina Kezele, Edmund Phung, Alex Levinshtein, Parham Aarabi
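Conditional batch normalization, the mechanism behind the CCBN of patent 11645497, predicts the per-channel scale and shift from the conditioning vector instead of learning them as fixed parameters. A simplified NumPy sketch (the patented CCBN operates inside a convolutional generator; the projection matrices here are hypothetical learned weights):

```python
import numpy as np

def conditional_batch_norm(x, cond, w_gamma, w_beta, eps=1e-5):
    """Batch-normalize x, then scale/shift each channel using parameters
    predicted from the condition (e.g. an encoded lipstick swatch).

    x: (N, H, W, C) features; cond: (D,) effect-encoding vector;
    w_gamma, w_beta: (D, C) hypothetical learned projections.
    """
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    gamma = cond @ w_gamma  # per-channel scale from the condition
    beta = cond @ w_beta    # per-channel shift from the condition
    return gamma * x_hat + beta
```

Changing `cond` (a different swatch encoding, or the null effect) changes the normalization statistics the generator applies, which is how one network renders many effects.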
-
Publication number: 20230123037
Abstract: There is shown and described a deep learning based system and method for skin diagnostics, as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
Type: Application
Filed: December 13, 2022
Publication date: April 20, 2023
Applicant: L'OREAL
Inventors: Ruowei JIANG, Junwei MA, He MA, Eric ELMOZNINO, Irina KEZELE, Alex LEVINSHTEIN, Julien DESPOIS, Matthieu PERROT, Frederic Antoinin Raymond Serge FLAMENT, Parham AARABI
-
Patent number: 11615516
Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model such as for a target device is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
Type: Grant
Filed: November 12, 2020
Date of Patent: March 28, 2023
Assignee: L'OREAL
Inventors: Eric Elmoznino, Irina Kezele, Parham Aarabi
-
Patent number: 11553872
Abstract: There is shown and described a deep learning based system and method for skin diagnostics, as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
Type: Grant
Filed: December 4, 2019
Date of Patent: January 17, 2023
Assignee: L'OREAL
Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
-
Patent number: 11410314
Abstract: Presented is a convolutional neural network (CNN) model for fingernail tracking, and a method design for nail polish rendering. Using current software and hardware, the CNN model and method to render nail polish run in real time on both iOS and web platforms. The use of Loss Mean Pooling (LMP) coupled with a cascaded model architecture enables pixel-accurate fingernail predictions at up to 640×480 resolution. The proposed post-processing and rendering method takes advantage of the model's multiple output predictions to render gradients on individual fingernails, and to hide the light-colored distal edge when rendering on top of natural fingernails by stretching the nail mask in the direction of the fingernail tip. Teachings herein may be applied to track objects other than fingernails and to apply appearance effects other than color.
Type: Grant
Filed: April 29, 2020
Date of Patent: August 9, 2022
Assignee: L'Oreal
Inventors: Brendan Duke, Abdalla Ahmed, Edmund Phung, Irina Kezele, Parham Aarabi
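The mask-stretching step of patent 11410314, which hides the light-colored distal edge of a natural nail, amounts to a directional dilation of the binary nail mask toward the fingertip. A NumPy sketch under the assumption that the tip direction is given as an integer pixel step:

```python
import numpy as np

def stretch_mask(mask, direction, steps=2):
    """Grow a binary nail mask along the fingertip direction by unioning
    shifted copies of itself.

    mask:      2-D boolean nail mask.
    direction: (dy, dx) integer step toward the fingernail tip.
    steps:     how many pixels to extend the mask.
    """
    dy, dx = direction
    out = mask.astype(bool).copy()
    shifted = mask.astype(bool)
    for _ in range(steps):
        shifted = np.roll(shifted, shift=(dy, dx), axis=(0, 1))
        # Zero the wrapped-around border that np.roll introduces.
        if dy > 0: shifted[:dy, :] = False
        elif dy < 0: shifted[dy:, :] = False
        if dx > 0: shifted[:, :dx] = False
        elif dx < 0: shifted[:, dx:] = False
        out |= shifted
    return out
```

Rendering polish with the stretched mask covers the nail's free edge, so the light distal rim no longer shows through the applied color.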
-
Publication number: 20220122299
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
Type: Application
Filed: December 30, 2021
Publication date: April 21, 2022
Applicant: L'OREAL
Inventors: Alex LEVINSHTEIN, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
-
Publication number: 20220075988
Abstract: There are provided systems and methods for facial landmark detection using a convolutional neural network (CNN). The CNN comprises a first stage and a second stage, where the first stage produces initial heat maps for the landmarks and initial respective locations for the landmarks. The second stage processes the heat maps and performs Region of Interest-based pooling while preserving feature alignment to produce cropped features. Finally, the second stage predicts from the cropped features a respective refinement location offset to each respective initial location. Combining each respective initial location with its respective refinement location offset provides a respective final coordinate (x,y) for each respective landmark in the image. The two-stage localization design helps to achieve fine-level alignment while remaining computationally efficient.
Type: Application
Filed: November 17, 2021
Publication date: March 10, 2022
Applicant: L'Oreal
Inventors: Tian Xing LI, Zhi YU, Irina KEZELE, Edmund PHUNG, Parham AARABI
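The final combination step of application 20220075988, coarse heat-map location plus predicted refinement offset, can be sketched in NumPy. The offsets would come from the second-stage network operating on ROI-pooled features; here they are just inputs:

```python
import numpy as np

def landmark_coordinates(heatmaps, offsets):
    """Stage 1: coarse location = argmax of each landmark's heat map.
    Stage 2: add the per-landmark refinement offset predicted from
    ROI-pooled features.

    heatmaps: (L, H, W) array, one heat map per landmark.
    offsets:  (L, 2) array of (dx, dy) refinement offsets.
    Returns an (L, 2) array of final (x, y) coordinates.
    """
    coords = []
    for hm, (dx, dy) in zip(heatmaps, offsets):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        coords.append((x + dx, y + dy))
    return np.array(coords)
```

Because the heat-map argmax is only integer-precise, the sub-pixel offset is what delivers the fine-level alignment the abstract refers to.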
-
Patent number: 11227145
Abstract: There are provided systems and methods for facial landmark detection using a convolutional neural network (CNN). The CNN comprises a first stage and a second stage, where the first stage produces initial heat maps for the landmarks and initial respective locations for the landmarks. The second stage processes the heat maps and performs Region of Interest-based pooling while preserving feature alignment to produce cropped features. Finally, the second stage predicts from the cropped features a respective refinement location offset to each respective initial location. Combining each respective initial location with its respective refinement location offset provides a respective final coordinate (x,y) for each respective landmark in the image. The two-stage localization design helps to achieve fine-level alignment while remaining computationally efficient.
Type: Grant
Filed: April 22, 2020
Date of Patent: January 18, 2022
Assignee: L'Oreal
Inventors: Tian Xing Li, Zhi Yu, Irina Kezele, Edmund Phung, Parham Aarabi
-
Patent number: 11216988
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
Type: Grant
Filed: October 24, 2018
Date of Patent: January 4, 2022
Assignee: L'OREAL
Inventors: Alex Levinshtein, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
-
Publication number: 20210150684
Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model such as for a target device is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
Type: Application
Filed: November 12, 2020
Publication date: May 20, 2021
Applicant: ModiFace Inc.
Inventors: Eric ELMOZNINO, Irina Kezele, Parham Aarabi
-
Publication number: 20210150728
Abstract: There are provided methods and computing devices using semi-supervised learning to perform end-to-end video object segmentation, tracking respective object(s) from a single-frame annotation of a reference frame through a video sequence of frames. A known deep learning model may be used to annotate the reference frame to provide ground truth locations and masks for each respective object. A current frame is processed to determine current frame object locations, defining object scoremaps as a normalized cross-correlation between encoded object features of the current frame and encoded object features of a previous frame. Scoremaps for each of more than one previous frame may be defined. An Intersection over Union (IoU) function, responsive to the scoremaps, ranks candidate object proposals defined from the reference frame annotation to associate the respective objects to respective locations in the current frame. Pixel-wise overlap may be removed using a merge function responsive to the scoremaps.
Type: Application
Filed: November 12, 2020
Publication date: May 20, 2021
Applicant: ModiFace Inc.
Inventors: Abdalla AHMED, Irina KEZELE, Parham AARABI, Brendan DUKE
-
Publication number: 20210012493
Abstract: Systems and methods process images to determine a skin condition severity analysis and to visualize a skin analysis, such as using a deep neural network (e.g. a convolutional neural network) where the problem was formulated as a regression task with integer-only labels. Auxiliary classification tasks (for example, comprising gender and ethnicity predictions) are introduced to improve performance. Scoring and other image processing techniques may be used (e.g. in association with the model) to visualize results, such as by highlighting the analyzed image. It is demonstrated that the visualization of results, which highlight skin condition affected areas, can also provide perspicuous explanations for the model. A plurality (k) of data augmentations may be made to a source image to yield k augmented images for processing. Activation masks (e.g. heatmaps) produced from processing the k augmented images are used to define a final map to visualize the skin analysis.
Type: Application
Filed: August 18, 2020
Publication date: January 14, 2021
Applicant: L'Oreal
Inventors: Ruowei JIANG, Irina KEZELE, Zhi Yu, Sophie SEITE, Frederic FLAMENT, Parham AARABI
-
Publication number: 20200349711
Abstract: Presented is a convolutional neural network (CNN) model for fingernail tracking, and a method design for nail polish rendering. Using current software and hardware, the CNN model and method to render nail polish run in real time on both iOS and web platforms. The use of Loss Mean Pooling (LMP) coupled with a cascaded model architecture enables pixel-accurate fingernail predictions at up to 640×480 resolution. The proposed post-processing and rendering method takes advantage of the model's multiple output predictions to render gradients on individual fingernails, and to hide the light-colored distal edge when rendering on top of natural fingernails by stretching the nail mask in the direction of the fingernail tip. Teachings herein may be applied to track objects other than fingernails and to apply appearance effects other than color.
Type: Application
Filed: April 29, 2020
Publication date: November 5, 2020
Applicant: L'Oreal
Inventors: Brendan Duke, Abdalla Ahmed, Edmund Phung, Irina Kezele, Parham Aarabi