Patents by Inventor Eric ELMOZNINO

Eric ELMOZNINO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11995703
    Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model such as for a target device is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
    Type: Grant
    Filed: January 27, 2023
    Date of Patent: May 28, 2024
    Assignee: L'OREAL
    Inventors: Eric Elmoznino, Irina Kezele, Parham Aarabi
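The entry above describes distilling an unpaired image-to-image translation setup into a supervised one: a large model trained without pairs generates synthetic ground truths, and a compact run-time model is then fit to those pairs. The PyTorch sketch below illustrates that workflow under stated assumptions; the architectures, losses, and the identity stand-in for the large generator are illustrative, not the patented implementation.

```python
# Hedged sketch of the pipeline the abstract describes: a large unpaired
# translation model (assumed here to be a CycleGAN-style generator) is used
# offline to synthesize paired data, and a small run-time model is then
# trained on those pairs with an ordinary supervised loss. Module names,
# architectures and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class SmallRuntimeTranslator(nn.Module):
    """Compact encoder-decoder meant to fit on-device compute budgets."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def make_synthetic_pairs(big_generator, unpaired_images):
    """Run the (already conditioned, possibly overfit) large model offline to
    turn unpaired domain-A images into pseudo ground truths in domain B."""
    with torch.no_grad():
        return [(x, big_generator(x)) for x in unpaired_images]

def train_runtime_model(pairs, epochs=10, lr=1e-3):
    model = SmallRuntimeTranslator()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # simple supervised reconstruction loss on synthetic pairs
    for _ in range(epochs):
        for x, y in pairs:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

if __name__ == "__main__":
    # Stand-in for the conditioned "powerful" model (identity here, purely so
    # the sketch runs end to end without the real unpaired-trained generator).
    big_generator = nn.Identity()
    unpaired = [torch.rand(1, 3, 64, 64) for _ in range(4)]
    runtime_model = train_runtime_model(make_synthetic_pairs(big_generator, unpaired), epochs=1)
```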
  • Patent number: 11861497
    Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: January 2, 2024
    Assignee: L'OREAL
    Inventors: Alex Levinshtein, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
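The hair-matting entry above hinges on a mask-image gradient consistency loss that encourages predicted matte edges to align with image edges. The snippet below is a minimal, hedged formulation of such a loss in PyTorch; the exact normalization and weighting used in the patent are not given in the abstract and are assumed here.

```python
# One plausible mask-image gradient consistency loss: reward predicted-matte
# edges that line up with image edges, weighted by matte gradient magnitude.
import torch
import torch.nn.functional as F

def _sobel_gradients(t):
    """Per-channel horizontal/vertical gradients via fixed Sobel kernels."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = t.shape[1]
    gx = F.conv2d(t, kx.repeat(c, 1, 1, 1), padding=1, groups=c)
    gy = F.conv2d(t, ky.repeat(c, 1, 1, 1), padding=1, groups=c)
    return gx, gy

def gradient_consistency_loss(image, matte, eps=1e-6):
    """image: (N,1,H,W) grayscale; matte: (N,1,H,W) predicted hair matte in [0,1]."""
    ix, iy = _sobel_gradients(image)
    mx, my = _sobel_gradients(matte)
    i_mag = torch.sqrt(ix ** 2 + iy ** 2 + eps)
    m_mag = torch.sqrt(mx ** 2 + my ** 2 + eps)
    # Cosine-style alignment between normalized image and matte gradients.
    dot = (ix * mx + iy * my) / (i_mag * m_mag)
    misalignment = 1.0 - dot ** 2
    # Weight by matte gradient magnitude so only matte edges are penalized.
    return (m_mag * misalignment).sum() / (m_mag.sum() + eps)

if __name__ == "__main__":
    img = torch.rand(2, 1, 64, 64)
    matte = torch.rand(2, 1, 64, 64)
    print(gradient_consistency_loss(img, matte).item())
```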
  • Patent number: 11832958
    Abstract: There is shown and described a deep learning-based system and method for skin diagnostics, as well as testing metrics showing that such a system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning-based system and method for skin diagnostics.
    Type: Grant
    Filed: December 13, 2022
    Date of Patent: December 5, 2023
    Assignee: L'OREAL
    Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
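The skin-diagnostics entries give no architectural detail, but one common way to realize such a system is a shared CNN backbone with a regression head per apparent skin sign. The sketch below assumes that design; the backbone choice, the sign list, and the score range are illustrative assumptions only, not the patented method.

```python
# Hedged sketch of a skin-diagnostics network of the general kind described
# above: a shared CNN backbone with one bounded-score head per skin sign.
import torch
import torch.nn as nn
from torchvision import models

SKIN_SIGNS = ["wrinkles", "pigmentation", "redness"]  # assumed sign list

class SkinSignScorer(nn.Module):
    def __init__(self, signs=SKIN_SIGNS):
        super().__init__()
        backbone = models.resnet18(weights=None)  # untrained backbone, for the sketch only
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.heads = nn.ModuleDict({s: nn.Linear(feat_dim, 1) for s in signs})

    def forward(self, x):
        feats = self.backbone(x)
        # One bounded score per apparent skin sign, e.g. in [0, 1].
        return {s: torch.sigmoid(head(feats)).squeeze(-1) for s, head in self.heads.items()}

if __name__ == "__main__":
    model = SkinSignScorer()
    scores = model(torch.rand(2, 3, 224, 224))
    print({name: t.shape for name, t in scores.items()})
```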
  • Publication number: 20230169571
    Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model such as for a target device is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
    Type: Application
    Filed: January 27, 2023
    Publication date: June 1, 2023
    Applicant: L'OREAL
    Inventors: Eric ELMOZNINO, Irina KEZELE, Parham AARABI
  • Patent number: 11645497
    Abstract: Systems and methods relate to a network model to apply an effect to an image such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space where the effect is not applied to a second continuous domain space where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as are available as product swatches, as well as a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: May 9, 2023
    Assignee: L'Oreal
    Inventors: Eric Elmoznino, He Ma, Irina Kezele, Edmund Phung, Alex Levinshtein, Parham Aarabi
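The conditional-translation entry above conditions its generator with convolutional conditional batch normalization (CCBN) driven by an encoding of reference swatch images. The sketch below shows one hedged way to implement conditional batch normalization from a swatch embedding in PyTorch; the swatch encoder, layer sizes, and null-effect handling are assumptions for illustration.

```python
# Hedged sketch of swatch-conditioned batch normalization in the spirit of
# the CCBN conditioning the abstract mentions.
import torch
import torch.nn as nn

class SwatchConditionalBatchNorm2d(nn.Module):
    """BatchNorm whose per-channel scale/shift are predicted from a
    conditioning vector encoding a product swatch (or a null effect)."""
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.to_gamma = nn.Linear(cond_dim, num_features)
        self.to_beta = nn.Linear(cond_dim, num_features)

    def forward(self, x, cond):
        h = self.bn(x)
        gamma = self.to_gamma(cond).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(cond).unsqueeze(-1).unsqueeze(-1)
        return (1.0 + gamma) * h + beta

class SwatchEncoder(nn.Module):
    """Tiny conv encoder mapping a swatch image to the conditioning vector."""
    def __init__(self, cond_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(16, cond_dim)

    def forward(self, swatch):
        return self.fc(self.conv(swatch).flatten(1))

if __name__ == "__main__":
    enc = SwatchEncoder(cond_dim=8)
    cbn = SwatchConditionalBatchNorm2d(num_features=32, cond_dim=8)
    feats = torch.rand(4, 32, 32, 32)   # generator feature map
    swatch = torch.rand(4, 3, 16, 16)   # reference lipstick swatch images
    print(cbn(feats, enc(swatch)).shape)
```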
  • Publication number: 20230123037
    Abstract: There is shown and described a deep learning-based system and method for skin diagnostics, as well as testing metrics showing that such a system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning-based system and method for skin diagnostics.
    Type: Application
    Filed: December 13, 2022
    Publication date: April 20, 2023
    Applicant: L'OREAL
    Inventors: Ruowei JIANG, Junwei MA, He MA, Eric ELMOZNINO, Irina KEZELE, Alex LEVINSHTEIN, Julien DESPOIS, Matthieu PERROT, Frederic Antoinin Raymond Serge FLAMENT, Parham AARABI
  • Patent number: 11615516
    Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model such as for a target device is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: March 28, 2023
    Assignee: L'OREAL
    Inventors: Eric Elmoznino, Irina Kezele, Parham Aarabi
  • Patent number: 11553872
    Abstract: There is shown and described a deep learning-based system and method for skin diagnostics, as well as testing metrics showing that such a system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning-based system and method for skin diagnostics.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: January 17, 2023
    Assignee: L'OREAL
    Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
  • Publication number: 20220351416
    Abstract: Provided are systems and methods to perform colour extraction from swatch images and to define new images using extracted colours. Source images may be classified using a deep learning net (e.g. a CNN) to indicate colour representation strength and drive colour extraction. A clustering classifier is trained to use feature vectors extracted by the net. Separately, pixel clustering is useful when extracting the colour. Cluster count can vary according to classification. Alternatively, heuristics (with or without classification) are useful when extracting. Resultant clusters are evaluated against a set of (ordered) expected colours to determine a match. Instances of standardized swatch images may be defined from a template swatch image and respective extracted colours using image processing. The extracted colour may be presented in an augmented reality GUI such as a virtual try-on application and applied to a user image such as a selfie using image processing.
    Type: Application
    Filed: July 21, 2022
    Publication date: November 3, 2022
    Applicant: L'Oreal
    Inventors: Eric ELMOZNINO, Parham AARABI, Yuze ZHANG
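The colour-extraction entries describe clustering swatch pixels and matching the result against an ordered set of expected colours. Below is a minimal sketch of that route using k-means; the cluster count, distance threshold, and expected-colour list are illustrative assumptions, not the patented procedure.

```python
# Hedged sketch of pixel-clustering colour extraction: cluster swatch pixels,
# take the dominant cluster centre as the extracted colour, then match it
# against an ordered list of expected colours.
import numpy as np
from sklearn.cluster import KMeans

def extract_swatch_colour(image_rgb, n_clusters=3):
    """image_rgb: (H, W, 3) uint8 array. Returns the dominant cluster centre."""
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    return km.cluster_centers_[counts.argmax()]

def match_expected_colour(colour, expected, max_dist=60.0):
    """Match an extracted colour to an ordered list of (name, rgb) pairs."""
    best_name, best_dist = None, float("inf")
    for name, rgb in expected:
        d = np.linalg.norm(colour - np.asarray(rgb, dtype=np.float32))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= max_dist else None

if __name__ == "__main__":
    swatch = np.full((32, 32, 3), (200, 40, 60), dtype=np.uint8)  # reddish swatch
    colour = extract_swatch_colour(swatch)
    expected = [("red", (210, 40, 60)), ("nude", (220, 180, 160))]
    print(colour, match_expected_colour(colour, expected))
```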
  • Patent number: 11461931
    Abstract: Provided are systems and methods to perform colour extraction from swatch images and to define new images using extracted colours. Source images may be classified using a deep learning net (e.g. a CNN) to indicate colour representation strength and drive colour extraction. A clustering classifier is trained to use feature vectors extracted by the net. Separately, pixel clustering is useful when extracting the colour. Cluster count can vary according to classification. Alternatively, heuristics (with or without classification) are useful when extracting. Resultant clusters are evaluated against a set of (ordered) expected colours to determine a match. Instances of standardized swatch images may be defined from a template swatch image and respective extracted colours using image processing. The extracted colour may be presented in an augmented reality GUI such as a virtual try-on application and applied to a user image such as a selfie using image processing.
    Type: Grant
    Filed: April 22, 2020
    Date of Patent: October 4, 2022
    Assignee: L'Oreal
    Inventors: Eric Elmoznino, Parham Aarabi, Yuze Zhang
  • Publication number: 20220122299
    Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
    Type: Application
    Filed: December 30, 2021
    Publication date: April 21, 2022
    Applicant: L'OREAL
    Inventors: Alex LEVINSHTEIN, Cheng CHANG, Edmund PHUNG, Irina KEZELE, Wenzhangzhi GUO, Eric ELMOZNINO, Ruowei JIANG, Parham AARABI
  • Patent number: 11216988
    Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: January 4, 2022
    Assignee: L'OREAL
    Inventors: Alex Levinshtein, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
  • Publication number: 20210150684
    Abstract: Techniques are provided for computing systems, methods and computer program products to produce efficient image-to-image translation by adapting unpaired datasets for supervised learning. A first model (a powerful model) may be defined and conditioned using unsupervised learning to produce a synthetic paired dataset from the unpaired dataset, translating images from a first domain to a second domain and images from the second domain to the first domain. The synthetic data generated is useful as ground truths in supervised learning. The first model may be conditioned to overfit the unpaired dataset to enhance the quality of the paired dataset (e.g. the synthetic data generated). A run-time model such as for a target device is trained using the synthetic paired dataset and supervised learning. The run-time model is small and fast to meet the processing resources of the target device (e.g. a personal user device such as a smart phone, tablet, etc.).
    Type: Application
    Filed: November 12, 2020
    Publication date: May 20, 2021
    Applicant: ModiFace Inc.
    Inventors: Eric ELMOZNINO, Irina KEZELE, Parham AARABI
  • Publication number: 20200342630
    Abstract: Provided are systems and methods to perform colour extraction from swatch images and to define new images using extracted colours. Source images may be classified using a deep learning net (e.g. a CNN) to indicate colour representation strength and drive colour extraction. A clustering classifier is trained to use feature vectors extracted by the net. Separately, pixel clustering is useful when extracting the colour. Cluster count can vary according to classification. Alternatively, heuristics (with or without classification) are useful when extracting. Resultant clusters are evaluated against a set of (ordered) expected colours to determine a match. Instances of standardized swatch images may be defined from a template swatch image and respective extracted colours using image processing. The extracted colour may be presented in an augmented reality GUI such as a virtual try-on application and applied to a user image such as a selfie using image processing.
    Type: Application
    Filed: April 22, 2020
    Publication date: October 29, 2020
    Applicant: L'Oreal
    Inventors: Eric ELMOZNINO, Parham AARABI, Yuze ZHANG
  • Publication number: 20200320748
    Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
    Type: Application
    Filed: October 24, 2018
    Publication date: October 8, 2020
    Applicant: L'OREAL
    Inventors: Alex LEVINSHTEIN, Cheng CHANG, Edmund PHUNG, Irina KEZELE, Wenzhangzhi GUO, Eric ELMOZNINO, Ruowei JIANG, Parham AARABI
  • Publication number: 20200170564
    Abstract: There is shown and described a deep learning-based system and method for skin diagnostics, as well as testing metrics showing that such a system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning-based system and method for skin diagnostics.
    Type: Application
    Filed: December 4, 2019
    Publication date: June 4, 2020
    Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, John Charbit, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
  • Publication number: 20200160153
    Abstract: Systems and methods relate to a network model to apply an effect to an image such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space where the effect is not applied to a second continuous domain space where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as are available as product swatches, as well as a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties.
    Type: Application
    Filed: November 14, 2019
    Publication date: May 21, 2020
    Applicant: L'Oreal
    Inventors: Eric ELMOZNINO, He MA, Irina KEZELE, Edmund PHUNG, Alex LEVINSHTEIN, Parham AARABI