Patents by Inventor Alex Levinshtein

Alex Levinshtein has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11861497
    Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers. An illustrative code sketch of such a gradient consistency loss follows this entry.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: January 2, 2024
    Assignee: L'OREAL
    Inventors: Alex Levinshtein, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
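
The mask-image gradient consistency loss mentioned in the abstract above is the kind of term that encourages predicted hair-mask edges to align with edges in the image. Below is a minimal PyTorch-style sketch of one plausible form of such a loss; the patent's exact formulation may differ, and the function name and tensor layout here are illustrative assumptions.

```python
# Minimal sketch of one plausible mask-image gradient consistency loss:
# it penalizes predicted-mask edges whose direction disagrees with the
# underlying image edges. The patent's exact formulation may differ.
import torch
import torch.nn.functional as F

def gradient_consistency_loss(image_gray, mask, eps=1e-6):
    """image_gray: (B,1,H,W) grayscale image; mask: (B,1,H,W) soft hair mask in [0,1]."""
    # Sobel kernels for horizontal/vertical gradients
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3).to(image_gray)
    ky = kx.transpose(2, 3).contiguous()
    ix, iy = F.conv2d(image_gray, kx, padding=1), F.conv2d(image_gray, ky, padding=1)
    mx, my = F.conv2d(mask, kx, padding=1), F.conv2d(mask, ky, padding=1)
    i_mag = torch.sqrt(ix ** 2 + iy ** 2 + eps)
    m_mag = torch.sqrt(mx ** 2 + my ** 2 + eps)
    # Cosine between image and mask gradient directions at each pixel
    cos = (ix * mx + iy * my) / (i_mag * m_mag)
    # Penalize misalignment only where the mask actually has edges
    return (m_mag * (1.0 - cos ** 2)).sum() / (m_mag.sum() + eps)
```
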
  • Patent number: 11832958
    Abstract: There is shown and described a deep learning-based system and method for skin diagnostics, together with testing metrics showing that such a system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using the deep learning-based system and method for skin diagnostics.
    Type: Grant
    Filed: December 13, 2022
    Date of Patent: December 5, 2023
    Assignee: L'OREAL
    Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
  • Patent number: 11775056
    Abstract: This document relates to hybrid eye center localization using machine learning, namely cascaded regression combined with hand-crafted model fitting. There are proposed systems and methods of eye center (iris) detection using a cascaded regressor (a cascade of regression forests), as well as systems and methods for training the cascaded regressor. For detection, the eyes are first located using a facial feature alignment method. The robustness of localization is improved by using both advanced features and powerful regression machinery, and localization is made more accurate by adding a robust circle-fitting post-processing step. Finally, using a simple hand-crafted method for eye center localization, a method is provided to train the cascaded regressor without the need for manually annotated training data. Evaluation of the approach shows that it achieves state-of-the-art performance. An illustrative sketch of a circle-fitting refinement step follows this entry.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: October 3, 2023
    Assignee: L'Oreal
    Inventors: Alex Levinshtein, Edmund Phung, Parham Aarabi
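
The robust circle-fitting post-processing mentioned above can be illustrated with a standard least-squares (Kasa) circle fit wrapped in RANSAC over iris edge points. This is a generic sketch under assumed inputs, not the patented implementation; the function names and thresholds are placeholders.

```python
# Generic least-squares (Kasa) circle fit wrapped in RANSAC, of the kind that
# could refine a regressed eye-center estimate from iris edge points.
# Thresholds and names are placeholders, not the patented method.
import numpy as np

def fit_circle(points):
    """points: (N, 2) array of (x, y) edge points; returns (cx, cy, r)."""
    x, y = points[:, 0], points[:, 1]
    # Solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, Fc = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(max(cx ** 2 + cy ** 2 - Fc, 0.0))
    return cx, cy, r

def refine_center(points, iters=200, inlier_tol=1.5, seed=0):
    """RANSAC over minimal 3-point samples; returns the best (cx, cy, r)."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        cx, cy, r = fit_circle(sample)
        if r <= 0:
            continue  # degenerate (e.g., collinear) sample
        d = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
        n_in = int((d < inlier_tol).sum())
        if n_in > best_inliers:
            best, best_inliers = (cx, cy, r), n_in
    return best
```
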
  • Patent number: 11645497
    Abstract: Systems and methods relate to a network model to apply an effect to an image such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space, where the effect is not applied, to a second continuous domain space, where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as are available as product swatches, as well as a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties. An illustrative sketch of conditional batch normalization follows this entry.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: May 9, 2023
    Assignee: L'Oreal
    Inventors: Eric Elmoznino, He Ma, Irina Kezele, Edmund Phung, Alex Levinshtein, Parham Aarabi
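
Conditional batch normalization, the mechanism CCBN builds on, predicts per-channel scale and shift from a conditioning vector (here, an assumed swatch encoding). The PyTorch sketch below shows the general idea; the patented CCBN applies the condition convolutionally, and this simplified module and its names are assumptions.

```python
# Simplified conditional batch normalization: per-channel scale/shift are
# predicted from a conditioning vector (e.g., an encoded product swatch).
# The patented CCBN applies the condition convolutionally; this module and
# all names here are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Linear(cond_dim, num_features)  # predicts scale
        self.beta = nn.Linear(cond_dim, num_features)   # predicts shift

    def forward(self, x, cond):
        out = self.bn(x)
        g = self.gamma(cond).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        b = self.beta(cond).unsqueeze(-1).unsqueeze(-1)
        return (1.0 + g) * out + b

# Usage: condition a generator feature map on a hypothetical 64-d swatch encoding
cbn = ConditionalBatchNorm2d(num_features=128, cond_dim=64)
out = cbn(torch.randn(4, 128, 32, 32), torch.randn(4, 64))
```
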
  • Publication number: 20230123037
    Abstract: There is shown and described a deep learning-based system and method for skin diagnostics, together with testing metrics showing that such a system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using the deep learning-based system and method for skin diagnostics.
    Type: Application
    Filed: December 13, 2022
    Publication date: April 20, 2023
    Applicant: L'OREAL
    Inventors: Ruowei JIANG, Junwei MA, He MA, Eric ELMOZNINO, Irina KEZELE, Alex LEVINSHTEIN, Julien DESPOIS, Matthieu PERROT, Frederic Antoinin Raymond Serge FLAMENT, Parham AARABI
  • Patent number: 11553872
    Abstract: There is shown and described a deep learning-based system and method for skin diagnostics, together with testing metrics showing that such a system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using the deep learning-based system and method for skin diagnostics.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: January 17, 2023
    Assignee: L'OREAL
    Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
  • Publication number: 20220122299
    Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
    Type: Application
    Filed: December 30, 2021
    Publication date: April 21, 2022
    Applicant: L'OREAL
    Inventors: Alex LEVINSHTEIN, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
  • Patent number: 11216988
    Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: January 4, 2022
    Assignee: L'OREAL
    Inventors: Alex Levinshtein, Cheng Chang, Edmund Phung, Irina Kezele, Wenzhangzhi Guo, Eric Elmoznino, Ruowei Jiang, Parham Aarabi
  • Publication number: 20210056360
    Abstract: This document relates to hybrid eye center localization using machine learning, namely cascaded regression combined with hand-crafted model fitting. There are proposed systems and methods of eye center (iris) detection using a cascaded regressor (a cascade of regression forests), as well as systems and methods for training the cascaded regressor. For detection, the eyes are first located using a facial feature alignment method. The robustness of localization is improved by using both advanced features and powerful regression machinery, and localization is made more accurate by adding a robust circle-fitting post-processing step. Finally, using a simple hand-crafted method for eye center localization, a method is provided to train the cascaded regressor without the need for manually annotated training data. Evaluation of the approach shows that it achieves state-of-the-art performance.
    Type: Application
    Filed: November 10, 2020
    Publication date: February 25, 2021
    Applicant: L'Oreal
    Inventors: Alex Levinshtein, Edmund Phung, Parham Aarabi
  • Patent number: 10872272
    Abstract: This document relates to hybrid eye center localization using machine learning, namely cascaded regression combined with hand-crafted model fitting. There are proposed systems and methods of eye center (iris) detection using a cascaded regressor (a cascade of regression forests), as well as systems and methods for training the cascaded regressor. For detection, the eyes are first located using a facial feature alignment method. The robustness of localization is improved by using both advanced features and powerful regression machinery, and localization is made more accurate by adding a robust circle-fitting post-processing step. Finally, using a simple hand-crafted method for eye center localization, a method is provided to train the cascaded regressor without the need for manually annotated training data. Evaluation of the approach shows that it achieves state-of-the-art performance.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: December 22, 2020
    Assignee: L'OREAL
    Inventors: Alex Levinshtein, Edmund Phung, Parham Aarabi
  • Publication number: 20200320748
    Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
    Type: Application
    Filed: October 24, 2018
    Publication date: October 8, 2020
    Applicant: L'OREAL
    Inventors: Alex LEVINSHTEIN, Cheng CHANG, Edmund PHUNG, Irina KEZELE, Wenzhangzhi GUO, Eric ELMOZNINO, Ruowei JIANG, Parham AARABI
  • Patent number: 10740649
    Abstract: An object attitude detection device includes a pick-up image acquisition unit, a template image acquisition unit, and an attitude decision unit. The pick-up image acquisition unit acquires a picked-up image of an object. The template image acquisition unit acquires a template image for each attitude of the object. The attitude decision unit decides an attitude of the object based on a template image whose pixels satisfy two conditions: the distance between pixels forming a contour in the picked-up image and pixels forming the contour of the template image is shorter than a first threshold, and the degree of similarity between the gradient of the pixels forming the contour in the picked-up image and the gradient of the pixels forming the contour of the template image is higher than a second threshold. An illustrative sketch of this contour-and-gradient check follows this entry.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: August 11, 2020
    Assignee: Seiko Epson Corporation
    Inventors: Alex Levinshtein, Joseph Chitai Lam, Mikhail Brusnitsyn, Guoyi Fu
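
The two-threshold test described above, contour proximity plus gradient-direction similarity, can be sketched generically as follows; the data layout, thresholds, and helper names are assumptions for illustration rather than the patented implementation.

```python
# Generic two-condition check: a candidate template attitude is scored by how
# many of its contour pixels lie near an image contour pixel (first threshold)
# AND have a similar gradient direction (second threshold).
import numpy as np
from scipy.spatial import cKDTree

def attitude_score(img_contour_xy, img_grad_dirs, tmpl_contour_xy, tmpl_grad_dirs,
                   dist_thresh=3.0, sim_thresh=0.8):
    """*_contour_xy: (N, 2) pixel coordinates; *_grad_dirs: (N, 2) unit gradient vectors."""
    tree = cKDTree(img_contour_xy)
    dists, idx = tree.query(tmpl_contour_xy)       # nearest image contour pixel per template pixel
    close = dists < dist_thresh                    # condition 1: contour distance below threshold
    cos_sim = np.abs(np.sum(tmpl_grad_dirs * img_grad_dirs[idx], axis=1))
    similar = cos_sim > sim_thresh                 # condition 2: gradient similarity above threshold
    return float(np.mean(close & similar))         # fraction of agreeing template contour pixels
```
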
  • Publication number: 20200170564
    Abstract: There is shown and described a deep learning-based system and method for skin diagnostics, together with testing metrics showing that such a system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using the deep learning-based system and method for skin diagnostics.
    Type: Application
    Filed: December 4, 2019
    Publication date: June 4, 2020
    Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, John Charbit, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
  • Publication number: 20200160153
    Abstract: Systems and methods relate to a network model to apply an effect to an image such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space, where the effect is not applied, to a second continuous domain space, where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as are available as product swatches, as well as a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties.
    Type: Application
    Filed: November 14, 2019
    Publication date: May 21, 2020
    Applicant: L'Oreal
    Inventors: Eric ELMOZNINO, He MA, Irina KEZELE, Edmund PHUNG, Alex LEVINSHTEIN, Parham AARABI
  • Patent number: 10380763
    Abstract: A method includes acquiring, from a camera, an image frame including a representation of an object, and retrieving, from a memory, data containing a template of a first pose of the object. A processor compares the first template to the image frame. A plurality of candidate locations in the image frame having a correlation with the template exceeding a predetermined threshold is determined. Edge registration on at least one candidate location of the plurality of candidate locations is performed to derive a refined pose of the object. Based at least in part on the performed edge registration, an initial pose of the object is determined, and a display image is output for display on a display device. The position at which the display image is displayed and/or the content of the display image is based at least in part on the determined initial pose of the object. An illustrative sketch of the candidate-location step follows this entry.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: August 13, 2019
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Alex Levinshtein, Qadeer Baig, Andrei Mark Rotenstein, Yan Zhao
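
The candidate-location step described above, keeping every location whose template correlation exceeds a threshold before the finer edge-registration stage, can be sketched with OpenCV's template matching. The correlation measure, threshold value, and helper names are illustrative assumptions, not the patented pipeline.

```python
# Illustrative candidate-location search with OpenCV template matching: keep
# every location whose normalized correlation exceeds a threshold, then refine
# each candidate with edge registration.
import cv2
import numpy as np

def candidate_locations(image_gray, template_gray, corr_thresh=0.7):
    """Both inputs are single-channel uint8 images; returns top-left (x, y) candidates."""
    response = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(response >= corr_thresh)
    return list(zip(xs.tolist(), ys.tolist()))

# Each candidate would then undergo edge registration (e.g., aligning template
# edges to cv2.Canny edges around the candidate) to derive the refined/initial pose.
```
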
  • Patent number: 10373334
    Abstract: A computer program causes an object tracking device to realize the functions of: acquiring a first image of a scene including an object captured with a camera positioned at a first position; deriving a 3D pose of the object in a second image captured with the camera positioned at a second position using a 3D model corresponding to the object; deriving 3D scene feature points of the scene based at least on the first image and the second image; obtaining a 3D-2D relationship between 3D points represented in a 3D coordinate system of the 3D model and image feature points on the second image; and updating the derived pose using the 3D-2D relationship, wherein the 3D points include the 3D scene feature points and 3D model points on the 3D model. An illustrative sketch of a pose update from 3D-2D correspondences follows this entry.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: August 6, 2019
    Assignee: SEIKO EPSON CORPORATION
    Inventor: Alex Levinshtein
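
Updating a pose from 3D-2D correspondences, as the abstract above describes, is commonly done with a perspective-n-point (PnP) solver. The sketch below uses OpenCV's solvePnP as an illustrative stand-in for the patented update; the solver choice and all variable names are assumptions.

```python
# Illustrative pose update from 3D-2D correspondences using OpenCV's iterative
# PnP solver, starting from the previously derived pose.
import cv2
import numpy as np

def update_pose(points_3d, points_2d, camera_matrix, rvec_init, tvec_init):
    """points_3d: (N, 3) model + scene points; points_2d: (N, 2) matched image points."""
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float32), points_2d.astype(np.float32),
        camera_matrix, None,                 # no lens distortion assumed
        rvec_init, tvec_init,
        useExtrinsicGuess=True,              # refine from the derived pose
        flags=cv2.SOLVEPNP_ITERATIVE)
    return (rvec, tvec) if ok else (rvec_init, tvec_init)
```
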
  • Patent number: 10366276
    Abstract: An information processing device, which processes information regarding a 3D model corresponding to a target object, includes a template creator that creates a template in which feature information and 3D locations are associated with each other. The feature information represents a plurality of 2D locations included in a contour obtained through a projection of the prepared 3D model onto a virtual plane based on a viewpoint, and the 3D locations correspond to the 2D locations and are represented in a 3D coordinate system, the template being correlated with the viewpoint. An illustrative sketch of such a 2D-3D template follows this entry.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: July 30, 2019
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Alex Levinshtein, Guoyi Fu
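
The template described above pairs 2D contour locations with their originating 3D model points for a given viewpoint. The sketch below shows one plausible construction under a simple pinhole projection; contour/silhouette selection is assumed to be given, and all names are placeholders rather than the patented implementation.

```python
# Illustrative construction of a view-specific template that associates 2D
# contour locations with their 3D model points, using a pinhole projection.
import numpy as np

def project_points(points_3d, R, t, K):
    """Project (N, 3) model points with rotation R (3x3), translation t (3,), intrinsics K (3x3)."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T        # model frame -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]           # pixel coordinates, depths

def make_template(model_points_3d, R, t, K, contour_mask):
    """contour_mask: boolean (N,) flagging model points on the silhouette for this viewpoint."""
    uv, depth = project_points(model_points_3d, R, t, K)
    features = [{"uv": uv[i], "xyz": model_points_3d[i]}   # 2D location paired with its 3D location
                for i in np.flatnonzero(contour_mask) if depth[i] > 0]
    return {"viewpoint": (R, t), "features": features}
```
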
  • Patent number: 10304253
    Abstract: A method including acquiring a captured image of an object with a camera; detecting a first pose of the object on the basis of 2D template data and either the captured image at an initial time or the captured image at a time later than the initial time; detecting a second pose of the object corresponding to the captured image at a current time on the basis of the first pose and the captured image at the current time; displaying an AR image in a virtual pose based on the second pose in the case where the accuracy of the second pose at the current time falls in a range between a first criterion and a second criterion; and detecting a third pose of the object on the basis of the captured image at the current time and the 2D template data in the case where the accuracy falls in the range.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: May 28, 2019
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Irina Kezele, Alex Levinshtein
  • Patent number: 10203505
    Abstract: A head-mounted display includes a camera that obtains an image of an object within a field of view. The head-mounted display further includes a processor configured to determine a plurality of feature points from the image and calculate a feature strength for each of the plurality of feature points. The processor is further configured to divide the image into a plurality of cells and select, from each cell, feature points that have the highest feature strength and have not yet been selected. The processor is further configured to detect and track the object within the field of view using the selected feature points. An illustrative sketch of this per-cell feature selection follows this entry.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: February 12, 2019
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Alex Levinshtein, Mikhail Brusnitsyn, Andrei Mark Rotenstein
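
The per-cell feature selection described above can be sketched as follows, using OpenCV's ORB detector as an illustrative source of feature points and their strengths (responses); the grid size and per-cell budget are assumed parameters, not values from the patent.

```python
# Illustrative per-cell feature selection: detect features over the whole image,
# bucket them into grid cells, and keep the strongest few per cell so tracking
# uses spatially well-distributed points.
import cv2

def select_features_per_cell(image_gray, grid=(8, 8), per_cell=5):
    keypoints = cv2.ORB_create(nfeatures=2000).detect(image_gray, None)
    h, w = image_gray.shape[:2]
    cell_h, cell_w = h / grid[0], w / grid[1]
    buckets = {}
    for kp in keypoints:
        cell = (int(kp.pt[1] // cell_h), int(kp.pt[0] // cell_w))
        buckets.setdefault(cell, []).append(kp)
    selected = []
    for cell_kps in buckets.values():
        cell_kps.sort(key=lambda k: k.response, reverse=True)  # strongest features first
        selected.extend(cell_kps[:per_cell])
    return selected
```
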
  • Patent number: 10109055
    Abstract: A machine vision system and method use captured depth data to improve the identification of a target object in a cluttered scene. A 3D-based object detection and pose estimation (ODPE) process is used to determine pose information of the target object. The system uses three different segmentation processes in sequence, where each subsequent segmentation process produces larger segments, in order to produce a plurality of segment hypotheses, each of which is expected to contain a large portion of the target object in the cluttered scene. Each segmentation hypothesis is used to mask 3D point clouds of the captured depth data, and each masked region is individually submitted to the 3D-based ODPE. An illustrative sketch of this hypothesis-masking step follows this entry.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: October 23, 2018
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Liwen Xu, Joseph Chi Tai Lam, Alex Levinshtein
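
The hypothesis-masking step described above, in which each segmentation hypothesis masks the captured point cloud before 3D-based ODPE, can be sketched as follows; the segmentation label maps and the ODPE stage are assumed inputs, and the minimum-size heuristic is illustrative rather than from the patent.

```python
# Illustrative hypothesis masking: each segment from each segmentation
# granularity masks the organized point cloud, and the surviving point sets
# would each be submitted to the 3D-based ODPE stage.
import numpy as np

def point_clouds_from_hypotheses(points_xyz, label_maps, min_points=500):
    """points_xyz: (H, W, 3) organized point cloud; label_maps: list of (H, W) integer
    segment label images, one per segmentation process (fine -> coarse)."""
    hypotheses = []
    for labels in label_maps:
        for seg_id in np.unique(labels):
            segment_points = points_xyz[labels == seg_id]   # 3D points in this hypothesis
            if len(segment_points) >= min_points:           # ignore tiny fragments
                hypotheses.append(segment_points)
    return hypotheses
```
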