Patents by Inventor Irina Kezele
Irina Kezele has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210012493
Abstract: Systems and methods process images to determine a skin condition severity analysis and to visualize a skin analysis, such as by using a deep neural network (e.g. a convolutional neural network) where the problem was formulated as a regression task with integer-only labels. Auxiliary classification tasks (for example, comprising gender and ethnicity predictions) are introduced to improve performance. Scoring and other image processing techniques may be used (e.g. in association with the model) to visualize results, such as by highlighting the analyzed image. It is demonstrated that the visualization of results, which highlights skin condition affected areas, can also provide perspicuous explanations for the model. A plurality (k) of data augmentations may be made to a source image to yield k augmented images for processing. Activation masks (e.g. heatmaps) produced from processing the k augmented images are used to define a final map to visualize the skin analysis.
Type: Application
Filed: August 18, 2020
Publication date: January 14, 2021
Applicant: L'Oreal
Inventors: Ruowei JIANG, Irina KEZELE, Zhi Yu, Sophie SEITE, Frederic FLAMENT, Parham AARABI
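The augment-and-fuse step described above (k augmented copies of the source image, each producing an activation mask, combined into one final map) can be sketched in a few lines. This is an illustrative assumption of how such fusion might work, not the patent's exact procedure: the augmentations here are horizontal flips, `model` is any callable producing a per-pixel activation mask, and the fusion is a simple mean.

```python
import numpy as np

def fused_heatmap(image, model, flips=(False, True)):
    """Average activation masks over augmented copies of one image.

    `model` maps an (H, W) image to an activation mask of the same
    shape. Each augmentation (here, a horizontal flip) is undone on
    the resulting mask before averaging, so all masks are aligned to
    the original image when the final visualization map is formed.
    """
    masks = []
    for flip in flips:
        view = image[:, ::-1] if flip else image
        mask = model(view)
        masks.append(mask[:, ::-1] if flip else mask)  # undo the flip
    return np.mean(masks, axis=0)  # final map for visualization
```

With an identity "model" the fused map reproduces the input, which makes the align-then-average behavior easy to verify.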
-
Publication number: 20200349711
Abstract: Presented is a convolutional neural network (CNN) model for fingernail tracking, and a method design for nail polish rendering. Using current software and hardware, the CNN model and the nail polish rendering method run in real time on both iOS and web platforms. Use of Loss Mean Pooling (LMP) coupled with a cascaded model architecture simultaneously enables pixel-accurate fingernail predictions at up to 640×480 resolution. The proposed post-processing and rendering method takes advantage of the model's multiple output predictions to render gradients on individual fingernails, and to hide the light-colored distal edge when rendering on top of natural fingernails by stretching the nail mask in the direction of the fingernail tip. Teachings herein may be applied to track objects other than fingernails and to apply appearance effects other than color.
Type: Application
Filed: April 29, 2020
Publication date: November 5, 2020
Applicant: L'Oreal
Inventors: Brendan Duke, Abdalla Ahmed, Edmund Phung, Irina Kezele, Parham Aarabi
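The distal-edge trick above (stretching the nail mask toward the fingertip so the light-colored free edge is covered by the rendered polish) can be sketched roughly as follows. The integer step direction, the number of steps, and the use of `np.roll` are all illustrative assumptions; `np.roll` wraps at the border, where a real implementation would pad instead.

```python
import numpy as np

def stretch_nail_mask(mask, tip_direction, steps=2):
    """Stretch a binary nail mask toward the fingertip (sketch).

    mask:          2D boolean array, True on nail pixels.
    tip_direction: integer (dy, dx) step toward the fingernail tip,
                   e.g. as estimated from the model's predictions.
    Shifted copies of the mask are OR-ed into the original so the
    rendered polish extends past the light-colored distal edge.
    Note: np.roll wraps around at the image border; a production
    version would pad rather than wrap.
    """
    m = np.asarray(mask, dtype=bool)
    out = m.copy()
    dy, dx = tip_direction
    for step in range(1, steps + 1):
        out |= np.roll(m, (step * dy, step * dx), axis=(0, 1))
    return out
```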
-
Publication number: 20200342209
Abstract: There are provided systems and methods for facial landmark detection using a convolutional neural network (CNN). The CNN comprises a first stage and a second stage, where the first stage produces initial heat maps for the landmarks and initial respective locations for the landmarks. The second stage processes the heat maps and performs Region of Interest-based pooling while preserving feature alignment to produce cropped features. Finally, the second stage predicts from the cropped features a respective refinement location offset for each respective initial location. Combining each respective initial location with its respective refinement location offset provides a respective final coordinate (x, y) for each respective landmark in the image. The two-stage localization design helps to achieve fine-level alignment while remaining computationally efficient.
Type: Application
Filed: April 22, 2020
Publication date: October 29, 2020
Applicant: L'Oreal
Inventors: Tian Xing LI, Zhi Yu, Irina Kezele, Edmund Phung, Parham Aarabi
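The final combination step described above (initial location from each heat map, plus a predicted refinement offset) reduces to a few lines. The argmax peak-picking and the array shapes below are assumptions for illustration; in the patented design the offsets come from the second stage's cropped features.

```python
import numpy as np

def landmark_coordinates(heatmaps, offsets):
    """Combine first-stage heat maps with second-stage offsets.

    heatmaps: (L, H, W) array, one coarse heat map per landmark.
    offsets:  (L, 2) array of refinement (dx, dy) per landmark.
    Returns an (L, 2) array of final (x, y) coordinates: the heat
    map peak gives the initial location, and the refinement offset
    is added to obtain the final coordinate.
    """
    coords = []
    for hm, (dx, dy) in zip(heatmaps, offsets):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)  # initial location
        coords.append((x + dx, y + dy))                   # refined location
    return np.array(coords)
```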
-
Publication number: 20200320748
Abstract: A system and method implement deep learning on a mobile device to provide a convolutional neural network (CNN) for real-time processing of video, for example, to color hair. Images are processed using the CNN to define a respective hair matte of hair pixels. The respective object mattes may be used to determine which pixels to adjust when adjusting pixel values such as to change color, lighting, texture, etc. The CNN may comprise a (pre-trained) network for image classification adapted to produce the segmentation mask. The CNN may be trained for image segmentation (e.g. using coarse segmentation data) to minimize a mask-image gradient consistency loss. The CNN may further use skip connections between corresponding layers of an encoder stage and a decoder stage, where shallower layers in the encoder, which contain high-resolution but weak features, are combined with low-resolution but powerful features from deeper decoder layers.
Type: Application
Filed: October 24, 2018
Publication date: October 8, 2020
Applicant: L'OREAL
Inventors: Alex LEVINSHTEIN, Cheng CHANG, Edmund PHUNG, Irina KEZELE, Wenzhangzhi GUO, Eric ELMOZNINO, Ruowei JIANG, Parham AARABI
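A minimal numpy sketch of a mask-image gradient consistency loss of the kind mentioned above: mask edges are penalized where their direction disagrees with the underlying image gradient, weighted by the mask's gradient magnitude. The exact normalization and weighting are assumptions; in training this would be an auxiliary term alongside the usual segmentation loss.

```python
import numpy as np

def gradient_consistency_loss(image, mask, eps=1e-6):
    """Mask-image gradient consistency loss (simplified sketch).

    Computes, at each pixel, the cosine between the image gradient
    direction and the mask gradient direction; misaligned mask edges
    contribute a large penalty, aligned edges almost none. The loss
    is weighted by mask gradient magnitude so only mask boundaries
    matter, and normalized by the total boundary strength.
    """
    iy, ix = np.gradient(image)   # per-axis image derivatives
    my, mx = np.gradient(mask)    # per-axis mask derivatives
    i_mag = np.sqrt(ix**2 + iy**2) + eps
    m_mag = np.sqrt(mx**2 + my**2) + eps
    cos = (ix * mx + iy * my) / (i_mag * m_mag)
    return float(np.sum(m_mag * (1.0 - cos**2)) / np.sum(m_mag))
```

A mask whose edges follow the image gradient scores near zero; a mask whose edges run perpendicular to it scores near one.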
-
Patent number: 10769437
Abstract: A head-mounted display, a method, and a non-transitory computer readable medium are provided. An embodiment of a method for obtaining training sample views of an object includes the step of storing, in a memory, multiple views of an object. The method also includes the steps of deriving similarity scores between adjacent views and varying a sampling density based on the similarity scores.
Type: Grant
Filed: April 10, 2018
Date of Patent: September 8, 2020
Assignee: SEIKO EPSON CORPORATION
Inventors: Dibyendu Mukherjee, Jia Li, Mikhail Brusnitsyn, Irina Kezele
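The similarity-driven sampling step above admits a very small sketch: gaps between near-identical adjacent views receive few samples, while dissimilar gaps receive more. The linear density rule and all parameter names here are invented for illustration; the patent only states that the density varies with the scores.

```python
def view_sampling_counts(similarities, base=8, min_count=1):
    """Vary sampling density from adjacent-view similarity scores.

    similarities: per-gap scores in [0, 1]; a high score means the
    adjacent views are redundant, so fewer training sample views
    are drawn between them. Each gap always gets at least
    `min_count` samples so coverage is never lost entirely.
    """
    counts = []
    for s in similarities:
        counts.append(max(min_count, round(base * (1.0 - s))))
    return counts
```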
-
Patent number: 10755434
Abstract: A method includes acquiring, from a camera, an image data sequence of a real object in a real scene and performing a first template-matching on an image frame in the image data sequence using intensity-related data sets stored in one or more memories to generate response maps. The intensity-related data sets represent an intensity distribution of a reference object from respective viewpoints. The reference object corresponds to the real object. A candidate region of interest is determined for the real object in the image frame based on the response maps, and a second template-matching is performed on the candidate region of interest using shape-related feature data sets stored in one or more memories to derive a pose of the real object. The shape-related feature data sets represent edge information of the reference object from the respective viewpoints.
Type: Grant
Filed: March 28, 2018
Date of Patent: August 25, 2020
Assignee: SEIKO EPSON CORPORATION
Inventors: Dibyendu Mukherjee, Irina Kezele, Mikhail Brusnitsyn
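The first, intensity-based stage above can be sketched as a brute-force normalized cross-correlation search: slide each intensity template over the frame, record a response map of correlation scores, and hand the top-scoring window to the second, shape-based stage as the candidate region of interest. The exhaustive loop and scoring are illustrative assumptions; a real implementation would use a fast correlation routine.

```python
import numpy as np

def best_candidate_roi(frame, templates):
    """First-stage intensity template matching (illustrative sketch).

    For each intensity template (one per reference viewpoint), every
    window of the frame is scored by normalized cross-correlation,
    forming a response map; the best-scoring window across all
    templates is returned as (y, x, height, width), the candidate
    region of interest for the shape-based second stage.
    """
    best = (-2.0, None)  # NCC scores lie in [-1, 1]
    for tmpl in templates:
        th, tw = tmpl.shape
        t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
        for y in range(frame.shape[0] - th + 1):
            for x in range(frame.shape[1] - tw + 1):
                w = frame[y:y + th, x:x + tw]
                wn = (w - w.mean()) / (w.std() + 1e-9)
                score = float(np.mean(wn * t))  # response-map entry
                if score > best[0]:
                    best = (score, (y, x, th, tw))
    return best[1]
```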
-
Publication number: 20200170564
Abstract: There is shown and described a deep learning based system and method for skin diagnostics, as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
Type: Application
Filed: December 4, 2019
Publication date: June 4, 2020
Inventors: Ruowei Jiang, Junwei Ma, He Ma, Eric Elmoznino, Irina Kezele, Alex Levinshtein, John Charbit, Julien Despois, Matthieu Perrot, Frederic Antoinin Raymond Serge Flament, Parham Aarabi
-
Publication number: 20200160153
Abstract: Systems and methods relate to a network model to apply an effect to an image, such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space, where the effect is not applied, to a second continuous domain space, where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as are available as product swatches, as well as a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties.
Type: Application
Filed: November 14, 2019
Publication date: May 21, 2020
Applicant: L'Oreal
Inventors: Eric ELMOZNINO, He MA, Irina KEZELE, Edmund PHUNG, Alex LEVINSHTEIN, Parham AARABI
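Conditional batch normalization of the kind named above predicts the normalization's scale and shift from the condition vector rather than learning them as free per-channel parameters. Below is a heavily stripped-down numpy sketch: fully connected instead of convolutional, spatial dimensions omitted, and all names (`w_gamma`, `w_beta`) assumed for illustration.

```python
import numpy as np

def conditional_batch_norm(x, cond, w_gamma, w_beta, eps=1e-5):
    """Conditional batch normalization (simplified numpy sketch).

    x:    (N, C) activations (spatial dims omitted for brevity).
    cond: (N, D) condition vectors, e.g. the swatch-image encoding.
    The per-channel scale and shift are predicted from `cond` via
    the assumed linear maps w_gamma, w_beta of shape (D, C); with a
    zero condition the layer reduces to plain batch normalization.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    gamma = 1.0 + cond @ w_gamma  # scale modulated by the condition
    beta = cond @ w_beta          # shift modulated by the condition
    return gamma * x_hat + beta
```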
-
Publication number: 20190311199
Abstract: A head-mounted display, a method, and a non-transitory computer readable medium are provided. An embodiment of a method for obtaining training sample views of an object includes the step of storing, in a memory, multiple views of an object. The method also includes the steps of deriving similarity scores between adjacent views and varying a sampling density based on the similarity scores.
Type: Application
Filed: April 10, 2018
Publication date: October 10, 2019
Applicant: SEIKO EPSON CORPORATION
Inventors: Dibyendu MUKHERJEE, Jia LI, Mikhail BRUSNITSYN, Irina KEZELE
-
Publication number: 20190304124
Abstract: A method includes acquiring, from a camera, an image data sequence of a real object in a real scene and performing a first template-matching on an image frame in the image data sequence using intensity-related data sets stored in one or more memories to generate response maps. The intensity-related data sets represent an intensity distribution of a reference object from respective viewpoints. The reference object corresponds to the real object. A candidate region of interest is determined for the real object in the image frame based on the response maps, and a second template-matching is performed on the candidate region of interest using shape-related feature data sets stored in one or more memories to derive a pose of the real object. The shape-related feature data sets represent edge information of the reference object from the respective viewpoints.
Type: Application
Filed: March 28, 2018
Publication date: October 3, 2019
Applicant: SEIKO EPSON CORPORATION
Inventors: Dibyendu MUKHERJEE, Irina KEZELE, Mikhail BRUSNITSYN
-
Patent number: 10424117
Abstract: A head-mounted display device includes an image display section, an imaging section, an image setting section configured to set an image, an operation input section, an object specification section configured to derive a spatial relationship of a specific object included in an imaged outside scene with respect to the imaging section, and a parameter setting section. The image setting section causes the image display section to display a setting image based at least on the derived spatial relationship and a predetermined parameter group so as to allow a user to visually perceive the setting image, and the parameter setting section adjusts at least one parameter in the parameter group so as to allow the user to visually perceive a condition that at least a position and pose of the setting image and those of the specific object are substantially aligned with each other.
Type: Grant
Filed: November 8, 2016
Date of Patent: September 24, 2019
Assignee: SEIKO EPSON CORPORATION
Inventors: Jia Li, Guoyi Fu, Irina Kezele, Yang Yang
-
Patent number: 10304253
Abstract: A method includes acquiring a captured image of an object with a camera; detecting a first pose of the object on the basis of 2D template data and either the captured image at an initial time or the captured image at a time later than the initial time; detecting a second pose of the object corresponding to the captured image at a current time on the basis of the first pose and the captured image at the current time; displaying an AR image in a virtual pose based on the second pose in the case where accuracy of the second pose at the current time falls in a range between a first criterion and a second criterion; and detecting a third pose of the object on the basis of the captured image at the current time and the 2D template data in the case where the accuracy falls in the range.
Type: Grant
Filed: September 20, 2017
Date of Patent: May 28, 2019
Assignee: SEIKO EPSON CORPORATION
Inventors: Irina Kezele, Alex Levinshtein
-
Patent number: 10198865
Abstract: An optical see-through (OST) head-mounted display (HMD) uses a calibration matrix having a fixed sub-set of adjustable parameters within all its parameters. Initial values for the calibration matrix are based on a model head. A predefined set of incremental adjustment values is provided for each adjustable parameter. During calibration, the calibration matrix is cycled through its predefined incremental parameter changes, and a virtual object is projected for each incremental change. The resultant projected virtual object is aligned to a reference real object, and the projected virtual object having the best alignment is identified. The setting values of the calibration matrix that resulted in the best aligned virtual object are deemed the final calibration matrix to be used with the OST HMD.
Type: Grant
Filed: June 16, 2015
Date of Patent: February 5, 2019
Assignee: SEIKO EPSON CORPORATION
Inventors: Irina Kezele, Margarit Simeonov Chenchev, Stella Yuan, Simon Szeto, Arash Abadpour
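The calibration loop described above is essentially a small discrete search: cycle each adjustable parameter through its predefined increments, project the virtual object for each setting, and keep whichever setting aligns best with the real reference object. A greedy sketch follows, with assumed names: `project` renders the virtual object for a parameter set, and `alignment_error` scores its alignment (in the actual device this judgment comes from the user viewing the overlay, not a function).

```python
def calibrate(base_params, increments, project, alignment_error):
    """Search predefined parameter increments for the best alignment.

    base_params: initial values (e.g. derived from a model head).
    increments:  {name: list of candidate deltas} for the fixed
                 sub-set of adjustable parameters.
    Each increment is tried in turn; the setting whose projected
    virtual object best aligns with the reference real object is
    kept as the final calibration.
    """
    best_params, best_err = dict(base_params), float("inf")
    for name, deltas in increments.items():
        for delta in deltas:
            trial = dict(best_params)
            trial[name] = base_params[name] + delta
            err = alignment_error(project(trial))
            if err < best_err:
                best_params, best_err = trial, err
    return best_params
```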
-
Publication number: 20180096534
Abstract: A method includes acquiring a captured image of an object with a camera; detecting a first pose of the object on the basis of 2D template data and either the captured image at an initial time or the captured image at a time later than the initial time; detecting a second pose of the object corresponding to the captured image at a current time on the basis of the first pose and the captured image at the current time; displaying an AR image in a virtual pose based on the second pose in the case where accuracy of the second pose at the current time falls in a range between a first criterion and a second criterion; and detecting a third pose of the object on the basis of the captured image at the current time and the 2D template data in the case where the accuracy falls in the range.
Type: Application
Filed: September 20, 2017
Publication date: April 5, 2018
Applicant: SEIKO EPSON CORPORATION
Inventors: Irina KEZELE, Alex LEVINSHTEIN
-
Publication number: 20180050254
Abstract: A method for spatial alignment of golf club inertial measurement data and two-dimensional video for golf club swing analysis is provided. The method includes capturing inertial measurement data of a golf club swing through an inertial measurement unit (IMU), and sending the inertial measurement data from the inertial measurement unit to a computing device. The computing device is configured to align a first axis of the inertial measurement data to a first axis of further inertial measurement data from a mobile device, estimate translation of the inertial measurement data to two-dimensional video captured by the mobile device, align second and third axes of the inertial measurement data of the golf club swing to second and third axes of the two-dimensional video, and overlay a projected golf club trajectory based on the aligned captured inertial measurement data onto the two-dimensional video.
Type: Application
Filed: August 22, 2016
Publication date: February 22, 2018
Inventors: Rouzbeh Maani, Jie Wang, Michael Belshaw, Irina Kezele
-
Publication number: 20180053308
Abstract: A method for spatial alignment of golf-club inertial measurement data and a three-dimensional human model for golf club swing analysis is provided. The method includes capturing inertial measurement data through an inertial measurement unit (IMU), and sending the inertial measurement data from the IMU to a computing device. The computing device is configured to determine a three-dimensional trajectory in IMU coordinate space, determine in human model coordinate space a three-dimensional trajectory of an infrared marker in a video with the video having depth or depth information, determine a transformation matrix from human model coordinate space to IMU coordinate space, perform spatial alignment of the three-dimensional trajectory and a three-dimensional human model based on the video having depth or depth information, using the transformation matrix, and overlay a projected trajectory onto the three-dimensional human model.
Type: Application
Filed: August 22, 2016
Publication date: February 22, 2018
Inventors: Rouzbeh Maani, Jie Wang, Irina Kezele
-
Publication number: 20170161955
Abstract: A head-mounted display device includes an image display section, an imaging section, an image setting section configured to set an image, an operation input section, an object specification section configured to derive a spatial relationship of a specific object included in an imaged outside scene with respect to the imaging section, and a parameter setting section. The image setting section causes the image display section to display a setting image based at least on the derived spatial relationship and a predetermined parameter group so as to allow a user to visually perceive the setting image, and the parameter setting section adjusts at least one parameter in the parameter group so as to allow the user to visually perceive a condition that at least a position and pose of the setting image and those of the specific object are substantially aligned with each other.
Type: Application
Filed: November 8, 2016
Publication date: June 8, 2017
Applicant: SEIKO EPSON CORPORATION
Inventors: Jia LI, Guoyi FU, Irina KEZELE, Yang YANG
-
Patent number: 9438891
Abstract: Aspects of the present invention comprise holocam systems and methods that enable the capture and streaming of scenes. In embodiments, multiple image capture devices, which may be referred to as “orbs,” are used to capture images of a scene from different vantage points or frames of reference. In embodiments, each orb captures three-dimensional (3D) information, which is preferably in the form of a depth map and visible images (such as stereo image pairs and regular images). Aspects of the present invention also include mechanisms by which data captured by two or more orbs may be combined to create one composite 3D model of the scene. A viewer may then, in embodiments, use the 3D model to generate a view from a different frame of reference than was originally created by any single orb.
Type: Grant
Filed: March 13, 2014
Date of Patent: September 6, 2016
Assignee: Seiko Epson Corporation
Inventors: Michael Mannion, Sujay Sukumaran, Ivo Moravec, Syed Alimul Huda, Bogdan Matei, Arash Abadpour, Irina Kezele
-
Publication number: 20160012643
Abstract: An optical see-through (OST) head-mounted display (HMD) uses a calibration matrix having a fixed sub-set of adjustable parameters within all its parameters. Initial values for the calibration matrix are based on a model head. A predefined set of incremental adjustment values is provided for each adjustable parameter. During calibration, the calibration matrix is cycled through its predefined incremental parameter changes, and a virtual object is projected for each incremental change. The resultant projected virtual object is aligned to a reference real object, and the projected virtual object having the best alignment is identified. The setting values of the calibration matrix that resulted in the best aligned virtual object are deemed the final calibration matrix to be used with the OST HMD.
Type: Application
Filed: June 16, 2015
Publication date: January 14, 2016
Inventors: Irina Kezele, Margarit Simeonov Chenchev, Stella Yuan, Simon Szeto, Arash Abadpour
-
Publication number: 20150261184
Abstract: Aspects of the present invention comprise holocam systems and methods that enable the capture and streaming of scenes. In embodiments, multiple image capture devices, which may be referred to as “orbs,” are used to capture images of a scene from different vantage points or frames of reference. In embodiments, each orb captures three-dimensional (3D) information, which is preferably in the form of a depth map and visible images (such as stereo image pairs and regular images). Aspects of the present invention also include mechanisms by which data captured by two or more orbs may be combined to create one composite 3D model of the scene. A viewer may then, in embodiments, use the 3D model to generate a view from a different frame of reference than was originally created by any single orb.
Type: Application
Filed: March 13, 2014
Publication date: September 17, 2015
Applicant: Seiko Epson Corporation
Inventors: Michael Mannion, Sujay Sukumaran, Ivo Moravec, Syed Alimul Huda, Bogdan Matei, Arash Abadpour, Irina Kezele