Patents by Inventor Alex Levinshtein
Alex Levinshtein has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20180300589
Abstract: This document relates to hybrid eye center localization, combining machine learning, namely cascaded regression, with hand-crafted model fitting. Proposed are systems and methods for eye center (iris) detection using a cascaded regressor (a cascade of regression forests), as well as systems and methods for training such a regressor. For detection, the eyes are first located using a facial feature alignment method. The robustness of localization is improved by using both advanced features and powerful regression machinery. Localization is made more accurate by adding a robust circle-fitting post-processing step. Finally, using a simple hand-crafted method for eye center localization, a method is provided to train the cascaded regressor without the need for manually annotated training data. Evaluation of the approach shows that it achieves state-of-the-art performance.
Type: Application
Filed: April 13, 2018
Publication date: October 18, 2018
Inventors: Alex Levinshtein, Edmund Phung, Parham Aarabi
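The cascade described in the abstract — each stage regressing an update to the current eye-center estimate — can be sketched as follows. This is a minimal illustration: the function name, the toy stages, and the convergence behavior are assumptions, not the patented regression forests or features.

```python
import numpy as np

def cascaded_eye_center(features, init_center, regressors):
    """Refine an eye-center estimate with a cascade of regressors.

    Each stage predicts an offset from the current estimate; in the
    actual method the stages are regression forests over image features.
    """
    center = np.asarray(init_center, dtype=float)
    for stage in regressors:
        center = center + stage(features, center)
    return center

# Toy cascade: every stage moves halfway toward a fixed "true" center,
# so the residual shrinks geometrically, as a trained cascade would.
true_center = np.array([30.0, 20.0])
stages = [lambda feats, c: 0.5 * (true_center - c)] * 4
estimate = cascaded_eye_center(None, [0.0, 0.0], stages)
```

After four halving stages the estimate has closed 15/16 of the initial error, which is the geometric refinement pattern cascaded regression relies on.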
-
Publication number: 20180285684
Abstract: An object attitude detection device includes: a picked-up image acquisition unit which acquires a picked-up image of an object; a template image acquisition unit which acquires a template image for each attitude of the object; and an attitude decision unit which decides an attitude of the object based on the template image for which the distance between pixels forming a contour in the picked-up image and pixels forming a contour of the template image is shorter than a first threshold, and for which the degree of similarity between the gradient of the pixels forming the contour in the picked-up image and the gradient of the pixels forming the contour of the template image is higher than a second threshold.
Type: Application
Filed: March 26, 2018
Publication date: October 4, 2018
Inventors: Alex Levinshtein, Joseph Chitai Lam, Mikhail Brusnitsyn, Guoyi Fu
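The two-test decision above — contour pixels within a first threshold distance, contour gradients above a second similarity threshold — can be sketched like this. The data layout (point arrays, unit gradient vectors) and threshold values are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def decide_attitude(img_contour, img_grads, templates, dist_thresh, sim_thresh):
    """Return the attitude whose template passes both tests from the
    abstract: contour pixels close to the image contour, and contour
    gradients similar to the image gradients."""
    for attitude, (tpl_contour, tpl_grads) in templates.items():
        # mean nearest-neighbour distance between the two contour pixel sets
        dist = np.mean([np.min(np.linalg.norm(img_contour - p, axis=1))
                        for p in tpl_contour])
        # gradient agreement as mean cosine similarity (unit gradients assumed)
        sim = np.mean(np.sum(img_grads * tpl_grads, axis=1))
        if dist < dist_thresh and sim > sim_thresh:
            return attitude
    return None

img_contour = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
img_grads = np.array([[0.0, 1.0]] * 3)
templates = {
    "front": (img_contour.copy(), img_grads.copy()),                   # passes both tests
    "side": (img_contour + [10.0, 0.0], np.array([[1.0, 0.0]] * 3)),  # fails both
}
attitude = decide_attitude(img_contour, img_grads, templates, 1.0, 0.8)
```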
-
Publication number: 20180144458
Abstract: A machine vision system and method uses captured depth data to improve the identification of a target object in a cluttered scene. A 3D-based object detection and pose estimation (ODPE) process is used to determine pose information of the target object. The system uses three different segmentation processes in sequence, where each subsequent segmentation process produces larger segments, in order to produce a plurality of segment hypotheses, each of which is expected to contain a large portion of the target object in the cluttered scene. Each segmentation hypothesis is used to mask 3D point clouds of the captured depth data, and each masked region is individually submitted to the 3D-based ODPE.
Type: Application
Filed: November 21, 2016
Publication date: May 24, 2018
Inventors: Liwen Xu, Joseph Chi Tai Lam, Alex Levinshtein
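The pipeline described above — sequential segmenters yielding ever larger segments, each hypothesis masking the cloud before pose estimation — can be sketched as follows. The function names, the toy 1-D "cloud", and the stand-in ODPE are assumptions for illustration only.

```python
def detect_with_hypotheses(point_cloud, segmenters, odpe):
    """Run each segmentation process in sequence (each yielding larger
    segments), mask the point cloud with every segment hypothesis, and
    submit each masked region individually to 3D-based ODPE."""
    poses = []
    for segment in segmenters:            # ordered: smaller to larger segments
        for mask in segment(point_cloud):
            region = [p for p, keep in zip(point_cloud, mask) if keep]
            pose = odpe(region)           # returns None when no object is found
            if pose is not None:
                poses.append(pose)
    return poses

# Toy scene: six "points"; the target object is the region [4, 5, 6].
cloud = [1, 2, 3, 4, 5, 6]
fine = lambda pts: [[i == j for i in range(len(pts))] for j in range(len(pts))]
coarse = lambda pts: [[p >= 4 for p in pts], [p < 4 for p in pts]]
toy_odpe = lambda region: tuple(region) if region == [4, 5, 6] else None
found = detect_with_hypotheses(cloud, [fine, coarse], toy_odpe)
```

The fine segmenter's single-point hypotheses all fail; only the coarser hypothesis covering the full object succeeds, which is why the method runs segmenters of increasing size.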
-
Publication number: 20180144500
Abstract: A method includes: obtaining a first 3D model point cloud associated with surface feature elements of a 3D model corresponding to a real object; obtaining a 3D surface point cloud from current depth image data of the real object; obtaining a second 3D model point cloud associated with 2D model points in a model contour; obtaining a 3D image contour point cloud at respective intersections of first imaginary lines and second imaginary lines; and deriving a second pose based at least on the first 3D model point cloud, the 3D surface point cloud, the second 3D model point cloud, the 3D image contour point cloud, and the first pose.
Type: Application
Filed: November 20, 2017
Publication date: May 24, 2018
Applicant: Seiko Epson Corporation
Inventors: Joseph Chitai Lam, Alex Levinshtein
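The core operation of deriving a pose from matched 3D point clouds can be illustrated with a least-squares rigid alignment (the Kabsch algorithm). This is a generic stand-in that assumes known point correspondences; the patented derivation combines several clouds with a prior pose and is not reproduced here.

```python
import numpy as np

def rigid_align(model_pts, scene_pts):
    """Least-squares rotation R and translation t mapping model points
    onto corresponding scene points (Kabsch algorithm)."""
    mc, sc = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mc).T @ (scene_pts - sc)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t

# Recover a known pose: 90-degree rotation about z plus a translation.
model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([1.0, 2.0, 3.0])
scene = model @ R_true.T + t_true
R, t = rigid_align(model, scene)
```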
-
Publication number: 20180136470
Abstract: A head-mounted display includes a camera that obtains an image of an object within a field of view. The head-mounted display further includes a processor configured to determine a plurality of feature points from the image and calculate a feature strength for each of the plurality of feature points. The processor is further configured to divide the image into a plurality of cells and select, from each cell, feature points which have the highest feature strength and which have not yet been selected. The processor is further configured to detect and track the object within the field of view using the selected feature points.
Type: Application
Filed: November 16, 2017
Publication date: May 17, 2018
Applicant: Seiko Epson Corporation
Inventors: Alex Levinshtein, Mikhail Brusnitsyn, Andrei Mark Rotenstein
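The per-cell selection described above — divide the image into a grid, keep the strongest point(s) in each cell so features stay spread across the view — can be sketched as follows. The grid dimensions, per-cell quota, and function signature are illustrative assumptions.

```python
def select_per_cell(points, strengths, img_size, grid, per_cell=1):
    """Divide the image into grid cells and keep the strongest feature
    point(s) from each cell, as described in the abstract."""
    (w, h), (cols, rows) = img_size, grid
    cell_w, cell_h = w / cols, h / rows
    cells = {}
    for pt, s in zip(points, strengths):
        key = (int(pt[0] // cell_w), int(pt[1] // cell_h))
        cells.setdefault(key, []).append((s, pt))
    selected = []
    for members in cells.values():
        members.sort(key=lambda m: m[0], reverse=True)   # strongest first
        selected.extend(pt for _, pt in members[:per_cell])
    return selected

points = [(1, 1), (2, 2), (6, 1)]          # the first two share a cell
strengths = [0.5, 0.9, 0.3]
selected = select_per_cell(points, strengths, (8, 4), (2, 1))
```

In the toy run, the weaker point (1, 1) loses to its cell-mate (2, 2), while (6, 1) survives because it is alone in its cell, keeping one feature per region of the image.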
-
Publication number: 20180137651
Abstract: A method includes acquiring, from a camera, an image frame including a representation of an object, and retrieving, from a memory, data containing a first template of a first pose of the object. A processor compares the first template to the image frame. A plurality of candidate locations in the image frame having a correlation with the template exceeding a predetermined threshold is determined. Edge registration is performed on at least one candidate location of the plurality of candidate locations to derive a refined pose of the object. Based at least in part on the performed edge registration, an initial pose of the object is determined, and a display image is output for display on a display device. The position at which the display image is displayed and/or the content of the display image is based at least in part on the determined initial pose of the object.
Type: Application
Filed: November 16, 2017
Publication date: May 17, 2018
Applicant: Seiko Epson Corporation
Inventors: Alex Levinshtein, Qadeer Baig, Andrei Mark Rotenstein, Yan Zhao
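The first stage above — finding candidate locations whose correlation with the template exceeds a threshold — can be sketched with a normalized cross-correlation sweep. This is a textbook stand-in, not the claimed method; the edge-registration refinement step is omitted.

```python
import numpy as np

def candidate_locations(frame, template, threshold):
    """Slide the template over the frame and return every location whose
    normalized cross-correlation exceeds the threshold."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    hits = []
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            if float(np.mean(p * t)) > threshold:
                hits.append((x, y))
    return hits

template = np.array([[0.0, 1.0], [1.0, 0.0]])
frame = np.zeros((5, 5))
frame[1:3, 2:4] = template                 # embed the template at (x=2, y=1)
hits = candidate_locations(frame, template, 0.9)
```

With a high threshold only the exact embedding survives; lowering it yields the plurality of candidates that edge registration would then refine into a pose.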
-
Publication number: 20180108149
Abstract: A computer program causes an object tracking device to realize functions of: acquiring a first image of a scene including an object captured with a camera positioned at a first position; deriving a 3D pose of the object in a second image captured with the camera positioned at a second position using a 3D model corresponding to the object; deriving 3D scene feature points of the scene based at least on the first image and the second image; obtaining a 3D-2D relationship between 3D points represented in a 3D coordinate system of the 3D model and image feature points on the second image; and updating the derived pose using the 3D-2D relationship, wherein the 3D points include the 3D scene feature points and 3D model points on the 3D model.
Type: Application
Filed: October 17, 2017
Publication date: April 19, 2018
Applicant: Seiko Epson Corporation
Inventor: Alex Levinshtein
-
Publication number: 20180096534
Abstract: A method including: acquiring a captured image of an object with a camera; detecting a first pose of the object on the basis of 2D template data and either the captured image at an initial time or the captured image at a time later than the initial time; detecting a second pose of the object corresponding to the captured image at a current time on the basis of the first pose and the captured image at the current time; displaying an AR image in a virtual pose based on the second pose in the case where the accuracy of the second pose at the current time falls in a range between a first criterion and a second criterion; and detecting a third pose of the object on the basis of the captured image at the current time and the 2D template data in the case where the accuracy falls in the range.
Type: Application
Filed: September 20, 2017
Publication date: April 5, 2018
Applicant: Seiko Epson Corporation
Inventors: Irina Kezele, Alex Levinshtein
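The accuracy-based dispatch above — keep tracking when accuracy is high, show the virtual pose and re-detect from templates in the middle band, reinitialize when it drops below the second criterion — can be sketched as a small decision function. The threshold values and return labels are illustrative assumptions, not the claim language.

```python
def next_action(accuracy, first_criterion, second_criterion):
    """Decide how to proceed given the accuracy of the second (tracked) pose.

    - accuracy >= first criterion: tracking is reliable, keep the tracked pose
    - between the criteria: display the AR image in the virtual pose while
      re-detecting a third pose from the 2D template data
    - below the second criterion: tracking is lost, reinitialize detection
    """
    if accuracy >= first_criterion:
        return "track"
    if accuracy >= second_criterion:
        return "display_virtual_and_redetect"
    return "reinitialize"

actions = [next_action(a, 0.8, 0.4) for a in (0.9, 0.6, 0.2)]
```

The middle band is the interesting case: the display never freezes (the virtual pose keeps the AR image up) while re-detection recovers a fresh pose in the background.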
-
Publication number: 20170286750
Abstract: An information processing device which processes information regarding a 3D model corresponding to a target object includes a template creator that creates a template in which feature information and 3D locations are associated with each other, the feature information representing a plurality of 2D locations included in a contour obtained through a projection of the prepared 3D model onto a virtual plane based on a viewpoint, and the 3D locations corresponding to the 2D locations and being represented in a 3D coordinate system, the template being correlated with the viewpoint.
Type: Application
Filed: March 6, 2017
Publication date: October 5, 2017
Applicant: Seiko Epson Corporation
Inventors: Alex Levinshtein, Guoyi Fu
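The association described above — each 2D location in the projected contour stored alongside its originating 3D model point, for one viewpoint — can be sketched with a pinhole projection. Contour extraction is omitted and the function signature is an assumption; this only illustrates the 2D-to-3D pairing the template stores.

```python
import numpy as np

def create_template(model_points, R, t, f):
    """Project 3D model points from a viewpoint (rotation R, translation t)
    with a pinhole camera of focal length f, pairing each 2D location
    with the 3D point it came from."""
    template = []
    for P in model_points:
        X, Y, Z = R @ P + t                # model point in camera coordinates
        u, v = f * X / Z, f * Y / Z        # perspective projection onto the plane
        template.append(((u, v), tuple(P)))
    return template

points = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
tpl = create_template(points, np.eye(3), np.array([0.0, 0.0, 2.0]), 100.0)
```

At detection time, matching an image contour point to a template's 2D location immediately yields the corresponding 3D model point, which is what makes pose estimation from the template possible.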
-
Publication number: 20120040727
Abstract: The present invention relates to a system and method for identifying and tracking gaming objects and game states on a gaming table. Through the use of imaging devices and identity and positioning modules, gaming objects are detected. An identity and positioning module identifies the value and position of cards on the gaming table. An intelligent position analysis and tracking (IPAT) module performs analysis of the identity and position data of cards and interprets them intelligently for the purpose of tracking game events, game states and general game progression. As a game progresses, it changes states. A game tracking module processes data from the IPAT module and keeps track of game events and game states. Ambiguity resolution mechanisms such as backward tracking, forward tracking and multiple state tracking may be used in the process of game tracking. All events on the gaming table are recorded and stored on video and as data for reporting and analysis.
Type: Application
Filed: September 2, 2011
Publication date: February 16, 2012
Applicant: Tangam Technologies Inc.
Inventors: Prem Gururajan, Maulin Gandhi, Jason Robert Jackson, Alex Levinshtein
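The multiple-state-tracking mechanism mentioned above can be sketched as follows: after an ambiguous table event, every consistent game state is kept alive, and later events prune the hypotheses. The transition table, event names, and blackjack-like states are invented for illustration and are not from the patent.

```python
def advance_states(states, event, transitions):
    """Multiple-state tracking: advance every current game-state hypothesis
    through the observed event, keeping all consistent successors."""
    successors = {nxt for s in states
                  for nxt in transitions.get((s, event), ())}
    return successors if successors else states   # unrecognized event: no change

# Toy flow where a face-down card is ambiguous between a deal and a hit
# until the following event resolves it.
transitions = {
    ("betting", "card_down"): ["dealing", "player_hit"],
    ("dealing", "hand_wave"): ["player_stand"],
    ("player_hit", "hand_wave"): ["player_stand"],
}
states = {"betting"}
states = advance_states(states, "card_down", transitions)   # ambiguous event
ambiguous = sorted(states)
states = advance_states(states, "hand_wave", transitions)   # later event resolves
resolved = sorted(states)
```

Backward tracking works the same way in reverse: once a later event rules out a branch, the earlier ambiguous event can be reinterpreted retroactively.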
-
Patent number: 8016665
Abstract: The present invention relates to a system and method for identifying and tracking gaming objects and game states on a gaming table. Through the use of imaging devices and identity and positioning modules, gaming objects are detected. An identity and positioning module identifies the value and position of cards on the gaming table. An intelligent position analysis and tracking (IPAT) module performs analysis of the identity and position data of cards and interprets them intelligently for the purpose of tracking game events, game states and general game progression. As a game progresses, it changes states. A game tracking module processes data from the IPAT module and keeps track of game events and game states. Ambiguity resolution mechanisms such as backward tracking, forward tracking and multiple state tracking may be used in the process of game tracking. All events on the gaming table are recorded and stored on video and as data for reporting and analysis.
Type: Grant
Filed: March 21, 2006
Date of Patent: September 13, 2011
Assignee: Tangam Technologies Inc.
Inventors: Prem Gururajan, Maulin Gandhi, Jason Jackson, Alex Levinshtein
-
Publication number: 20060252521
Abstract: The present invention relates to a system and method for identifying and tracking gaming objects and game states on a gaming table. Through the use of imaging devices and identity and positioning modules, gaming objects are detected. An identity and positioning module identifies the value and position of cards on the gaming table. An intelligent position analysis and tracking (IPAT) module performs analysis of the identity and position data of cards and interprets them intelligently for the purpose of tracking game events, game states and general game progression. As a game progresses, it changes states. A game tracking module processes data from the IPAT module and keeps track of game events and game states. Ambiguity resolution mechanisms such as backward tracking, forward tracking and multiple state tracking may be used in the process of game tracking. All events on the gaming table are recorded and stored on video and as data for reporting and analysis.
Type: Application
Filed: March 21, 2006
Publication date: November 9, 2006
Applicant: Tangam Technologies Inc.
Inventors: Prem Gururajan, Maulin Gandhi, Jason Jackson, Alex Levinshtein
-
Publication number: 20060252554
Abstract: The present invention relates to a system and method for identifying and tracking gaming objects and game states on a gaming table. Through the use of imaging devices and identity and positioning modules, gaming objects are detected. An identity and positioning module identifies the value and position of cards on the gaming table. An intelligent position analysis and tracking (IPAT) module performs analysis of the identity and position data of cards and interprets them intelligently for the purpose of tracking game events, game states and general game progression. As a game progresses, it changes states. A game tracking module processes data from the IPAT module and keeps track of game events and game states. Ambiguity resolution mechanisms such as backward tracking, forward tracking and multiple state tracking may be used in the process of game tracking. All events on the gaming table are recorded and stored on video and as data for reporting and analysis.
Type: Application
Filed: March 21, 2006
Publication date: November 9, 2006
Applicant: Tangam Technologies Inc.
Inventors: Prem Gururajan, Maulin Gandhi, Jason Jackson, Alex Levinshtein