Patents by Inventor Yichen Wei
Yichen Wei has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20140185924
Abstract: A two-level boosted regression function is learned using shape-indexed image features and correlation-based feature selection. The regression function is learned by explicitly minimizing the alignment errors over the training data. Image features are indexed based on a previous shape estimate, and features are selected based on correlation to a random projection. The learned regression function enforces a non-parametric shape constraint.
Type: Application
Filed: December 27, 2012
Publication date: July 3, 2014
Applicant: Microsoft Corporation
Inventors: Xudong Cao, Yichen Wei, Fang Wen, Jian Sun
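The cascade this abstract describes can be illustrated with a toy sketch: pixel features are sampled at offsets relative to the current shape estimate, and each stage regressor maps those features to a shape increment. Everything below (function names, the raw-intensity features, the regressor interface) is a hypothetical simplification for illustration, not the patented two-level boosted regressor.

```python
import numpy as np

def shape_indexed_features(image, shape, offsets):
    """Sample pixel intensities at offsets relative to the current
    landmark estimates (a toy stand-in for shape-indexed features)."""
    coords = (shape[:, None, :] + offsets[None, :, :]).reshape(-1, 2)
    coords = np.clip(coords, 0, np.array(image.shape[::-1]) - 1).astype(int)
    return image[coords[:, 1], coords[:, 0]].astype(float)

def cascaded_alignment(image, initial_shape, regressors, offsets):
    """Each stage regresses a shape increment from features indexed by
    the previous estimate: S_t = S_{t-1} + R_t(phi(I, S_{t-1}))."""
    shape = initial_shape.copy()
    for R in regressors:  # R maps a feature vector to an (n_landmarks, 2) delta
        phi = shape_indexed_features(image, shape, phi_offsets := offsets)
        shape = shape + R(phi)
    return shape
```

Because the features are re-indexed to the updated shape at every stage, later regressors see progressively better-aligned local appearance, which is the core idea the abstract sketches.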
-
Publication number: 20140140610
Abstract: Techniques for unsupervised object class discovery via bottom-up multiple class learning are described. These techniques may include receiving multiple images containing one or more object classes. The multiple images may be analyzed to extract top saliency instances and least saliency instances. These saliency instances may be clustered to generate and/or update statistical models. The statistical models may be used to discover the one or more object classes. In some instances, the statistical models may be used to discover object classes of novel images.
Type: Application
Filed: November 19, 2012
Publication date: May 22, 2014
Applicant: Microsoft Corporation
Inventors: Zhuowen Tu, Yichen Wei, Eric I-Chao Chang, Junyan Zhu, Jiajun Wu
-
Patent number: 8687880
Abstract: Methods are provided for generating a low-dimension pose space and using the pose space to estimate one or more head rotation angles of a user head. In one example, training image frames including a test subject head are captured under a plurality of conditions. For each frame, an actual head rotation angle about a rotation axis is recorded. In each frame, a face image is detected and converted to a local binary pattern (LBP) feature vector. Using principal component analysis (PCA), a PCA feature vector is generated. Pose classes related to rotation angles about a rotation axis are defined. The PCA feature vectors are grouped into a pose class that corresponds to the actual rotation angle associated with the PCA feature vector. Linear discriminant analysis is applied to the pose classes to generate the low-dimension pose space.
Type: Grant
Filed: March 20, 2012
Date of Patent: April 1, 2014
Assignee: Microsoft Corporation
Inventors: Yichen Wei, Fang Wen, Jian Sun, Tommer Leyvand, Jinyu Li, Casey Meekhof, Tim Keosababian
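The feature → PCA → class grouping → LDA pipeline in this abstract can be sketched with plain linear algebra: project feature vectors with PCA, group the projections by pose class, and fit Fisher LDA on the classes to obtain the low-dimension pose space. This is the generic textbook construction with hypothetical function names, not the patented method; LBP extraction itself is omitted.

```python
import numpy as np

def pca_fit(X, n_components):
    """Principal component analysis via SVD on mean-centered data."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components].T            # (d, n_components) basis

def lda_fit(Z, labels, n_components):
    """Fisher LDA: maximize between-class over within-class scatter."""
    overall = Z.mean(axis=0)
    d = Z.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Zc = Z[labels == c]
        mu = Zc.mean(axis=0)
        Sw += (Zc - mu).T @ (Zc - mu)
        diff = (mu - overall)[:, None]
        Sb += len(Zc) * (diff @ diff.T)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs.real[:, order[:n_components]]   # LDA projection matrix

def build_pose_space(features, pose_labels, n_pca, n_lda):
    """Feature vectors -> PCA -> pose-class grouping -> LDA subspace."""
    mean, W_pca = pca_fit(features, n_pca)
    Z = (features - mean) @ W_pca
    W_lda = lda_fit(Z, pose_labels, n_lda)
    return mean, W_pca, W_lda
```

At run time, a new face's feature vector would be centered, projected through `W_pca` and `W_lda`, and compared against the pose classes in the resulting low-dimension space.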
-
Publication number: 20130300653
Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
Type: Application
Filed: July 12, 2013
Publication date: November 14, 2013
Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez
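Once a gaze vector has been determined for each eye, a 3D point of gaze can be fused from the two rays. One standard way to do this (a generic geometric construction, not necessarily the patent's method) is the midpoint of the shortest segment between the two gaze rays:

```python
import numpy as np

def point_of_gaze(origin_l, dir_l, origin_r, dir_r):
    """Midpoint of the shortest segment between two gaze rays,
    a common way to fuse per-eye gaze vectors into one 3D point."""
    d1 = dir_l / np.linalg.norm(dir_l)
    d2 = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # near-parallel rays: fall back
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    p1 = origin_l + s * d1          # closest point on the left ray
    p2 = origin_r + t * d2          # closest point on the right ray
    return 0.5 * (p1 + p2)
```

Because the two measured rays rarely intersect exactly, the midpoint construction gives a well-defined point even under noise, which suits the "point of gaze in a three-dimensional field of view" described above.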
-
Publication number: 20130286178
Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
Type: Application
Filed: March 15, 2013
Publication date: October 31, 2013
Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez
-
Publication number: 20130251244
Abstract: Methods are provided for generating a low-dimension pose space and using the pose space to estimate one or more head rotation angles of a user head. In one example, training image frames including a test subject head are captured under a plurality of conditions. For each frame, an actual head rotation angle about a rotation axis is recorded. In each frame, a face image is detected and converted to a local binary pattern (LBP) feature vector. Using principal component analysis (PCA), a PCA feature vector is generated. Pose classes related to rotation angles about a rotation axis are defined. The PCA feature vectors are grouped into a pose class that corresponds to the actual rotation angle associated with the PCA feature vector. Linear discriminant analysis is applied to the pose classes to generate the low-dimension pose space.
Type: Application
Filed: March 20, 2012
Publication date: September 26, 2013
Applicant: Microsoft Corporation
Inventors: Yichen Wei, Fang Wen, Jian Sun, Tommer Leyvand, Jinyu Li, Casey Meekhof, Tim Keosababian
-
Patent number: 8487838
Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
Type: Grant
Filed: August 30, 2011
Date of Patent: July 16, 2013
Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez
-
Publication number: 20130154918
Abstract: Systems, methods, and computer media for estimating user eye gaze are provided. A plurality of images of a user's eye are acquired. At least one image of at least part of the user's field of view is acquired. At least one gaze target area in the user's field of view is determined based on the plurality of images of the user's eye. An enhanced user eye gaze is then estimated by narrowing a database of eye information and corresponding known gaze lines to a subset of the eye information having gaze lines corresponding to a gaze target area. User eye information derived from the images of the user's eye is then compared with the narrowed subset of the eye information, and an enhanced estimated user eye gaze is identified as the known gaze line of a matching eye image.
Type: Application
Filed: December 20, 2011
Publication date: June 20, 2013
Inventors: Benjamin Isaac Vaught, Robert L. Crocco, Jr., John Lewis, Jian Sun, Yichen Wei
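The narrow-then-match step this abstract describes amounts to filtering the database by gaze target area before doing a nearest-neighbour lookup. The sketch below uses toy feature vectors and a single area label per entry; the data layout and function name are assumptions for illustration, not the patented representation.

```python
import numpy as np

def estimate_gaze(eye_vec, db_eyes, db_gazes, db_areas, target_area):
    """Restrict the database to entries whose known gaze line falls in
    the detected target area, then return the gaze of the nearest
    stored eye vector (a toy sketch of the matching step)."""
    mask = db_areas == target_area            # narrow to the target area
    candidates = db_eyes[mask]
    dists = np.linalg.norm(candidates - eye_vec, axis=1)
    return db_gazes[mask][np.argmin(dists)]   # gaze line of best match
```

Narrowing first means the nearest-neighbour search only has to discriminate among eye images that already point roughly the right way, which is what makes the resulting estimate "enhanced" relative to a search over the full database.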
-
Publication number: 20130050642
Abstract: The technology provides for automatic alignment of a see-through, near-eye, mixed reality device with an inter-pupillary distance (IPD). A determination is made as to whether a see-through, near-eye, mixed reality display device is aligned with an IPD of a user. If the display device is not aligned with the IPD, the display device is automatically adjusted. In some examples, the alignment determination is based on determinations of whether an optical axis of each display optical system positioned to be seen through by a respective eye is aligned with a pupil of the respective eye in accordance with an alignment criteria. The pupil alignment may be determined based on an arrangement of gaze detection elements for each display optical system including at least one sensor for capturing data of the respective eye and the captured data. The captured data may be image data, image and glint data, and glint data only.
Type: Application
Filed: August 30, 2011
Publication date: February 28, 2013
Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Kathryn Stone Perez, Alex Aben-Athar Kipman
-
Publication number: 20130050070
Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
Type: Application
Filed: August 30, 2011
Publication date: February 28, 2013
Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez
-
Publication number: 20120294476
Abstract: A computing device configured to determine, for each of a plurality of locations in an image, a saliency measure based at least on a cost of composing parts of the image in the location from parts of the image outside of the location is described herein. The computing device is further configured to select one or more of the locations as representing salient objects of the image based at least on the saliency measures.
Type: Application
Filed: May 16, 2011
Publication date: November 22, 2012
Applicant: Microsoft Corporation
Inventors: Yichen Wei, Jie Feng, Litian Tao, Jian Sun
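The intuition behind the composition cost is that a salient region is hard to reconstruct from the rest of the image. A toy stand-in for that cost: represent the image as patch descriptors and score a region by the mean distance from each inside patch to its cheapest outside "source" patch. The descriptors and the specific cost below are illustrative assumptions, not the patent's actual measure.

```python
import numpy as np

def composition_saliency(patches, inside_idx):
    """Saliency of a region as the cost of composing its patches from
    patches outside the region: the mean nearest-neighbour distance
    from each inside patch to the outside patches."""
    inside = patches[inside_idx]
    outside = np.delete(patches, inside_idx, axis=0)
    # pairwise distances inside -> outside; cheapest source per patch
    d = np.linalg.norm(inside[:, None, :] - outside[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

A region whose appearance repeats elsewhere in the image scores low (it composes cheaply), while a visually distinct object scores high, matching the selection criterion in the abstract.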
-
Publication number: 20120165097
Abstract: A video game system (or other data processing system) can visually identify a person entering a field of view of the system and determine whether the person has been previously interacting with the system. In one embodiment, the system establishes thresholds, enrolls players, and runs the video game (or other application), interacting with a subset of the players based on the enrollment. When the system determines that a person has become detectable in its field of view, it automatically determines whether the person is one of the enrolled players. If so, it maps the person to an enrolled player and interacts with the person based on that mapping; if not, it assigns a new identification to the person and interacts with the person based on the new identification.
Type: Application
Filed: March 2, 2012
Publication date: June 28, 2012
Applicant: Microsoft Corporation
Inventors: Tommer Leyvand, Mitchell Stephen Dernis, Jinyu Li, Yichen Wei, Jian Sun, Casey Leon Meekhof, Timothy Milton Keosababian
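The map-or-enroll decision in this abstract is essentially threshold-gated nearest-neighbour matching against stored player signatures. The sketch below uses generic feature vectors and Euclidean distance; the signature representation, distance, and threshold are all hypothetical choices for illustration.

```python
import numpy as np

def identify_person(feature, enrolled, threshold, next_id):
    """Map a newly detected person to an enrolled player if their visual
    feature is close enough to a stored signature; otherwise enroll
    them under a fresh identity. Returns (player_id, is_new)."""
    if enrolled:
        ids = list(enrolled)
        dists = [np.linalg.norm(feature - enrolled[i]) for i in ids]
        best = int(np.argmin(dists))
        if dists[best] <= threshold:     # within threshold: known player
            return ids[best], False
    enrolled[next_id] = feature          # otherwise: assign new identity
    return next_id, True
```

The threshold trades false merges (two people mapped to one identity) against spurious new enrollments, which is presumably why the abstract lists "establishes thresholds" as its own step.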
-
Publication number: 20110190055
Abstract: A video game system (or other data processing system) can visually identify a person entering a field of view of the system and determine whether the person has been previously interacting with the system. In one embodiment, the system establishes thresholds, enrolls players, and runs the video game (or other application), interacting with a subset of the players based on the enrollment. When the system determines that a person has become detectable in its field of view, it automatically determines whether the person is one of the enrolled players. If so, it maps the person to an enrolled player and interacts with the person based on that mapping; if not, it assigns a new identification to the person and interacts with the person based on the new identification.
Type: Application
Filed: January 29, 2010
Publication date: August 4, 2011
Applicant: Microsoft Corporation
Inventors: Tommer Leyvand, Mitchell Stephen Dernis, Jinyu Li, Yichen Wei, Jian Sun, Casey Leon Meekhof, Timothy Milton Keosababian
-
Publication number: 20100164986
Abstract: Described is a technology in which a collage of digital photographs is dynamically computed and rendered so as to vary the photographs that are visible in the collage over time. A dynamic collage mechanism coupled to a source of photographs computes a collage for visible output, and dynamically updates the collage on a scheduled basis by adding different photographs in place of others. Each arrangement of the photographs in each updated collage is computed from a previous collage. Also described is layout optimization in which the photographs in the updated collage are translated, rotated, and/or layered so as to cover a maximum amount of the overall area of the collage.
Type: Application
Filed: December 29, 2008
Publication date: July 1, 2010
Applicant: Microsoft Corporation
Inventors: Yichen Wei, Yasuyuki Matsushita
-
Publication number: 20100165123
Abstract: Described is a technology in which existing motion information (e.g., obtained from professional-quality videos) is used to correct the motion information of an input video, such as to stabilize the input video. The input video is processed into an original motion chain, which is segmented into original segments. Candidate segments are found for each original segment, and one candidate segment is matched (based on matching criteria or the like) to each original segment. The matched candidates are stitched together to form a changed motion chain that, via image warping, changes the motion in the output video. Also described is building the data store by processing reference videos into motion information; different data stores may be built based upon styles of reference videos that match a particular style of motion (e.g., action video, scenic video) for the data store.
Type: Application
Filed: December 29, 2008
Publication date: July 1, 2010
Applicant: Microsoft Corporation
Inventors: Yichen Wei, Yasuyuki Matsushita
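The segment-and-match step in this abstract can be sketched as follows: split the input motion chain into fixed-length segments, replace each with the closest segment from a reference data store, and concatenate the matches into a corrected chain. The fixed segment length, Euclidean matching cost, and plain concatenation are simplifying assumptions; the patent's segmentation, matching criteria, and stitching are more involved, and image warping is omitted entirely.

```python
import numpy as np

def restitch_motion(original, reference_segments, seg_len):
    """Replace each fixed-length segment of the input motion chain with
    its nearest reference segment, concatenating the matches into a
    changed motion chain (toy sketch of the matching/stitching step)."""
    out = []
    for start in range(0, len(original) - seg_len + 1, seg_len):
        seg = original[start:start + seg_len]
        costs = [np.linalg.norm(seg - ref) for ref in reference_segments]
        out.append(reference_segments[int(np.argmin(costs))])
    return np.concatenate(out)
```

With a data store built from smooth, professionally shot footage, each shaky input segment snaps to a smooth reference segment, which is the stabilization effect the abstract describes.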