Patents by Inventor Zicheng Liu
Zicheng Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9065976
Abstract: Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting. The system may automatically re-orient the 3-dimensional representation as needed to best show a currently interesting event.
Type: Grant
Filed: July 9, 2013
Date of Patent: June 23, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Rajesh K. Hegde, Zhengyou Zhang, Philip A. Chou, Cha Zhang, Zicheng Liu, Sasa Junuzovic
-
Patent number: 9055607
Abstract: Multi-modal, multi-lingual devices can be employed to consolidate numerous items including, but not limited to, keys, remote controls, image capture devices, audio recorders, cellular telephone functionalities, location/direction detectors, health monitors, calendars, gaming devices, smart home inputs, pens, optical pointing devices, or the like. For example, a corner of a cellular telephone can be used as an electronic pen. Moreover, the device can be used to snap multiple pictures, stitching them together to create a panoramic image. A device can automate ignition of an automobile, initiate appliances, etc., based upon relative distance. The device can provide near-to-eye capabilities for enhanced image viewing. Multiple cameras/sensors can be provided on a single device to enable stereoscopic capabilities. The device can also provide assistance to the blind, privacy features, etc., by consolidating services.
Type: Grant
Filed: November 26, 2008
Date of Patent: June 9, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael J. Sinclair, Yuan Kong, Zhengyou Zhang, Behrooz Chitsaz, David W. Williams, Silviu-Petru Cucerzan, Zicheng Liu
-
Publication number: 20150124046
Abstract: A system facilitates managing one or more devices utilized for communicating data within a telepresence session. A telepresence session can be initiated within a communication framework that includes a first user and one or more second users. In response to determining a temporary absence of the first user from the telepresence session, a recordation of the telepresence session is initialized to enable a playback of a portion or a summary of the telepresence session that the first user has missed.
Type: Application
Filed: December 16, 2014
Publication date: May 7, 2015
Inventors: Christian Huitema, William A.S. Buxton, Jonathan E. Paff, Zicheng Liu, Rajesh Kutpadi Hegde, Zhengyou Zhang, Kori Marie Quinn, Jin Li, Michel Pahud
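As an illustrative sketch only (not the patent's implementation), the record-while-absent behavior this abstract describes could be modeled as a small state machine; the class and method names here are hypothetical:

```python
class TelepresenceSession:
    """Minimal sketch: when a participant is detected absent, start
    recording events; on return, offer playback of the missed portion."""

    def __init__(self):
        self.recording = []
        self.absent = set()

    def on_absence(self, user):
        self.absent.add(user)

    def on_event(self, event):
        if self.absent:               # record only while someone is away
            self.recording.append(event)

    def on_return(self, user):
        self.absent.discard(user)
        missed = list(self.recording)
        if not self.absent:           # everyone is back; reset the buffer
            self.recording.clear()
        return missed                 # playback of what the user missed
```

A real system would record media streams and could also summarize the missed portion rather than replay it verbatim.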
-
Patent number: 9014420
Abstract: Described is an action model (classifier) for automatically detecting actions in video clips, in which unlabeled data of a target dataset is used to adaptively train the action model based upon similar actions in a labeled source dataset. The target dataset, comprising unlabeled video data, is processed into a background model. The action model is generated from the background model using a source dataset comprising labeled data for an action of interest. The action model is iteratively refined, generally by fixing a current instance of the action model and using it to search for a set of detected regions (subvolumes), then fixing the set of subvolumes and updating the current instance of the action model based upon that set, and so on, for a plurality of iterations.
Type: Grant
Filed: June 14, 2010
Date of Patent: April 21, 2015
Assignee: Microsoft Corporation
Inventors: Zicheng Liu, Liangliang Cao
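The alternating refinement loop the abstract describes (fix the model, search for subvolumes; fix the subvolumes, update the model) can be sketched generically. This is a hedged illustration: `search_fn` and `update_fn` are hypothetical placeholders for the patent's detection and training steps:

```python
def refine_action_model(model, target_video, iterations, search_fn, update_fn):
    """Alternating optimization: with the model fixed, detect subvolumes;
    with the subvolumes fixed, update the model. Repeat for a fixed
    number of iterations (or until convergence)."""
    for _ in range(iterations):
        subvolumes = search_fn(model, target_video)   # model fixed
        model = update_fn(model, subvolumes)          # subvolumes fixed
    return model
```

With a toy "model" that is just a detection threshold re-centered on what it detects, the loop converges after a few iterations:

```python
search_fn = lambda m, video: [v for v in video if v >= m]
update_fn = lambda m, subs: sum(subs) / len(subs)
refine_action_model(0, [1, 2, 3, 10, 11, 12], 5, search_fn, update_fn)  # → 12.0
```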
-
Patent number: 9001118
Abstract: A method for constructing an avatar of a human subject includes acquiring a depth map of the subject, obtaining a virtual skeleton of the subject based on the depth map, and harvesting from the virtual skeleton a set of characteristic metrics. Such metrics correspond to distances between predetermined points of the virtual skeleton. In this example method, the characteristic metrics are provided as input to an algorithm trained using machine learning. The algorithm may be trained using a human model in a range of poses, and a range of human models in a single pose, to output a virtual body mesh as a function of the characteristic metrics. The method also includes constructing a virtual head mesh distinct from the virtual body mesh, with facial features resembling those of the subject, and connecting the virtual body mesh to the virtual head mesh.
Type: Grant
Filed: August 14, 2012
Date of Patent: April 7, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Molyneaux, Xin Tong, Zicheng Liu, Eric Chang, Fan Yang, Jay Kapur, Emily Yang, Yang Liu, Hsiang-Tao Wu
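The "characteristic metrics" step (distances between predetermined skeleton points) is straightforward to sketch. This is an illustrative stand-in, with hypothetical joint names; the actual joint pairs and the trained mesh-regression algorithm are specified by the patent, not reproduced here:

```python
import math

def characteristic_metrics(joints, pairs):
    """Compute the metric vector: Euclidean distances between
    predetermined pairs of virtual-skeleton joints.
    `joints` maps joint names to (x, y, z); `pairs` lists the
    predetermined point pairs. The result would be fed to the
    machine-learned regressor that outputs the body mesh."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return [dist(joints[a], joints[b]) for a, b in pairs]
```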
-
Patent number: 8943526
Abstract: Technologies described herein relate to estimating engagement of a person with respect to content being presented to the person. A sensor outputs a stream of data relating to the person as the person is consuming the content. At least one feature is extracted from the stream of data, and a level of engagement of the person is estimated based at least in part upon the at least one feature. A computing function is performed based upon the estimated level of engagement of the person.
Type: Grant
Filed: April 19, 2013
Date of Patent: January 27, 2015
Assignee: Microsoft Corporation
Inventors: Javier Hernandez Rivera, Zicheng Liu, Geoff Hulten, Michael Conrad, Kyle Krum, David DeBarr, Zhengyou Zhang
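The pipeline the abstract outlines (sensor stream, feature extraction, engagement estimate) can be illustrated with a deliberately toy feature. The gaze-fraction feature and the threshold below are assumptions for illustration, not the patent's features or classifier:

```python
def estimate_engagement(frames, threshold=0.5):
    """Toy engagement estimator. The single extracted feature is the
    fraction of sensor frames in which the person's gaze is on the
    content; the estimated level is a coarse label derived from it."""
    if not frames:
        return "unknown"
    looking = sum(1 for f in frames if f["gaze_on_content"])
    feature = looking / len(frames)          # feature extraction
    return "engaged" if feature >= threshold else "disengaged"
```

A real system would extract richer features (posture, facial expression, audio) and feed them to a trained model rather than a fixed threshold.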
-
Patent number: 8941710
Abstract: A system facilitates managing one or more devices utilized for communicating data within a telepresence session. A telepresence session can be initiated within a communication framework that includes a first user and one or more second users. In response to determining a temporary absence of the first user from the telepresence session, a recordation of the telepresence session is initialized to enable a playback of a portion or a summary of the telepresence session that the first user has missed.
Type: Grant
Filed: August 13, 2012
Date of Patent: January 27, 2015
Assignee: Microsoft Corporation
Inventors: Christian Huitema, William A. S. Buxton, Jonathan E. Paff, Zicheng Liu, Rajesh Kutpadi Hegde, Zhengyou Zhang, Kori Marie Quinn, Jin Li, Michel Pahud
-
Patent number: 8929600
Abstract: A plurality of depth maps corresponding to respective depth measurements determined over a respective plurality of time frames may be obtained. A plurality of skeleton representations respectively corresponding to the respective time frames may be obtained. Each skeleton representation may include joints associated with an observed entity. Local feature descriptors corresponding to the respective time frames may be determined, based on the depth maps and the joints associated with the skeleton representations. An activity recognition associated with the observed entity may be determined, based on the obtained skeleton representations and the determined local feature descriptors.
Type: Grant
Filed: December 19, 2012
Date of Patent: January 6, 2015
Assignee: Microsoft Corporation
Inventors: Zicheng Liu, Jiang Wang
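The per-frame "local feature descriptor based on the depth map and the joints" step can be sketched as sampling the depth map around each joint's pixel location. This is a simplified stand-in for the patent's descriptors, written with plain nested lists for clarity:

```python
def frame_descriptor(depth_map, joints, patch=1):
    """For each joint (x, y) pixel location, average the depth values
    in a (2*patch+1)-square window around it; the concatenation over
    joints is the frame's local feature descriptor."""
    height, width = len(depth_map), len(depth_map[0])
    desc = []
    for (x, y) in joints:
        vals = [depth_map[j][i]
                for j in range(max(0, y - patch), min(height, y + patch + 1))
                for i in range(max(0, x - patch), min(width, x + patch + 1))]
        desc.append(sum(vals) / len(vals))
    return desc
```

Descriptors computed per time frame, together with the skeleton representations themselves, would then feed a sequence classifier for activity recognition.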
-
Publication number: 20140184803
Abstract: A technique for multi-camera object tracking is disclosed that preserves privacy of imagery from each camera or group of cameras. This technique uses secure multi-party computation to compute a distance metric across data from multiple cameras without revealing any information to operators of the cameras except whether or not an object was observed by both cameras. This is achieved by a distance metric learning technique that reduces the computing complexity of secure computation while maintaining object identification accuracy.
Type: Application
Filed: December 31, 2012
Publication date: July 3, 2014
Applicant: MICROSOFT CORPORATION
Inventors: Chun-Te Chu, Jaeyeon Jung, Zicheng Liu, Ratul Mahajan
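To illustrate why learned distance metrics help here (the cryptographic protocol itself is omitted): a learned low-rank projection shortens the vectors the parties must compare securely. The matrix `L` and the functions below are assumptions for illustration, not the publication's actual metric or protocol:

```python
def learned_metric_distance(x, y, L):
    """Distance under a learned metric: d(x, y) = ||L·x - L·y||².
    If L is low-rank, both parties compare short projected vectors,
    which is what reduces the cost of the secure computation.
    (In the real technique this comparison runs under secure
    multi-party computation, revealing only a match/no-match bit.)"""
    px = [sum(l * xi for l, xi in zip(row, x)) for row in L]
    py = [sum(l * yi for l, yi in zip(row, y)) for row in L]
    return sum((a - b) ** 2 for a, b in zip(px, py))
```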
-
Publication number: 20140169623
Abstract: A plurality of depth maps corresponding to respective depth measurements determined over a respective plurality of time frames may be obtained. A plurality of skeleton representations respectively corresponding to the respective time frames may be obtained. Each skeleton representation may include joints associated with an observed entity. Local feature descriptors corresponding to the respective time frames may be determined, based on the depth maps and the joints associated with the skeleton representations. An activity recognition associated with the observed entity may be determined, based on the obtained skeleton representations and the determined local feature descriptors.
Type: Application
Filed: December 19, 2012
Publication date: June 19, 2014
Applicant: MICROSOFT CORPORATION
Inventors: Zicheng Liu, Jiang Wang
-
Publication number: 20140160123
Abstract: Described herein are technologies pertaining to generating a relatively accurate virtual three-dimensional model of a head/face of a user. Depth frames are received from a depth sensor and color frames are received from a camera, wherein such frames capture a head of a user. Based upon the depth frames and the color frames, the three-dimensional model of the head of the user is generated.
Type: Application
Filed: December 12, 2012
Publication date: June 12, 2014
Applicant: MICROSOFT CORPORATION
Inventors: Fan Yang, Minmin Gong, Zicheng Liu, Xin Tong
-
Publication number: 20140111401
Abstract: A pendant for use with a mobile terminal, including: a pendant body containing an antenna; and a pendant cord connected with the pendant body, the pendant cord enclosing a cable connected to the antenna for transmitting and receiving radio frequency signals.
Type: Application
Filed: December 23, 2013
Publication date: April 24, 2014
Applicant: Xiaomi Inc.
Inventors: Lei Zhang, Zicheng Liu, Zhu Mao
-
Patent number: 8675067
Abstract: The subject disclosure is directed towards an immersive conference, in which participants in separate locations are brought together into a common virtual environment (scene), such that they appear to each other to be in a common space, with geometry, appearance, and real-time natural interaction (e.g., gestures) preserved. In one aspect, depth data and video data are processed to place remote participants in the common scene from the first person point of view of a local participant. Sound data may be spatially controlled, and parallax computed to provide a realistic experience. The scene may be augmented with various data, videos and other effects/animations.
Type: Grant
Filed: May 4, 2011
Date of Patent: March 18, 2014
Assignee: Microsoft Corporation
Inventors: Philip A. Chou, Zhengyou Zhang, Cha Zhang, Dinei A. Florencio, Zicheng Liu, Rajesh K. Hegde, Nirupama Chandrasekaran
-
Patent number: 8639042
Abstract: Described is a hierarchical filtered motion field technology, such as for use in recognizing actions in videos with crowded backgrounds. Interest points are detected, e.g., as 2D Harris corners with recent motion, i.e., locations with high intensities in a motion history image (MHI). A global spatial motion smoothing filter is applied to the gradients of the MHI to eliminate low-intensity corners that are likely isolated, unreliable, or noisy motions. At each remaining interest point, a local motion field filter is applied to the smoothed gradients by computing a structure proximity between sets of pixels in the local region and the interest point. The motion at a pixel/pixel set is enhanced or weakened based on its structure proximity with the interest point (nearer pixels are enhanced).
Type: Grant
Filed: June 22, 2010
Date of Patent: January 28, 2014
Assignee: Microsoft Corporation
Inventors: Zicheng Liu, Yingli Tian, Liangliang Cao, Zhengyou Zhang
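The motion history image (MHI) that the interest-point detection relies on has a standard update rule: pixels with current motion are set to a maximum intensity, and all others decay toward zero, so recent motion shows up as high-intensity locations. A minimal sketch (the filtering stages of the patent are not reproduced here):

```python
def update_mhi(mhi, motion_mask, tau=255, delta=1):
    """One MHI timestep: set moving pixels to tau, decay the rest by
    delta (clamped at 0). High values therefore mark recent motion."""
    return [[tau if moving else max(0, v - delta)
             for v, moving in zip(row, mask_row)]
            for row, mask_row in zip(mhi, motion_mask)]
```

In practice a library routine such as OpenCV's motion-template support does this per-frame update on full images; the patent then takes gradients of the MHI and filters them globally and locally as described above.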
-
Publication number: 20140009562
Abstract: Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting. The system may automatically re-orient the 3-dimensional representation as needed to best show a currently interesting event.
Type: Application
Filed: July 9, 2013
Publication date: January 9, 2014
Inventors: Rajesh K. Hegde, Zhengyou Zhang, Philip A. Chou, Cha Zhang, Zicheng Liu, Sasa Junuzovic
-
Publication number: 20130342527
Abstract: A method for constructing an avatar of a human subject includes acquiring a depth map of the subject, obtaining a virtual skeleton of the subject based on the depth map, and harvesting from the virtual skeleton a set of characteristic metrics. Such metrics correspond to distances between predetermined points of the virtual skeleton. In this example method, the characteristic metrics are provided as input to an algorithm trained using machine learning. The algorithm may be trained using a human model in a range of poses, and a range of human models in a single pose, to output a virtual body mesh as a function of the characteristic metrics. The method also includes constructing a virtual head mesh distinct from the virtual body mesh, with facial features resembling those of the subject, and connecting the virtual body mesh to the virtual head mesh.
Type: Application
Filed: August 14, 2012
Publication date: December 26, 2013
Applicant: MICROSOFT CORPORATION
Inventors: David Molyneaux, Xin Tong, Zicheng Liu, Eric Chang, Fan Yang, Jay Kapur, Emily Yang, Yang Liu, Hsiang-Tao Wu
-
Patent number: 8537196
Abstract: Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting.
Type: Grant
Filed: October 6, 2008
Date of Patent: September 17, 2013
Assignee: Microsoft Corporation
Inventors: Rajesh K. Hegde, Zhengyou Zhang, Philip A. Chou, Cha Zhang, Zicheng Liu, Sasa Junuzovic
-
Publication number: 20130232515
Abstract: Technologies described herein relate to estimating engagement of a person with respect to content being presented to the person. A sensor outputs a stream of data relating to the person as the person is consuming the content. At least one feature is extracted from the stream of data, and a level of engagement of the person is estimated based at least in part upon the at least one feature. A computing function is performed based upon the estimated level of engagement of the person.
Type: Application
Filed: April 19, 2013
Publication date: September 5, 2013
Applicant: Microsoft Corporation
Inventors: Javier Hernandez Rivera, Zicheng Liu, Geoff Hulten, Michael Conrad, Kyle Krum, David DeBarr, Zhengyou Zhang
-
Publication number: 20130201291
Abstract: Head pose tracking technique embodiments are presented that use a group of sensors configured to be disposed on a user's head. This group includes a depth sensor apparatus used to identify the three-dimensional locations of features within a scene, and at least one other type of sensor. Data output by each sensor in the group is periodically input; each time, it is used to compute a transformation matrix that, when applied to the previously determined head pose location and orientation (established when the first sensor data was input), identifies the current head pose location and orientation.
Type: Application
Filed: February 8, 2012
Publication date: August 8, 2013
Applicant: MICROSOFT CORPORATION
Inventors: Zicheng Liu, Zhengyou Zhang, Zhenning Li
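The core update the abstract describes is a matrix composition: each new transformation is applied to the previously determined pose to get the current one. A minimal sketch with homogeneous 4x4 matrices (plain-Python multiply for self-containment; how the transformation itself is estimated from the sensors is not shown):

```python
def apply_transform(T, pose):
    """current_pose = T @ previous_pose, where both are 4x4 homogeneous
    matrices encoding head location (translation) and orientation
    (rotation)."""
    return [[sum(T[i][k] * pose[k][j] for k in range(4))
             for j in range(4)] for i in range(4)]
```

For example, applying a pure translation by (1, 2, 3) to an identity initial pose yields a pose whose last column holds that translation.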
-
Patent number: 8416302
Abstract: In some implementations, invisible light is emitted toward a subject being imaged in a low-light environment. A camera having a first color image sensor captures an image of the subject. Image processing is used to correct distortion in the image caused by the invisible light, and an augmented color image is output.
Type: Grant
Filed: February 10, 2009
Date of Patent: April 9, 2013
Assignee: Microsoft Corporation
Inventors: Chunhui Zhang, Yasuyuki Matsushita, Yuan Kong, Zicheng Liu