Patents by Inventor Tsz Ho Yu
Tsz Ho Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220253131
Abstract: In one embodiment, a method includes capturing, using one or more cameras implemented in a wearable device worn by a user, a first image depicting at least a part of a hand of the user holding a controller in an environment, identifying one or more features from the first image to estimate a pose of the hand of the user, estimating a first pose of the controller based on the pose of the hand of the user and an estimated grip that defines a relative pose between the hand of the user and the controller, receiving IMU data of the controller, and estimating a second pose of the controller by updating the first pose of the controller using the IMU data of the controller. The method utilizes multiple data sources to track the controller under various conditions of the environment to provide accurate and consistent controller tracking.
Type: Application
Filed: April 13, 2022
Publication date: August 11, 2022
Inventors: Tsz Ho Yu, Chengyuan Yan, Christian Forster
-
Patent number: 11320896
Abstract: In one embodiment, a method includes capturing, using one or more cameras implemented in a wearable device worn by a user, a first image depicting at least a part of a hand of the user holding a controller in an environment, identifying one or more features from the first image to estimate a pose of the hand of the user, estimating a first pose of the controller based on the pose of the hand of the user and an estimated grip that defines a relative pose between the hand of the user and the controller, receiving IMU data of the controller, and estimating a second pose of the controller by updating the first pose of the controller using the IMU data of the controller. The method utilizes multiple data sources to track the controller under various conditions of the environment to provide accurate and consistent controller tracking.
Type: Grant
Filed: August 3, 2020
Date of Patent: May 3, 2022
Assignee: Facebook Technologies, LLC
Inventors: Tsz Ho Yu, Chengyuan Yan, Christian Forster
-
Publication number: 20220035441
Abstract: In one embodiment, a method includes capturing, using one or more cameras implemented in a wearable device worn by a user, a first image depicting at least a part of a hand of the user holding a controller in an environment, identifying one or more features from the first image to estimate a pose of the hand of the user, estimating a first pose of the controller based on the pose of the hand of the user and an estimated grip that defines a relative pose between the hand of the user and the controller, receiving IMU data of the controller, and estimating a second pose of the controller by updating the first pose of the controller using the IMU data of the controller. The method utilizes multiple data sources to track the controller under various conditions of the environment to provide accurate and consistent controller tracking.
Type: Application
Filed: August 3, 2020
Publication date: February 3, 2022
Inventors: Tsz Ho Yu, Chengyuan Yan, Christian Forster
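The two-stage estimate described in this family of abstracts — chaining the hand pose with the estimated grip transform, then refining with controller IMU data — can be sketched as follows. This is a minimal illustration, not the patented implementation: the 4×4 homogeneous-matrix representation, the `estimate_controller_pose` name, and the rotation-only gyroscope update are all assumptions.

```python
import numpy as np

def estimate_controller_pose(hand_pose, grip, imu_samples, dt):
    """Two-stage controller pose sketch.

    hand_pose:   4x4 world-from-hand transform estimated from camera images.
    grip:        4x4 hand-from-controller transform (the estimated grip).
    imu_samples: list of (gyro_xyz, accel_xyz) tuples from the controller IMU.
    dt:          sampling interval per IMU sample, in seconds.
    """
    # First pose: chain the hand pose with the relative grip transform.
    first_pose = hand_pose @ grip

    # Second pose: refine by integrating gyroscope rotation rates
    # (Rodrigues' formula; translation update omitted for brevity).
    pose = first_pose.copy()
    for gyro, _accel in imu_samples:
        gyro = np.asarray(gyro, dtype=float)
        theta = np.linalg.norm(gyro) * dt
        if theta > 0:
            axis = gyro / np.linalg.norm(gyro)
            K = np.array([[0, -axis[2], axis[1]],
                          [axis[2], 0, -axis[0]],
                          [-axis[1], axis[0], 0]])
            R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
            pose[:3, :3] = pose[:3, :3] @ R
    return first_pose, pose
```

In practice the IMU refinement would run at a much higher rate than the camera-based hand-pose estimate, which is what lets the fused estimate stay accurate when the controller is occluded.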
-
Patent number: 10277813
Abstract: A viewing device, such as a virtual reality headset, allows a user to view a panoramic scene captured by one or more video capture devices that may include multiple cameras that simultaneously capture 360° video data. The viewing device may display the panoramic scene in real time and change the display in response to moving the viewing device and/or changing perspectives by switching to video data being captured by a different video capture device within the environment. Moreover, multiple video capture devices located within an environment can be used to create a three-dimensional representation of the environment that allows a user to explore the three-dimensional space while viewing the environment in real time.
Type: Grant
Filed: June 25, 2015
Date of Patent: April 30, 2019
Assignee: Amazon Technologies, Inc.
Inventors: Jim Oommen Thomas, Paul Aksenti Savastinuk, Cheng-Hao Kuo, Tsz Ho Yu, Ross David Roessler, William Evan Welbourne, Yinfei Yang
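The perspective-switching behavior described above — choosing the capture device that best matches the viewer's current perspective — might be sketched with a simple gaze-alignment score. The `select_capture_device` name, the 2-D geometry, and the cosine scoring rule are illustrative assumptions, not details from the patent.

```python
import math

def select_capture_device(viewer_pos, gaze_dir, device_positions):
    """Pick the capture device most aligned with the viewer's gaze.

    Scores each device by the cosine between the gaze direction and the
    direction from the viewer to that device, and returns the best match.
    """
    best, best_score = None, -2.0
    gx, gy = gaze_dir
    gnorm = math.hypot(gx, gy) or 1.0
    for name, (dx, dy) in device_positions.items():
        vx, vy = dx - viewer_pos[0], dy - viewer_pos[1]
        vnorm = math.hypot(vx, vy) or 1.0
        score = (gx * vx + gy * vy) / (gnorm * vnorm)
        if score > best_score:
            best, best_score = name, score
    return best
```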
-
Patent number: 10104286
Abstract: Systems and methods may be directed to de-blurring panoramic images and/or video. An image processor may receive a frame, where the frame comprises a plurality of pixel values arranged in a grid. The image processor may divide the frame into a first section and a second section. The image processor may determine a first motion kernel for the first section and apply the first motion kernel to the first section. The image processor may also determine a second motion kernel for the second section and apply the second motion kernel to the second section.
Type: Grant
Filed: August 27, 2015
Date of Patent: October 16, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Tsz Ho Yu, Paul Aksenti Savastinuk, Yinfei Yang, Cheng-Hao Kuo, Ross David Roessler, William Evan Welbourne
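The section-wise processing the abstract describes can be illustrated with a small sketch: split the frame into vertical strips and convolve each with its own kernel. The `apply_section_kernels` name and the direct convolution loop are assumptions for illustration; how the motion kernels themselves are estimated is omitted.

```python
import numpy as np

def apply_section_kernels(frame, kernels):
    """Apply a distinct kernel to each equal-width vertical strip of a frame.

    frame:   2-D array of pixel values (grayscale for simplicity).
    kernels: list of small 2-D kernels, one per strip, odd-sized.
    """
    h, w = frame.shape
    n = len(kernels)
    out = np.empty_like(frame, dtype=float)
    for i, k in enumerate(kernels):
        lo, hi = i * w // n, (i + 1) * w // n
        section = frame[:, lo:hi]
        kh, kw = k.shape
        # Edge-replicate padding, then accumulate shifted, weighted copies
        # (a direct 'same'-size 2-D convolution for tiny kernels).
        padded = np.pad(section, ((kh // 2, kh // 2), (kw // 2, kw // 2)),
                        mode='edge')
        res = np.zeros_like(section, dtype=float)
        for dy in range(kh):
            for dx in range(kw):
                res += k[dy, dx] * padded[dy:dy + section.shape[0],
                                          dx:dx + section.shape[1]]
        out[:, lo:hi] = res
    return out
```

De-blurring would deconvolve rather than convolve, but the per-section structure — each strip gets its own kernel because motion blur varies across a panoramic frame — is the same.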
-
Patent number: 10084959
Abstract: A video capture device may include multiple cameras that simultaneously capture video data. The video capture device and/or one or more remote computing resources may stitch the video data captured by the multiple cameras to generate stitched video data that corresponds to 360° video. The remote computing resources may apply one or more algorithms to the stitched video data to adjust the color characteristics of the stitched video data, such as lighting, exposure, white balance, contrast, and saturation. The remote computing resources may further smooth the transition between the video data captured by the multiple cameras to reduce artifacts such as abrupt changes in color as a result of the individual cameras of the video capture device having different video capture settings. The video capture device and/or the remote computing resources may generate a panoramic video that may include up to a 360° field of view.
Type: Grant
Filed: June 25, 2015
Date of Patent: September 25, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Tsz Ho Yu, Jim Oommen Thomas, Cheng-Hao Kuo, Yinfei Yang, Ross David Roessler, Paul Aksenti Savastinuk, William Evan Welbourne
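One common way to smooth the transition between adjacent cameras, as the abstract describes, is a linear cross-fade over the overlap region of the two strips. This is a minimal sketch under that assumption; the `blend_seam` helper and the linear ramp are illustrative, and the patent does not specify this particular method.

```python
import numpy as np

def blend_seam(left, right, overlap):
    """Cross-fade two adjacent camera strips over an `overlap`-pixel-wide
    region to soften abrupt color/exposure jumps at the stitch boundary.

    left, right: 2-D arrays (rows x columns) of pixel values.
    Returns a single strip of width left + right - overlap.
    """
    # Ramp from 0 (all left) to 1 (all right) across the overlap columns.
    w = np.linspace(0.0, 1.0, overlap)
    blended = left[:, -overlap:] * (1 - w) + right[:, :overlap] * w
    return np.concatenate(
        [left[:, :-overlap], blended, right[:, overlap:]], axis=1)
```

A production stitcher would typically also equalize per-camera gain before blending, since the cross-fade only hides, rather than corrects, differing capture settings.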
-
Patent number: 9973711
Abstract: Devices, systems and methods are disclosed for identifying content in video data and creating content-based zooming and panning effects to emphasize the content. Content may be detected and analyzed in the video data using computer vision or machine learning algorithms, or specified through a user interface. Panning and zooming controls may be associated with the content, panning or zooming based on a location and size of content within the video data. The device may determine a number of pixels associated with content and may frame the content to be a certain percentage of the edited video data, such as a close-up shot where a subject is displayed as 50% of the viewing frame. The device may identify an event of interest, may determine multiple frames associated with the event of interest and may pan and zoom between the multiple frames based on a size/location of the content within the multiple frames.
Type: Grant
Filed: June 29, 2015
Date of Patent: May 15, 2018
Assignee: Amazon Technologies, Inc.
Inventors: Yinfei Yang, William Evan Welbourne, Ross David Roessler, Paul Aksenti Savastinuk, Cheng-Hao Kuo, Jim Oommen Thomas, Tsz Ho Yu
-
Publication number: 20160381306
Abstract: Devices, systems and methods are disclosed for identifying content in video data and creating content-based zooming and panning effects to emphasize the content. Content may be detected and analyzed in the video data using computer vision or machine learning algorithms, or specified through a user interface. Panning and zooming controls may be associated with the content, panning or zooming based on a location and size of content within the video data. The device may determine a number of pixels associated with content and may frame the content to be a certain percentage of the edited video data, such as a close-up shot where a subject is displayed as 50% of the viewing frame. The device may identify an event of interest, may determine multiple frames associated with the event of interest and may pan and zoom between the multiple frames based on a size/location of the content within the multiple frames.
Type: Application
Filed: June 29, 2015
Publication date: December 29, 2016
Inventors: Yinfei Yang, William Evan Welbourne, Ross David Roessler, Paul Aksenti Savastinuk, Cheng-Hao Kuo, Jim Oommen Thomas, Tsz Ho Yu
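The "subject displayed as 50% of the viewing frame" rule in these abstracts can be sketched as a crop computation from a detected bounding box. The `framing_crop` helper and the square-root conversion from area fraction to linear scale are assumptions for illustration, not details from the patent.

```python
def framing_crop(bbox, target_fraction, frame_w, frame_h):
    """Compute a crop rectangle so the subject's bounding box fills roughly
    `target_fraction` of the crop's area (e.g. 0.5 for a close-up).

    bbox: (x, y, width, height) of the detected subject, in pixels.
    Returns (left, top, crop_w, crop_h), clamped to the frame bounds.
    """
    x, y, bw, bh = bbox
    # An area fraction converts to a linear scale via its square root.
    scale = (1.0 / target_fraction) ** 0.5
    cw, ch = bw * scale, bh * scale
    cx, cy = x + bw / 2, y + bh / 2
    # Center the crop on the subject, then clamp inside the frame.
    left = min(max(cx - cw / 2, 0), frame_w - cw)
    top = min(max(cy - ch / 2, 0), frame_h - ch)
    return left, top, cw, ch
```

Panning between events of interest would then interpolate between successive crop rectangles over time, so the virtual camera moves smoothly from one subject to the next.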