Patents by Inventor Chengyuan Yan

Chengyuan Yan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966701
    Abstract: In one embodiment, a method includes rendering a first output image comprising one or more augmented-reality (AR) objects for displays of an AR rendering device of an AR system associated with a first user. The method further includes accessing one or more sensor signals associated with the first user. The sensor signals may be captured by sensors of the AR system. The method further includes detecting a change in a context of the first user with respect to a real-world environment based on the sensor signals. The method further includes rendering a second output image comprising the AR objects for the displays of the AR rendering device. One or more of the AR objects may be adapted based on the detected change in the context of the first user.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: April 23, 2024
    Assignee: Meta Platforms, Inc.
    Inventors: Yiming Pu, Christopher E. Balmes, Gabrielle Catherine Moskey, John Jacob Blakeley, Amy Lawson Bearman, Alireza Dirafzoon, Matthew Dan Feiszli, Ganesh Venkatesh, Babak Damavandi, Jiwen Ren, Chengyuan Yan, Guangqiang Dong
  • Publication number: 20240103626
    Abstract: The disclosed apparatus may include a wearable haptic ring that features input capabilities relative to a computing system. In various examples, the wearable haptic ring may be designed to curve around a human finger of a wearer with a touchpad that is seamlessly integrated with the ring. For example, the seamlessly integrated touchpad may be operable by another finger of the wearer. Moreover, the haptic ring may include a haptic feedback unit designed to provide haptic feedback in response to input from the wearer. As such, the haptic ring may enable a wide range of user inputs while appearing like a typical ring rather than a computer input/output device. Various other implementations are also disclosed.
    Type: Application
    Filed: September 20, 2023
    Publication date: March 28, 2024
    Inventors: Dan Kun-yi Chen, Chengyuan Yan
  • Publication number: 20230281938
    Abstract: The various implementations described herein include methods and systems for providing input capabilities at various fidelity levels. In one aspect, a method includes receiving, from an application, a request identifying an input capability for making an input operation available within the application. The method further includes, in response to receiving the request: identifying techniques that the artificial-reality system can use to make the requested input capability available to the application using data from one or more devices; selecting a first technique for making the requested input capability available to the application; and using the first technique to provide to the application data to allow for performance of the requested input capability.
    Type: Application
    Filed: February 27, 2023
    Publication date: September 7, 2023
    Inventors: Chengyuan Yan, Joseph Davis Greer, Sheng Shen, Anurag Sharma
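The capability-negotiation idea in publication 20230281938 can be sketched as follows. This is a minimal illustration, not the patented implementation: an application requests an input capability, and the runtime picks the first technique whose required devices are currently available. All capability, technique, and device names below are assumptions for illustration.

```python
# Ordered fallback table: capability -> [(technique, required devices), ...]
# Names are hypothetical examples, not from the patent text.
TECHNIQUES = {
    "pinch_select": [
        ("camera_hand_tracking", {"headset_camera"}),
        ("emg_wristband", {"wristband"}),
    ],
}

def select_technique(capability, connected_devices):
    """Return the first technique whose device requirements are met."""
    for technique, required in TECHNIQUES.get(capability, []):
        if required <= set(connected_devices):
            return technique
    return None  # requested capability cannot be made available
```

The ordered list encodes fidelity preference: a higher-fidelity technique is tried first, and the system degrades gracefully to whatever the connected hardware supports.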
  • Patent number: 11670082
    Abstract: Systems and methods for providing a virtual space for multiple devices can include a first device having at least one sensor configured to acquire spatial information of a physical space of the first device. The first device may include at least one processor configured to establish, according to the acquired spatial information, a virtual space that corresponds to the physical space and is accessible by a user of the first device via the first device. The at least one processor may further be configured to register a second device within the physical space, to allow a user of the second device to access the virtual space via the second device.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: June 6, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Chengyuan Yan, Amrutha Hakkare Arunachala, Chengyuan Lin, Anush Mohan, Ke Huo
  • Publication number: 20230076068
    Abstract: A method of interpreting a digit-to-digit gesture based on roll values for a wrist-wearable device is disclosed. The method includes receiving an indication at a first point in time that a user donning the wrist-wearable device is providing a digit-to-digit gesture in which one of the user's digits touches another of the user's digits without contacting the display of the wearable device. The method further includes, in accordance with a determination that the digit-to-digit gesture is provided while data indicates that the wrist-wearable device has a first roll value, causing a target device in communication with the wearable device to perform a first input command. The method further includes receiving another indication, at a second point in time after the first point in time, that the user is providing the digit-to-digit gesture again and, in response, causing the target device to perform a second input command that is distinct from the first input command.
    Type: Application
    Filed: August 31, 2022
    Publication date: March 9, 2023
    Inventors: Yinglin Li, Qian Chen, Chengyuan Yan
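The roll-dependent gesture mapping described in publication 20230076068 can be illustrated with a small sketch: the same digit-to-digit tap maps to different commands depending on the wrist's roll angle. The threshold values and command names below are assumptions for illustration, not values from the patent.

```python
def command_for_tap(roll_degrees):
    """Map a digit-to-digit tap to a command based on wrist roll.
    The 45-degree band is an illustrative choice of 'first roll value'."""
    if -45.0 <= roll_degrees <= 45.0:
        return "select"   # first input command
    return "go_back"      # second, distinct input command
```

A real system would estimate roll from the wearable's IMU and likely hysteresis-filter it so a tap near the boundary does not flicker between commands.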
  • Patent number: 11521356
    Abstract: Systems and methods for maintaining a shared interactive environment include receiving, by a server, requests to register a first input device of a first user and a second input device of a second user with a shared interactive environment. The first input device may be for a first modality involving user input for an augmented reality (AR) environment, and the second input device may be for a second modality involving user input for a personal computer (PC) based virtual environment or a virtual reality (VR) environment. The server may register the first and second input devices with the shared interactive environment. The server may receive inputs from a first adapter for the first modality and from a second adapter for the second modality. The inputs may enable the first and second users to use the shared interactive environment.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: December 6, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Chengyuan Yan, Ke Huo, Amrutha Hakkare Arunachala, Chengyuan Lin, Anush Mohan
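The adapter idea in patent 11521356 can be sketched minimally: inputs from different modalities (AR device, PC/VR) reach the shared environment through per-modality adapters that normalize raw device input into common events. Class, event, and field names here are assumptions for illustration.

```python
class SharedEnvironment:
    """Server-side registry of devices and the normalized events they submit."""
    def __init__(self):
        self.devices = {}   # device id -> (user, modality)
        self.events = []    # normalized event log

    def register(self, user, device, modality):
        self.devices[device] = (user, modality)

    def submit(self, device, normalized_event):
        user, modality = self.devices[device]
        self.events.append((user, modality, normalized_event))

def ar_adapter(raw_tap):
    """Adapter for the AR modality: normalize a tap into a 'select' event."""
    return {"type": "select", "target": raw_tap["object_id"]}
```

A PC or VR adapter would translate mouse clicks or controller triggers into the same `select` event shape, so the environment logic never branches on modality.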
  • Publication number: 20220374130
    Abstract: In one embodiment, a method includes rendering a first output image comprising one or more augmented-reality (AR) objects for displays of an AR rendering device of an AR system associated with a first user. The method further includes accessing one or more sensor signals associated with the first user. The sensor signals may be captured by sensors of the AR system. The method further includes detecting a change in a context of the first user with respect to a real-world environment based on the sensor signals. The method further includes rendering a second output image comprising the AR objects for the displays of the AR rendering device. One or more of the AR objects may be adapted based on the detected change in the context of the first user.
    Type: Application
    Filed: August 2, 2021
    Publication date: November 24, 2022
    Inventors: Yiming Pu, Christopher E. Balmes, Gabrielle Catherine Moskey, John Jacob Blakeley, Amy Lawson Bearman, Alireza Dirafzoon, Matthew Dan Feiszli, Ganesh Venkatesh, Babak Damavandi, Jiwen Ren, Chengyuan Yan, Guangqiang Dong
  • Patent number: 11500413
    Abstract: In one embodiment, a method includes receiving, from a controller, a data packet including (1) a plurality of samples each corresponding to measurements from a motion sensor and (2) a timestamp corresponding to a measurement time of one of the samples as measured by a clock of the controller; determining, based on the timestamp, an estimated measurement time relative to a local clock for each of the plurality of samples that is not associated with the timestamp; and converting each of the timestamp and the estimated measurement times to a corresponding synchronization time using a learned relationship relating the clock of the controller and the local clock. The learned relationship is iteratively learned based on previously received data packets from the controller. The synchronization time associated with each of the plurality of samples represents an estimated time, relative to the local clock, at which the sample was measured.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: November 15, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
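The clock-synchronization idea in patent 11500413 can be sketched as two steps: samples in a packet share one controller-clock timestamp, so the others are back-filled at the nominal sample period; then all controller-clock times are mapped onto the local clock through an iteratively learned relationship. The exponential-smoothing update below is a simple illustrative stand-in for the learned relationship, not the claimed method.

```python
class ClockMapper:
    """Learn a controller-clock -> local-clock offset from received packets."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha   # smoothing factor (illustrative choice)
        self.offset = None   # estimate of local_time - controller_time

    def observe(self, controller_time, local_arrival_time):
        """Refine the offset estimate with each received packet."""
        sample = local_arrival_time - controller_time
        if self.offset is None:
            self.offset = sample
        else:
            self.offset += self.alpha * (sample - self.offset)

    def to_local(self, controller_time):
        """Convert a controller-clock time to the local clock."""
        return controller_time + self.offset

def sample_times(packet_timestamp, num_samples, period):
    """Back-fill a controller-clock time for every sample in a packet,
    assuming the single timestamp belongs to the last sample."""
    return [packet_timestamp - (num_samples - 1 - i) * period
            for i in range(num_samples)]
```

A production implementation would also model clock drift (a rate term, not just an offset) and reject outlier arrival times caused by transmission jitter.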
  • Publication number: 20220253131
    Abstract: In one embodiment, a method includes capturing, using one or more cameras implemented in a wearable device worn by a user, a first image depicting at least a part of a hand of the user holding a controller in an environment, identifying one or more features from the first image to estimate a pose of the hand of the user, estimating a first pose of the controller based on the pose of the hand of the user and an estimated grip that defines a relative pose between the hand of the user and the controller, receiving IMU data of the controller, and estimating a second pose of the controller by updating the first pose of the controller using the IMU data of the controller. The method utilizes multiple data sources to track the controller under various conditions of the environment to consistently provide accurate controller tracking.
    Type: Application
    Filed: April 13, 2022
    Publication date: August 11, 2022
    Inventors: Tsz Ho Yu, Chengyuan Yan, Christian Forster
  • Patent number: 11320896
    Abstract: In one embodiment, a method includes capturing, using one or more cameras implemented in a wearable device worn by a user, a first image depicting at least a part of a hand of the user holding a controller in an environment, identifying one or more features from the first image to estimate a pose of the hand of the user, estimating a first pose of the controller based on the pose of the hand of the user and an estimated grip that defines a relative pose between the hand of the user and the controller, receiving IMU data of the controller, and estimating a second pose of the controller by updating the first pose of the controller using the IMU data of the controller. The method utilizes multiple data sources to track the controller under various conditions of the environment to consistently provide accurate controller tracking.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: May 3, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Tsz Ho Yu, Chengyuan Yan, Christian Forster
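The two-stage fusion in patent 11320896 can be illustrated in simplified 2-D: a first controller pose is composed from the tracked hand pose and an estimated grip offset, then refined into a second pose using IMU data. Real systems use full 6-DoF transforms; an (x, y, heading) pose keeps the idea readable, and all values below are illustrative.

```python
import math

def compose(pose, offset):
    """Apply a grip offset expressed in the pose's local frame."""
    x, y, th = pose
    dx, dy, dth = offset
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def imu_update(pose, gyro_rate, dt):
    """Second pose estimate: integrate the IMU angular rate over one frame."""
    x, y, th = pose
    return (x, y, th + gyro_rate * dt)

# First pose: hand pose from camera features, composed with estimated grip.
hand_pose = (1.0, 2.0, 0.0)
grip = (0.1, 0.0, 0.0)          # controller sits 0.1 ahead of the hand
controller_pose = compose(hand_pose, grip)
# Second pose: refined with IMU angular rate between camera frames.
controller_pose = imu_update(controller_pose, gyro_rate=0.5, dt=0.02)
```

The appeal of the split is that the camera path corrects drift at frame rate, while the IMU path fills in high-rate motion between frames or when the hand estimate is briefly unavailable.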
  • Publication number: 20220035441
    Abstract: In one embodiment, a method includes capturing, using one or more cameras implemented in a wearable device worn by a user, a first image depicting at least a part of a hand of the user holding a controller in an environment, identifying one or more features from the first image to estimate a pose of the hand of the user, estimating a first pose of the controller based on the pose of the hand of the user and an estimated grip that defines a relative pose between the hand of the user and the controller, receiving IMU data of the controller, and estimating a second pose of the controller by updating the first pose of the controller using the IMU data of the controller. The method utilizes multiple data sources to track the controller under various conditions of the environment to consistently provide accurate controller tracking.
    Type: Application
    Filed: August 3, 2020
    Publication date: February 3, 2022
    Inventors: Tsz Ho Yu, Chengyuan Yan, Christian Forster
  • Patent number: 11195020
    Abstract: Systems and methods for providing a virtual space for multiple devices can include a first device having at least one sensor configured to acquire spatial information of a physical space of the first device. The first device may include at least one processor configured to establish, according to the acquired spatial information, a virtual space that corresponds to the physical space and is accessible by a user of the first device via the first device. The at least one processor may further be configured to register a second device within the physical space, to allow a user of the second device to access the virtual space via the second device.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: December 7, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Chengyuan Yan, Amrutha Hakkare Arunachala, Chengyuan Lin, Anush Mohan, Ke Huo
  • Publication number: 20210289336
    Abstract: The disclosed systems may include systems and methods for clock synchronization under random transmission delay conditions. Additionally, systems and methods for horizon leveling for wrist captured images may be disclosed. In addition, the disclosed may include methods, systems, and devices for batch message transfer. The disclosed methods may also include a mobile computing device receiving an indication to initiate an emergency voice call by a user of the mobile computing device and initiating an Internet Protocol Multimedia Subsystem (IMS) emergency call. In addition, systems, methods, and devices for automatic content display may be disclosed. Various other related methods and systems are also disclosed.
    Type: Application
    Filed: June 1, 2021
    Publication date: September 16, 2021
    Inventors: Zhong Zhang, Jiansong Wang, Sixue Chen, Insoo Hwang, Swaminathan Balakrishnan, Ran Rubin, Johnny Kallacheril John, Philip Richard Pottier, James Leon Garrison, Brian Richard Costabile, Chengyuan Yan, Yue Kwen Justin Yip
  • Publication number: 20210110609
    Abstract: Systems and methods for maintaining a shared interactive environment include receiving, by a server, requests to register a first input device of a first user and a second input device of a second user with a shared interactive environment. The first input device may be for a first modality involving user input for an augmented reality (AR) environment, and the second input device may be for a second modality involving user input for a personal computer (PC) based virtual environment or a virtual reality (VR) environment. The server may register the first and second input devices with the shared interactive environment. The server may receive inputs from a first adapter for the first modality and from a second adapter for the second modality. The inputs may enable the first and second users to use the shared interactive environment.
    Type: Application
    Filed: October 10, 2019
    Publication date: April 15, 2021
    Inventors: Chengyuan Yan, Ke Huo, Amrutha Hakkare Arunachala, Chengyuan Lin, Anush Mohan
  • Patent number: 10884505
    Abstract: The disclosed computer-implemented method may include tracking, using a low-order degree-of-freedom (DOF) mode, an orientation of a device based on input from an inertial measurement unit (IMU) of the device. The method may also include determining, using a magnetometer, that the device has entered a magnetic tracking volume defined by at least one magnet and in response to determining that the device has entered the magnetic tracking volume, transitioning from the low-order DOF mode to a high-order DOF mode that tracks a higher number of DOFs than the low-order DOF mode. The method may also include tracking, using the high-order DOF mode, the position and orientation of the device based on input from both the IMU and the magnetometer. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: January 5, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Ke Huo, Chengyuan Yan
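The mode switch in patent 10884505 can be sketched simply: the device tracks orientation only (low-order DOF) from its IMU until the magnetometer indicates it has entered the magnet's tracking volume, then switches to a high-order mode that also tracks position. Detecting the tracking volume via a field-magnitude threshold, and the threshold value itself, are assumptions for illustration.

```python
FIELD_THRESHOLD = 80.0  # microtesla; illustrative stand-in for "in the tracking volume"

class DofTracker:
    """Switch between low-order (orientation) and high-order (position +
    orientation) tracking based on magnetometer readings."""
    def __init__(self):
        self.mode = "3dof"  # low-order mode: IMU orientation only

    def update(self, field_magnitude_ut):
        if field_magnitude_ut >= FIELD_THRESHOLD:
            self.mode = "6dof"  # high-order mode: IMU + magnetometer
        else:
            self.mode = "3dof"
        return self.mode
```

In practice the transition would be debounced (or use separate enter/exit thresholds) so that noise near the volume boundary does not cause rapid mode flapping.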
  • Publication number: 20200401181
    Abstract: In one embodiment, a method includes receiving, from a controller, a data packet including (1) a plurality of samples each corresponding to measurements from a motion sensor and (2) a timestamp corresponding to a measurement time of one of the samples as measured by a clock of the controller; determining, based on the timestamp, an estimated measurement time relative to a local clock for each of the plurality of samples that is not associated with the timestamp; and converting each of the timestamp and the estimated measurement times to a corresponding synchronization time using a learned relationship relating the clock of the controller and the local clock. The learned relationship is iteratively learned based on previously received data packets from the controller. The synchronization time associated with each of the plurality of samples represents an estimated time, relative to the local clock, at which the sample was measured.
    Type: Application
    Filed: August 31, 2020
    Publication date: December 24, 2020
    Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
  • Patent number: 10853991
    Abstract: An artificial reality system is described that includes a hand-held controller tracking sub-system having two components: a Field-of-View (FOV) tracker and a non-FOV tracker that applies specialized motion models when one or more controllers are not trackable within the field of view. In particular, under typical operating conditions, the FOV tracker receives state data for a Head-Mounted Display (HMD) and controller state data (velocity, acceleration, etc.) of a controller to compute estimated poses for the controller. If the controller is trackable (e.g., within the field of view and not occluded), then the pose as computed by the FOV tracker is used and the non-FOV tracker is bypassed. If the controller is not trackable within the field of view and the controller state data meets activation conditions for one or more corner tracking cases, then the non-FOV tracker applies one or more specialized motion models to compute a controller pose for the controller.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: December 1, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Chengyuan Yan, Oskar Linde
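The FOV / non-FOV dispatch described in patent 10853991 can be sketched as follows. The activation predicates and the "controller at rest" model are hypothetical examples of the corner-case motion models the abstract mentions.

```python
def track(trackable, state, fov_pose, motion_models):
    """Pick the controller pose source per the FOV / non-FOV split."""
    if trackable:                    # in view and not occluded: use FOV tracker
        return fov_pose
    for activates, model in motion_models:
        if activates(state):         # corner-case activation condition met
            return model(state)
    return fov_pose                  # no model applies: keep the FOV estimate

# Example corner case: a near-stationary controller holds its last pose.
models = [
    (lambda s: abs(s["velocity"]) < 0.01, lambda s: s["last_pose"]),
]
```

The ordered model list mirrors the abstract's structure: each specialized model is gated by an activation condition on the controller state, and the expensive fallback logic runs only when vision tracking fails.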
  • Publication number: 20200372702
    Abstract: An artificial reality system is described that includes a hand-held controller tracking sub-system having two components: a Field-of-View (FOV) tracker and a non-FOV tracker that applies specialized motion models when one or more controllers are not trackable within the field of view. In particular, under typical operating conditions, the FOV tracker receives state data for a Head-Mounted Display (HMD) and controller state data (velocity, acceleration, etc.) of a controller to compute estimated poses for the controller. If the controller is trackable (e.g., within the field of view and not occluded), then the pose as computed by the FOV tracker is used and the non-FOV tracker is bypassed. If the controller is not trackable within the field of view and the controller state data meets activation conditions for one or more corner tracking cases, then the non-FOV tracker applies one or more specialized motion models to compute a controller pose for the controller.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 26, 2020
    Inventors: Chengyuan Yan, Oskar Linde
  • Patent number: 10809760
    Abstract: In one embodiment, a method includes receiving, from a controller, a data packet including (1) a plurality of samples each corresponding to measurements from a motion sensor and (2) a timestamp corresponding to a measurement time of one of the samples as measured by a clock of the controller; determining, based on the timestamp, an estimated measurement time relative to a local clock for each of the plurality of samples that is not associated with the timestamp; and converting each of the timestamp and the estimated measurement times to a corresponding synchronization time using a learned relationship relating the clock of the controller and the local clock. The learned relationship is iteratively learned based on previously received data packets from the controller. The synchronization time associated with each of the plurality of samples represents an estimated time, relative to the local clock, at which the sample was measured.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: October 20, 2020
    Assignee: Facebook, Inc.
    Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
  • Patent number: 10558261
    Abstract: In one embodiment, a method includes receiving motion data from a motion sensor during a packet-transmission interval of a wireless protocol. The motion data corresponds to a first pre-determined number of samples measured at a first sampling frequency. Each sample is associated with a first timestamp corresponding to a measurement time of that sample during the packet-transmission interval. The method also includes converting the motion data to correspond to a second pre-determined number of samples. The second pre-determined number is fewer than the first pre-determined number. The method also includes determining a second timestamp for each of the second pre-determined number of samples. The second timestamps are within the packet-transmission interval and represent measurement times at a second sampling frequency that is lower than the first sampling frequency. The method also includes combining the converted motion data and the corresponding second timestamps into a first data packet.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: February 11, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
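The resampling idea in patent 10558261 can be sketched as follows: N samples captured at a high rate within a packet-transmission interval are converted to M < N samples, each assigned a timestamp at the lower rate. Averaging equal-sized groups is an illustrative stand-in for whatever conversion the claims cover, and the timestamp convention (last sample of each group) is likewise an assumption.

```python
def downsample(samples, start_time, high_period, factor):
    """Average consecutive groups of `factor` samples and give each
    averaged sample the timestamp of the last sample in its group."""
    out, times = [], []
    for i in range(0, len(samples) - factor + 1, factor):
        group = samples[i:i + factor]
        out.append(sum(group) / factor)
        times.append(start_time + (i + factor - 1) * high_period)
    return out, times
```

Packing fewer, averaged samples keeps the packet within the wireless protocol's transmission budget while the second timestamps preserve when, within the interval, each value was effectively measured.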