Patents by Inventor Adrian Brian Ratter

Adrian Brian Ratter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11914836
    Abstract: A method includes accessing an image of a physical environment of a user, the image depicting a physical input device and a physical hand of the user, determining that a contrast between the physical input device and the physical hand depicted in the image is lower than a predetermined threshold, modifying the image to increase the contrast, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane, generating, by applying the image mask to the image, a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: February 27, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
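The hand-passthrough pipeline summarized in the abstract above (contrast check, 3-D hand projection, masking, and compositing over a rendered virtual input device) can be pictured with a short sketch. Everything below is an illustrative assumption: the function names, the contrast measure, and the threshold are not taken from the patent.

```python
# Minimal sketch of the masking-and-compositing flow described above.
# All names, shapes, and thresholds are illustrative assumptions.
import numpy as np

CONTRAST_THRESHOLD = 0.15  # assumed stand-in for the "predetermined threshold"

def region_contrast(image, hand_mask, device_mask):
    """Rough contrast measure: difference of mean intensities of the two regions."""
    return abs(image[hand_mask].mean() - image[device_mask].mean())

def boost_contrast(image, gain=1.5):
    """Simple global contrast stretch around the mean (stand-in for the
    'modify the image to increase the contrast' step)."""
    mean = image.mean()
    return np.clip((image - mean) * gain + mean, 0.0, 1.0)

def project_hand_model(hand_vertices, camera_matrix, image_shape):
    """Project 3-D hand vertices onto the image plane and mark the projected
    pixels in a binary mask."""
    mask = np.zeros(image_shape[:2], dtype=bool)
    proj = (camera_matrix @ hand_vertices.T).T            # (N, 3) homogeneous coords
    px = (proj[:, :2] / proj[:, 2:3]).astype(int)          # perspective divide
    valid = ((px[:, 0] >= 0) & (px[:, 0] < image_shape[1]) &
             (px[:, 1] >= 0) & (px[:, 1] < image_shape[0]))
    mask[px[valid, 1], px[valid, 0]] = True
    return mask

def composite(passthrough, hand_mask, rendered_device):
    """Overlay the cropped hand pixels on top of the rendered virtual device."""
    out = rendered_device.copy()
    out[hand_mask] = passthrough[hand_mask]
    return out
```

A real implementation would rasterize the projected hand mesh rather than marking individual projected vertices, but the flow sketched here follows the abstract: mask, crop, then overlay on the rendered virtual input device.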
  • Publication number: 20230131667
    Abstract: A method includes accessing an image of a physical environment of a user, the image depicting a physical input device and a physical hand of the user, determining that a contrast between the physical input device and the physical hand depicted in the image is lower than a predetermined threshold, modifying the image to increase the contrast, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane, generating, by applying the image mask to the image, a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
    Type: Application
    Filed: December 20, 2022
    Publication date: April 27, 2023
    Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
  • Patent number: 11537258
    Abstract: In one embodiment, a method includes a computer system accessing an image of a physical environment of a user, the image being associated with a perspective of the user and depicting a physical input device and a physical hand of the user, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane associated with the perspective of the user, generating, by applying the image mask to the image, a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: December 27, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
  • Patent number: 11500413
    Abstract: In one embodiment, a method includes receiving, from a controller, a data packet including (1) a plurality of samples each corresponding to measurements from a motion sensor and (2) a timestamp corresponding to a measurement time of one of the samples as measured by a clock of the controller; determining, based on the timestamp, an estimated measurement time relative to a local clock for each of the plurality of samples that is not associated with the timestamp; and converting each of the timestamp and the estimated measurement times to a corresponding synchronization time using a learned relationship relating the clock of the controller and the local clock. The learned relationship is iteratively learned based on previously received data packets from the controller. The synchronization time associated with each of the plurality of samples represents an estimated time, relative to the local clock, at which the sample was measured.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: November 15, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
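The clock-alignment idea in this abstract, iteratively learning a relationship between the controller's clock and the local clock and then placing every sample in a packet on the local timeline, can be sketched roughly as below. The linear model, the exponential update rule, and the parameter names are assumptions for illustration, not the patent's specific method.

```python
# Hedged sketch: learn local_time ≈ drift * controller_time + offset from
# received packets, then convert per-sample controller times to local times.
import numpy as np

class ClockAligner:
    def __init__(self, sample_period_s=0.001, alpha=0.05):
        self.sample_period = sample_period_s   # assumed IMU sample spacing
        self.alpha = alpha                     # smoothing factor for iterative updates
        self.drift = 1.0                       # slope of the learned mapping
        self.offset = None                     # intercept of the learned mapping

    def update(self, controller_ts, local_ts):
        """Iteratively refine the controller->local mapping from one packet."""
        if self.offset is None:
            self.offset = local_ts - self.drift * controller_ts
            return
        predicted = self.drift * controller_ts + self.offset
        self.offset += self.alpha * (local_ts - predicted)  # exponential correction

    def to_local(self, controller_ts):
        """Convert a controller-clock time to the local timeline."""
        return self.drift * controller_ts + self.offset

    def sample_times(self, packet_ts, num_samples, stamped_index):
        """Estimate a controller-clock time for every sample in a packet whose
        single timestamp refers to the sample at `stamped_index`, then convert
        all of them to local-clock synchronization times."""
        offsets = (np.arange(num_samples) - stamped_index) * self.sample_period
        return np.array([self.to_local(packet_ts + o) for o in offsets])
```

In use, each received packet would first call update() with its controller timestamp and the local arrival time, then sample_times() to estimate when each of its samples was measured relative to the local clock.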
  • Publication number: 20220121343
    Abstract: In one embodiment, a method includes a computer system accessing an image of a physical environment of a user, the image being associated with a perspective of the user and depicting a physical input device and a physical hand of the user, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane associated with the perspective of the user, generating, by applying the image mask to the image, a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 21, 2022
    Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
  • Publication number: 20220026984
    Abstract: Disclosed herein are related to a system and a method for porting a physical object in a physical space into a virtual reality. In one approach, the method includes detecting an input device in a physical space relative to a user of the input device. In one approach, the method includes presenting, by a display device to the user, a virtual model of the detected input device in a virtual space at a location and an orientation. The location and the orientation of the virtual model in the virtual space may correspond to a location and an orientation of the input device in the physical space relative to the user. In one approach, the method includes visually providing relative to the virtual model in the virtual space, through the display device, spatial feedback on the user's interaction with the input device in the physical space.
    Type: Application
    Filed: October 5, 2021
    Publication date: January 27, 2022
    Inventors: Jan Herling, Adrian Brian Ratter
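A rough sketch of the "port a physical input device into VR" flow in the abstract above: mirror the detected pose of the physical device onto a virtual model and emit simple spatial feedback when the user's hand comes near it. The data types and the proximity threshold are illustrative assumptions.

```python
# Minimal sketch, assuming a shared tracking frame for the physical device,
# the user's hand, and the virtual model. Names and thresholds are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    position: np.ndarray      # (3,) position in the tracking frame
    rotation: np.ndarray      # (3, 3) rotation matrix

@dataclass
class VirtualModel:
    pose: Pose
    highlighted: bool = False

def update_virtual_device(detected_pose: Pose, hand_position: np.ndarray,
                          model: VirtualModel, near_threshold_m: float = 0.05):
    """Place the virtual model at the detected pose of the physical device and
    flag spatial feedback (a highlight) when the hand is within the threshold."""
    model.pose = detected_pose                        # virtual pose mirrors the physical device
    distance = np.linalg.norm(hand_position - detected_pose.position)
    model.highlighted = distance < near_threshold_m   # crude proximity-based feedback
    return model
```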
  • Patent number: 11144115
    Abstract: Disclosed herein are related to a system and a method for porting a physical object in a physical space into a virtual reality. In one approach, the method includes detecting an input device in a physical space relative to a user of the input device. In one approach, the method includes presenting, by a display device to the user, a virtual model of the detected input device in a virtual space at a location and an orientation. The location and the orientation of the virtual model in the virtual space may correspond to a location and an orientation of the input device in the physical space relative to the user. In one approach, the method includes visually providing relative to the virtual model in the virtual space, through the display device, spatial feedback on the user's interaction with the input device in the physical space.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: October 12, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jan Herling, Adrian Brian Ratter
  • Publication number: 20210132683
    Abstract: Disclosed herein are related to a system and a method for porting a physical object in a physical space into a virtual reality. In one approach, the method includes detecting an input device in a physical space relative to a user of the input device. In one approach, the method includes presenting, by a display device to the user, a virtual model of the detected input device in a virtual space at a location and an orientation. The location and the orientation of the virtual model in the virtual space may correspond to a location and an orientation of the input device in the physical space relative to the user. In one approach, the method includes visually providing relative to the virtual model in the virtual space, through the display device, spatial feedback on the user's interaction with the input device in the physical space.
    Type: Application
    Filed: November 1, 2019
    Publication date: May 6, 2021
    Inventors: Jan Herling, Adrian Brian Ratter
  • Publication number: 20200401181
    Abstract: In one embodiment, a method includes receiving, from a controller, a data packet including (1) a plurality of samples each corresponding to measurements from a motion sensor and (2) a timestamp corresponding to a measurement time of one of the samples as measured by a clock of the controller; determining, based on the timestamp, an estimated measurement time relative to a local clock for each of the plurality of samples that is not associated with the timestamp; and converting each of the timestamp and the estimated measurement times to a corresponding synchronization time using a learned relationship relating the clock of the controller and the local clock. The learned relationship is iteratively learned based on previously received data packets from the controller. The synchronization time associated with each of the plurality of samples represents an estimated time, relative to the local clock, at which the sample was measured.
    Type: Application
    Filed: August 31, 2020
    Publication date: December 24, 2020
    Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
  • Patent number: 10809760
    Abstract: In one embodiment, a method includes receiving, from a controller, a data packet including (1) a plurality of samples each corresponding to measurements from a motion sensor and (2) a timestamp corresponding to a measurement time of one of the samples as measured by a clock of the controller; determining, based on the timestamp, an estimated measurement time relative to a local clock for each of the plurality of samples that is not associated with the timestamp; and converting each of the timestamp and the estimated measurement times to a corresponding synchronization time using a learned relationship relating the clock of the controller and the local clock. The learned relationship is iteratively learned based on previously received data packets from the controller. The synchronization time associated with each of the plurality of samples represents an estimated time, relative to the local clock, at which the sample was measured.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: October 20, 2020
    Assignee: Facebook, Inc.
    Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
  • Patent number: 10558261
    Abstract: In one embodiment, a method includes receiving motion data from a motion sensor during a packet-transmission interval of a wireless protocol. The motion data corresponds to a first pre-determined number of samples measured at a first sampling frequency. Each sample is associated with a first timestamp corresponding to a measurement time of that sample during the packet-transmission interval. The method also includes converting the motion data to correspond to a second pre-determined number of samples. The second pre-determined number is fewer than the first pre-determined number. The method also includes determining a second timestamp for each of the second pre-determined number of samples. The second timestamps are within the packet-transmission interval and represent measurement times at a second sampling frequency that is lower than the first sampling frequency. The method also includes combining the converted motion data and the corresponding second timestamps into a first data packet.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: February 11, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
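The downsampling step described in this last abstract, converting a packet's high-rate motion samples to fewer lower-rate samples whose timestamps stay inside the same packet-transmission interval, might look roughly like the sketch below. The sampling factor, the averaging-based conversion, and the packet layout are assumptions for illustration.

```python
# Hedged sketch: reduce N high-rate samples to N // factor low-rate samples,
# assign each output sample a timestamp inside the original interval, and
# bundle both into one data packet.
import numpy as np

def downsample_packet(samples, timestamps, factor=4):
    """samples: (N, 3) motion-sensor readings; timestamps: (N,) measurement times.
    Returns a packet with N // factor averaged samples and their timestamps."""
    n_out = len(samples) // factor
    samples = np.asarray(samples)[: n_out * factor]
    timestamps = np.asarray(timestamps)[: n_out * factor]
    # Average each group of `factor` consecutive high-rate samples.
    reduced = samples.reshape(n_out, factor, -1).mean(axis=1)
    # Use the midpoint of each group as the low-rate timestamp; these all
    # remain inside the original packet-transmission interval.
    reduced_ts = timestamps.reshape(n_out, factor).mean(axis=1)
    return {"samples": reduced, "timestamps": reduced_ts}
```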