Patents by Inventor Adrian Brian Ratter
Adrian Brian Ratter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11914836
Abstract: A method includes accessing an image of a physical environment of a user, the image depicting a physical input device and a physical hand of the user, determining that a contrast between the physical input device and the physical hand depicted in the image is lower than a predetermined threshold, modifying the image to increase the contrast, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane, generating a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
Type: Grant
Filed: December 20, 2022
Date of Patent: February 27, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
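The abstract above outlines a contrast check between the hand and the input device, followed by compositing the cropped hand pixels over a rendered virtual input device. Below is a minimal NumPy sketch of those two steps; the threshold value, the luminance-based contrast measure, and all function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

# Hedged sketch of the contrast check and compositing steps described in the
# abstract. Names, the contrast metric, and the threshold are assumptions.

CONTRAST_THRESHOLD = 0.15  # assumed normalized-luminance threshold

def region_contrast(image, hand_mask, device_mask):
    """Contrast between hand and input-device regions as a mean-luminance difference."""
    luma = image.mean(axis=-1) / 255.0           # rough grayscale in [0, 1]
    return abs(luma[hand_mask].mean() - luma[device_mask].mean())

def boost_contrast(image, gain=1.5):
    """Simple global contrast stretch around the mid-gray point."""
    img = (image.astype(np.float32) - 127.5) * gain + 127.5
    return np.clip(img, 0, 255).astype(np.uint8)

def composite_hand_over_virtual_device(image, hand_mask, device_mask, rendered_device):
    """Crop the hand via the mask and display it over the rendered virtual device."""
    if region_contrast(image, hand_mask, device_mask) < CONTRAST_THRESHOLD:
        image = boost_contrast(image)
    out = rendered_device.copy()
    out[hand_mask] = image[hand_mask]            # hand pixels occlude the virtual device
    return out
```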
-
Publication number: 20230131667
Abstract: A method includes accessing an image of a physical environment of a user, the image depicting a physical input device and a physical hand of the user, determining that a contrast between the physical input device and the physical hand depicted in the image is lower than a predetermined threshold, modifying the image to increase the contrast, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane, generating a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
Type: Application
Filed: December 20, 2022
Publication date: April 27, 2023
Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
-
Patent number: 11537258
Abstract: In one embodiment, a method includes a computer system accessing an image of a physical environment of a user, the image being associated with a perspective of the user and depicting a physical input device and a physical hand of the user, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane associated with the perspective of the user, generating, by applying the image mask to the image, a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
Type: Grant
Filed: October 16, 2020
Date of Patent: December 27, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
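The mask-generation step in this abstract projects a three-dimensional hand model onto the image plane associated with the user's perspective. The sketch below shows one way such a projection and a coarse point-based rasterization could look under a pinhole camera model; the matrices `K` and `cam_from_world` and all function names are assumptions for the example.

```python
import numpy as np

# Illustrative sketch of projecting 3-D hand-model vertices onto the image
# plane and turning them into a binary mask. Camera model is assumed pinhole.

def project_points(vertices_world, K, cam_from_world):
    """Project Nx3 world-space points to pixel coordinates (pinhole model)."""
    n = vertices_world.shape[0]
    homog = np.hstack([vertices_world, np.ones((n, 1))])   # Nx4 homogeneous
    cam = (cam_from_world @ homog.T).T[:, :3]               # Nx3 in camera space
    cam = cam[cam[:, 2] > 1e-6]                              # keep points in front of camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]                          # Nx2 pixel coordinates

def hand_mask_from_model(vertices_world, K, cam_from_world, height, width):
    """Rasterize projected hand vertices into a coarse binary image mask."""
    mask = np.zeros((height, width), dtype=bool)
    uv = np.round(project_points(vertices_world, K, cam_from_world)).astype(int)
    keep = (uv[:, 0] >= 0) & (uv[:, 0] < width) & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    uv = uv[keep]
    mask[uv[:, 1], uv[:, 0]] = True
    return mask
```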
-
Patent number: 11500413
Abstract: In one embodiment, a method includes receiving, from a controller, a data packet including (1) a plurality of samples each corresponding to measurements from a motion sensor and (2) a timestamp corresponding to a measurement time of one of the samples as measured by a clock of the controller; determining, based on the timestamp, an estimated measurement time relative to a local clock for each of the plurality of samples that is not associated with the timestamp; and converting each of the timestamp and the estimated measurement times to a corresponding synchronization time using a learned relationship relating the clock of the controller and the local clock. The learned relationship is iteratively learned based on previously received data packets from the controller. The synchronization time associated with each of the plurality of samples represents an estimated time, relative to the local clock, at which the sample was measured.
Type: Grant
Filed: August 31, 2020
Date of Patent: November 15, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
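This abstract describes mapping controller-clock timestamps to a local clock through a relationship that is iteratively refined as packets arrive. The sketch below models that relationship as a drifting linear offset refitted from recent packets; the sample period, window size, and the assumption that the packet timestamp belongs to the last sample are illustrative choices, not taken from the patent.

```python
from collections import deque

# Hedged sketch of controller-to-local clock synchronization: a linear model
# (offset + drift) refined from recently received packets. All constants and
# names below are assumptions for illustration.

SAMPLE_PERIOD = 0.001   # assumed 1 kHz motion-sensor sampling (controller clock)
WINDOW = 256            # number of (controller, local) time pairs kept for the fit

class ClockMapper:
    def __init__(self):
        self.pairs = deque(maxlen=WINDOW)   # (controller_time, local_receive_time)
        self.offset, self.drift = 0.0, 1.0

    def update(self, controller_ts, local_receive_ts):
        """Refine the learned controller-to-local mapping from a new packet."""
        self.pairs.append((controller_ts, local_receive_ts))
        if len(self.pairs) >= 2:
            c0, l0 = self.pairs[0]
            c1, l1 = self.pairs[-1]
            if c1 != c0:
                self.drift = (l1 - l0) / (c1 - c0)
                self.offset = l1 - self.drift * c1

    def to_local(self, controller_ts):
        return self.offset + self.drift * controller_ts

def sync_packet(mapper, packet_ts, num_samples, local_receive_ts):
    """Estimate per-sample controller times around the packet timestamp, then
    map each one onto the local clock. Assumes the timestamp marks the last sample."""
    mapper.update(packet_ts, local_receive_ts)
    controller_times = [packet_ts - (num_samples - 1 - i) * SAMPLE_PERIOD
                        for i in range(num_samples)]
    return [mapper.to_local(t) for t in controller_times]
```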
-
Publication number: 20220121343
Abstract: In one embodiment, a method includes a computer system accessing an image of a physical environment of a user, the image being associated with a perspective of the user and depicting a physical input device and a physical hand of the user, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane associated with the perspective of the user, generating, by applying the image mask to the image, a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
Type: Application
Filed: October 16, 2020
Publication date: April 21, 2022
Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
-
Publication number: 20220026984
Abstract: Disclosed herein are related to a system and a method for porting a physical object in a physical space into a virtual reality. In one approach, the method includes detecting an input device in a physical space relative to a user of the input device. In one approach, the method includes presenting, by a display device to the user, a virtual model of the detected input device in a virtual space at a location and an orientation. The location and the orientation of the virtual model in the virtual space may correspond to a location and an orientation of the input device in the physical space relative to the user. In one approach, the method includes visually providing relative to the virtual model in the virtual space, through the display device, spatial feedback on the user's interaction with the input device in the physical space.
Type: Application
Filed: October 5, 2021
Publication date: January 27, 2022
Inventors: Jan Herling, Adrian Brian Ratter
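The method here places a virtual model of a tracked input device at a pose in the virtual space that mirrors its physical pose relative to the user, and provides spatial feedback on the user's interaction with it. The sketch below shows one plausible way to re-anchor the relative pose and derive a simple proximity-based feedback signal; the pose representation and all names are assumptions for illustration.

```python
from dataclasses import dataclass
import numpy as np

# Illustrative sketch: express the device pose relative to the user, re-anchor
# it to the user's pose in the virtual space, and compute a simple spatial
# feedback value as a fingertip approaches the virtual model.

@dataclass
class Pose:
    position: np.ndarray   # 3-vector, meters
    rotation: np.ndarray   # 3x3 rotation matrix

def virtual_pose_from_physical(device_pose: Pose, user_pose: Pose,
                               virtual_user_pose: Pose) -> Pose:
    """Preserve the device's location and orientation relative to the user
    when placing its virtual model in the virtual space."""
    rel_pos = user_pose.rotation.T @ (device_pose.position - user_pose.position)
    rel_rot = user_pose.rotation.T @ device_pose.rotation
    return Pose(
        position=virtual_user_pose.rotation @ rel_pos + virtual_user_pose.position,
        rotation=virtual_user_pose.rotation @ rel_rot,
    )

def hover_feedback(fingertip: np.ndarray, virtual_device: Pose, near=0.02):
    """Feedback strength in [0, 1] that grows as the fingertip nears the device."""
    dist = np.linalg.norm(fingertip - virtual_device.position)
    return float(np.clip(1.0 - dist / near, 0.0, 1.0))
```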
-
Patent number: 11144115
Abstract: Disclosed herein are related to a system and a method for porting a physical object in a physical space into a virtual reality. In one approach, the method includes detecting an input device in a physical space relative to a user of the input device. In one approach, the method includes presenting, by a display device to the user, a virtual model of the detected input device in a virtual space at a location and an orientation. The location and the orientation of the virtual model in the virtual space may correspond to a location and an orientation of the input device in the physical space relative to the user. In one approach, the method includes visually providing relative to the virtual model in the virtual space, through the display device, spatial feedback on the user's interaction with the input device in the physical space.
Type: Grant
Filed: November 1, 2019
Date of Patent: October 12, 2021
Assignee: FACEBOOK TECHNOLOGIES, LLC
Inventors: Jan Herling, Adrian Brian Ratter
-
Publication number: 20210132683
Abstract: Disclosed herein are related to a system and a method for porting a physical object in a physical space into a virtual reality. In one approach, the method includes detecting an input device in a physical space relative to a user of the input device. In one approach, the method includes presenting, by a display device to the user, a virtual model of the detected input device in a virtual space at a location and an orientation. The location and the orientation of the virtual model in the virtual space may correspond to a location and an orientation of the input device in the physical space relative to the user. In one approach, the method includes visually providing relative to the virtual model in the virtual space, through the display device, spatial feedback on the user's interaction with the input device in the physical space.
Type: Application
Filed: November 1, 2019
Publication date: May 6, 2021
Inventors: Jan Herling, Adrian Brian Ratter
-
Publication number: 20200401181
Abstract: In one embodiment, a method includes receiving, from a controller, a data packet including (1) a plurality of samples each corresponding to measurements from a motion sensor and (2) a timestamp corresponding to a measurement time of one of the samples as measured by a clock of the controller; determining, based on the timestamp, an estimated measurement time relative to a local clock for each of the plurality of samples that is not associated with the timestamp; and converting each of the timestamp and the estimated measurement times to a corresponding synchronization time using a learned relationship relating the clock of the controller and the local clock. The learned relationship is iteratively learned based on previously received data packets from the controller. The synchronization time associated with each of the plurality of samples represents an estimated time, relative to the local clock, at which the sample was measured.
Type: Application
Filed: August 31, 2020
Publication date: December 24, 2020
Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
-
Patent number: 10809760
Abstract: In one embodiment, a method includes receiving, from a controller, a data packet including (1) a plurality of samples each corresponding to measurements from a motion sensor and (2) a timestamp corresponding to a measurement time of one of the samples as measured by a clock of the controller; determining, based on the timestamp, an estimated measurement time relative to a local clock for each of the plurality of samples that is not associated with the timestamp; and converting each of the timestamp and the estimated measurement times to a corresponding synchronization time using a learned relationship relating the clock of the controller and the local clock. The learned relationship is iteratively learned based on previously received data packets from the controller. The synchronization time associated with each of the plurality of samples represents an estimated time, relative to the local clock, at which the sample was measured.
Type: Grant
Filed: October 29, 2018
Date of Patent: October 20, 2020
Assignee: Facebook, Inc.
Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
-
Patent number: 10558261
Abstract: In one embodiment, a method includes receiving motion data from a motion sensor during a packet-transmission interval of a wireless protocol. The motion data corresponds to a first pre-determined number of samples measured at a first sampling frequency. Each sample is associated with a first timestamp corresponding to a measurement time of that sample during the packet-transmission interval. The method also includes converting the motion data to correspond to a second pre-determined number of samples. The second pre-determined number is fewer than the first pre-determined number. The method also includes determining a second timestamp for each of the second pre-determined number of samples. The second timestamps are within the packet-transmission interval and represent measurement times at a second sampling frequency that is lower than the first sampling frequency. The method also includes combining the converted motion data and the corresponding second timestamps into a first data packet.
Type: Grant
Filed: October 29, 2018
Date of Patent: February 11, 2020
Assignee: Facebook Technologies, LLC
Inventors: Boyang Zhang, Adrian Brian Ratter, Chengyuan Yan, Jack Hood Profit, Jr., Paul Austin Buckley
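This abstract describes converting a larger number of high-rate motion samples into a smaller number of lower-rate samples within one packet-transmission interval, each carrying its own timestamp, before packing them into a data packet. Below is a hedged NumPy sketch of that resampling using linear interpolation; the sampling frequencies, the interpolation choice, and the packet layout are assumptions for illustration, not the patented format.

```python
import numpy as np

# Minimal sketch of downsampling motion data within a packet-transmission
# interval and attaching a timestamp to each lower-rate sample.

def downsample_for_packet(samples, t_start, f_high=2000.0, f_low=500.0):
    """samples: (N, channels) motion data at f_high Hz starting at t_start.
    Returns (M, channels) data and M timestamps at the lower rate f_low Hz."""
    samples = np.asarray(samples, dtype=np.float32)
    n = samples.shape[0]
    t_high = t_start + np.arange(n) / f_high
    m = int(round(n * f_low / f_high))
    t_low = t_start + np.arange(m) / f_low
    # interpolate each channel onto the lower-rate timestamps
    out = np.stack([np.interp(t_low, t_high, samples[:, c])
                    for c in range(samples.shape[1])], axis=1)
    return out, t_low

def build_packet(samples, t_start):
    """Combine the converted motion data and its timestamps into one packet dict."""
    data, timestamps = downsample_for_packet(samples, t_start)
    return {"timestamps": timestamps.tolist(), "samples": data.tolist()}
```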