Patents by Inventor Alexander Jay Bruen Trevor
Alexander Jay Bruen Trevor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11957807
Abstract: A cleaning robot may determine a three-dimensional model of a physical environment based on data collected from one or more sensors. The cleaning robot may then identify a surface within the physical environment to clean. Having identified that surface, the robot may autonomously navigate to a location proximate to the surface, position an ultraviolet light source in proximity to the surface, and activate the ultraviolet light source for a period of time.
Type: Grant
Filed: March 22, 2021
Date of Patent: April 16, 2024
Assignee: Robust AI, Inc.
Inventors: Rodney Allen Brooks, Dylan Bourgeois, Crystal Chao, Alexander Jay Bruen Trevor, Mohamed Rabie Amer, Anthony Sean Jules, Gary Fred Marcus
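The abstract leaves the exposure duration unspecified; in UV disinfection the on-time is commonly derived from a target dose at the surface. A minimal sketch (the function name and the dose and irradiance figures are illustrative, not from the patent):

```python
def uv_dwell_time_s(target_dose_mj_cm2, irradiance_mw_cm2):
    """Seconds the UV source must stay on to deliver a target dose.

    Dose (mJ/cm^2) = irradiance (mW/cm^2) x time (s), so the dwell
    time is simply dose / irradiance.
    """
    if irradiance_mw_cm2 <= 0:
        raise ValueError("irradiance must be positive")
    return target_dose_mj_cm2 / irradiance_mw_cm2

# e.g. a 10 mJ/cm^2 dose from a source delivering 2 mW/cm^2 at the surface
print(uv_dwell_time_s(10.0, 2.0))  # -> 5.0
```

The robot would hold the light source at the computed distance for at least this long before moving to the next surface.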
-
Patent number: 11960533
Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
Type: Grant
Filed: July 25, 2022
Date of Patent: April 16, 2024
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
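The correspondence measure described above could, as a rough sketch, blend a spatial-feature similarity with a scale-agreement term. The weights, field names, and feature encoding here are assumptions for illustration, not taken from the patent:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def correspondence_measure(query, candidate, w_spatial=0.7, w_scale=0.3):
    """Blend spatial-feature similarity with scale agreement into one score."""
    spatial = cosine(query["spatial"], candidate["spatial"])
    # scale agreement: ratio of the smaller to the larger object scale
    s1, s2 = query["scale"], candidate["scale"]
    scale = min(s1, s2) / max(s1, s2)
    return w_spatial * spatial + w_scale * scale

q = {"spatial": [1.0, 0.0, 2.0], "scale": 1.5}
c = {"spatial": [1.0, 0.0, 2.0], "scale": 3.0}
print(correspondence_measure(q, c))  # identical features, half the scale: ~0.85
```

Stored surround views would be ranked by this score and the top results returned with a representative image.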
-
Publication number: 20240098233
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Application
Filed: November 27, 2023
Publication date: March 21, 2024
Applicant: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
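Selecting live frames by integrated IMU angle, as the abstract describes, might look like the following sketch. The rectangular integration and the fixed angular step are illustrative choices, not details from the patent:

```python
def select_frames_by_angle(timestamps, yaw_rates_dps, step_deg=10.0):
    """Integrate gyroscope yaw rate into a per-frame angular view, then
    keep one frame each time the accumulated angle crosses step_deg.

    timestamps: capture times in seconds, one per frame
    yaw_rates_dps: degrees/second from the IMU, one per frame
    """
    selected = [0]            # always keep the first frame
    angle = 0.0
    next_angle = step_deg
    for i in range(1, len(timestamps)):
        dt = timestamps[i] - timestamps[i - 1]
        angle += yaw_rates_dps[i] * dt   # simple rectangular integration
        if angle >= next_angle:
            selected.append(i)
            next_angle += step_deg

    return selected

# 2 fps capture rotating at 20 deg/s -> 10 deg per frame; keep every 20 deg
ts = [i * 0.5 for i in range(5)]
rates = [20.0] * 5
print(select_frames_by_angle(ts, rates, step_deg=20.0))  # -> [0, 2, 4]
```

Playing the kept frames in order then presents the object at evenly spaced angular views, which is what makes the apparent 3-D rotation possible without a polygon model.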
-
Publication number: 20240054718
Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model and wherein selection of the first layer provides access to the second layer with the second content model.
Type: Application
Filed: August 18, 2023
Publication date: February 15, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
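The layer structure the abstract describes, where selecting the first layer provides access to a second layer with its own content model, can be sketched as a small data structure. The class and field names here are invented for illustration:

```python
class Layer:
    """One layer of the virtual reality environment: a content model
    (the foreground object) and a context model (the surrounding scenery),
    plus an optional nested layer that selection reveals."""
    def __init__(self, content, context, inner=None):
        self.content = content    # multi-view representation of the object
        self.context = context    # multi-view representation of the scenery
        self.inner = inner        # second layer revealed on selection

def select(layer):
    """Selecting a layer provides access to the layer nested inside it."""
    return layer.inner if layer.inner is not None else layer

second = Layer(content="engine close-up", context="garage")
first = Layer(content="car", context="street", inner=second)
print(select(first).content)  # -> engine close-up
```

Navigation between viewpoints within each layer would then be driven by the user's physical movements, as the abstract describes.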
-
Patent number: 11876948
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Grant
Filed: July 25, 2022
Date of Patent: January 16, 2024
Assignee: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
-
Publication number: 20230402067
Abstract: Various embodiments of the present invention relate generally to systems and methods for integrating audio into a multi-view interactive digital media representation. According to particular embodiments, one process includes retrieving a multi-view interactive digital media representation that includes numerous images fused together into content and context models. The process next includes retrieving and processing audio data to be integrated into the multi-view interactive digital media representation. A first segment of audio data may be associated with a first position in the multi-view interactive digital media representation. In other examples, a first segment of audio data may be associated with a visual position or the location of a camera in the multi-view interactive digital media representation.
Type: Application
Filed: August 29, 2023
Publication date: December 14, 2023
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Vladimir Roumenov Glavtchev, Alexander Jay Bruen Trevor
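Associating audio segments with positions in the representation, as described, amounts to a lookup from the current viewpoint to a clip. This sketch assumes positions are simple scalars such as view angles; the tuple layout and file names are illustrative:

```python
def audio_for_position(segments, position):
    """Return the audio clip associated with the current viewing position.

    segments: list of (start_position, end_position, clip_name) tuples,
    where positions index into the multi-view representation (e.g. the
    angle of the current viewpoint).
    """
    for start, end, clip in segments:
        if start <= position < end:
            return clip
    return None

# narration tied to viewpoint ranges rather than to playback time
segments = [(0, 90, "front_narration.wav"), (90, 180, "side_narration.wav")]
print(audio_for_position(segments, 120))  # -> side_narration.wav
```

As the user swipes through viewpoints, the player would re-query this mapping and cross-fade when the returned clip changes.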
-
Patent number: 11783864
Abstract: Various embodiments of the present invention relate generally to systems and methods for integrating audio into a multi-view interactive digital media representation. According to particular embodiments, one process includes retrieving a multi-view interactive digital media representation that includes numerous images fused together into content and context models. The process next includes retrieving and processing audio data to be integrated into the multi-view interactive digital media representation. A first segment of audio data may be associated with a first position in the multi-view interactive digital media representation. In other examples, a first segment of audio data may be associated with a visual position or the location of a camera in the multi-view interactive digital media representation.
Type: Grant
Filed: September 22, 2015
Date of Patent: October 10, 2023
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Vladimir Roumenov Glavtchev, Alexander Jay Bruen Trevor
-
Patent number: 11776199
Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model and wherein selection of the first layer provides access to the second layer with the second content model.
Type: Grant
Filed: July 25, 2022
Date of Patent: October 3, 2023
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
-
Patent number: 11717587
Abstract: A model of a physical environment may be determined based at least in part on sensor data collected by one or more sensors at a robot. The model may include a plurality of constraints and a plurality of data values. A trajectory through the physical environment may be determined for an ultraviolet end effector coupled with the robot to clean one or more surfaces in the physical environment. The ultraviolet end effector may include one or more ultraviolet light sources. The ultraviolet end effector may be moved along the trajectory.
Type: Grant
Filed: March 19, 2021
Date of Patent: August 8, 2023
Assignee: Robust AI, Inc.
Inventors: Alexander Jay Bruen Trevor, Dylan Bourgeois, Marina Kollmitz, Crystal Chao
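The abstract does not say how the cleaning trajectory is generated; one common choice for surface coverage is a back-and-forth raster (boustrophedon) pattern, sketched here for a flat rectangular patch. The function and parameter names are illustrative, not from the patent:

```python
def raster_trajectory(width, height, spacing):
    """Back-and-forth waypoints covering a width x height surface patch,
    with rows `spacing` apart. Coordinates are in the surface's local
    frame; the end effector would track them in sequence."""
    waypoints = []
    y = 0.0
    left_to_right = True
    while y <= height:
        xs = (0.0, width) if left_to_right else (width, 0.0)
        waypoints.append((xs[0], y))
        waypoints.append((xs[1], y))
        left_to_right = not left_to_right
        y += spacing
    return waypoints

# a 2 m x 1 m patch swept in rows 0.5 m apart
print(raster_trajectory(2.0, 1.0, 0.5))
```

Row spacing would be chosen from the UV source's effective footprint so that adjacent passes overlap enough to deliver the required dose everywhere.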
-
Publication number: 20230083609
Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model and wherein selection of the first layer provides access to the second layer with the second content model.
Type: Application
Filed: July 25, 2022
Publication date: March 16, 2023
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
-
Publication number: 20230083213
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Application
Filed: July 25, 2022
Publication date: March 16, 2023
Applicant: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
-
Publication number: 20230080005
Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
Type: Application
Filed: July 25, 2022
Publication date: March 16, 2023
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
-
Publication number: 20220408019
Abstract: A set of images may be captured by a camera as the camera moves along a path through space around an object. Then, a smoothed function (e.g., a polynomial) may be fitted to the translational and/or rotational position in space. For example, positions in a Cartesian coordinate space may be determined for the images. The positions may then be transformed to a polar coordinate space, in which a trajectory along the points may be determined, and the trajectory transformed back into the Cartesian space. Similarly, the rotational position of the images may be smoothed, for instance by minimizing a loss function. Finally, one or more images may be transformed to more closely align a viewpoint of the image with the fitted translational and/or rotational positions.
Type: Application
Filed: June 17, 2021
Publication date: December 22, 2022
Applicant: Fyusion, Inc.
Inventors: Krunal Ketan Chande, Stefan Johannes Josef Holzer, Wook Yeon Hwang, Alexander Jay Bruen Trevor, Shane Griffith
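A simplified 2-D version of the described pipeline — convert camera positions to polar coordinates, fit a smooth polynomial, and convert back — might look like this. It is a sketch in pure Python with a degree-2 fit of radius against angle; the actual method operates on full translational and rotational camera poses:

```python
import math

def polyfit2(xs, ys):
    """Least-squares quadratic y = a*x^2 + b*x + c via normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]          # sums of x^0..x^4
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[4], S[3], S[2], T[2]],
         [S[3], S[2], S[1], T[1]],
         [S[2], S[1], S[0], T[0]]]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        coeffs[r] = (A[r][3] - sum(A[r][c] * coeffs[c]
                                   for c in range(r + 1, 3))) / A[r][r]
    return coeffs  # a, b, c

def smooth_polar(points):
    """Convert (x, y) camera positions to polar form, fit radius as a
    smooth quadratic in angle, and map the trajectory back to Cartesian."""
    thetas = [math.atan2(y, x) for x, y in points]
    rs = [math.hypot(x, y) for x, y in points]
    a, b, c = polyfit2(thetas, rs)
    return [((a * t * t + b * t + c) * math.cos(t),
             (a * t * t + b * t + c) * math.sin(t)) for t in thetas]

pts = [(2 * math.cos(t), 2 * math.sin(t)) for t in (0.2, 0.5, 0.8, 1.1)]
print(smooth_polar(pts))  # points already on a circle come back nearly unchanged
```

Positions captured on a noisy hand-held arc would be pulled toward the fitted smooth arc, and each image then warped toward its smoothed viewpoint.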
-
Publication number: 20220406003
Abstract: Three-dimensional points may be projected onto first locations in a first image of an object captured from a first position in three-dimensional space relative to the object, and projected onto second locations for a virtual camera located at a second position in three-dimensional space relative to the object. First transformations linking the first and second locations may then be determined. Second transformations transforming first coordinates for the first image to second coordinates for the second image may be determined based on the first transformations. Based on these second transformations and on the first image, a second image of the object may be generated from the virtual camera position.
Type: Application
Filed: October 15, 2021
Publication date: December 22, 2022
Applicant: Fyusion, Inc.
Inventors: Rodrigo Ortiz Cayon, Krunal Ketan Chande, Stefan Johannes Josef Holzer, Wook Yeon Hwang, Alexander Jay Bruen Trevor, Shane Griffith
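The first step — projecting 3-D points into both the captured view and the virtual view, and linking the two sets of locations — can be sketched with an axis-aligned pinhole camera. The absence of rotation and the function names are simplifications for illustration:

```python
def project(point, cam_pos, focal=1.0):
    """Pinhole projection of a 3-D point into a camera at cam_pos looking
    along +z (axis-aligned, no rotation, for simplicity)."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    return (focal * x / z, focal * y / z)

def point_transforms(points, real_cam, virtual_cam):
    """Project each 3-D point into both the captured view and the virtual
    view, recording the 2-D shift that links the two locations."""
    pairs = []
    for p in points:
        src = project(p, real_cam)
        dst = project(p, virtual_cam)
        pairs.append((src, (dst[0] - src[0], dst[1] - src[1])))
    return pairs

# two points 5 m in front of the real camera; virtual camera 1 m to the right
pairs = point_transforms([(0.0, 0.0, 5.0), (1.0, 0.0, 5.0)],
                         (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
print(pairs)
```

In the described method, these per-point correspondences would then be interpolated into a dense image-coordinate mapping ("second transformations") used to warp the first image into the virtual view.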
-
Patent number: 11436275
Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
Type: Grant
Filed: August 29, 2019
Date of Patent: September 6, 2022
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
-
Patent number: 11435869
Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model and wherein selection of the first layer provides access to the second layer with the second content model.
Type: Grant
Filed: December 23, 2019
Date of Patent: September 6, 2022
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
-
Patent number: 11438565
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Grant
Filed: April 19, 2019
Date of Patent: September 6, 2022
Assignee: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
-
Publication number: 20210347048
Abstract: A model of a physical environment may be determined based at least in part on sensor data collected by one or more sensors at a robot. The model may include a plurality of constraints and a plurality of data values. A trajectory through the physical environment may be determined for an ultraviolet end effector coupled with the robot to clean one or more surfaces in the physical environment. The ultraviolet end effector may include one or more ultraviolet light sources. The ultraviolet end effector may be moved along the trajectory.
Type: Application
Filed: March 19, 2021
Publication date: November 11, 2021
Applicant: Robust AI, Inc.
Inventors: Alexander Jay Bruen Trevor, Dylan Bourgeois, Marina Kollmitz, Crystal Chao
-
Publication number: 20210346543
Abstract: A cleaning robot may determine a three-dimensional model of a physical environment based on data collected from one or more sensors. The cleaning robot may then identify a surface within the physical environment to clean. Having identified that surface, the robot may autonomously navigate to a location proximate to the surface, position an ultraviolet light source in proximity to the surface, and activate the ultraviolet light source for a period of time.
Type: Application
Filed: March 22, 2021
Publication date: November 11, 2021
Applicant: Robust AI, Inc.
Inventors: Rodney Allen Brooks, Dylan Bourgeois, Crystal Chao, Alexander Jay Bruen Trevor, Mohamed Rabie Amer, Anthony Sean Jules, Gary Fred Marcus
-
Publication number: 20210346557
Abstract: A robot may identify a human located proximate to the robot in a physical environment based on sensor data captured from one or more sensors on the robot. A trajectory of the human through space may be predicted. When the predicted trajectory of the human intersects with a current path of the robot, an updated path to a destination location in the environment may be determined so as to avoid a collision between the robot and the human along the predicted trajectory. The robot may then move along the determined path.
Type: Application
Filed: March 19, 2021
Publication date: November 11, 2021
Applicant: Robust AI, Inc.
Inventors: Rodney Allen Brooks, Dylan Bourgeois, Crystal Chao, Alexander Jay Bruen Trevor, Mohamed Rabie Amer, Anthony Sean Jules, Gary Fred Marcus, Michelle Ho
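A toy version of the described loop — predict the human's trajectory, then adjust any robot waypoints it would intersect — could look like the following. The constant-velocity model, grid coordinates, and one-cell sidestep are stand-ins for a real tracker and planner:

```python
def predict_positions(pos, vel, steps, dt=1.0):
    """Constant-velocity prediction of a tracked human's future positions."""
    return [(pos[0] + vel[0] * dt * i, pos[1] + vel[1] * dt * i)
            for i in range(1, steps + 1)]

def replan(path, blocked):
    """Move waypoints that the predicted human trajectory will occupy one
    cell to the side (a stand-in for a full path planner)."""
    new_path = []
    for x, y in path:
        if (x, y) in blocked:
            new_path.append((x, y + 1))   # sidestep the predicted collision
        else:
            new_path.append((x, y))
    return new_path

human_future = predict_positions((3, 0), (0, 1), steps=3)   # human heading "up"
robot_path = [(0, 2), (1, 2), (2, 2), (3, 2), (4, 2)]
blocked = set(human_future)
print(replan(robot_path, blocked))  # -> [(0, 2), (1, 2), (2, 2), (3, 3), (4, 2)]
```

In practice the prediction and re-planning would run continuously as new sensor data arrives, so the path keeps adapting while the human moves.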