Patents by Inventor Alexander Jay Bruen Trevor

Alexander Jay Bruen Trevor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11957807
    Abstract: A cleaning robot may determine a three-dimensional model of a physical environment based on data collected from one or more sensors. The cleaning robot may then identify a surface within the physical environment to clean. Having identified that surface, the robot may autonomously navigate to a location proximate to the surface, position an ultraviolet light source in proximity to the surface, and activate the ultraviolet light source for a period of time.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: April 16, 2024
    Assignee: Robust AI, Inc.
    Inventors: Rodney Allen Brooks, Dylan Bourgeois, Crystal Chao, Alexander Jay Bruen Trevor, Mohamed Rabie Amer, Anthony Sean Jules, Gary Fred Marcus
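    The cleaning workflow in the abstract above can be summarized as a short control loop. The sketch below is illustrative only, assuming hypothetical robot helpers (build_environment_model, find_surfaces_to_clean, navigate_to, position_uv_source, uv_light) rather than any interface disclosed in the patent:
      # Sketch of the navigate-and-disinfect loop described in the abstract above.
      # The Robot object and every helper called on it are hypothetical placeholders.
      import time

      def clean_environment(robot, dwell_seconds=30.0):
          # Build a 3-D model of the environment from the robot's onboard sensors.
          model = robot.build_environment_model()
          # Visit each surface identified as needing cleaning.
          for surface in robot.find_surfaces_to_clean(model):
              robot.navigate_to(surface.approach_pose)      # move near the surface
              robot.position_uv_source(surface)             # bring the lamp in close
              robot.uv_light.on()
              time.sleep(dwell_seconds)                     # expose for a set period
              robot.uv_light.off()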
  • Patent number: 11960533
    Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
    Type: Grant
    Filed: July 25, 2022
    Date of Patent: April 16, 2024
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
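    The search flow described above (compare spatial and scale information, score the correspondence, return ranked results with images) can be illustrated with a toy scoring loop. The SurroundView fields, the cosine/ratio similarities, and the 0.7/0.3 weighting below are assumptions for illustration, not the patented method:
      # Toy illustration of scoring a query surround view against stored ones.
      # Feature fields and the weighted score are assumptions for illustration.
      from dataclasses import dataclass, field

      import numpy as np

      @dataclass
      class SurroundView:
          spatial_descriptor: np.ndarray                   # aggregated 3-D structure features
          scale: float                                     # estimated real-world object scale
          image_ids: list = field(default_factory=list)    # viewpoint images backing the view

      def correspondence(query, candidate):
          # Compare spatial information via cosine similarity of the descriptors.
          a, b = query.spatial_descriptor, candidate.spatial_descriptor
          spatial_sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
          # Compare scale information via a ratio in [0, 1].
          scale_sim = min(query.scale, candidate.scale) / max(query.scale, candidate.scale)
          return 0.7 * spatial_sim + 0.3 * scale_sim       # arbitrary weighting

      def search(query, stored, top_k=5):
          # Rank stored surround views by correspondence and return the best matches,
          # whose viewpoint images can accompany the search results.
          return sorted(stored, key=lambda sv: correspondence(query, sv), reverse=True)[:top_k]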
  • Publication number: 20240098233
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: November 27, 2023
    Publication date: March 21, 2024
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
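    A rough sketch of the frame-selection idea above: integrate gyroscope angular rate to estimate how far the camera has swept around the object, then keep frames at roughly uniform angular spacing. The data layout and the fixed 5-degree step are assumptions:
      # Rough sketch: estimate the swept angle from gyroscope readings and keep
      # frames at roughly uniform angular spacing. Data layout is assumed.
      import numpy as np

      def cumulative_yaw(gyro_z, timestamps):
          # Integrate angular rate (rad/s) about the vertical axis over time.
          dt = np.diff(timestamps, prepend=timestamps[0])
          return np.cumsum(np.asarray(gyro_z) * dt)        # swept angle at each frame

      def select_frames(frames, angles, step_deg=5.0):
          # Keep one frame each time the swept angle advances by step_deg.
          step = np.radians(step_deg)
          selected, next_angle = [], angles[0]
          for frame, angle in zip(frames, angles):
              if angle >= next_angle:
                  selected.append(frame)
                  next_angle += step
          return selected    # played back in order, these approximate a 3-D rotation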
  • Publication number: 20240054718
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Application
    Filed: August 18, 2023
    Publication date: February 15, 2024
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
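    The layered arrangement described above can be sketched as a small data structure in which each layer bundles a content model with an optional context model and selecting a layer exposes the next one. Class names and the selection behavior below are assumptions for illustration:
      # Minimal sketch of the layered arrangement in the abstract above.
      # Class names and the selection behavior are assumptions for illustration.
      from dataclasses import dataclass, field
      from typing import Optional

      @dataclass
      class MultiViewModel:
          name: str
          views: list = field(default_factory=list)    # multi-view images of the subject

      @dataclass
      class Layer:
          content: MultiViewModel                      # content model shown in this layer
          context: Optional[MultiViewModel] = None     # surrounding context model
          child: Optional["Layer"] = None              # layer revealed on selection

      def select(layer):
          # Selecting a layer exposes the next layer (and its content model), if any.
          return layer.child if layer.child is not None else layer

      # Example: selecting the first layer gives access to the second content model.
      second = Layer(content=MultiViewModel("detail object"))
      first = Layer(content=MultiViewModel("scene object"),
                    context=MultiViewModel("scene context"),
                    child=second)
      active = select(first)    # -> second layer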
  • Patent number: 11876948
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: July 25, 2022
    Date of Patent: January 16, 2024
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20230402067
    Abstract: Various embodiments of the present invention relate generally to systems and methods for integrating audio into a multi-view interactive digital media representation. According to particular embodiments, one process includes retrieving a multi-view interactive digital media representation that includes numerous images fused together into content and context models. The process next includes retrieving and processing audio data to be integrated into the multi-view interactive digital media representation. A first segment of audio data may be associated with a first position in the multi-view interactive digital media representation. In other examples, a first segment of audio data may be associated with a visual position or the location of a camera in the multi-view interactive digital media representation.
    Type: Application
    Filed: August 29, 2023
    Publication date: December 14, 2023
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Vladimir Roumenov Glavtchev, Alexander Jay Bruen Trevor
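    A small sketch of the audio association described above: each audio segment is anchored to a position in the representation, and playback picks the segment anchored closest to the current viewing position. The nearest-position lookup is an assumed simplification:
      # Sketch: anchor audio segments to positions in the representation and pick
      # the one closest to the current viewing position during playback.
      from dataclasses import dataclass

      @dataclass
      class AudioSegment:
          source: str        # e.g. a file path or decoded waveform handle
          position: float    # anchor in the view sequence, normalized to 0.0 .. 1.0

      def segment_for_position(segments, view_position):
          # Return the audio segment anchored closest to the current view position.
          if not segments:
              return None
          return min(segments, key=lambda s: abs(s.position - view_position))

      # Example: narration near the start, a second clip near the end of the sweep.
      track = [AudioSegment("intro.wav", 0.0), AudioSegment("detail.wav", 0.9)]
      current = segment_for_position(track, view_position=0.85)   # -> detail.wav clip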
  • Patent number: 11783864
    Abstract: Various embodiments of the present invention relate generally to systems and methods for integrating audio into a multi-view interactive digital media representation. According to particular embodiments, one process includes retrieving a multi-view interactive digital media representation that includes numerous images fused together into content and context models. The process next includes retrieving and processing audio data to be integrated into the multi-view interactive digital media representation. A first segment of audio data may be associated with a first position in the multi-view interactive digital media representation. In other examples, a first segment of audio data may be associated with a visual position or the location of a camera in the multi-view interactive digital media representation.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: October 10, 2023
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Vladimir Roumenov Glavtchev, Alexander Jay Bruen Trevor
  • Patent number: 11776199
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Grant
    Filed: July 25, 2022
    Date of Patent: October 3, 2023
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Patent number: 11717587
    Abstract: A model of a physical environment may be determined based at least in part on sensor data collected by one or more sensors at a robot. The model may include a plurality of constraints and a plurality of data values. A trajectory through the physical environment may be determined for an ultraviolet end effector coupled with the robot to clean one or more surfaces in the physical environment. The ultraviolet end effector may include one or more ultraviolet light sources. The ultraviolet end effector may be moved along the trajectory.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: August 8, 2023
    Assignee: Robust AI, Inc.
    Inventors: Alexander Jay Bruen Trevor, Dylan Bourgeois, Marina Kollmitz, Crystal Chao
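    The end-effector workflow above can be summarized as a plan-then-sweep loop. The planner and robot interfaces below (plan_coverage_trajectory, arm.move_to, uv_end_effector) are hypothetical placeholders, not an API from the patent:
      # Sketch of sweeping a UV end effector along a planned coverage trajectory.
      # plan_coverage_trajectory and the robot/arm interfaces are hypothetical.
      def disinfect_surfaces(robot, surfaces, standoff_m=0.05, speed_mps=0.02):
          # Build an environment model subject to the robot's constraints
          # (reachability, collision avoidance, required exposure per point).
          model = robot.build_environment_model()
          for surface in surfaces:
              # Plan a path that keeps the lamp at a fixed standoff from the surface.
              trajectory = robot.plan_coverage_trajectory(model, surface, standoff=standoff_m)
              robot.uv_end_effector.on()
              for waypoint in trajectory:
                  robot.arm.move_to(waypoint, speed=speed_mps)   # slow sweep = longer exposure
              robot.uv_end_effector.off()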
  • Publication number: 20230083609
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Application
    Filed: July 25, 2022
    Publication date: March 16, 2023
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Publication number: 20230083213
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: July 25, 2022
    Publication date: March 16, 2023
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20230080005
    Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
    Type: Application
    Filed: July 25, 2022
    Publication date: March 16, 2023
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
  • Publication number: 20220408019
    Abstract: A set of images may be captured by a camera as the camera moves along a path through space around an object. Then, a smoothed function (e.g., a polynomial) may be fitted to the translational and/or rotational position in space. For example, positions in a Cartesian coordinate space may be determined for the images. The positions may then be transformed to a polar coordinate space, in which a trajectory along the points may be determined, and the trajectory transformed back into the Cartesian space. Similarly, the rotational position of the images may be smoothed, for instance by fitting a loss function. Finally, one or more images may be transformed to more closely align a viewpoint of the image with the fitted translational and/or rotational positions.
    Type: Application
    Filed: June 17, 2021
    Publication date: December 22, 2022
    Applicant: Fyusion, Inc.
    Inventors: Krunal Ketan Chande, Stefan Johannes Josef Holzer, Wook Yeon Hwang, Alexander Jay Bruen Trevor, Shane Griffith
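    The smoothing idea above can be illustrated concretely: express camera positions in polar coordinates around an assumed orbit center, fit a low-order polynomial to the radius as a function of swept angle, and map the smoothed path back to Cartesian coordinates. The centering choice and polynomial degree are assumptions:
      # Sketch: smooth a captured camera path by fitting a polynomial radius
      # profile in polar coordinates and converting back to Cartesian.
      import numpy as np

      def smooth_camera_path(xy, degree=3):
          # xy: (N, 2) camera positions in the capture plane.
          xy = np.asarray(xy, dtype=float)
          center = xy.mean(axis=0)                     # assumed orbit center
          d = xy - center
          # Cartesian -> polar.
          theta = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
          radius = np.hypot(d[:, 0], d[:, 1])
          # Fit a smooth (polynomial) radius as a function of the swept angle.
          radius_s = np.polyval(np.polyfit(theta, radius, degree), theta)
          # Polar -> Cartesian.
          return np.column_stack((radius_s * np.cos(theta),
                                  radius_s * np.sin(theta))) + center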
  • Publication number: 20220406003
    Abstract: Three-dimensional points may be projected onto first locations in a first image of an object captured from a first position in three-dimensional space relative to the object and projected onto second locations associated with a virtual camera position located at a second position in three-dimensional space relative to the object. First transformations linking the first and second locations may then be determined. Second transformations transforming first coordinates for the first image to second coordinates for the second image may be determined based on the first transformations. Based on these second transformations and on the first image, a second image of the object may be generated from the virtual camera position.
    Type: Application
    Filed: October 15, 2021
    Publication date: December 22, 2022
    Applicant: Fyusion, Inc.
    Inventors: Rodrigo Ortiz Cayon, Krunal Ketan Chande, Stefan Johannes Josef Holzer, Wook Yeon Hwang, Alexander Jay Bruen Trevor, Shane Griffith
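    The view-synthesis step above can be illustrated with a deliberately simplified version: project known 3-D points into both the real camera and the virtual camera, estimate one transform linking the two sets of image locations, and warp the real image. A single global homography (via OpenCV) stands in here for the per-location transformations the abstract describes:
      # Simplified view synthesis: project 3-D points into the real and virtual
      # cameras, link the two sets of image locations with one homography, and
      # warp the real image. A global homography is a stand-in approximation.
      import cv2
      import numpy as np

      def project(points_3d, K, R, t):
          # Pinhole projection of (N, 3) world points into pixel coordinates.
          cam = (R @ points_3d.T + t.reshape(3, 1)).T      # world -> camera frame
          pix = (K @ cam.T).T
          return pix[:, :2] / pix[:, 2:3]

      def render_virtual_view(image, points_3d, K, R1, t1, R2, t2):
          first_locs = project(points_3d, K, R1, t1)       # locations in the real image
          second_locs = project(points_3d, K, R2, t2)      # locations for the virtual camera
          H, _ = cv2.findHomography(first_locs, second_locs, method=cv2.RANSAC)
          h, w = image.shape[:2]
          return cv2.warpPerspective(image, H, (w, h))     # approximate virtual view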
  • Patent number: 11436275
    Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: September 6, 2022
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
  • Patent number: 11435869
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: September 6, 2022
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Patent number: 11438565
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: September 6, 2022
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20210347048
    Abstract: A model of a physical environment may be determined based at least in part on sensor data collected by one or more sensors at a robot. The model may include a plurality of constraints and a plurality of data values. A trajectory through the physical environment may be determined for an ultraviolet end effector coupled with the robot to clean one or more surfaces in the physical environment. The ultraviolet end effector may include one or more ultraviolet light sources. The ultraviolet end effector may be moved along the trajectory.
    Type: Application
    Filed: March 19, 2021
    Publication date: November 11, 2021
    Applicant: Robust AI, Inc.
    Inventors: Alexander Jay Bruen Trevor, Dylan Bourgeois, Marina Kollmitz, Crystal Chao
  • Publication number: 20210346543
    Abstract: A cleaning robot may determine a three-dimensional model of a physical environment based on data collected from one or more sensors. The cleaning robot may then identify a surface within the physical environment to clean. Having identified that surface, the robot may autonomously navigate to a location proximate to the surface, position an ultraviolet light source in proximity to the surface, and activate the ultraviolet light source for a period of time.
    Type: Application
    Filed: March 22, 2021
    Publication date: November 11, 2021
    Applicant: Robust AI, Inc.
    Inventors: Rodney Allen Brooks, Dylan Bourgeois, Crystal Chao, Alexander Jay Bruen Trevor, Mohamed Rabie Amer, Anthony Sean Jules, Gary Fred Marcus
  • Publication number: 20210346557
    Abstract: A robot may identify a human located proximate to the robot in a physical environment based on sensor data captured from one or more sensors on the robot. A trajectory of the human through space may be predicted. When the predicted trajectory of the human intersects with a current path of the robot, an updated path to a destination location in the environment may be determined so as to avoid a collision between the robot and the human along the predicted trajectory. The robot may then move along the determined path.
    Type: Application
    Filed: March 19, 2021
    Publication date: November 11, 2021
    Applicant: Robust AI, Inc.
    Inventors: Rodney Allen Brooks, Dylan Bourgeois, Crystal Chao, Alexander Jay Bruen Trevor, Mohamed Rabie Amer, Anthony Sean Jules, Gary Fred Marcus, Michelle Ho
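    The avoid-and-replan behavior above can be sketched as a simple loop: predict the human's trajectory, test it against the robot's current path, and replan when they would intersect. The constant-velocity prediction, the clearance radius, and the robot/planner interfaces are all assumptions for illustration:
      # Sketch of the avoid-and-replan loop. The constant-velocity prediction,
      # the clearance radius, and the robot/planner interfaces are assumptions.
      import numpy as np

      def predict_human_path(position, velocity, horizon_s=3.0, dt=0.2):
          # Constant-velocity prediction of the human's future positions.
          position, velocity = np.asarray(position), np.asarray(velocity)
          return [position + velocity * dt * k for k in range(1, int(horizon_s / dt) + 1)]

      def paths_intersect(robot_path, human_path, clearance_m=0.6):
          # True if any pair of points on the two paths comes closer than the clearance.
          return any(np.linalg.norm(np.asarray(r) - np.asarray(h)) < clearance_m
                     for r in robot_path for h in human_path)

      def step(robot, goal):
          human = robot.detect_nearest_human()             # from onboard sensors
          if human is not None:
              predicted = predict_human_path(human.position, human.velocity)
              if paths_intersect(robot.current_path, predicted):
                  # Replan to the same destination while avoiding the predicted trajectory.
                  robot.current_path = robot.plan_path(goal, avoid=predicted)
          robot.follow(robot.current_path)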