Patents by Inventor Alexander Jay Bruen Trevor

Alexander Jay Bruen Trevor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11024093
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help the user keep the image capture device moving along a desired trajectory.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: June 1, 2021
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
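
The guided-capture idea in patent 11024093 can be illustrated with a short sketch: a synthetic guide marker is projected into each live frame so the user can follow it along the desired trajectory. The patent does not publish code; the helper names, the pinhole-projection math, and the circle-shaped guide below are assumptions used only for illustration.

```python
# Illustrative sketch only: render a synthetic guide marker into each camera
# frame to steer the user along the capture trajectory. All helper names and
# the simple pinhole projection are assumptions, not the patented implementation.
import numpy as np
import cv2

def project_point(point_cam, fx, fy, cx, cy):
    """Project a 3-D point in camera coordinates onto the image plane."""
    x, y, z = point_cam
    return int(fx * x / z + cx), int(fy * y / z + cy)

def render_guide(frame, guide_point_world, world_to_cam, intrinsics):
    """Draw the synthetic guide object (here just a circle) into the frame."""
    fx, fy, cx, cy = intrinsics
    p_cam = world_to_cam[:3, :3] @ guide_point_world + world_to_cam[:3, 3]
    if p_cam[2] <= 0:          # guide is behind the camera; nothing to draw
        return frame
    u, v = project_point(p_cam, fx, fy, cx, cy)
    cv2.circle(frame, (u, v), 20, (0, 255, 0), 3)
    cv2.putText(frame, "follow the circle", (u + 25, v),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```
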
  • Patent number: 10950032
    Abstract: Pixels in a visual representation of an object that includes one or more perspective view images may be mapped to a standard view of the object. Based on the mapping, a portion of the object captured in the visual representation of the object may be identified. A user interface on a display device may indicate the identified object portion.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: March 16, 2021
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Aidas Liaudanskas, Matthias Reso, Alexander Jay Bruen Trevor, Radu Bogdan Rusu
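
The mapping described in patent 10950032 can be sketched as a per-pixel lookup from a perspective view into a canonical "standard view" of the object, followed by a region test to name the part that was hit. The lookup table, the rectangular part regions, and the part names below are invented for illustration.

```python
# Minimal sketch: map a pixel in a perspective view to the object's standard
# view and report which labeled part it lands on. Data layout is assumed.
import numpy as np

def map_to_standard_view(pixel, uv_lookup):
    """uv_lookup[y, x] holds the (u, v) standard-view coordinate for each pixel."""
    x, y = pixel
    return tuple(uv_lookup[y, x])

def identify_part(standard_uv, part_regions):
    """part_regions: {name: (u_min, v_min, u_max, v_max)} in standard-view space."""
    u, v = standard_uv
    for name, (u0, v0, u1, v1) in part_regions.items():
        if u0 <= u <= u1 and v0 <= v <= v1:
            return name
    return "unknown"

# Example usage with toy data
uv_lookup = np.zeros((480, 640, 2), dtype=np.float32)
uv_lookup[100, 200] = (0.31, 0.72)
regions = {"front door": (0.25, 0.6, 0.45, 0.9)}
print(identify_part(map_to_standard_view((200, 100), uv_lookup), regions))
```
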
  • Patent number: 10887582
    Abstract: Images of an object may be analyzed to determine individual damage maps of the object. Each damage map may represent damage to an object depicted in one of the images. The damage may be represented in a standard view of the object. An aggregated damage map for the object may be determined based on the individual damage maps.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: January 5, 2021
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pavel Hanchar, Matteo Munaro, Aidas Liaudanskas, Radu Bogdan Rusu
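
For patent 10887582, once each image's damage map has been expressed in the object's standard view, the per-image maps can be combined into one aggregated map. The element-wise maximum and the confidence-weighted mean below are plausible aggregation rules chosen for illustration; the abstract does not prescribe a specific rule.

```python
# Sketch of aggregating per-image damage maps (already in the standard view)
# into a single map. The aggregation rules are assumptions.
import numpy as np

def aggregate_damage_maps(damage_maps, confidences=None):
    """damage_maps: list of HxW arrays with per-pixel damage scores in [0, 1]."""
    stack = np.stack(damage_maps, axis=0)
    if confidences is None:
        return stack.max(axis=0)                      # keep the strongest evidence
    w = np.asarray(confidences, dtype=np.float32)[:, None, None]
    return (stack * w).sum(axis=0) / w.sum(axis=0)    # confidence-weighted mean

maps = [np.random.rand(64, 64) for _ in range(3)]
aggregated = aggregate_damage_maps(maps)
print(aggregated.shape, float(aggregated.max()))
```
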
  • Patent number: 10863210
    Abstract: Provided are mechanisms and processes for performing live filtering in a camera view via client-server communication. In one example, a first video frame in a raw video stream is transmitted from a client device to a server. The client device receives a filter processing message associated with the first video frame that includes filter data for applying a filter to the first video frame. A processor at the client device creates a filtered video stream by applying the filter to a second video frame that occurs in the video stream later than the first video frame.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: December 8, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
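
The client-server flow in patent 10863210 can be sketched from the client's side: an early frame goes to the server, and whenever filter parameters come back they are applied to the frames arriving afterward. The transport, the message format, and the toy gain/offset "filter" below are invented stand-ins for whatever the real system uses.

```python
# Sketch of the client-side flow: send an early frame to a server and, once
# the filter parameters arrive, apply them to later frames in the stream.
import queue
import numpy as np

filter_responses = queue.Queue()          # filled by a network thread (not shown)
current_filter = None

def apply_filter(frame, params):
    """Toy filter: gain and offset returned by the server."""
    gain, offset = params["gain"], params["offset"]
    return np.clip(frame.astype(np.float32) * gain + offset, 0, 255).astype(np.uint8)

def process_stream(raw_frames, send_to_server):
    global current_filter
    filtered = []
    for i, frame in enumerate(raw_frames):
        if i == 0:
            send_to_server(frame)         # first frame goes out for analysis
        try:
            current_filter = filter_responses.get_nowait()   # reply may arrive later
        except queue.Empty:
            pass
        filtered.append(apply_filter(frame, current_filter) if current_filter else frame)
    return filtered
```
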
  • Patent number: 10855936
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of a person can be generated from live images of a person captured from a hand-held camera. Using the image data from the live images, a skeleton of the person and a boundary between the person and a background can be determined from different viewing angles and across multiple images. Using the skeleton and the boundary data, effects can be added to the person, such as wings. The effects can change from image to image to account for the different viewing angles of the person captured in each image.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: December 1, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, Matteo Munaro, Alexander Jay Bruen Trevor, Nico Gregor Sebastian Blodow, Luo Yi Tan, Mike Penz, Martin Markus Hubert Wawro, Matthias Reso, Chris Beall, Yusuke Tomoto, Krunal Ketan Chande
  • Publication number: 20200349757
    Abstract: Pixels in a visual representation of an object that includes one or more perspective view images may be mapped to a standard view of the object. Based on the mapping, a portion of the object captured in the visual representation of the object may be identified. A user interface on a display device may indicate the identified object portion.
    Type: Application
    Filed: July 22, 2019
    Publication date: November 5, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Aidas Liaudanskas, Matthias Reso, Alexander Jay Bruen Trevor, Radu Bogdan Rusu
  • Publication number: 20200258309
    Abstract: A live camera feed may be analyzed to determine the identity of an object, and augmented reality overlay data may be determined based on that identity. The overlay data may include one or more tags that are each associated with a respective location on the object. The live camera feed may be presented on a display screen with the tags positioned at their respective locations.
    Type: Application
    Filed: April 28, 2020
    Publication date: August 13, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Alexander Jay Bruen Trevor, Aidas Liaudanskas, Radu Bogdan Rusu
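
Application 20200258309 describes identity-specific tags anchored to locations on the recognized object. A minimal sketch, assuming a hypothetical tag database keyed by object identity and a known camera pose, is shown below; the database contents and projection helper are illustrative only.

```python
# Sketch of placing identity-specific tags over a live camera frame.
import numpy as np
import cv2

TAG_DB = {  # hypothetical: tags keyed by recognized object identity
    "sedan_2018": [("check tire tread", np.array([0.9, -0.4, 0.0])),
                   ("inspect hood",      np.array([0.0,  0.5, 0.6]))],
}

def overlay_tags(frame, identity, world_to_cam, K):
    """Project each tag's 3-D anchor point into the frame and draw its label."""
    for text, anchor in TAG_DB.get(identity, []):
        p = world_to_cam[:3, :3] @ anchor + world_to_cam[:3, 3]
        if p[2] <= 0:
            continue
        u = int(K[0, 0] * p[0] / p[2] + K[0, 2])
        v = int(K[1, 1] * p[1] / p[2] + K[1, 2])
        cv2.putText(frame, text, (u, v), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (255, 255, 0), 2)
    return frame
```
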
  • Patent number: 10725609
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model and wherein selection of the first layer provides access to the second layer with the second content model.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: July 28, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Patent number: 10726560
    Abstract: Various embodiments describe systems and processes for generating AR/VR content. In one aspect, a method for generating a 3D projection of an object in a virtual reality or augmented reality environment comprises obtaining a sequence of images along a camera translation using a single lens camera. Each image contains a portion of overlapping subject matter, including the object. The object is segmented from the sequence of images using a trained segmenting neural network to form a sequence of segmented object images, to which an art-style transfer is applied using a trained transfer neural network. On-the-fly interpolation parameters are computed and stereoscopic pairs are generated for points along the camera translation from the refined sequence of segmented object images for displaying the object as a 3D projection in a virtual reality or augmented reality environment. Segmented image indices are mapped to a rotation range for display in the virtual reality or augmented reality environment.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: July 28, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Yuheng Ren, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Martin Josef Nikolaus Saelzle, Radu Bogdan Rusu
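
The last step named in the abstract of patent 10726560, mapping segmented image indices to a rotation range for display, can be sketched as a simple lookup: a viewing angle selects an image index, and a neighboring index supplies the second eye of a stereoscopic pair. The linear mapping, the clamp range, and the fixed index baseline are assumptions for illustration.

```python
# Sketch of mapping a viewing angle onto segmented image indices and forming
# a stereoscopic pair from neighboring views. Mapping rules are assumed.
def index_for_angle(angle_deg, num_images, rotation_range=(-60.0, 60.0)):
    lo, hi = rotation_range
    t = (min(max(angle_deg, lo), hi) - lo) / (hi - lo)   # clamp and normalize
    return round(t * (num_images - 1))

def stereo_pair_for_angle(angle_deg, num_images, baseline_indices=2):
    left = index_for_angle(angle_deg, num_images)
    right = min(left + baseline_indices, num_images - 1)  # neighboring view as right eye
    return left, right

print(stereo_pair_for_angle(15.0, num_images=40))
```
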
  • Publication number: 20200236343
    Abstract: Images of an object may be analyzed to determine individual damage maps of the object. Each damage map may represent damage to an object depicted in one of the images. The damage may be represented in a standard view of the object. An aggregated damage map for the object may be determined based on the individual damage maps.
    Type: Application
    Filed: October 8, 2019
    Publication date: July 23, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pavel Hanchar, Matteo Munaro, Aidas Liaudanskas, Radu Bogdan Rusu
  • Patent number: 10719939
    Abstract: Various embodiments describe systems and processes for generating AR/VR content. In one aspect, a method for generating a three-dimensional (3D) projection of an object is provided. A sequence of images along a camera translation may be obtained using a single lens camera. Each image contains at least a portion of overlapping subject matter, which includes the object. The object is semantically segmented from the sequence of images using a trained neural network to form a sequence of segmented object images, which are then refined using fine-grained segmentation. On-the-fly interpolation parameters are computed and stereoscopic pairs are generated for points along the camera translation from the refined sequence of segmented object images for displaying the object as a 3D projection in a virtual reality or augmented reality environment. Segmented image indices are then mapped to a rotation range for display in the virtual reality or augmented reality environment.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: July 21, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Yuheng Ren, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Martin Josef Nikolaus Saelzle, Radu Bogdan Rusu
  • Patent number: 10713851
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Grant
    Filed: November 12, 2018
    Date of Patent: July 14, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
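
Patent 10713851 describes positioning virtual objects in synthetic images using tracking points on the real object. As a much-simplified stand-in, the sketch below anchors a 2-D sprite at the centroid of the tracking points and scales it by their spread; a real pipeline would estimate a full camera pose rather than use this heuristic.

```python
# Sketch of anchoring a virtual object (a sprite) to tracked points in a live
# frame. The centroid/spread heuristic is an assumption, not the patented method.
import numpy as np
import cv2

def place_virtual_object(frame, tracking_points, sprite):
    """tracking_points: Nx2 array of pixel coordinates on the real object."""
    pts = np.asarray(tracking_points, dtype=np.float32)
    cx, cy = pts.mean(axis=0)
    size = max(int(pts.std(axis=0).mean() * 2), 8)
    sprite_small = cv2.resize(sprite, (size, size))
    x0, y0 = int(cx - size / 2), int(cy - size / 2)
    h, w = frame.shape[:2]
    if 0 <= x0 and 0 <= y0 and x0 + size <= w and y0 + size <= h:
        frame[y0:y0 + size, x0:x0 + size] = sprite_small   # naive composite
    return frame
```
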
  • Patent number: 10687046
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of the object captured from a hand-held camera. Methods are described where image data associated with the images captured from the hand-held camera is manipulated to generate a more desirable MVIDMR of the object. In particular, the image data can be manipulated so that it appears as if the camera traveled a smoother trajectory during the capture of the images, which can provide a smoother output of the MVIDMR. In one embodiment, key point matching within the image data and, optionally, IMU data from a sensor package on the camera can be used to generate constraints used in a factor graph optimization that is used to generate a smoother trajectory of the camera.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: June 16, 2020
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Krunal Ketan Chande
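
Patent 10687046 combines key-point and IMU constraints in a factor graph to smooth the camera trajectory. As a greatly simplified stand-in that conveys the flavor rather than the patented formulation, the sketch below smooths a 1-D pose sequence by least squares, balancing a data term (stay close to the measured poses) against a smoothness term (penalize acceleration).

```python
# Simplified trajectory smoothing: quadratic data + smoothness terms solved
# with linear least squares. Not the patented factor graph, only an analogy.
import numpy as np

def smooth_trajectory(measured, smooth_weight=10.0):
    n = len(measured)
    A = np.eye(n)                       # data term: x_i ~ measured_i
    b = np.array(measured, dtype=np.float64)
    rows = []
    for i in range(1, n - 1):           # smoothness term: x_{i-1} - 2 x_i + x_{i+1} ~ 0
        r = np.zeros(n)
        r[i - 1], r[i], r[i + 1] = 1.0, -2.0, 1.0
        rows.append(r)
    S = smooth_weight * np.array(rows)
    A_full = np.vstack([A, S])
    b_full = np.concatenate([b, np.zeros(len(rows))])
    return np.linalg.lstsq(A_full, b_full, rcond=None)[0]

noisy = np.cumsum(np.random.randn(50) * 0.1) + np.linspace(0, 5, 50)
print(smooth_trajectory(noisy)[:5])
```
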
  • Patent number: 10665024
    Abstract: Various embodiments of the present invention relate generally to systems and methods for collecting, analyzing, and manipulating images and video. According to particular embodiments, live images captured by a camera on a mobile device may be analyzed as the mobile device moves along a path. The live images may be compared with a target view. A visual indicator may be provided to guide the alteration of the positioning of the mobile device to more closely align with the target view.
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: May 26, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
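
The guidance loop in patent 10665024, comparing the live view against a target view and showing a visual indicator, can be sketched as below. The yaw-only pose comparison, the tolerance, and the arrow indicator are illustrative assumptions.

```python
# Sketch of the guidance indicator: nudge the user toward the target view.
import numpy as np
import cv2

def guidance_arrow(frame, current_yaw_deg, target_yaw_deg, tol_deg=3.0):
    error = target_yaw_deg - current_yaw_deg
    h, w = frame.shape[:2]
    center = (w // 2, h // 2)
    if abs(error) < tol_deg:
        cv2.putText(frame, "hold steady", center, cv2.FONT_HERSHEY_SIMPLEX,
                    0.8, (0, 255, 0), 2)
    else:
        tip = (center[0] + int(np.sign(error) * 120), center[1])
        cv2.arrowedLine(frame, center, tip, (0, 0, 255), 4)   # point toward the target
    return frame
```
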
  • Patent number: 10659686
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera as the camera moves along a path. Then, a sequence of the images can be selected based upon sensor data from an inertial measurement unit and upon image data such that one of the live images is selected for each of a plurality of poses along the path. A multi-view interactive digital media representation may be created from the sequence of images, and the images may be encoded as a video via a designated encoding format.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: May 19, 2020
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen Miller
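
Patent 10659686 selects one live image per pose along the capture path using IMU and image data. A minimal sketch, assuming per-frame yaw estimates are already available, simply keeps the frame closest to each evenly spaced target pose; the selected frames would then be encoded as video.

```python
# Sketch of picking one captured frame per desired pose along the path.
import numpy as np

def select_frames(frame_yaws_deg, num_poses=36):
    """frame_yaws_deg: estimated yaw for every captured frame, in degrees."""
    yaws = np.asarray(frame_yaws_deg)
    targets = np.linspace(yaws.min(), yaws.max(), num_poses)
    return [int(np.argmin(np.abs(yaws - t))) for t in targets]   # frame index per pose

captured = np.sort(np.random.uniform(0, 180, size=400))
indices = select_frames(captured, num_poses=36)
print(len(indices), indices[:5])
```
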
  • Patent number: 10645371
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: May 5, 2020
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
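
The angular-view estimate in patent 10645371 comes from IMU sensor data. A minimal sketch, assuming the relevant rotation is about a single gyroscope axis and ignoring bias correction and any fusion with image data, integrates yaw-rate samples over the capture interval.

```python
# Sketch of estimating the swept angular view by integrating gyroscope data.
import numpy as np

def swept_angle_deg(gyro_z_rad_s, timestamps_s):
    """Integrate yaw-rate samples (rad/s) over time to get the total angle."""
    gyro = np.asarray(gyro_z_rad_s)
    t = np.asarray(timestamps_s)
    angle_rad = np.trapz(gyro, t)          # simple trapezoidal integration
    return np.degrees(angle_rad)

t = np.linspace(0.0, 4.0, 400)             # 4 s capture at 100 Hz
gyro = np.full_like(t, np.radians(30.0))   # steady 30 deg/s rotation
print(round(float(swept_angle_deg(gyro, t)), 1))   # ~120.0 degrees
```
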
  • Publication number: 20200133462
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model and wherein selection of the first layer provides access to the second layer with the second content model.
    Type: Application
    Filed: December 23, 2019
    Publication date: April 30, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Patent number: 10628675
    Abstract: Provided are mechanisms and processes for performing skeleton detection and tracking via client-server communication. In one example, a server transmits a skeleton detection message that includes position data for a skeleton representing the structure of an object depicted in a first video frame in a raw video stream at a client device. Based on the initial position data, a processor identifies intervening position data for the skeleton in one or more intervening video frames that are temporally located after the first video frame in the raw video stream. A filtered video stream is then presented by altering the raw video stream based at least in part on the first position data and the intervening position data.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: April 21, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
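
For patent 10628675, the server returns skeleton position data for a keyframe and the client fills in the intervening frames. The linear interpolation between two server responses below is an assumption used for illustration; the patent covers the client-server message flow, not this particular interpolation rule.

```python
# Sketch of client-side skeleton interpolation between two server keyframes.
import numpy as np

def interpolate_skeletons(joints_a, joints_b, num_intervening):
    """joints_a, joints_b: Jx2 arrays of joint pixel coordinates at two keyframes."""
    a, b = np.asarray(joints_a, float), np.asarray(joints_b, float)
    return [a + (b - a) * (k / (num_intervening + 1))
            for k in range(1, num_intervening + 1)]

key0 = [[100, 200], [110, 260]]            # two joints at keyframe 0
key1 = [[120, 210], [130, 275]]            # same joints at the next keyframe
for skel in interpolate_skeletons(key0, key1, num_intervening=3):
    print(skel.tolist())
```
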
  • Publication number: 20200045300
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: October 11, 2019
    Publication date: February 6, 2020
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20200021752
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of a person can be generated from live images of a person captured from a hand-held camera. Using the image data from the live images, a skeleton of the person and a boundary between the person and a background can be determined from different viewing angles and across multiple images. Using the skeleton and the boundary data, effects can be added to the person, such as wings. The effects can change from image to image to account for the different viewing angles of the person captured in each image.
    Type: Application
    Filed: September 23, 2019
    Publication date: January 16, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, Matteo Munaro, Alexander Jay Bruen Trevor, Nico Gregor Sebastian Blodow, Luo Yi Tan, Mike Penz, Martin Markus Hubert Wawro, Matthias Reso, Chris Beall, Yusuke Tomoto, Krunal Ketan Chande