Patents by Inventor Alexander Jay Bruen Trevor

Alexander Jay Bruen Trevor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180046357
Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Application
    Filed: October 3, 2017
    Publication date: February 15, 2018
    Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
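    The layered navigation described in this entry can be pictured with a small data-structure sketch. The Python below is an illustrative assumption, not Fyusion's implementation: the class names, the angle-to-viewpoint mapping, and the parent/child layer relationship are all hypothetical.
    ```python
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class MultiViewModel:
        """Fused multi-view representation of a content (or context) model."""
        viewpoints: List[str]  # identifiers of captured/rendered views around the object

        def view_for_angle(self, angle_deg: float) -> str:
            # Map a physical rotation of the viewer onto the nearest captured viewpoint.
            idx = int(round(angle_deg / 360.0 * len(self.viewpoints))) % len(self.viewpoints)
            return self.viewpoints[idx]

    @dataclass
    class Layer:
        content: MultiViewModel
        child: Optional["Layer"] = None  # selecting this layer exposes the next layer

    def render_view(layer: Layer, head_rotation_deg: float, selected: bool) -> str:
        """Pick the viewpoint to display: the current layer's content, or the child
        layer's content once the user has selected the current layer."""
        active = layer.child if (selected and layer.child is not None) else layer
        return active.content.view_for_angle(head_rotation_deg)
    ```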
  • Publication number: 20170148222
Abstract: Various embodiments describe systems and processes for generating AR/VR content. In one aspect, a method for generating a 3D projection of an object in a virtual reality or augmented reality environment comprises obtaining a sequence of images along a camera translation using a single lens camera. Each image contains a portion of overlapping subject matter, including the object. The object is segmented from the sequence of images using a trained segmenting neural network to form a sequence of segmented object images, to which an art-style transfer is applied using a trained transfer neural network. On-the-fly interpolation parameters are computed and stereoscopic pairs are generated for points along the camera translation from the stylized sequence of segmented object images for displaying the object as a 3D projection in a virtual reality or augmented reality environment. Segmented image indices are mapped to a rotation range for display in the virtual reality or augmented reality environment.
    Type: Application
    Filed: February 7, 2017
    Publication date: May 25, 2017
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Yuheng Ren, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Martin Josef Nikolaus Saelzle, Radu Bogdan Rusu
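    A minimal sketch of the per-frame pipeline named in this entry (segmentation followed by art-style transfer), assuming hypothetical `segmenter` and `stylizer` callables that stand in for the trained networks; it is not the patented method.
    ```python
    import numpy as np

    def process_capture(frames, segmenter, stylizer):
        """frames: list of HxWx3 arrays captured along a single-lens camera translation.
        segmenter(frame) -> HxW boolean object mask (assumed trained segmenting network);
        stylizer(image) -> stylized image (assumed trained transfer network)."""
        stylized_objects = []
        for frame in frames:
            mask = segmenter(frame)                          # segment the object from the frame
            segmented = np.where(mask[..., None], frame, 0)  # keep object pixels only
            stylized_objects.append(stylizer(segmented))     # apply the art-style transfer
        return stylized_objects
    ```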
  • Publication number: 20170148223
    Abstract: Various embodiments describe systems and processes for generating AR/VR content. In one aspect, a method for generating a three-dimensional (3D) projection of an object is provided. A sequence of images along a camera translation may be obtained using a single lens camera. Each image contains at least a portion of overlapping subject matter, which includes the object. The object is semantically segmented from the sequence of images using a trained neural network to form a sequence of segmented object images, which are then refined using fine-grained segmentation. On-the-fly interpolation parameters are computed and stereoscopic pairs are generated for points along the camera translation from the refined sequence of segmented object images for displaying the object as a 3D projection in a virtual reality or augmented reality environment. Segmented image indices are then mapped to a rotation range for display in the virtual reality or augmented reality environment.
    Type: Application
    Filed: February 8, 2017
    Publication date: May 25, 2017
    Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Yuheng Ren, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Martin Josef Nikolaus Saelzle, Radu Bogdan Rusu
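    Two of the steps named in this entry, mapping segmented image indices to a rotation range and forming stereoscopic pairs along the camera translation, can be sketched as simple index arithmetic. The fixed-baseline pairing below is an assumption for illustration, not the claimed technique.
    ```python
    def index_to_rotation(index: int, num_images: int, rotation_range_deg: float = 360.0) -> float:
        """Map a segmented image index onto an angle within the display rotation range."""
        return (index / max(num_images - 1, 1)) * rotation_range_deg

    def stereoscopic_pairs(num_images: int, baseline: int = 2):
        """Pair each view with one a few capture positions away along the camera
        translation to serve as the second eye; pairs at the end are clamped so
        every index still has a valid partner."""
        pairs = []
        for i in range(num_images):
            right = min(i + baseline, num_images - 1)
            left = max(right - baseline, 0)
            pairs.append((left, right))
        return pairs
    ```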
  • Publication number: 20170109930
    Abstract: Provided are mechanisms and processes for augmenting multi-view image data with synthetic objects using inertial measurement unit (IMU) and image data. In one example, a process includes receiving a selection of an anchor location in a reference image for a synthetic object to be placed within a multi-view image. Movements between the reference image and a target image are computed using visual tracking information associated with the multi-view image, device orientation corresponding to the multi-view image, and an estimate of the camera's intrinsic parameters. A first synthetic image is then generated by placing the synthetic object at the anchor location using visual tracking information in the multi-view image, orienting the synthetic object using the inverse of the movements computed between the reference image and the target image, and projecting the synthetic object along a ray into a target view associated with the target image.
    Type: Application
    Filed: January 28, 2016
    Publication date: April 20, 2017
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Alexander Jay Bruen Trevor, Martin Saelzle, Radu Bogdan Rusu
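    The final projection step described in this entry can be illustrated with a standard pinhole-camera sketch: given the intrinsics and the reference-to-target motion (which the abstract derives from IMU data and visual tracking), the anchor point is projected along a ray into the target view. The variable names and the bare pinhole model are assumptions for illustration.
    ```python
    import numpy as np

    def project_anchor(anchor_ref: np.ndarray, R: np.ndarray, t: np.ndarray, K: np.ndarray):
        """anchor_ref: 3-vector of the anchor point in the reference camera frame.
        R, t: rotation (3x3) and translation (3,) taking reference coordinates to target coordinates.
        K: 3x3 camera intrinsics. Returns (u, v) pixel coordinates in the target view."""
        anchor_target = R @ anchor_ref + t       # move the anchor into the target camera frame
        uvw = K @ anchor_target                  # project through the pinhole model
        return uvw[0] / uvw[2], uvw[1] / uvw[2]  # perspective divide onto the image plane
    ```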
  • Publication number: 20170084293
    Abstract: Various embodiments of the present invention relate generally to systems and methods for integrating audio into a multi-view interactive digital media representation. According to particular embodiments, one process includes retrieving a multi-view interactive digital media representation that includes numerous images fused together into content and context models. The process next includes retrieving and processing audio data to be integrated into the multi-view interactive digital media representation. A first segment of audio data may be associated with a first position in the multi-view interactive digital media representation. In other examples, a first segment of audio data may be associated with a visual position or the location of a camera in the multi-view interactive digital media representation.
    Type: Application
    Filed: September 22, 2015
    Publication date: March 23, 2017
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Vladimir Roumenov Glavtchev, Alexander Jay Bruen Trevor
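    The position-to-audio association described in this entry can be sketched as a lookup keyed on viewpoint position; the data layout and the nearest-anchor selection below are assumptions for illustration, not the patented method.
    ```python
    from bisect import bisect_left

    def audio_for_viewpoint(segments, viewpoint_index: int):
        """segments: list of (anchor_viewpoint_index, audio_clip), sorted by index.
        Returns the clip whose anchor position is closest to the current viewpoint."""
        starts = [s for s, _ in segments]
        i = bisect_left(starts, viewpoint_index)
        candidates = segments[max(i - 1, 0): i + 1]  # at most the two neighboring anchors
        return min(candidates, key=lambda seg: abs(seg[0] - viewpoint_index))[1]
    ```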