Patents by Inventor Stefan Johannes Josef HOLZER

Stefan Johannes Josef HOLZER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200226736
    Abstract: A sampling density for capturing a plurality of two-dimensional images of a three-dimensional scene may be determined. The sampling density may be below the Nyquist rate. However, the sampling density may be sufficiently high such that captured images may be promoted to multiplane images and used to generate novel viewpoints in a light field reconstruction framework. Recording guidance may be provided at a display screen on a mobile computing device based on the determined sampling density. The recording guidance identifies a plurality of camera poses at which to position a camera to capture images of the three-dimensional scene. A plurality of images captured via the camera based on the recording guidance may be stored on a storage device. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: September 18, 2019
    Publication date: July 16, 2020
    Applicant: Fyusion, Inc.
    Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
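The guidance idea in the abstract above lends itself to a small worked example. The sketch below is not the patented implementation; it only shows one way an assumed angular sampling step (the `step_degrees` parameter and the `guidance_poses` helper are hypothetical names) could be turned into the list of camera yaw angles at which a capture app might prompt the user.

```python
import numpy as np

def guidance_poses(arc_degrees: float = 360.0, step_degrees: float = 15.0):
    """Return yaw angles (degrees) at which to prompt the user to capture an image.

    step_degrees is the assumed angular sampling density; a coarser, sub-Nyquist
    step can suffice if captured images are later promoted to multiplane images
    for light-field reconstruction, as the abstract describes.
    """
    n_poses = int(np.ceil(arc_degrees / step_degrees)) + 1
    return np.linspace(0.0, arc_degrees, n_poses)

if __name__ == "__main__":
    for yaw in guidance_poses(arc_degrees=90.0, step_degrees=15.0):
        print(f"capture an image at yaw ~{yaw:.0f} deg")
```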
  • Patent number: 10713851
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Grant
    Filed: November 12, 2018
    Date of Patent: July 14, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
  • Publication number: 20200213578
    Abstract: Various embodiments of the present disclosure relate generally to drone-based systems and methods for capturing a multi-media representation of an entity. In some embodiments, the multi-media representation is digital, multi-view, interactive, or a combination thereof. According to particular embodiments, a drone having a camera is controlled or operated to obtain a plurality of images having location information. The plurality of images, including at least a portion of overlapping subject matter, are fused to form multi-view interactive digital media representations.
    Type: Application
    Filed: March 9, 2020
    Publication date: July 2, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
  • Patent number: 10698558
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. The multi-view interactive digital media representations correspond to representations of dynamic objects set against backgrounds. A first multi-view interactive digital media representation of a dynamic object is obtained. Next, the dynamic object is tagged. Then, a second multi-view interactive digital media representation of the dynamic object is generated. Finally, the dynamic object in the second multi-view interactive digital media representation is automatically identified and tagged.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: June 30, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
  • Patent number: 10687046
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of the object captured from a hand-held camera. Methods are described where image data associated with the images captured from the hand-held camera is manipulated to generate a more desirable MVIDMR of the object. Specifically, the image data can be manipulated so that it appears as if the camera traveled a smoother trajectory during the capture of the images, which can provide a smoother output of the MVIDMR. In one embodiment, keypoint matching within the image data and, optionally, IMU data from a sensor package on the camera can be used to generate constraints used in a factor graph optimization that produces a smoother trajectory of the camera. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: June 16, 2020
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Krunal Ketan Chande
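The abstract above describes combining keypoint matches and IMU data as constraints in a factor graph optimization. A production system would use a dedicated solver (for example GTSAM); the sketch below is only a loose stand-in that smooths a noisy one-dimensional camera coordinate by least squares with a second-difference smoothness prior, to illustrate the "smoother trajectory" idea. The function name and the `smoothness` weight are assumptions.

```python
import numpy as np

def smooth_path(observed: np.ndarray, smoothness: float = 10.0) -> np.ndarray:
    """Least-squares smoothing of a noisy camera coordinate sequence.

    Minimizes ||x - observed||^2 + smoothness * ||D2 x||^2, where D2 is the
    second-difference operator; a crude stand-in for a trajectory factor graph.
    """
    n = len(observed)
    d2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        d2[i, i:i + 3] = [1.0, -2.0, 1.0]
    a = np.eye(n) + smoothness * d2.T @ d2
    return np.linalg.solve(a, observed)

if __name__ == "__main__":
    t = np.linspace(0, 1, 50)
    noisy = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(50)
    print(smooth_path(noisy)[:5])
```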
  • Publication number: 20200167570
    Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: January 31, 2020
    Publication date: May 28, 2020
    Applicant: Fyusion, Inc.
    Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
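One concrete step named in the abstract above is projecting the 3-D skeleton into MVIDMR frames to obtain tag locations. The sketch below shows a standard pinhole projection of 3-D landmark positions to 2-D pixel coordinates; it is a generic illustration, not Fyusion's code, and the intrinsics and pose values in the demo are made up.

```python
import numpy as np

def project_landmarks(points_3d: np.ndarray, K: np.ndarray,
                      R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project Nx3 world points into pixel coordinates with a pinhole camera.

    K is the 3x3 intrinsic matrix; R, t map world to camera coordinates.
    Returns an Nx2 array of (u, v) tag locations for one frame.
    """
    cam = R @ points_3d.T + t.reshape(3, 1)      # 3xN camera-frame points
    uv = K @ cam
    return (uv[:2] / uv[2]).T

if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
    landmarks = np.array([[0.1, 0.0, 2.0], [-0.2, 0.1, 2.5]])  # e.g. wheel, headlight
    print(project_landmarks(landmarks, K, R, t))
```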
  • Patent number: 10665024
    Abstract: Various embodiments of the present invention relate generally to systems and methods for collecting, analyzing, and manipulating images and video. According to particular embodiments, live images captured by a camera on a mobile device may be analyzed as the mobile device moves along a path. The live images may be compared with a target view. A visual indicator may be provided to guide repositioning of the mobile device so that it more closely aligns with the target view. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: May 26, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
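As a rough illustration of the target-view guidance described above, the sketch below compares a current yaw estimate against a target yaw and returns a textual hint. A real system would render a graphical indicator on the display; the function and parameter names here are invented for the example.

```python
import numpy as np

def guidance_hint(current_yaw_deg: float, target_yaw_deg: float,
                  tolerance_deg: float = 5.0) -> str:
    """Return a simple textual hint steering the device toward the target view."""
    # Wrap the difference into (-180, 180] so we always turn the short way.
    diff = (target_yaw_deg - current_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= tolerance_deg:
        return "hold steady: aligned with target view"
    return "rotate right" if diff > 0 else "rotate left"

if __name__ == "__main__":
    print(guidance_hint(current_yaw_deg=350.0, target_yaw_deg=10.0))  # rotate right
```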
  • Patent number: 10659686
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera as the camera moves along a path. Then, a sequence of the images can be selected based upon sensor data from an inertial measurement unit and upon image data such that one of the live images is selected for each of a plurality of poses along the path. A multi-view interactive digital media representation may be created from the sequence of images, and the images may be encoded as a video via a designated encoding format.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: May 19, 2020
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen Miller
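To make the frame-selection step in the abstract above concrete: the sketch below bins frames by an IMU-derived yaw estimate and keeps the sharpest frame per bin, using mean gradient magnitude as a simple image-data cue. This is an assumed, simplified selection rule, not the method as claimed, and all names are illustrative.

```python
import numpy as np

def select_keyframes(frames: list, yaws_deg: list, bin_deg: float = 10.0):
    """Pick one frame per pose bin: bin by IMU-estimated yaw, keep the sharpest.

    Sharpness is scored by mean gradient magnitude. Returns (bin, frame_index) pairs.
    """
    def sharpness(img):
        gy, gx = np.gradient(img.astype(float))
        return float(np.mean(np.hypot(gx, gy)))

    best = {}
    for idx, (frame, yaw) in enumerate(zip(frames, yaws_deg)):
        b = int(yaw // bin_deg)
        score = sharpness(frame)
        if b not in best or score > best[b][0]:
            best[b] = (score, idx)
    return [(b, i) for b, (_, i) in sorted(best.items())]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.random((48, 64)) for _ in range(6)]
    print(select_keyframes(frames, [0, 4, 11, 14, 22, 25]))
```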
  • Patent number: 10650574
    Abstract: Various embodiments of the present disclosure relate generally to systems and processes for generating stereo pairs for virtual reality. According to particular embodiments, a method comprises obtaining a monocular sequence of images using the single lens camera during a capture mode. The sequence of images is captured along a camera translation. Each image in the sequence of images contains at least a portion of overlapping subject matter, which includes an object. The method further comprises generating stereo pairs, for one or more points along the camera translation, for virtual reality using the sequence of images. Generating the stereo pairs may include: selecting frames for each stereo pair based on a spatial baseline; interpolating virtual images in between captured images in the sequence of images; correcting selected frames by rotating the images; and rendering the selected frames by assigning each image in the selected frames to left and right eyes.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: May 12, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
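The abstract above mentions selecting frames for each stereo pair based on a spatial baseline. The sketch below illustrates only that step, under the assumption that camera centers along the capture path are known: it pairs an anchor frame with the frame whose baseline is closest to a typical interocular distance. The names and the ~6.3 cm default are assumptions.

```python
import numpy as np

def pick_stereo_pair(cam_positions: np.ndarray, anchor: int,
                     baseline_m: float = 0.063):
    """Choose a partner frame for `anchor` whose baseline best matches ~6.3 cm.

    cam_positions is an Nx3 array of camera centers along the capture path.
    Returns (anchor, partner) indices forming a left/right stereo pair.
    """
    d = np.linalg.norm(cam_positions - cam_positions[anchor], axis=1)
    d[anchor] = np.inf                       # exclude the anchor itself
    partner = int(np.argmin(np.abs(d - baseline_m)))
    return anchor, partner

if __name__ == "__main__":
    # Camera centers sampled along a roughly linear hand-held translation.
    positions = np.array([[0.00, 0, 0], [0.03, 0, 0], [0.06, 0, 0], [0.10, 0, 0]])
    print(pick_stereo_pair(positions, anchor=0))   # pairs frame 0 with frame 2
```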
  • Patent number: 10645371
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: May 5, 2020
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
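The angular-view estimate described above can be illustrated with a one-line IMU integration. The sketch below integrates a gyroscope yaw rate over time to estimate the angle swept around the object; it assumes clean, uniformly sampled rate data and is not the patented estimator.

```python
import numpy as np

def swept_angle_deg(gyro_yaw_rate_dps: np.ndarray, dt_s: float) -> float:
    """Estimate the angular view swept around the object from IMU data.

    Integrates yaw rate (degrees per second) over time; the result can map
    each captured frame to a viewing angle without any 3-D polygon model.
    """
    return float(np.sum(gyro_yaw_rate_dps) * dt_s)

if __name__ == "__main__":
    # 3 seconds of samples at 100 Hz, turning at roughly 30 deg/s around a car.
    rates = np.full(300, 30.0) + np.random.randn(300)
    print(f"estimated angular view: {swept_angle_deg(rates, dt_s=0.01):.1f} deg")
```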
  • Publication number: 20200133462
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Application
    Filed: December 23, 2019
    Publication date: April 30, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Patent number: 10628675
    Abstract: Provided are mechanisms and processes for performing skeleton detection and tracking via client-server communication. In one example, a server transmits a skeleton detection message that includes first position data for a skeleton representing the structure of an object depicted in a first video frame in a raw video stream at a client device. Based on the first position data, a processor identifies intervening position data for the skeleton in one or more intervening video frames that are temporally located after the first video frame in the raw video stream. A filtered video stream is then presented by altering the raw video stream based at least in part on the first position data and the intervening position data. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: April 21, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
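As a sketch of the client-server exchange described above (field names and message layout are assumptions, not the actual protocol), the code below defines a minimal skeleton-detection message and shows one way a client might estimate joint positions for intervening frames, here by simple linear interpolation between two server detections rather than whatever tracking the patent actually employs.

```python
from dataclasses import dataclass
from typing import List, Tuple

Joint = Tuple[float, float]   # (x, y) in pixels

@dataclass
class SkeletonDetectionMessage:
    """Illustrative server-to-client message (field names are assumptions)."""
    frame_index: int
    joints: List[Joint]

def interpolate_skeleton(prev: SkeletonDetectionMessage,
                         curr: SkeletonDetectionMessage,
                         frame_index: int) -> List[Joint]:
    """Estimate joint positions for an intervening frame between two detections."""
    alpha = (frame_index - prev.frame_index) / (curr.frame_index - prev.frame_index)
    return [(px + alpha * (cx - px), py + alpha * (cy - py))
            for (px, py), (cx, cy) in zip(prev.joints, curr.joints)]

if __name__ == "__main__":
    a = SkeletonDetectionMessage(0, [(100.0, 200.0), (120.0, 260.0)])
    b = SkeletonDetectionMessage(10, [(110.0, 210.0), (130.0, 270.0)])
    print(interpolate_skeleton(a, b, frame_index=5))
```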
  • Patent number: 10592747
    Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: March 17, 2020
    Assignee: Fyusion, Inc.
    Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
  • Patent number: 10585547
    Abstract: Various examples of the present disclosure include techniques and mechanisms for providing a customizable visual and functional experience for a user of an application or service. According to various examples, a system includes a first visual interface that is mapped to a first feature set to operate together as a first user interface that is presented throughout the application or service when selected. The system further includes a second visual interface that is mapped to a second feature set to operate together as a second user interface that is presented throughout the application or service when selected. The first feature set and second feature set differ from each other and both the first user interface and second user interface are customizable.
    Type: Grant
    Filed: July 14, 2015
    Date of Patent: March 10, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Michelle Ho, Pantelis Kalogiros, Radu Bogdan Rusu
  • Patent number: 10586378
    Abstract: The present disclosure describes systems and processes for image sequence stabilization. According to particular embodiments, a sequence of images is obtained using a camera which captures the sequence of images along a camera translation. Each image contains at least a portion of overlapping subject matter. A plurality of keypoints is identified on a first image of the sequence of images. Each keypoint from the first image is tracked to a second image. Using a predetermined algorithm, a camera rotation value and a focal length value are calculated from two randomly sampled keypoints on the first image and two corresponding keypoints on the second image. An optimal camera rotation and focal length pair corresponding to an optimal transformation for producing an image warp for image sequence stabilization is determined. The image warp for image sequence stabilization is constructed using the optimal camera rotation and focal length pair. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: March 10, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
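The abstract above estimates a camera rotation and focal length from two randomly sampled keypoint correspondences and keeps the best resulting transformation. The sketch below keeps that two-point, RANSAC-style sampling-and-scoring structure but deliberately swaps in a simpler model, a 2-D rotation plus translation, because the rotation/focal-length solver itself is not spelled out in the abstract; treat it as an illustration of the hypothesis loop only, with all names invented for the example.

```python
import numpy as np

def rigid_from_two(p: np.ndarray, q: np.ndarray):
    """2-D rotation + translation mapping points p (2x2) onto q (2x2).

    A simplified stand-in for the patent's rotation/focal-length model.
    """
    ang = np.arctan2(*(q[1] - q[0])[::-1]) - np.arctan2(*(p[1] - p[0])[::-1])
    c, s = np.cos(ang), np.sin(ang)
    R = np.array([[c, -s], [s, c]])
    t = q.mean(axis=0) - R @ p.mean(axis=0)
    return R, t

def ransac_warp(src: np.ndarray, dst: np.ndarray, iters: int = 200, tol: float = 3.0):
    """Pick the two-point hypothesis with the most inliers (RANSAC-style)."""
    rng = np.random.default_rng(0)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(src), size=2, replace=False)
        R, t = rigid_from_two(src[[i, j]], dst[[i, j]])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = int(np.sum(err < tol))
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    src = rng.random((40, 2)) * 100
    theta = np.deg2rad(5.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    dst = src @ R_true.T + np.array([3.0, -2.0])
    R, t = ransac_warp(src, dst)
    print(np.round(R, 3), np.round(t, 2))
```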
  • Publication number: 20200045300
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: October 11, 2019
    Publication date: February 6, 2020
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20200036963
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as tilting the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: October 7, 2019
    Publication date: January 30, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber
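To illustrate the tilt-driven navigation and interest metrics mentioned above, the sketch below maps a device tilt angle to a frame index of the representation and summarizes a sequence of visited frames into a few simple metrics. The specific mapping and metric definitions are assumptions made for the example, not the patented analysis.

```python
import numpy as np

def tilt_to_frame(tilt_deg: float, num_frames: int, max_tilt_deg: float = 30.0) -> int:
    """Map a device tilt angle to a frame index of the multi-view representation."""
    alpha = np.clip((tilt_deg + max_tilt_deg) / (2 * max_tilt_deg), 0.0, 1.0)
    return int(round(alpha * (num_frames - 1)))

def interest_metrics(frame_visits: list) -> dict:
    """Summarize navigational input into simple per-session interest metrics."""
    visits = np.asarray(frame_visits)
    return {
        "frames_explored": int(len(np.unique(visits))),
        "direction_changes": int(np.sum(np.diff(np.sign(np.diff(visits))) != 0)),
        "most_viewed_frame": int(np.bincount(visits).argmax()),
    }

if __name__ == "__main__":
    tilts = [-30, -10, 0, 10, 20, 10, 0]
    visited = [tilt_to_frame(t, num_frames=30) for t in tilts]
    print(visited, interest_metrics(visited))
```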
  • Publication number: 20200027263
    Abstract: The present disclosure relates to systems and processes for interpolating images of an object from a multi-directional structured image array. In particular embodiments, a plurality of images corresponding to a light field is obtained using a camera. Each image contains at least a portion of overlapping subject matter with another image. First, second, and third images are determined as the three images in the plurality of images closest to a desired image location in the light field. A first set of candidate transformations is identified between the first and second images, and a second set of candidate transformations is identified between the first and third images. For each pixel location in the desired image location, first and second best pixel values are calculated using the first and second set of candidate transformations, respectively, and the first and second best pixel values are blended to form an interpolated pixel. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: September 27, 2019
    Publication date: January 23, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
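The abstract above starts by finding the three captured images closest to a desired view location. The sketch below shows only that selection step plus inverse-distance blending weights; the per-pixel candidate-transformation blending described in the abstract is not reproduced here, and all names are illustrative.

```python
import numpy as np

def nearest_views_and_weights(view_positions: np.ndarray, desired: np.ndarray, k: int = 3):
    """Find the k captured views nearest a desired location in the light field
    and compute inverse-distance blending weights for them.

    Returns (indices of the k nearest views, normalized weights).
    """
    d = np.linalg.norm(view_positions - desired, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-9)
    return idx, w / w.sum()

if __name__ == "__main__":
    grid = np.array([[x, y] for x in range(3) for y in range(3)], dtype=float)
    idx, w = nearest_views_and_weights(grid, desired=np.array([0.4, 0.3]))
    print(idx, np.round(w, 3))
```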
  • Patent number: 10540773
    Abstract: Various embodiments of the present invention relate generally to systems and processes for interpolating images of an object. According to particular embodiments, a sequence of images is obtained using a camera which captures the sequence of images along a camera translation. Each image contains at least a portion of overlapping subject matter. A plurality of keypoints is identified on a first image of the sequence of images. Each keypoint from the first image is tracked to a second image. Using a predetermined algorithm, a plurality of transformations are computed using two randomly sampled keypoint correspondences, each of which includes a keypoint on the first image and a corresponding keypoint on the second image. An optimal subset of transformations is determined from the plurality of transformations based on predetermined criteria, and transformation parameters corresponding to the optimal subset of transformations are calculated and stored for on-the-fly interpolation.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: January 21, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Yuheng Ren
  • Publication number: 20200021752
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of a person can be generated from live images of a person captured from a hand-held camera. Using the image data from the live images, a skeleton of the person and a boundary between the person and a background can be determined from different viewing angles and across multiple images. Using the skeleton and the boundary data, effects can be added to the person, such as wings. The effects can change from image to image to account for the different viewing angles of the person captured in each image.
    Type: Application
    Filed: September 23, 2019
    Publication date: January 16, 2020
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, Matteo Munaro, Alexander Jay Bruen Trevor, Nico Gregor Sebastian Blodow, Luo Yi Tan, Mike Penz, Martin Markus Hubert Wawro, Matthias Reso, Chris Beall, Yusuke Tomoto, Krunal Ketan Chande