Patents by Inventor Stefan Johannes Josef HOLZER

Stefan Johannes Josef HOLZER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190073834
    Abstract: Provided are mechanisms and processes for augmenting multi-view image data with synthetic objects using inertial measurement unit (IMU) and image data. In one example, a process includes receiving a selection of an anchor location in a reference image for a synthetic object to be placed within a multi-view image. Movements between the reference image and a target image are computed using visual tracking information associated with the multi-view image, device orientation corresponding to the multi-view image, and an estimate of the camera's intrinsic parameters. A first synthetic image is then generated by placing the synthetic object at the anchor location using visual tracking information in the multi-view image, orienting the synthetic object using the inverse of the movements computed between the reference image and the target image, and projecting the synthetic object along a ray into a target view associated with the target image.
    Type: Application
    Filed: November 5, 2018
    Publication date: March 7, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Alexander Jay Bruen Trevor, Martin Saelzle, Radu Bogdan Rusu
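    The abstract above outlines a projection step: a synthetic object is anchored in a reference view, oriented with the inverse of the reference-to-target motion, and projected along a ray into the target view. The patent text gives no implementation; the following is a minimal pinhole-camera sketch of that idea, in which the intrinsic matrix, the relative rotation and translation, and the anchor depth are all assumed placeholder values rather than the patented method.
    ```python
    import numpy as np

    def back_project(K, pixel, depth):
        """Lift a 2-D anchor pixel into a 3-D point along its viewing ray."""
        u, v = pixel
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        return ray * depth  # point in the reference camera frame

    def place_in_target_view(K, R_ref_to_target, t_ref_to_target, anchor_px, depth):
        """Project the anchored synthetic object into the target view.

        R/t describe the camera motion from the reference image to the target
        image (e.g. estimated from visual tracking plus IMU orientation).
        """
        X_ref = back_project(K, anchor_px, depth)
        X_target = R_ref_to_target @ X_ref + t_ref_to_target
        uvw = K @ X_target
        target_px = uvw[:2] / uvw[2]
        # Orient the object with the inverse of the computed motion so it
        # appears fixed in the scene while the camera moves around it.
        object_rotation = R_ref_to_target.T
        return target_px, object_rotation

    # Hypothetical example values (not from the patent):
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    theta = np.deg2rad(5.0)  # small camera rotation about the vertical axis
    R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
    t = np.array([0.05, 0.0, 0.0])
    print(place_in_target_view(K, R, t, anchor_px=(700, 400), depth=2.0))
    ```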
  • Patent number: 10222932
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model, and selection of the first layer provides access to the second layer with the second content model.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: March 5, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
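    As a rough illustration of the layered structure described in the abstract above (not the patented implementation), the sketch below models a virtual reality environment whose first layer holds one content model and whose second layer, with a second content model, becomes reachable only when the first layer is selected. All class and field names are invented for illustration.
    ```python
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ContentModel:
        # Multi-view interactive digital media representation of an object:
        # here just a list of image identifiers, one per viewpoint.
        name: str
        viewpoint_images: List[str] = field(default_factory=list)

    @dataclass
    class Layer:
        content: ContentModel
        context: Optional[ContentModel] = None   # scenery surrounding the object
        inner_layer: Optional["Layer"] = None    # revealed when this layer is selected

    def select(layer: Layer) -> Optional[Layer]:
        """Selecting a layer exposes the nested layer (and its content model), if any."""
        return layer.inner_layer

    # Hypothetical usage: a car exterior whose selection reveals the interior.
    interior = Layer(ContentModel("car-interior", ["int_000.jpg", "int_045.jpg"]))
    exterior = Layer(ContentModel("car-exterior", ["ext_000.jpg", "ext_090.jpg"]),
                     context=ContentModel("street-scene"),
                     inner_layer=interior)
    print(select(exterior).content.name)  # -> "car-interior"
    ```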
  • Patent number: 10210662
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: February 19, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
  • Patent number: 10200677
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: February 5, 2019
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
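    The abstract above describes estimating the angular view swept around an object from inertial measurement unit data, then replaying captured frames so the object appears to rotate without a 3-D polygon model. Below is a minimal sketch of one plausible approach, integrating gyroscope yaw-rate samples over time and mapping a requested display angle back to the nearest captured frame; the sample rate, axis convention, and frame counts are assumptions, not details from the patent.
    ```python
    import numpy as np

    def estimate_angular_views(gyro_yaw_rate, dt):
        """Integrate gyroscope yaw rate (rad/s) sampled every dt seconds into a
        cumulative angle for each captured frame."""
        return np.cumsum(np.asarray(gyro_yaw_rate) * dt)

    def frame_for_display_angle(frame_angles, display_angle):
        """Pick the captured frame whose estimated angle is closest to the angle
        requested by the viewer, giving an apparent 3-D rotation from 2-D images."""
        return int(np.argmin(np.abs(frame_angles - display_angle)))

    # Hypothetical capture: 90 frames while the camera sweeps roughly 90 degrees.
    yaw_rate = np.full(90, np.deg2rad(30.0))   # 30 deg/s
    angles = estimate_angular_views(yaw_rate, dt=1 / 30)
    print(frame_for_display_angle(angles, np.deg2rad(45.0)))
    ```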
  • Patent number: 10176592
    Abstract: The present disclosure relates to systems and processes for capturing an unstructured light field in a plurality of images. In particular embodiments, a plurality of keypoints are identified on a first keyframe in a plurality of captured images. A first convex hull is computed from all keypoints in the first keyframe and merged with previous convex hulls corresponding to previous keyframes to form a convex hull union. Each keypoint is tracked from the first keyframe to a second image. The second image is adjusted to compensate for camera rotation during capture, and a second convex hull is computed from all keypoints in the second image. If the overlapping region between the second convex hull and the convex hull union is equal to, or less than, a predetermined size, the second image is designated as a new keyframe, and the convex hull union is augmented with the second convex hull.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: January 8, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
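    The keyframe test described in the abstract above (compute the convex hull of tracked keypoints, compare its overlap with the union of earlier hulls, and declare a new keyframe when the overlap is at or below a threshold) can be sketched with off-the-shelf geometry primitives. The snippet below uses the shapely library purely as a stand-in; the overlap threshold and keypoint data are assumed for illustration.
    ```python
    from shapely.geometry import MultiPoint

    def is_new_keyframe(keypoints_xy, hull_union, max_overlap_area):
        """Return (new_keyframe?, updated_hull_union) for a candidate image.

        keypoints_xy: (x, y) positions of keypoints tracked from the last keyframe,
        already compensated for camera rotation, as in the abstract above.
        """
        hull = MultiPoint(list(keypoints_xy)).convex_hull
        overlap = hull.intersection(hull_union).area if hull_union is not None else 0.0
        if hull_union is None or overlap <= max_overlap_area:
            # Designate a new keyframe and augment the union with this hull.
            new_union = hull if hull_union is None else hull_union.union(hull)
            return True, new_union
        return False, hull_union

    # Hypothetical usage with made-up keypoints:
    union = None
    for frame_pts in [[(0, 0), (4, 0), (4, 3), (0, 3)],
                      [(3, 0), (8, 0), (8, 3), (3, 3)]]:
        is_key, union = is_new_keyframe(frame_pts, union, max_overlap_area=2.0)
        print(is_key)
    ```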
  • Patent number: 10169911
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: January 1, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
  • Publication number: 20180374273
    Abstract: Provided are mechanisms and processes for inserting a visual element into a multi-view digital media representation (MVIDMR). In one example, a process includes analyzing an MVIDMR to determine if there is an appropriate location to insert a visual element. Once a location is found, the type of visual element appropriate for the location is determined, where the type of visual element includes either a three-dimensional object to be inserted in the MVIDMR or a two-dimensional image to be inserted as or projected onto a background or object in the MVIDMR. A visual element that is appropriate for the location is then retrieved and inserted into the MVIDMR, such that the visual element is integrated into the MVIDMR and navigable by a user.
    Type: Application
    Filed: June 26, 2017
    Publication date: December 27, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Stephen David Miller, Pantelis Kalogiros, George Haber, Radu Bogdan Rusu
  • Patent number: 10152825
    Abstract: Provided are mechanisms and processes for augmenting multi-view image data with synthetic objects using inertial measurement unit (IMU) and image data. In one example, a process includes receiving a selection of an anchor location in a reference image for a synthetic object to be placed within a multi-view image. Movements between the reference image and a target image are computed using visual tracking information associated with the multi-view image, device orientation corresponding to the multi-view image, and an estimate of the camera's intrinsic parameters. A first synthetic image is then generated by placing the synthetic object at the anchor location using visual tracking information in the multi-view image, orienting the synthetic object using the inverse of the movements computed between the reference image and the target image, and projecting the synthetic object along a ray into a target view associated with the target image.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: December 11, 2018
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Alexander Jay Bruen Trevor, Martin Saelzle, Radu Bogdan Rusu
  • Patent number: 10147211
    Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
    Type: Grant
    Filed: July 15, 2015
    Date of Patent: December 4, 2018
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
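    The abstract above describes shifting a layer's control points perpendicular to the capture trajectory, with each point's shift governed by its depth, to synthesize a view from a location off the trajectory. Below is a minimal numeric sketch of that displacement step under assumed conventions (a 2-D image plane, with shift taken as inversely proportional to depth, as in simple parallax); it is an illustration, not the patented renderer.
    ```python
    import numpy as np

    def extrapolate_control_points(control_points, depths, trajectory_dir, offset):
        """Shift 2-D control points perpendicular to the capture trajectory.

        control_points: (N, 2) pixel positions in the first frame
        depths:         (N,) depth associated with each control point
        trajectory_dir: 2-D unit vector of camera motion between the two frames
        offset:         signed distance of the synthesized viewpoint from the trajectory

        Assumption (not stated in the abstract): each point's shift is inversely
        proportional to its depth.
        """
        perpendicular = np.array([-trajectory_dir[1], trajectory_dir[0]])
        shifts = (offset / np.asarray(depths))[:, None] * perpendicular[None, :]
        return np.asarray(control_points, dtype=float) + shifts

    # Hypothetical layer with three control points at different depths.
    pts = [(100, 200), (300, 220), (500, 180)]
    depths = [1.0, 2.0, 4.0]
    print(extrapolate_control_points(pts, depths,
                                     trajectory_dir=np.array([1.0, 0.0]), offset=10.0))
    ```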
  • Publication number: 20180341808
    Abstract: Provided are mechanisms and processes for visual feature tagging in multi-view interactive digital media representations (MIDMRs). In one example, a process includes receiving a visual feature tagging request that includes an MIDMR of an object to be searched, where the MIDMR includes spatial information, scale information, and different viewpoint images of the object. A visual feature in the MIDMR is identified, and visual feature correspondence information is created that links information identifying the visual feature with locations in the viewpoint images. At least one image associated with the MIDMR is transmitted in response to the feature tagging request.
    Type: Application
    Filed: May 25, 2017
    Publication date: November 29, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Stephen David Miller, Pantelis Kalogiros, Radu Bogdan Rusu
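    The abstract above refers to visual feature correspondence information that links an identified feature to its locations in the different viewpoint images of an MIDMR. A minimal sketch of such a record follows; the class and field names are invented for illustration and do not come from the patent.
    ```python
    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    @dataclass
    class FeatureCorrespondence:
        """Links one identified visual feature to its pixel location in each
        viewpoint image of a multi-view interactive digital media representation."""
        feature_id: str
        # viewpoint image identifier -> (x, y) location of the feature in that image
        locations: Dict[str, Tuple[float, float]] = field(default_factory=dict)

        def tag(self, image_id: str, xy: Tuple[float, float]) -> None:
            self.locations[image_id] = xy

    # Hypothetical usage: tag a wheel across three viewpoints of a car MIDMR.
    wheel = FeatureCorrespondence("front-left-wheel")
    wheel.tag("view_000", (412.0, 655.5))
    wheel.tag("view_045", (388.2, 661.0))
    wheel.tag("view_090", (120.7, 640.3))
    print(wheel.locations["view_045"])
    ```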
  • Publication number: 20180338128
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20180338126
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20180338083
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images and determine when a three hundred sixty degree view of the object has been captured. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation, such as a three hundred sixty degree rotation, through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20180330537
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
    Type: Application
    Filed: July 12, 2018
    Publication date: November 15, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
  • Publication number: 20180260972
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help the user keep the image capture device moving along a desired trajectory.
    Type: Application
    Filed: May 14, 2018
    Publication date: September 13, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
  • Publication number: 20180253827
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as via tilting of the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation.
    Type: Application
    Filed: March 3, 2017
    Publication date: September 6, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber
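    The abstract above involves two steps: mapping navigational inputs such as device tilt to the presentation state of the multi-view representation, and deriving metrics of user interest from those inputs. A minimal sketch of one plausible mapping and metric appears below; the tilt range, frame count, and definition of "interest" are all assumptions made for illustration.
    ```python
    import numpy as np

    def frame_from_tilt(tilt_deg, tilt_range_deg=(-30.0, 30.0), num_frames=60):
        """Map a device tilt angle to a frame index of the multi-view representation."""
        lo, hi = tilt_range_deg
        fraction = np.clip((tilt_deg - lo) / (hi - lo), 0.0, 1.0)
        return int(round(fraction * (num_frames - 1)))

    def interest_metrics(tilt_samples_deg, dt):
        """Derive simple engagement metrics from a stream of tilt samples:
        total viewing time and how much of the angular range was explored."""
        tilt = np.asarray(tilt_samples_deg)
        return {
            "viewing_time_s": len(tilt) * dt,
            "explored_range_deg": float(tilt.max() - tilt.min()),
        }

    # Hypothetical session: the user rocks the device back and forth for 4 seconds.
    samples = 20 * np.sin(np.linspace(0, 4 * np.pi, 120))
    print(frame_from_tilt(samples[10]), interest_metrics(samples, dt=1 / 30))
    ```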
  • Publication number: 20180255290
    Abstract: Various embodiments describe systems and processes for capturing and generating multi-view interactive digital media representations (MIDMRs). In one aspect, a method for automatically generating a MIDMR comprises obtaining a first MIDMR and a second MIDMR. The first MIDMR includes a convex or concave motion capture using a recording device and is a general object MIDMR. The second MIDMR is a specific feature MIDMR. The first and second MIDMRs may be obtained using different capture motions. A third MIDMR is generated from the first and second MIDMRs, and is a combined embedded MIDMR. The combined embedded MIDMR may comprise the second MIDMR being embedded in the first MIDMR, forming an embedded second MIDMR. The third MIDMR may include a general view in which the first MIDMR is displayed for interactive viewing by a user on a user device. The embedded second MIDMR may not be viewable in the general view.
    Type: Application
    Filed: May 2, 2018
    Publication date: September 6, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Dave Morrison, Radu Bogdan Rusu, George Haber, Keith Martin
  • Publication number: 20180253819
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as via tilting of the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation.
    Type: Application
    Filed: March 3, 2017
    Publication date: September 6, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber
  • Publication number: 20180255284
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as via tilting of the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation.
    Type: Application
    Filed: March 3, 2017
    Publication date: September 6, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber
  • Patent number: 10068316
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as via tilting of the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: September 4, 2018
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber