Patents by Inventor Radu Bogdan Rusu

Radu Bogdan Rusu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10535197
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real time into the images output to a display of an image capture device. The visual guide can help a user keep the image capture device moving along a desired trajectory.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: January 14, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
  • Patent number: 10521954
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: December 31, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
  • Publication number: 20190392650
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real time into the images output to a display of an image capture device. The visual guide can help a user keep the image capture device moving along a desired trajectory.
    Type: Application
    Filed: September 9, 2019
    Publication date: December 26, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
  • Patent number: 10514820
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: December 24, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Publication number: 20190384787
    Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
    Type: Application
    Filed: August 29, 2019
    Publication date: December 19, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
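    The visual-search process above compares spatial and scale information between surround views and emits a correspondence measure. The following is a minimal sketch of that idea; the descriptor layout (`spatial` and `scale` keys), the distance metrics, and the inverse-distance scoring are illustrative assumptions, not the patented method:

    ```python
    import math

    def correspondence_measure(query, stored):
        """Compare a query surround view's spatial and scale descriptors
        against a stored surround view; return a similarity in (0, 1].

        Each argument is a dict with hypothetical keys:
          'spatial' - list of floats describing spatial structure
          'scale'   - single float describing object scale
        """
        # Euclidean distance between the spatial descriptors
        d_spatial = math.sqrt(sum((a - b) ** 2
                                  for a, b in zip(query['spatial'], stored['spatial'])))
        # Relative difference in scale
        d_scale = (abs(query['scale'] - stored['scale'])
                   / max(query['scale'], stored['scale']))
        # Map the combined distance to a similarity score (1.0 = identical)
        return 1.0 / (1.0 + d_spatial + d_scale)

    def search(query, database, top_k=1):
        """Rank stored surround views by correspondence measure, best first."""
        ranked = sorted(database,
                        key=lambda s: correspondence_measure(query, s),
                        reverse=True)
        return ranked[:top_k]
    ```

    An identical stored view scores exactly 1.0, and the top-ranked results would be returned with their corresponding images as the search response.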
  • Patent number: 10504293
    Abstract: Provided are mechanisms and processes for augmenting multi-view image data with synthetic objects using inertial measurement unit (IMU) and image data. In one example, a process includes receiving a selection of an anchor location in a reference image for a synthetic object to be placed within a multi-view image. Movements between the reference image and a target image are computed using visual tracking information associated with the multi-view image, device orientation corresponding to the multi-view image, and an estimate of the camera's intrinsic parameters. A first synthetic image is then generated by placing the synthetic object at the anchor location using visual tracking information in the multi-view image, orienting the synthetic object using the inverse of the movements computed between the reference image and the target image, and projecting the synthetic object along a ray into a target view associated with the target image.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: December 10, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Alexander Jay Bruen Trevor, Martin Saelzle, Radu Bogdan Rusu
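    The final step of the abstract above, projecting the synthetic object along a ray into the target view after compensating for device movement, can be sketched with a standard pinhole model. The function below is an illustrative simplification (pure rotation between views, known intrinsics), not the full patented process:

    ```python
    import numpy as np

    def project_anchor(anchor_cam, R_ref_to_target, K):
        """Project a synthetic object's 3-D anchor point into a target view.

        anchor_cam      : 3-vector, anchor point in the reference camera frame
        R_ref_to_target : 3x3 rotation from reference to target view
                          (e.g. derived from IMU orientation data)
        K               : 3x3 camera intrinsic matrix
        Returns the pixel coordinates (u, v) in the target image.
        """
        # Move the anchor into the target camera frame, then project it
        # along the ray through the camera center with the pinhole model.
        p = R_ref_to_target @ np.asarray(anchor_cam, dtype=float)
        uvw = K @ p
        return uvw[0] / uvw[2], uvw[1] / uvw[2]
    ```

    With an identity rotation, a point on the optical axis lands at the principal point, which is a quick sanity check for the intrinsics.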
  • Patent number: 10506159
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images and determine when a three hundred sixty degree view of the object has been captured. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation, such as a three hundred sixty degree rotation, through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: December 10, 2019
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
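    The angular-view estimation described above, integrating IMU data to decide which live images to keep and when a full sweep is complete, can be sketched as follows. The sampling layout and the fixed angular step are assumptions for illustration:

    ```python
    def select_frames(frames, gyro_samples, step_deg=10.0):
        """Select live images at roughly even angular increments and report
        whether a full 360-degree view of the object was captured.

        frames       : list of (timestamp, image) pairs
        gyro_samples : list of (timestamp, yaw_rate_deg_per_s) IMU readings
        Returns (selected_images, covered_360).
        """
        # Integrate the yaw rate to get a cumulative angle per gyro timestamp
        angle = 0.0
        angles = {}
        prev_t = gyro_samples[0][0]
        for t, rate in gyro_samples:
            angle += rate * (t - prev_t)
            angles[t] = angle
            prev_t = t

        selected, next_angle = [], 0.0
        for t, image in frames:
            # Use the integrated angle nearest in time to this frame
            nearest = min(angles, key=lambda ts: abs(ts - t))
            if angles[nearest] >= next_angle:
                selected.append(image)
                next_angle += step_deg
        return selected, angle >= 360.0
    ```

    Because frames are chosen by angle rather than by time, the resulting sequence stays evenly spaced around the object even when the capture speed varies.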
  • Patent number: 10484669
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: November 19, 2019
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Patent number: 10469768
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of a person can be generated from live images of a person captured from a hand-held camera. Using the image data from the live images, a skeleton of the person and a boundary between the person and a background can be determined from different viewing angles and across multiple images. Using the skeleton and the boundary data, effects can be added to the person, such as wings. The effects can change from image to image to account for the different viewing angles of the person captured in each image.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: November 5, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, Matteo Munaro, Alexander Jay Bruen Trevor, Nico Gregor Sebastian Blodow, Luo Yi Tan, Mike Penz, Martin Markus Hubert Wawro, Matthias Reso, Chris Beall, Yusuke Tomoto, Krunal Ketan Chande
  • Publication number: 20190335156
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of the object captured from a camera. After the MVIDMR of the object is generated, a tag can be placed at a location on the object in the MVIDMR. The locations of the tag in the frames of the MVIDMR can vary from frame to frame as the view of the object changes. When the tag is selected, media content can be output which shows details of the object at the location where the tag is placed. In one embodiment, the object can be a car, and tags can be used to link to media content showing details of the car at the locations where the tags are placed.
    Type: Application
    Filed: June 25, 2019
    Publication date: October 31, 2019
    Applicant: Fyusion, Inc.
    Inventors: Radu Bogdan Rusu, Dave Morrison, Keith Martin, Stephen David Miller, Pantelis Kalogiros, Mike Penz, Martin Markus Hubert Wawro, Bojana Dumeljic, Jai Chaudhry, Luke Parham, Julius Santiago, Stefan Johannes Josef Holzer
  • Publication number: 20190332866
    Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR, and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
    Type: Application
    Filed: November 2, 2018
    Publication date: October 31, 2019
    Applicant: Fyusion, Inc.
    Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
  • Publication number: 20190313085
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of the object captured from a hand-held camera. Methods are described where the image data associated with the images captured from the hand-held camera is manipulated to generate a more desirable MVIDMR of the object. In particular, the image data can be manipulated so that it appears as if the camera traveled a smoother trajectory during the capture of the images, which can provide a smoother output of the MVIDMR. In one embodiment, key point matching within the image data and, optionally, IMU data from a sensor package on the camera can be used to generate constraints used in a factor graph optimization that generates a smoother trajectory for the camera.
    Type: Application
    Filed: November 1, 2018
    Publication date: October 10, 2019
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Krunal Ketan Chande
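    The trajectory-smoothing idea above, balancing fidelity to per-frame pose estimates against smoothness constraints between consecutive poses, can be sketched as a simple quadratic optimization. This is a deliberately simplified stand-in for the factor graph optimization named in the abstract (positions only, no key point or IMU factors):

    ```python
    import numpy as np

    def smooth_trajectory(positions, smooth_weight=10.0):
        """Smooth a sequence of estimated camera positions by minimizing
            sum_i ||x_i - p_i||^2  +  w * sum_i ||x_{i+1} - x_i||^2.

        positions : (N, d) array-like of raw per-frame camera positions
        Returns an (N, d) array of smoothed positions.
        """
        p = np.asarray(positions, dtype=float)
        n = len(p)
        # Setting the gradient to zero gives the normal equations
        # (I + w * L) x = p, where L is the graph Laplacian of the chain
        # linking consecutive camera poses.
        L = np.zeros((n, n))
        for i in range(n - 1):
            L[i, i] += 1.0
            L[i + 1, i + 1] += 1.0
            L[i, i + 1] -= 1.0
            L[i + 1, i] -= 1.0
        A = np.eye(n) + smooth_weight * L
        return np.linalg.solve(A, p)
    ```

    Larger `smooth_weight` values trade positional accuracy for a flatter path; a full factor-graph solver would additionally weight each constraint by its measurement uncertainty.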
  • Patent number: 10437879
    Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: October 8, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
  • Patent number: 10440351
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as via tilting of the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: October 8, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber
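    Turning a stream of navigational inputs into interest metrics, as described above, can be sketched with a few summary statistics. The event layout and the particular metrics below are illustrative assumptions only:

    ```python
    def engagement_metrics(events):
        """Summarize navigational inputs (e.g. device tilts) applied to a
        multi-view representation into simple interest metrics.

        events : list of (timestamp_s, view_angle_deg) navigation samples
        Returns total viewing time, number of inputs, and angular range explored.
        """
        if not events:
            return {'duration': 0.0, 'inputs': 0, 'angular_range': 0.0}
        times = [t for t, _ in events]
        angles = [a for _, a in events]
        return {
            'duration': max(times) - min(times),      # seconds spent navigating
            'inputs': len(events),                    # how often the user interacted
            'angular_range': max(angles) - min(angles),  # how much of the view was explored
        }
    ```

    A session that sweeps a wide angular range over a long duration would score as high interest under this kind of summary.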
  • Patent number: 10430995
    Abstract: The present disclosure relates to systems and processes for interpolating images of an object from a multi-directional structured image array. In particular embodiments, a plurality of images corresponding to a light field is obtained using a camera. Each image contains at least a portion of overlapping subject matter with another image. First, second, and third images are determined, which are the closest three images in the plurality of images to a desired image location in the light field. A first set of candidate transformations is identified between the first and second images, and a second set of candidate transformations is identified between the first and third images. For each pixel location in the desired image location, first and second best pixel values are calculated using the first and second set of candidate transformations, respectively, and the first and second best pixel values are blended to form an interpolated pixel.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: October 1, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
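    Two pieces of the interpolation pipeline above can be sketched compactly: selecting the three closest capture locations, and blending the two best candidate pixel values. The inverse-distance weighting is an illustrative choice; the abstract does not specify the blend:

    ```python
    import math

    def nearest_three(image_locations, target):
        """Return the indices of the three images whose capture locations
        are closest to the desired image location in the light field."""
        ranked = sorted(range(len(image_locations)),
                        key=lambda i: math.dist(image_locations[i], target))
        return ranked[:3]

    def blend_pixels(best_from_pair12, best_from_pair13, d12, d13):
        """Blend the two best candidate pixel values, weighting each by the
        inverse distance of the image pair that produced it."""
        w12 = 1.0 / (d12 + 1e-9)
        w13 = 1.0 / (d13 + 1e-9)
        return (w12 * best_from_pair12 + w13 * best_from_pair13) / (w12 + w13)
    ```

    With equal distances the blend reduces to a plain average, and as one pair's distance shrinks, its candidate value dominates the interpolated pixel.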
  • Publication number: 20190297258
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera as the camera moves along a path. Then, a sequence of the images can be selected based upon sensor data from an inertial measurement unit and upon image data such that one of the live images is selected for each of a plurality of poses along the path. A multi-view interactive digital media representation may be created from the sequence of images, and the images may be encoded as a video via a designated encoding format.
    Type: Application
    Filed: March 23, 2018
    Publication date: September 26, 2019
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen Miller
  • Publication number: 20190281271
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: April 19, 2019
    Publication date: September 12, 2019
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20190278434
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. These multi-view interactive digital media representations correspond to dynamic objects set against backgrounds. A first multi-view interactive digital media representation of a dynamic object is obtained. Next, the dynamic object is tagged. Then, a second multi-view interactive digital media representation of the dynamic object is generated. Finally, the dynamic object in the second multi-view interactive digital media representation is automatically identified and tagged.
    Type: Application
    Filed: May 30, 2019
    Publication date: September 12, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
  • Publication number: 20190251738
    Abstract: The present disclosure relates to systems and processes for interpolating images of an object from a multi-directional structured image array. In particular embodiments, a plurality of images corresponding to a light field is obtained using a camera. Each image contains at least a portion of overlapping subject matter with another image. First, second, and third images are determined, which are the closest three images in the plurality of images to a desired image location in the light field. A first set of candidate transformations is identified between the first and second images, and a second set of candidate transformations is identified between the first and third images. For each pixel location in the desired image location, first and second best pixel values are calculated using the first and second set of candidate transformations, respectively, and the first and second best pixel values are blended to form an interpolated pixel.
    Type: Application
    Filed: April 29, 2019
    Publication date: August 15, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
  • Patent number: 10382739
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of the object captured from a camera. After the MVIDMR of the object is generated, a tag can be placed at a location on the object in the MVIDMR. The locations of the tag in the frames of the MVIDMR can vary from frame to frame as the view of the object changes. When the tag is selected, media content can be output which shows details of the object at the location where the tag is placed. In one embodiment, the object can be a car, and tags can be used to link to media content showing details of the car at the locations where the tags are placed.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: August 13, 2019
    Assignee: Fyusion, Inc.
    Inventors: Radu Bogdan Rusu, Dave Morrison, Keith Martin, Stephen David Miller, Pantelis Kalogiros, Mike Penz, Martin Markus Hubert Wawro, Bojana Dumeljic, Jai Chaudhry, Luke Parham, Julius Santiago, Stefan Johannes Josef Holzer