Patents by Inventor Radu Bogdan

Radu Bogdan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190141358
    Abstract: Provided are mechanisms and processes for performing live filtering in a camera view via client-server communication. In one example, a first video frame in a raw video stream is transmitted from a client device to a server. The client device receives a filter processing message associated with the first video frame that includes filter data for applying a filter to the first video frame. A processor at the client device creates a filtered video stream by applying the filter to a second video frame that occurs in the video stream later than the first video frame.
    Type: Application
    Filed: August 7, 2018
    Publication date: May 9, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
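    Sketch: A minimal, non-authoritative illustration of the client-side idea above, assuming a simulated server and a simple per-channel gain as the filter data; the function names (server_compute_filter, apply_filter) are hypothetical.
```python
# Illustrative sketch: apply filter parameters received for an earlier frame
# to later frames, so filtering keeps up with a live stream despite round-trip latency.
import numpy as np

def server_compute_filter(frame):
    """Stand-in for the server: derive simple filter data (a gain per channel)."""
    mean = frame.mean(axis=(0, 1)) + 1e-6
    return {"gain": 128.0 / mean}          # hypothetical "filter processing message"

def apply_filter(frame, filter_data):
    """Client-side application of the received filter data."""
    return np.clip(frame * filter_data["gain"], 0, 255).astype(np.uint8)

# Simulated raw video stream of 30 frames.
stream = [np.random.randint(0, 256, (48, 64, 3), dtype=np.uint8) for _ in range(30)]

filter_data = None
filtered_stream = []
for i, frame in enumerate(stream):
    if i == 0:
        filter_data = server_compute_filter(frame)   # "transmit" first frame, get reply
    # Filter data derived from frame 0 is applied to frames that occur later in the stream.
    filtered_stream.append(apply_filter(frame, filter_data) if filter_data else frame)

print(len(filtered_stream), "frames filtered")
```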
  • Patent number: 10275935
    Abstract: The present disclosure relates to systems and processes for interpolating images of an object from a multi-directional structured image array. In particular embodiments, a plurality of images corresponding to a light field is obtained using a camera. Each image contains at least a portion of overlapping subject matter with another image. First, second, and third images are determined, which are the closest three images in the plurality of images to a desired image location in the light field. A first set of candidate transformations is identified between the first and second images, and a second set of candidate transformations is identified between the first and third images. For each pixel location in the desired image, first and second best pixel values are calculated using the first and second sets of candidate transformations, respectively, and the first and second best pixel values are blended to form an interpolated pixel.
    Type: Grant
    Filed: February 6, 2017
    Date of Patent: April 30, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
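    Sketch: A toy illustration of per-pixel selection among candidate transformations and blending of the two best pixel values, with integer translations standing in for the actual transformations; the weights and names are assumptions.
```python
# Illustrative sketch: per-pixel selection among candidate transformations from
# two neighboring images, followed by blending of the two best pixel values.
import numpy as np

H, W = 32, 32
rng = np.random.default_rng(0)
img2 = rng.random((H, W))                       # second image
img3 = rng.random((H, W))                       # third image
reference = rng.random((H, W))                  # stand-in for the first (closest) image

def warp(img, shift):
    """Candidate transformation modeled here as an integer translation."""
    return np.roll(img, shift, axis=(0, 1))

candidates_12 = [warp(img2, s) for s in [(0, 0), (1, 0), (0, 1)]]    # first candidate set
candidates_13 = [warp(img3, s) for s in [(0, 0), (-1, 0), (0, -1)]]  # second candidate set

def best_pixels(candidates, ref):
    """For every pixel, keep the candidate value closest to the reference."""
    stack = np.stack(candidates)                      # (num_candidates, H, W)
    idx = np.abs(stack - ref).argmin(axis=0)          # best candidate index per pixel
    return np.take_along_axis(stack, idx[None], axis=0)[0]

w2, w3 = 0.6, 0.4                                     # weights from distance in the light field
interpolated = w2 * best_pixels(candidates_12, reference) + w3 * best_pixels(candidates_13, reference)
print(interpolated.shape)
```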
  • Publication number: 20190114836
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of a person can be generated from live images of a person captured from a hand-held camera. Using the image data from the live images, a skeleton of the person and a boundary between the person and a background can be determined from different viewing angles and across multiple images. Using the skeleton and the boundary data, effects can be added to the person, such as wings. The effects can change from image to image to account for the different viewing angles of the person captured in each image.
    Type: Application
    Filed: March 26, 2018
    Publication date: April 18, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, Matteo Munaro, Alexander Jay Bruen Trevor, Nico Gregor Sebastian Blodow, Luo Yi Tan, Mike Penz, Martin Markus Hubert Wawro, Matthias Reso, Chris Beall, Yusuke Tomoto, Krunal Ketan Chande
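    Sketch: A minimal illustration of anchoring a 2-D effect to per-frame skeleton joints so it follows the person across viewpoints; the joint positions and the render_effect helper are hypothetical.
```python
# Illustrative sketch: anchor a 2-D "wings"-style effect to per-frame skeleton joints so
# the overlay tracks the person from image to image.
import numpy as np

def render_effect(frame, left_shoulder, right_shoulder):
    """Draw a crude effect (here: a bright band) between the two shoulder joints."""
    out = frame.copy()
    (y1, x1), (y2, x2) = left_shoulder, right_shoulder
    for t in np.linspace(0.0, 1.0, 50):                # sample points along the segment
        y = int(round(y1 + t * (y2 - y1)))
        x = int(round(x1 + t * (x2 - x1)))
        out[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2] = 255
    return out

# Simulated frames and per-frame skeleton estimates (joints shift with viewpoint).
frames = [np.zeros((60, 80), dtype=np.uint8) for _ in range(5)]
skeletons = [((20, 25 + i), (22, 55 + i)) for i in range(5)]   # (left, right) shoulders

augmented = [render_effect(f, l, r) for f, (l, r) in zip(frames, skeletons)]
print(augmented[0].max())   # 255 where the effect was drawn
```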
  • Publication number: 20190116322
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of a person can be generated from live images of a person captured from a hand-held camera. Using the image data from the live images, a skeleton of the person and a boundary between the person and a background can be determined from different viewing angles and across multiple images. Using the skeleton and the boundary data, effects can be added to the person, such as wings. The effects can change from image to image to account for the different viewing angles of the person captured in each image.
    Type: Application
    Filed: March 26, 2018
    Publication date: April 18, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, Matteo Munaro, Alexander Jay Bruen Trevor, Nico Gregor Sebastian Blodow, Luo Yi Tan, Mike Penz, Martin Markus Hubert Wawro, Matthias Reso, Chris Beall, Yusuke Tomoto, Krunal Ketan Chande
  • Patent number: 10262426
    Abstract: Various embodiments of the present invention relate generally to systems and processes for interpolating images of an object. According to particular embodiments, a sequence of images is obtained using a camera which captures the sequence of images along a camera translation. Each image contains at least a portion of overlapping subject matter. A plurality of keypoints is identified on a first image of the sequence of images. Each keypoint from the first image is tracked to a second image. Using a predetermined algorithm, a plurality of transformations are computed using two randomly sampled keypoint correspondences, each of which includes a keypoint on the first image and a corresponding keypoint on the second image. An optimal subset of transformations is determined from the plurality of transformations based on predetermined criteria, and transformation parameters corresponding to the optimal subset of transformations are calculated and stored for on-the-fly interpolation.
    Type: Grant
    Filed: February 6, 2017
    Date of Patent: April 16, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Yuheng Ren
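    Sketch: A RANSAC-style illustration of fitting candidate 2-D similarity transforms from pairs of randomly sampled keypoint correspondences and keeping a best-scoring subset; the inlier-count criterion is an assumption, not necessarily the patented one.
```python
# Illustrative sketch: estimate candidate 2-D similarity transforms from pairs of
# randomly sampled keypoint correspondences and keep the best-scoring subset.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic correspondences: keypoints in image 1 and their tracked positions in image 2.
pts1 = rng.random((100, 2)) * 100
true_a, true_b = 0.9 * np.exp(1j * 0.1), 5 + 3j            # ground-truth similarity (complex form)
z1 = pts1[:, 0] + 1j * pts1[:, 1]
z2 = true_a * z1 + true_b + (rng.random(100) - 0.5) * 0.2  # small tracking noise

def fit_similarity(i, j):
    """Similarity z -> a*z + b from exactly two correspondences i and j."""
    a = (z2[i] - z2[j]) / (z1[i] - z1[j])
    b = z2[i] - a * z1[i]
    return a, b

def inlier_count(a, b, tol=0.5):
    return int(np.sum(np.abs(a * z1 + b - z2) < tol))

candidates = []
for _ in range(50):                                        # plurality of transformations
    i, j = rng.choice(len(z1), size=2, replace=False)
    a, b = fit_similarity(i, j)
    candidates.append((inlier_count(a, b), a, b))

optimal_subset = sorted(candidates, key=lambda c: c[0], reverse=True)[:5]
print("best inlier counts:", [c[0] for c in optimal_subset])
```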
  • Publication number: 20190096137
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Application
    Filed: November 12, 2018
    Publication date: March 28, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
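    Sketch: A minimal illustration of using tracking-point positions in each live frame to place a virtual object into synthetic frames; placing the object at the centroid of the points is a simplification.
```python
# Illustrative sketch: use 2-D tracking point positions in each live frame to position
# a virtual object, producing synthetic frames as the camera moves around the real object.
import numpy as np

def place_virtual_object(frame, tracking_points, size=4):
    """Position a virtual marker at the centroid of the tracking points."""
    out = frame.copy()
    cy, cx = np.mean(tracking_points, axis=0).astype(int)
    out[max(cy - size, 0):cy + size, max(cx - size, 0):cx + size] = 255
    return out

frames = [np.zeros((60, 80), dtype=np.uint8) for _ in range(4)]
# Tracking points drift as the camera moves around the real object.
tracked = [np.array([[20 + i, 30 + 2 * i], [25 + i, 40 + 2 * i], [30 + i, 35 + 2 * i]])
           for i in range(4)]

synthetic = [place_virtual_object(f, p) for f, p in zip(frames, tracked)]
print([int(s.sum() // 255) for s in synthetic])   # marker area per synthetic frame
```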
  • Patent number: 10242474
    Abstract: Various embodiments of the present invention relate generally to mechanisms and processes relating to artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes applying a transform to estimate a path outside the trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The process also includes generating an artificially rendered image corresponding to a third location positioned on the path.
    Type: Grant
    Filed: July 15, 2015
    Date of Patent: March 26, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
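    Sketch: A toy illustration of extrapolating a viewing path off the line between two captured frames and rendering a view at a third location by distance-weighted warping and blending; the sinusoidal path and integer-shift warp are assumptions.
```python
# Illustrative sketch: estimate a path outside the straight trajectory between two frames
# and synthesize an artificially rendered image at a third location on that path.
import numpy as np

rng = np.random.default_rng(2)
frame1, frame2 = rng.random((40, 40)), rng.random((40, 40))
loc1, loc2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])        # capture locations

def path(t, bulge=0.3):
    """A transform of the straight trajectory: bend it perpendicular to the baseline."""
    straight = (1 - t) * loc1 + t * loc2
    perpendicular = np.array([0.0, 1.0])
    return straight + bulge * np.sin(np.pi * t) * perpendicular

def render_at(t):
    """Artificially rendered image at path(t): distance-weighted blend of warped inputs."""
    shift = int(round(4 * path(t)[1]))                         # parallax from the off-path offset
    warped1 = np.roll(frame1, shift, axis=1)
    warped2 = np.roll(frame2, -shift, axis=1)
    return (1 - t) * warped1 + t * warped2

third_view = render_at(0.5)                                    # third location on the path
print(third_view.shape, round(float(path(0.5)[1]), 2))
```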
  • Patent number: 10237477
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images and determine when a three hundred sixty degree view of the object has been captured. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation, such as a three hundred sixty degree rotation, through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: March 19, 2019
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
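    Sketch: A minimal illustration of integrating gyroscope yaw rate to estimate the swept angular view, selecting frames at roughly even angular steps, and detecting a full 360-degree capture; the sampling rate, step size, and noise model are assumptions.
```python
# Illustrative sketch: integrate IMU yaw rate to estimate the angular view swept around an
# object, select frames at even angular increments, and stop once 360 degrees are covered.
import numpy as np

rng = np.random.default_rng(3)
dt = 1.0 / 30.0                                            # 30 fps capture
yaw_rate = np.full(600, 20.0) + rng.normal(0, 1.0, 600)    # deg/s from the IMU (simulated)

selected, angles = [], []
cumulative = 0.0
next_angle = 0.0
step = 10.0                                                # keep one frame every ~10 degrees
for i, rate in enumerate(yaw_rate):
    cumulative += rate * dt                                # integrate angular velocity
    if cumulative >= next_angle:
        selected.append(i)
        angles.append(cumulative)
        next_angle += step
    if cumulative >= 360.0:                                # full turn captured
        break

print(f"{len(selected)} frames selected, {cumulative:.1f} degrees covered")
```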
  • Publication number: 20190080499
    Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
    Type: Application
    Filed: November 2, 2018
    Publication date: March 14, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
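    Sketch: A minimal illustration of moving per-layer control points perpendicular to the trajectory, scaled by inverse depth, and extrapolating them (and nearby pixel locations) for a location outside the captured path; the inverse-depth parallax model is an assumption.
```python
# Illustrative sketch: displace control points perpendicular to the capture trajectory,
# scaled by inverse depth, then extrapolate pixel locations for an off-trajectory view.
import numpy as np

control_points = np.array([[10.0, 20.0], [30.0, 25.0], [50.0, 22.0]])  # (x, y) in frame 1
depths = np.array([2.0, 5.0, 10.0])                  # associated depth per control point

trajectory = np.array([1.0, 0.0])                    # unit direction from frame 1 to frame 2
perpendicular = np.array([-trajectory[1], trajectory[0]])

def control_points_at(extrapolation, baseline=1.0):
    """Move each control point along the perpendicular, more for nearer (smaller-depth) points."""
    parallax = baseline * extrapolation / depths     # inverse-depth scaling
    return control_points + parallax[:, None] * perpendicular

# extrapolation > 1.0 corresponds to a third location outside the captured trajectory.
extrapolated = control_points_at(1.5)

def extrapolate_pixel(pixel, reference=control_points, moved=extrapolated):
    """Shift a pixel location by the displacement of its nearest control point."""
    nearest = np.argmin(np.linalg.norm(reference - pixel, axis=1))
    return pixel + (moved[nearest] - reference[nearest])

print(extrapolate_pixel(np.array([12.0, 21.0])))
```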
  • Publication number: 20190073834
    Abstract: Provided are mechanisms and processes for augmenting multi-view image data with synthetic objects using inertial measurement unit (IMU) and image data. In one example, a process includes receiving a selection of an anchor location in a reference image for a synthetic object to be placed within a multi-view image. Movements between the reference image and a target image are computed using visual tracking information associated with the multi-view image, device orientation corresponding to the multi-view image, and an estimate of the camera's intrinsic parameters. A first synthetic image is then generated by placing the synthetic object at the anchor location using visual tracking information in the multi-view image, orienting the synthetic object using the inverse of the movements computed between the reference image and the target image, and projecting the synthetic object along a ray into a target view associated with the target image.
    Type: Application
    Filed: November 5, 2018
    Publication date: March 7, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Alexander Jay Bruen Trevor, Martin Saelzle, Radu Bogdan Rusu
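    Sketch: A minimal illustration of orienting a synthetic object with the inverse of the IMU-derived rotation between reference and target views and projecting its anchor along a ray with estimated intrinsics; the pinhole model, depth value, and names are assumptions.
```python
# Illustrative sketch: compute the movement between reference and target views from device
# orientation, orient the synthetic object with its inverse, and project the anchor along a ray.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],          # estimate of the camera's intrinsic parameters
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def rot_y(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

R_ref, R_target = rot_y(0.0), rot_y(15.0)   # device orientations from the IMU
R_rel = R_target @ R_ref.T                  # movement between reference and target views

anchor_px = np.array([300.0, 250.0, 1.0])   # anchor location selected in the reference image
depth = 2.0                                 # assumed distance along the viewing ray

ray = np.linalg.inv(K) @ anchor_px          # back-project the anchor to a 3-D ray
point_ref = depth * ray                     # anchor point in reference camera coordinates

object_orientation = R_rel.T                # inverse of the computed movement
point_target = R_rel @ point_ref            # same point expressed in the target view
projected = K @ point_target
projected /= projected[2]                   # pixel location of the anchor in the target image
print(projected[:2].round(1), "\n", object_orientation.round(3))
```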
  • Patent number: 10222932
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: March 5, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
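    Sketch: A hypothetical data-structure illustration of layered content and context models, where selecting one layer exposes the next; the class names are invented for illustration.
```python
# Illustrative sketch of the layered structure: content and context models are grouped
# into layers, and selecting one layer provides access to the layer behind it.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MultiViewModel:
    name: str
    viewpoints: List[str] = field(default_factory=list)   # e.g. angular views of an object

@dataclass
class Layer:
    content: MultiViewModel
    context: Optional[MultiViewModel] = None
    next_layer: Optional["Layer"] = None

    def select(self) -> Optional["Layer"]:
        """Selecting this layer provides access to the layer behind it."""
        return self.next_layer

inner = Layer(content=MultiViewModel("second content model", ["front", "side"]))
outer = Layer(content=MultiViewModel("first content model", ["front", "back", "side"]),
              context=MultiViewModel("first context model", ["room panorama"]),
              next_layer=inner)

# Navigating by "physical movement" is modeled here as switching viewpoint indices.
current = outer
print(current.content.viewpoints[1])        # move to another viewpoint of the content model
print(current.select().content.name)        # selecting the first layer reveals the second
```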
  • Patent number: 10210662
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: February 19, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
  • Patent number: 10200677
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: February 5, 2019
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Patent number: 10176592
    Abstract: The present disclosure relates to systems and processes for capturing an unstructured light field in a plurality of images. In particular embodiments, a plurality of keypoints are identified on a first keyframe in a plurality of captured images. A first convex hull is computed from all keypoints in the first keyframe and merged with previous convex hulls corresponding to previous keyframes to form a convex hull union. Each keypoint is tracked from the first keyframe to a second image. The second image is adjusted to compensate for camera rotation during capture, and a second convex hull is computed from all keypoints in the second image. If the overlapping region between the second convex hull and the convex hull union is equal to, or less than, a predetermined size, the second image is designated as a new keyframe, and the convex hull union is augmented with the second convex hull.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: January 8, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
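    Sketch: A minimal illustration of the keyframe test, computing convex hulls of tracked keypoints with shapely and designating a new keyframe when the overlap with the running hull union falls below a threshold; the 50% threshold is an assumption.
```python
# Illustrative sketch: designate a new keyframe when the tracked keypoints' convex hull
# adds enough unseen area relative to the union of previous hulls.
import numpy as np
from shapely.geometry import MultiPoint

rng = np.random.default_rng(4)

def hull(points):
    return MultiPoint([tuple(p) for p in points]).convex_hull

keypoints_kf1 = rng.random((50, 2)) * 100                 # keypoints on the first keyframe
hull_union = hull(keypoints_kf1)                          # union of hulls of previous keyframes

# Keypoints tracked into a second (rotation-compensated) image, shifted by camera motion.
keypoints_img2 = keypoints_kf1 + np.array([60.0, 0.0])
hull2 = hull(keypoints_img2)

overlap = hull_union.intersection(hull2).area
if overlap <= 0.5 * hull2.area:                           # "predetermined size" threshold
    hull_union = hull_union.union(hull2)                  # augment the convex hull union
    print("second image designated as a new keyframe")
print(f"overlap area: {overlap:.1f}, union area: {hull_union.area:.1f}")
```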
  • Patent number: 10169911
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: January 1, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
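    Sketch: A toy illustration of compositing an object panorama as a foreground layer over a distant-scene panorama and viewing the result through a sliding window; the alpha mask and window width are assumptions.
```python
# Illustrative sketch: place an object panorama in a foreground position relative to a
# distant-scene panorama, then view the combined surround through a sliding window.
import numpy as np

H, W = 100, 400
scene_panorama = np.full((H, W, 3), 60, dtype=np.uint8)          # distant scene (background)
object_panorama = np.zeros((H, W, 3), dtype=np.uint8)
object_panorama[30:70, 150:250] = 200                            # the object region

alpha = (object_panorama.sum(axis=2, keepdims=True) > 0).astype(np.float32)
surround = (alpha * object_panorama + (1 - alpha) * scene_panorama).astype(np.uint8)

def view(yaw_fraction, width=120):
    """Interactive viewing as a sliding window over the combined panorama."""
    start = int(yaw_fraction * (W - width))
    return surround[:, start:start + width]

print(view(0.4).shape)
```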
  • Publication number: 20180374273
    Abstract: Provided are mechanisms and processes for inserting a visual element into a multi-view digital media representation (MVIDMR). In one example, a process includes analyzing an MVIDMR to determine if there is an appropriate location to insert a visual element. Once a location is found, the type of visual element appropriate for the location is determined, where the type of visual element includes either a three-dimensional object to be inserted in the MVIDMR or a two-dimensional image to be inserted as or projected onto a background or object in the MVIDMR. A visual element that is appropriate for the location is then retrieved and inserted into the MVIDMR, such that the visual element is integrated into the MVIDMR and navigable by a user.
    Type: Application
    Filed: June 26, 2017
    Publication date: December 27, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Pantelis Kalogiros, George Haber, Radu Bogdan Rusu
  • Patent number: 10152825
    Abstract: Provided are mechanisms and processes for augmenting multi-view image data with synthetic objects using inertial measurement unit (IMU) and image data. In one example, a process includes receiving a selection of an anchor location in a reference image for a synthetic object to be placed within a multi-view image. Movements between the reference image and a target image are computed using visual tracking information associated with the multi-view image, device orientation corresponding to the multi-view image, and an estimate of the camera's intrinsic parameters. A first synthetic image is then generated by placing the synthetic object at the anchor location using visual tracking information in the multi-view image, orienting the synthetic object using the inverse of the movements computed between the reference image and the target image, and projecting the synthetic object along a ray into a target view associated with the target image.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: December 11, 2018
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Alexander Jay Bruen Trevor, Martin Saelzle, Radu Bogdan Rusu
  • Patent number: 10147211
    Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
    Type: Grant
    Filed: July 15, 2015
    Date of Patent: December 4, 2018
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
  • Patent number: 10149172
    Abstract: Systems and methods are disclosed for network planning for fixed backhaul networks comprising a plurality of hubs, each serving one or more RBMs. The method comprises one or more of terrain pathloss (PL) and antenna gain prediction; network design comprising site association, hub dimensioning and pointing; and optimization of small cell (SC) deployment. PL prediction is based on correlation of user input parameters with reference use cases for channel models for each of downtown, urban, and suburban deployment scenarios. Rapid and effective network planning is achieved with limited input data, even in the absence of high resolution digital maps or building polygons, by selecting the channel model having a highest correlation with available environmental parameters. Optimization of network topology design, system design, and SC deployment, with both access link and backhaul link evaluation, is based on optimization of a sum-utility function across all links for feasible SC site locations.
    Type: Grant
    Filed: March 10, 2015
    Date of Patent: December 4, 2018
    Assignee: BLiNQ Wireless Inc.
    Inventors: Ho Ting Cheng, Terasan Niyomsataya, Radu Bogdan Selea
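    Sketch: A minimal illustration of selecting the reference channel model with the highest correlation to the user's environmental parameters and scoring links with a sum-utility (log-rate) function; the parameter vectors and the choice of utility are assumptions.
```python
# Illustrative sketch: pick the reference channel model whose environmental parameters
# correlate best with the user's inputs, then score a small-cell placement by sum utility.
import numpy as np

# Environmental parameters, e.g. [building height, street width, clutter density] (normalized).
reference_models = {
    "downtown": np.array([0.9, 0.3, 0.8]),
    "urban":    np.array([0.6, 0.5, 0.5]),
    "suburban": np.array([0.2, 0.8, 0.2]),
}
user_input = np.array([0.7, 0.45, 0.55])

def correlation(a, b):
    return float(np.corrcoef(a, b)[0, 1])

best_model = max(reference_models, key=lambda m: correlation(user_input, reference_models[m]))
print("selected channel model:", best_model)

# Sum utility across access and backhaul links for one candidate SC site (log utility of rate).
link_rates_mbps = np.array([120.0, 80.0, 45.0, 200.0])    # access links plus backhaul link
sum_utility = float(np.sum(np.log(link_rates_mbps)))
print(f"sum utility: {sum_utility:.2f}")
```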
  • Publication number: 20180341808
    Abstract: Provided are mechanisms and processes for visual feature tagging in multi-view interactive digital media representations (MIDMRs). In one example, a process includes receiving a visual feature tagging request that includes an MIDMR of an object to be searched, where the MIDMR includes spatial information, scale information, and different viewpoint images of the object. A visual feature in the MIDMR is identified, and visual feature correspondence information is created that links information identifying the visual feature with locations in the viewpoint images. At least one image associated with the MIDMR is transmitted in response to the feature tagging request.
    Type: Application
    Filed: May 25, 2017
    Publication date: November 29, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Pantelis Kalogiros, Radu Bogdan Rusu
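    Sketch: A hypothetical data-structure illustration of visual feature correspondence information linking a feature to its locations across viewpoint images and returning the matching images; the class and field names are invented.
```python
# Illustrative sketch: correspondence records linking an identified visual feature to its
# pixel locations across the viewpoint images of an MIDMR.
from dataclasses import dataclass, field
from typing import Dict, Tuple, List

@dataclass
class FeatureCorrespondence:
    feature_id: str                                   # identifies the visual feature
    locations: Dict[int, Tuple[float, float]] = field(default_factory=dict)  # view index -> (x, y)

@dataclass
class MIDMR:
    viewpoint_images: List[str]                       # e.g. file names of the viewpoint images
    correspondences: List[FeatureCorrespondence] = field(default_factory=list)

    def tag(self, feature_id, locations):
        self.correspondences.append(FeatureCorrespondence(feature_id, locations))

    def views_with(self, feature_id):
        """Images to return in response to a feature tagging request."""
        for c in self.correspondences:
            if c.feature_id == feature_id:
                return [self.viewpoint_images[i] for i in c.locations]
        return []

midmr = MIDMR(viewpoint_images=["view_000.jpg", "view_045.jpg", "view_090.jpg"])
midmr.tag("headlight", {0: (120.5, 88.0), 1: (210.0, 92.5)})
print(midmr.views_with("headlight"))
```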