Patents by Inventor Radu Bogdan Rusu

Radu Bogdan Rusu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10070154
    Abstract: Provided are mechanisms and processes for performing live filtering in a camera view via client-server communication. In one example, a first video frame in a raw video stream is transmitted from a client device to a server. The client device receives a filter processing message associated with the first video frame that includes filter data for applying a filter to the first video frame. A processor at the client device creates a filtered video stream by applying the filter to a second video frame that occurs in the video stream later than the first video frame.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: September 4, 2018
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
  • Publication number: 20180227601
    Abstract: Provided are mechanisms and processes for performing live filtering in a camera view via client-server communication. In one example, a first video frame in a raw video stream is transmitted from a client device to a server. The client device receives a filter processing message associated with the first video frame that includes filter data for applying a filter to the first video frame. A processor at the client device creates a filtered video stream by applying the filter to a second video frame that occurs in the video stream later than the first video frame.
    Type: Application
    Filed: February 7, 2017
    Publication date: August 9, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
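
The two entries above describe the same latency-tolerant loop: the client uploads an early frame of the raw stream, the server replies with filter data derived from that frame, and the client applies the filter to a later frame, since several frames have elapsed during the round trip. The Python sketch below only illustrates that flow; the FilterServer stub, the per-channel gain filter, and the fixed round-trip delay are assumptions made for the example, not Fyusion's implementation.

```python
import numpy as np

class FilterServer:
    """Hypothetical stand-in for the remote filter service."""
    def analyze(self, frame: np.ndarray) -> dict:
        # Pretend the server picks a warm tint whose strength depends on brightness.
        brightness = frame.mean() / 255.0
        return {"gain": np.array([1.1, 1.0, 0.9]) * (0.8 + 0.4 * brightness)}

def apply_filter(frame: np.ndarray, params: dict) -> np.ndarray:
    """Client-side application of the server-provided filter data."""
    return np.clip(frame * params["gain"], 0, 255).astype(np.uint8)

# Simulated raw video stream of 10 RGB frames.
stream = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(10)]
server = FilterServer()
ROUND_TRIP_FRAMES = 3          # frames that elapse before the server's reply arrives

params = None
filtered_stream = []
for i, frame in enumerate(stream):
    if i == ROUND_TRIP_FRAMES:                 # reply to the first frame arrives now...
        params = server.analyze(stream[0])     # ...but it describes frame 0
    # The filter derived from the earlier frame is applied to this later frame.
    filtered_stream.append(apply_filter(frame, params) if params is not None else frame)

print(f"{len(filtered_stream)} frames, filter active from frame {ROUND_TRIP_FRAMES}")
```
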
  • Publication number: 20180227569
    Abstract: Various embodiments disclosed herein relate to systems and methods for analyzing and manipulating images and video. Methods as disclosed herein may include retrieving, using a processor, a multi-view interactive digital media representation (MIDMR) from a storage location, the MIDMR including a content model and a context model, the content model characterizing an object, and the context model characterizing scenery surrounding the object. The methods may also include receiving, using the processor, at least one dynamic content input associated with the retrieved MIDMR, the dynamic content input being received while a user is interacting with the MIDMR. The methods may further include implementing, using the processor, one or more modifications associated with the MIDMR based, at least in part, on the received at least one dynamic content input, the one or more modifications modifying a presentation and functionality of the MIDMR.
    Type: Application
    Filed: May 26, 2017
    Publication date: August 9, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Stephen David Miller, Pantelis Kalogiros, George Haber, Radu Bogdan Rusu
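
As a loose illustration of the content-model/context-model split and the dynamic content inputs mentioned in the abstract above, the sketch below models a MIDMR as a small data structure whose presentation changes in response to interaction events. The ContentModel, ContextModel, and event types are hypothetical; the patent does not specify these structures.

```python
from dataclasses import dataclass, field

@dataclass
class ContentModel:           # characterizes the object
    label: str
    opacity: float = 1.0

@dataclass
class ContextModel:           # characterizes the scenery surrounding the object
    description: str
    blur: float = 0.0

@dataclass
class MIDMR:
    content: ContentModel
    context: ContextModel
    modifications: list = field(default_factory=list)

    def apply_dynamic_input(self, event: dict) -> None:
        """Record a modification driven by user interaction (tilt, tap, ...)."""
        if event["type"] == "tilt":
            # Example modification: tilting blurs the context to emphasise the object.
            self.context.blur = min(1.0, self.context.blur + event["amount"])
        elif event["type"] == "tap":
            self.content.opacity = 0.5 if self.content.opacity == 1.0 else 1.0
        self.modifications.append(event)

midmr = MIDMR(ContentModel("car"), ContextModel("street scene"))
midmr.apply_dynamic_input({"type": "tilt", "amount": 0.2})
midmr.apply_dynamic_input({"type": "tap"})
print(midmr.context.blur, midmr.content.opacity, len(midmr.modifications))
```
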
  • Publication number: 20180225517
    Abstract: Provided are mechanisms and processes for performing skeleton detection and tracking via client-server communication. In one example, a server transmits a skeleton detection message that includes first position data for a skeleton representing the structure of an object depicted in a first video frame in a raw video stream at a client device. Based on the first position data, a processor identifies intervening position data for the skeleton in one or more intervening video frames that are temporally located after the first video frame in the raw video stream. A filtered video stream is then presented by altering the raw video stream based at least in part on the first position data and the intervening position data.
    Type: Application
    Filed: February 7, 2017
    Publication date: August 9, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
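
A minimal sketch of the skeleton-tracking flow described above: the server supplies joint positions for the first frame, and the client carries those positions forward through intervening frames with a local motion estimate before altering the stream. The joint names, the constant-translation motion model, and the overlay "filter" are assumptions made for the example; a real client would more likely use optical flow or an on-device tracker.

```python
import numpy as np

# Hypothetical server response: 2D joint positions (pixel coordinates) for the
# skeleton detected in the first frame of the raw stream.
first_positions = {
    "head": np.array([320.0, 80.0]),
    "left_hand": np.array([220.0, 260.0]),
    "right_hand": np.array([420.0, 260.0]),
}

def propagate(joints: dict, motion: np.ndarray) -> dict:
    """Estimate intervening joint positions by shifting the detected skeleton."""
    return {name: pos + motion for name, pos in joints.items()}

def overlay(frame: np.ndarray, joints: dict) -> np.ndarray:
    """'Filter' the frame by marking joint positions (stand-in for real effects)."""
    out = frame.copy()
    for pos in joints.values():
        x, y = int(pos[0]), int(pos[1])
        out[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3] = (0, 255, 0)
    return out

stream = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(5)]
per_frame_motion = np.array([4.0, 0.0])     # assumed rightward motion per frame

joints = first_positions
filtered = []
for frame in stream:
    filtered.append(overlay(frame, joints))  # altered stream uses tracked skeleton
    joints = propagate(joints, per_frame_motion)

print(len(filtered), "filtered frames")
```
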
  • Publication number: 20180227482
    Abstract: Provided are mechanisms and processes for scene-aware selection of filters and effects for visual digital media content. In one example, a digital media item is analyzed with a processor to identify one or more characteristics associated with the digital media item, where the characteristics include a physical object represented in the digital media item. Based on the identified characteristics, a digital media modification is selected from a plurality of digital media modifications for application to the digital media item. The digital media modification may then be provided for presentation in a user interface for selection by a user.
    Type: Application
    Filed: February 7, 2017
    Publication date: August 9, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Michelle Jung-Ah Ho, Radu Bogdan Rusu
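
The scene-aware selection described above can be illustrated as a simple detect, look up, and rank step: identify what is in the media item, gather the filters associated with those objects, and surface the best candidates in the UI. Everything in the sketch below (object classes, filter names, scores, and the fake detector) is hypothetical.

```python
# Hypothetical mapping from a recognised object class to filters/effects that
# tend to suit it, with a made-up relevance score.
FILTERS_BY_OBJECT = {
    "car":    [("motion_blur_background", 0.9), ("chrome_pop", 0.7)],
    "person": [("skin_smoothing", 0.8), ("bokeh_background", 0.75)],
    "food":   [("warm_saturation", 0.85), ("macro_vignette", 0.6)],
}

def detect_objects(media_item: str) -> list:
    """Stand-in for a real classifier; returns object classes for the media item."""
    fake_detections = {"garage.jpg": ["car"], "dinner.jpg": ["food", "person"]}
    return fake_detections.get(media_item, [])

def suggest_modifications(media_item: str, top_k: int = 3) -> list:
    """Rank candidate filters based on the objects found in the media item."""
    candidates = []
    for obj in detect_objects(media_item):
        candidates.extend(FILTERS_BY_OBJECT.get(obj, []))
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in candidates[:top_k]]

# The top suggestions would then be presented in the UI for the user to pick.
print(suggest_modifications("dinner.jpg"))
```
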
  • Publication number: 20180218235
    Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
    Type: Application
    Filed: March 26, 2018
    Publication date: August 2, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
  • Publication number: 20180218236
    Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
    Type: Application
    Filed: March 26, 2018
    Publication date: August 2, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
  • Publication number: 20180211131
    Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
    Type: Application
    Filed: March 26, 2018
    Publication date: July 26, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
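
The three entries above (and patent 9940541 further down) share one abstract describing view synthesis by interpolating tracked control points. The sketch below shows the core idea under simplifying assumptions: control points for a third viewpoint are obtained by linear interpolation, and pixels are warped with an inverse-distance-weighted blend of the control-point displacements. The actual transform used in the patent is not specified here; this is only a crude stand-in.

```python
import numpy as np

# Control points tracked between frame 1 (captured at location A) and frame 2
# (captured at location B), as (x, y) pixel coordinates. Values are illustrative.
cp_frame1 = np.array([[100.0, 120.0], [400.0, 110.0], [250.0, 330.0]])
cp_frame2 = np.array([[130.0, 125.0], [430.0, 115.0], [280.0, 340.0]])

def interpolate_control_points(cp1, cp2, t):
    """Control points for an intermediate (third) viewpoint, t in [0, 1]."""
    return (1.0 - t) * cp1 + t * cp2

def warp_with_control_points(image, cp_src, cp_dst):
    """Warp pixels using an inverse-distance-weighted blend of control-point
    displacements (a crude stand-in for the transform named in the abstract)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    disp = cp_dst - cp_src                     # per-control-point displacement (N, 2)
    num_x = np.zeros((h, w))
    num_y = np.zeros((h, w))
    den = np.zeros((h, w))
    for (cx, cy), (dx, dy) in zip(cp_src, disp):
        weight = 1.0 / ((xs - cx) ** 2 + (ys - cy) ** 2 + 1.0)
        num_x += weight * dx
        num_y += weight * dy
        den += weight
    src_x = np.clip(xs - num_x / den, 0, w - 1).astype(int)   # backward mapping
    src_y = np.clip(ys - num_y / den, 0, h - 1).astype(int)
    return image[src_y, src_x]

frame1 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
cp_third = interpolate_control_points(cp_frame1, cp_frame2, t=0.5)
rendered = warp_with_control_points(frame1, cp_frame1, cp_third)
print(rendered.shape)   # artificially rendered image for the third location
```
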
  • Publication number: 20180203880
    Abstract: Provided are mechanisms and processes for performing live search using multi-view digital media representations. In one example, a process includes receiving a visual search query from a device for an object to be searched, where the visual search query includes a first set of viewpoints of the object obtained during capture of a first surround view of the object during a live search session. Next, additional recommended viewpoints of the object are identified for the device to capture, where the additional recommended viewpoints are chosen to provide more information about the object. A first set of search results based on the first set of viewpoints and additional recommended viewpoints of the object is transmitted to the device. In response, a second set of viewpoints of the object, captured using image capture capabilities of the device, is received. A second set of search results with enhanced matches for the object based on the first and second sets of viewpoints is then transmitted to the device.
    Type: Application
    Filed: January 18, 2017
    Publication date: July 19, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pantelis Kalogiros, Ioannis Spanos, Luke Parham, Radu Bogdan Rusu
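
A toy version of the live-search exchange in the abstract above: the device sends the viewpoints captured so far, the backend returns ranked results together with recommended additional viewpoints, and a second round with the extra viewpoints refines the matches. The catalog, scoring, and viewpoint names below are all made up for the illustration.

```python
# Stored objects and the viewpoints for which the backend has reference data.
CATALOG = {
    "red_sneaker": {"front", "side", "sole"},
    "red_boot":    {"front", "side", "top"},
}

def search(viewpoints_seen: set) -> tuple:
    """Return (ranked results, recommended additional viewpoints)."""
    scores = {name: len(views & viewpoints_seen) for name, views in CATALOG.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Recommend viewpoints that would best separate the current top candidates.
    top_views = CATALOG[ranked[0]] | CATALOG[ranked[1]]
    recommended = top_views - viewpoints_seen
    return ranked, recommended

# First set of viewpoints captured during the live session.
first_set = {"front"}
results_1, recommended = search(first_set)
print("initial results:", results_1, "please also capture:", recommended)

# The device captures a recommended viewpoint; the second query refines the matches.
second_set = first_set | {"sole"}
results_2, _ = search(second_set)
print("refined results:", results_2)
```
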
  • Publication number: 20180203877
    Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
    Type: Application
    Filed: January 18, 2017
    Publication date: July 19, 2018
    Inventors: Stefan Johannes Josef HOLZER, Abhishek KAR, Alexander Jay Bruen TREVOR, Pantelis KALOGIROS, Ioannis SPANOS, Radu Bogdan RUSU
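
The correspondence measure described above can be sketched as a score that blends per-viewpoint feature similarity with scale agreement between the query surround view and a stored one. The descriptor sizes, the cosine-similarity matching, and the 50/50 blend below are assumptions for illustration, not the patent's actual metric.

```python
import numpy as np

# A surround view is reduced here to a handful of per-viewpoint feature vectors
# plus a single scale estimate (e.g., object size in metres). Purely illustrative.
def make_view(seed: int, scale: float) -> dict:
    rng = np.random.default_rng(seed)
    return {"features": rng.normal(size=(8, 32)), "scale": scale}

def correspondence(query: dict, candidate: dict) -> float:
    """Similarity in [0, 1] combining feature similarity and scale agreement."""
    q = query["features"] / np.linalg.norm(query["features"], axis=1, keepdims=True)
    c = candidate["features"] / np.linalg.norm(candidate["features"], axis=1, keepdims=True)
    # Best cosine match for each query viewpoint, averaged over viewpoints.
    feature_sim = float(np.mean(np.max(q @ c.T, axis=1)))
    scale_sim = min(query["scale"], candidate["scale"]) / max(query["scale"], candidate["scale"])
    return 0.5 * ((feature_sim + 1) / 2) + 0.5 * scale_sim   # map cosine to [0,1], then blend

query = make_view(seed=1, scale=0.4)
stored = {"mug_a": make_view(2, 0.12), "chair_b": make_view(1, 0.45)}
ranked = sorted(stored, key=lambda k: correspondence(query, stored[k]), reverse=True)
print(ranked)   # best match first; each result would be returned with an image
```
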
  • Patent number: 10026219
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: July 17, 2018
    Assignee: FYUSION, INC.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
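
The combination described in the abstract above amounts to compositing an object panorama in front of a distant-scene panorama. A minimal sketch, assuming the object panorama comes with an alpha mask marking where the object is present:

```python
import numpy as np

# Illustrative stand-ins: a distant-scene panorama and an object panorama with
# an alpha mask marking where the object is present.
background = np.full((400, 1600, 3), 60, dtype=np.uint8)        # distant scene
object_pano = np.zeros((400, 1600, 3), dtype=np.uint8)
object_pano[150:300, 700:900] = (200, 30, 30)                   # the "object"
alpha = np.zeros((400, 1600, 1), dtype=np.float64)
alpha[150:300, 700:900] = 1.0                                    # opaque where the object is

# Place the object panorama in the foreground relative to the distant scene,
# which is the combination described in the abstract.
combined = (alpha * object_pano + (1.0 - alpha) * background).astype(np.uint8)
print(combined.shape)
```

In practice the two panoramas would first be aligned and blended along the mask boundary; the hard-edged mask here just keeps the example short.
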
  • Publication number: 20180199025
    Abstract: Various embodiments of the present disclosure relate generally to drone-based systems and methods for capturing a multi-media representation of an entity. In some embodiments, the multi-media representation is digital, multi-view, interactive, or a combination thereof. According to particular embodiments, a drone having a camera is controlled or operated to obtain a plurality of images having location information. The plurality of images, including at least a portion of overlapping subject matter, are fused to form multi-view interactive digital media representations.
    Type: Application
    Filed: March 5, 2018
    Publication date: July 12, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Stephen David MILLER, Radu Bogdan RUSU
  • Publication number: 20180165879
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Application
    Filed: December 9, 2016
    Publication date: June 14, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
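
A compact sketch of the tracking-and-rendering loop described above: tracking points on the real object are followed from one live frame to the next, and their tracked locations are used to place a virtual object in the synthetic output frame. OpenCV's Lucas-Kanade tracker stands in for whatever tracking the device actually uses, and the filled square stands in for a rendered 3D model; both are assumptions for this example.

```python
import numpy as np
import cv2   # OpenCV, used here only for Lucas-Kanade point tracking

# Two consecutive grayscale frames from a pretend live stream; the second is
# the first shifted a few pixels, so tracked points should move with it.
frame0 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
frame1 = np.roll(frame0, shift=5, axis=1)

# Tracking points placed on the real object in the first frame (x, y).
points0 = np.array([[[300., 240.]], [[320., 250.]], [[310., 230.]]], dtype=np.float32)
points1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, points0, None)

tracked = points1[status.ravel() == 1]
if tracked.size == 0:          # fall back to the original points if tracking failed
    tracked = points0
anchor = tracked.reshape(-1, 2).mean(axis=0)

# Position a virtual object (a filled square stands in for a rendered 3D model)
# at the location implied by the tracked points, producing a synthetic image.
synthetic = cv2.cvtColor(frame1, cv2.COLOR_GRAY2BGR)
x, y = int(anchor[0]), int(anchor[1])
cv2.rectangle(synthetic, (x - 15, y - 15), (x + 15, y + 15), (0, 255, 255), thickness=-1)
print("virtual object rendered at", (x, y))
```
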
  • Publication number: 20180165827
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help the user keep the image capture device moving along a desired trajectory.
    Type: Application
    Filed: December 12, 2016
    Publication date: June 14, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
  • Patent number: 9996945
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help the user keep the image capture device moving along a desired trajectory.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: June 12, 2018
    Assignee: FYUSION, INC.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
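
The visual guide in the two entries above is a synthetic object drawn into the live preview to keep the user on a desired capture trajectory. The sketch below draws a hypothetical arc of guide dots onto a frame and colours them according to whether the device is close to the target trajectory; the geometry, thresholds, and colours are invented for the illustration.

```python
import numpy as np

def guide_points(num: int = 12) -> np.ndarray:
    """Screen-space positions of the guide dots along an assumed capture arc."""
    angles = np.linspace(-0.6, 0.6, num)
    xs = 320 + 250 * np.sin(angles)
    ys = 420 + 20 * np.cos(angles)
    return np.stack([xs, ys], axis=1)

def render_guide(frame: np.ndarray, device_angle: float, target_angle: float) -> np.ndarray:
    """Overlay the guide onto an outgoing preview frame."""
    out = frame.copy()
    on_track = abs(device_angle - target_angle) < 0.05
    color = (0, 255, 0) if on_track else (0, 0, 255)   # green when on trajectory
    for x, y in guide_points():
        xi, yi = int(x), int(y)
        out[yi - 3:yi + 4, xi - 3:xi + 4] = color
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)
overlaid = render_guide(frame, device_angle=0.12, target_angle=0.10)
print(overlaid.shape)
```
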
  • Publication number: 20180103213
    Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data viewed on the display remains consistent with the current position and orientation of the camera system.
    Type: Application
    Filed: September 27, 2017
    Publication date: April 12, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Abhishek Kar, Pavel Gonchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
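
As a rough illustration of per-frame stylization on live video, the sketch below runs each incoming frame through a deliberately tiny feed-forward network and hands the result back for display. The architecture, layer sizes, and use of PyTorch are assumptions for the example; the abstract does not specify the network.

```python
import torch
import torch.nn as nn

class TinyStyleNet(nn.Module):
    """A toy stand-in for an image-transformation ("style") network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.InstanceNorm2d(16),
            nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))   # keep output in [0, 1]

net = TinyStyleNet().eval()

# Per-frame inference loop: each incoming camera frame (as an NCHW tensor in
# [0, 1]) is stylised and handed back for display, mirroring a live preview.
with torch.no_grad():
    for _ in range(3):                          # pretend: 3 live frames
        frame = torch.rand(1, 3, 240, 320)
        stylised = net(frame)
        display = stylised.squeeze(0).permute(1, 2, 0).numpy()
print(display.shape)   # (240, 320, 3), ready to show on the device display
```
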
  • Patent number: 9940541
    Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
    Type: Grant
    Filed: July 15, 2015
    Date of Patent: April 10, 2018
    Assignee: FYUSION, INC.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
  • Publication number: 20180068485
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
    Type: Application
    Filed: November 2, 2017
    Publication date: March 8, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
  • Publication number: 20180046356
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Application
    Filed: October 3, 2017
    Publication date: February 15, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Publication number: 20180046357
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Application
    Filed: October 3, 2017
    Publication date: February 15, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef HOLZER, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
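
The layered virtual-reality structure in the two entries above can be pictured as nested content models, where selecting the active layer grants access to the layer it contains. The sketch below is a hypothetical, much-simplified data structure; the class names and fields are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContentModel:
    name: str

@dataclass
class Layer:
    content: ContentModel
    inner: Optional["Layer"] = None   # the layer revealed on selection

@dataclass
class VREnvironment:
    context: str                       # scenery generated from the context model
    root: Layer
    active: Layer = field(init=False)

    def __post_init__(self):
        self.active = self.root

    def select_active_layer(self) -> str:
        """Selecting the current layer provides access to the layer it contains."""
        if self.active.inner is not None:
            self.active = self.active.inner
        return self.active.content.name

env = VREnvironment(
    context="showroom scenery",
    root=Layer(ContentModel("car exterior"), inner=Layer(ContentModel("car interior"))),
)
print(env.select_active_layer())   # -> 'car interior'
```
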