Patents by Inventor Radu Bogdan
Radu Bogdan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200234398
Abstract: According to various embodiments, component information may be identified for each input image of an object. The component information may indicate a portion of the input image in which a particular component of the object is depicted. A viewpoint may be determined for each input image that indicates a camera pose for the input image relative to the object. A three-dimensional skeleton of the object may be determined based on the viewpoints and the component information. A multi-view panel corresponding to the designated component of the object, navigable in three dimensions and including the portions of the input images in which the designated component is depicted, may be stored on a storage device.
Type: Application
Filed: July 22, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Martin Markus Hubert Wawro, Abhishek Kar, Pavel Hanchar, Krunal Ketan Chande, Radu Bogdan Rusu
-
Patent number: 10719939
Abstract: Various embodiments describe systems and processes for generating AR/VR content. In one aspect, a method for generating a three-dimensional (3D) projection of an object is provided. A sequence of images along a camera translation may be obtained using a single lens camera. Each image contains at least a portion of overlapping subject matter, which includes the object. The object is semantically segmented from the sequence of images using a trained neural network to form a sequence of segmented object images, which are then refined using fine-grained segmentation. On-the-fly interpolation parameters are computed and stereoscopic pairs are generated for points along the camera translation from the refined sequence of segmented object images for displaying the object as a 3D projection in a virtual reality or augmented reality environment. Segmented image indices are then mapped to a rotation range for display in the virtual reality or augmented reality environment.
Type: Grant
Filed: February 8, 2017
Date of Patent: July 21, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Yuheng Ren, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Martin Josef Nikolaus Saelzle, Radu Bogdan Rusu
-
Patent number: 10719733
Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
Type: Grant
Filed: March 26, 2018
Date of Patent: July 21, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
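The abstract above describes interpolating tracked control points for an intermediate viewpoint and then interpolating pixel locations from them. A minimal sketch of that idea in Python; the function names and the inverse-distance weighting of control-point displacements are assumptions for illustration, as the patent does not specify the interpolation scheme:

```python
import numpy as np

def interpolate_control_points(points_a, points_b, t):
    """Linearly interpolate tracked control points between two frames.

    points_a, points_b: (N, 2) arrays of matching control points in the
    first and second frames; t in [0, 1] selects the artificial viewpoint.
    """
    return (1.0 - t) * points_a + t * points_b

def interpolate_pixel(pixel, points_a, points_interp):
    """Estimate where a pixel from the first frame lands in the artificial
    view, weighting control-point displacements by inverse distance."""
    displacements = points_interp - points_a          # per-control-point motion
    dists = np.linalg.norm(points_a - pixel, axis=1)  # distance to each control point
    weights = 1.0 / (dists + 1e-6)
    weights /= weights.sum()
    return pixel + weights @ displacements

# Example: three control points all translated 10 px to the right
a = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
b = a + np.array([10.0, 0.0])
mid = interpolate_control_points(a, b, 0.5)
print(interpolate_pixel(np.array([5.0, 3.0]), a, mid))  # ≈ [10., 3.]
```

With a uniform translation every displacement is identical, so the halfway view moves each pixel exactly half the distance, regardless of the weights.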
-
Patent number: 10719732
Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
Type: Grant
Filed: March 26, 2018
Date of Patent: July 21, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
-
Publication number: 20200226816
Abstract: A respective target viewpoint may be rendered for each of a plurality of multiplane images of a three-dimensional scene. Each of the multiplane images may be associated with a respective single plane image of the three-dimensional scene captured from a respective viewpoint. Each of the multiplane images may include a respective plurality of depth planes. Each of the depth planes may include a respective plurality of pixels from the respective single plane image. Each of the pixels in the depth plane may be positioned at approximately the same distance from the respective viewpoint. A weighted combination of the target viewpoint renderings may be determined, where the sampling density of the single plane images is sufficiently high that the weighted combination satisfies the inequality in Equation (7). The weighted combination of the target viewpoint renderings may be transmitted as a novel viewpoint image.
Type: Application
Filed: September 18, 2019
Publication date: July 16, 2020
Applicant: Fyusion, Inc.
Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
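The final step of the abstract above, blending per-source renderings of the target viewpoint into one novel view, can be sketched as follows. The distance-based exponential weights are an assumption for illustration; the patent's actual weighting and its Equation (7) are not reproduced here:

```python
import numpy as np

def blend_renderings(renderings, source_positions, target_position):
    """Weighted combination of target-viewpoint renderings.

    renderings: (K, H, W, C) images, each a rendering of the target view
    from one source multiplane image; weights fall off with the distance
    between the source camera position and the target camera position.
    """
    d = np.linalg.norm(source_positions - target_position, axis=1)
    w = np.exp(-d)   # nearer source viewpoints contribute more
    w /= w.sum()
    # sum_k w[k] * renderings[k]
    return np.tensordot(w, renderings, axes=1)

# Two equidistant sources: the blend is the plain average
views = np.stack([np.zeros((2, 2, 3)), np.ones((2, 2, 3))])
poses = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
novel = blend_renderings(views, poses, np.zeros(3))
print(novel[0, 0])  # [0.5 0.5 0.5]
```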
-
Publication number: 20200228774
Abstract: An estimated camera pose may be determined for each of a plurality of single plane images of a designated three-dimensional scene. The sampling density of the single plane images may be below the Nyquist rate. However, the sampling density of the single plane images may be sufficiently high that they may be promoted to multiplane images and used to generate novel viewpoints in a light field reconstruction framework. For each single plane image, scene depth information may be determined that identifies a respective depth value for each of a plurality of pixels in the image. A respective multiplane image including a respective plurality of depth planes may be determined for each single plane image. Each of the depth planes may include a respective plurality of pixels from the respective single plane image.
Type: Application
Filed: September 18, 2019
Publication date: July 16, 2020
Applicant: Fyusion, Inc.
Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
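The promotion step described above, splitting a single plane image into depth planes using per-pixel depth, can be sketched as below. Assigning each pixel to the nearest plane is an assumption for illustration; the patent does not specify the assignment rule:

```python
import numpy as np

def to_multiplane(image, depth, plane_depths):
    """Promote a single plane image to a multiplane image.

    image: (H, W, C) pixels; depth: (H, W) per-pixel depth values;
    plane_depths: 1-D array of plane distances from the viewpoint.
    Each pixel goes to the plane whose depth is nearest, yielding a
    (P, H, W, C+1) stack with an alpha channel marking occupied pixels.
    """
    h, w, c = image.shape
    planes = np.zeros((len(plane_depths), h, w, c + 1), dtype=float)
    # index of the nearest depth plane for every pixel
    idx = np.abs(depth[..., None] - plane_depths).argmin(axis=-1)
    for p in range(len(plane_depths)):
        mask = idx == p
        planes[p, mask, :c] = image[mask]
        planes[p, mask, c] = 1.0  # alpha: pixel lives on this plane
    return planes

# Top row at depth 1, bottom row at depth 3, with planes at those depths
planes = to_multiplane(np.ones((2, 2, 3)),
                       np.array([[1.0, 1.0], [3.0, 3.0]]),
                       np.array([1.0, 3.0]))
print(planes.shape)  # (2, 2, 2, 4)
```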
-
Publication number: 20200226736
Abstract: A sampling density for capturing a plurality of two-dimensional images of a three-dimensional scene may be determined. The sampling density may be below the Nyquist rate. However, the sampling density may be sufficiently high that captured images may be promoted to multiplane images and used to generate novel viewpoints in a light field reconstruction framework. Recording guidance may be provided at a display screen on a mobile computing device based on the determined sampling density. The recording guidance identifies a plurality of camera poses at which to position a camera to capture images of the three-dimensional scene. A plurality of images captured via the camera based on the recording guidance may be stored on a storage device.
Type: Application
Filed: September 18, 2019
Publication date: July 16, 2020
Applicant: Fyusion, Inc.
Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
-
Patent number: 10713851
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
Type: Grant
Filed: November 12, 2018
Date of Patent: July 14, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
-
Publication number: 20200213578
Abstract: Various embodiments of the present disclosure relate generally to drone-based systems and methods for capturing a multi-media representation of an entity. In some embodiments, the multi-media representation is digital, multi-view, interactive, or a combination thereof. According to particular embodiments, a drone having a camera is controlled or operated to obtain a plurality of images having location information. The plurality of images, including at least a portion of overlapping subject matter, are fused to form multi-view interactive digital media representations.
Type: Application
Filed: March 9, 2020
Publication date: July 2, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
-
Patent number: 10698558
Abstract: Various embodiments of the present disclosure relate generally to systems and methods for automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Multi-view interactive digital media representations correspond to multi-view interactive digital media representations of the dynamic objects in backgrounds. A first multi-view interactive digital media representation of a dynamic object is obtained. Next, the dynamic object is tagged. Then, a second multi-view interactive digital media representation of the dynamic object is generated. Finally, the dynamic object in the second multi-view interactive digital media representation is automatically identified and tagged.
Type: Grant
Filed: June 12, 2017
Date of Patent: June 30, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
-
Patent number: 10687046
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of the object captured from a hand-held camera. Methods are described where image data associated with the images captured from the hand-held camera are manipulated to generate a more desirable MVIDMR of the object. In particular, the image data can be manipulated so that it appears as if the camera traveled a smoother trajectory during the capture of the images, which can provide a smoother output of the MVIDMR. In one embodiment, key point matching within the image data and, optionally, IMU data from a sensor package on the camera can be used to generate constraints used in a factor graph optimization that is used to generate a smoother trajectory of the camera.
Type: Grant
Filed: November 1, 2018
Date of Patent: June 16, 2020
Assignee: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Krunal Ketan Chande
-
Patent number: 10671412
Abstract: A programmable device including a memory and at least one processor coupled to the memory is provided. The memory stores a plurality of hybrid objects. Each hybrid object of the plurality of hybrid objects includes a native object wrapped by an interpreted object. The at least one processor can be coupled to the memory. The at least one processor can be configured to identify a message to execute an operation on one or more hybrid objects of the plurality of hybrid objects; clone, in response to reception of the message, each native object within the one or more hybrid objects to create one or more cloned native objects; wrap each cloned native object of the one or more cloned native objects with a new interpreted object to create one or more new hybrid objects; and execute the operation on the one or more new hybrid objects.
Type: Grant
Filed: March 21, 2019
Date of Patent: June 2, 2020
Assignee: Adobe Inc.
Inventors: Stavila Radu-Bogdan, Grecescu Ioan Vladimir
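The clone-then-wrap-then-execute sequence described above can be sketched as follows. This is a loose analogy in Python, with all class and function names hypothetical, standing in for the patent's interpreted-object wrappers around native objects:

```python
import copy

class InterpretedObject:
    """Interpreted-language wrapper around a native object (hypothetical)."""
    def __init__(self, native):
        self.native = native

def execute_on_hybrids(operation, hybrids):
    """Clone each wrapped native object, wrap the clones in new interpreted
    objects, and execute the operation on the new hybrid objects, leaving
    the original hybrid objects untouched."""
    new_hybrids = [InterpretedObject(copy.deepcopy(h.native)) for h in hybrids]
    for h in new_hybrids:
        operation(h)
    return new_hybrids

# Example: the operation mutates only the cloned hybrids
originals = [InterpretedObject({"count": 0})]
updated = execute_on_hybrids(lambda h: h.native.update(count=1), originals)
print(originals[0].native["count"], updated[0].native["count"])  # 0 1
```

Cloning before execution isolates the operation's side effects from the original objects, which is the point of the sequence the claim describes.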
-
Publication number: 20200167570
Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR, and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
Type: Application
Filed: January 31, 2020
Publication date: May 28, 2020
Applicant: Fyusion, Inc.
Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
-
Patent number: 10665024
Abstract: Various embodiments of the present invention relate generally to systems and methods for collecting, analyzing, and manipulating images and video. According to particular embodiments, live images captured by a camera on a mobile device may be analyzed as the mobile device moves along a path. The live images may be compared with a target view. A visual indicator may be provided to guide the alteration of the positioning of the mobile device to more closely align with the target view.
Type: Grant
Filed: July 5, 2018
Date of Patent: May 26, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
-
Patent number: 10659686
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera as the camera moves along a path. Then, a sequence of the images can be selected based upon sensor data from an inertial measurement unit and upon image data such that one of the live images is selected for each of a plurality of poses along the path. A multi-view interactive digital media representation may be created from the sequence of images, and the images may be encoded as a video via a designated encoding format.
Type: Grant
Filed: March 23, 2018
Date of Patent: May 19, 2020
Assignee: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen Miller
-
Patent number: 10650574
Abstract: Various embodiments of the present disclosure relate generally to systems and processes for generating stereo pairs for virtual reality. According to particular embodiments, a method comprises obtaining a monocular sequence of images using a single lens camera during a capture mode. The sequence of images is captured along a camera translation. Each image in the sequence of images contains at least a portion of overlapping subject matter, which includes an object. The method further comprises generating stereo pairs, for one or more points along the camera translation, for virtual reality using the sequence of images. Generating the stereo pairs may include: selecting frames for each stereo pair based on a spatial baseline; interpolating virtual images in between captured images in the sequence of images; correcting selected frames by rotating the images; and rendering the selected frames by assigning each image in the selected frames to left and right eyes.
Type: Grant
Filed: January 17, 2017
Date of Patent: May 12, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
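The first step listed above, selecting frames for each stereo pair based on a spatial baseline, can be sketched as below. Pairing each frame with the later frame whose distance best matches the target baseline is an assumption for illustration; the patent does not spell out the selection rule:

```python
import numpy as np

def select_stereo_pairs(positions, baseline):
    """Pick index pairs (i, j) whose camera positions are roughly one
    stereo baseline apart along the capture translation.

    positions: (N, 3) camera positions in capture order; baseline: target
    left-eye/right-eye separation in the same units. For each frame, the
    later frame whose distance best matches the baseline is its partner.
    """
    pairs = []
    for i in range(len(positions) - 1):
        dists = np.linalg.norm(positions[i + 1:] - positions[i], axis=1)
        j = i + 1 + int(np.argmin(np.abs(dists - baseline)))
        pairs.append((i, j))
    return pairs

# Cameras 2 cm apart along x; a 6 cm baseline pairs frame k with frame k+3
cams = np.stack([np.array([0.02 * k, 0.0, 0.0]) for k in range(6)])
print(select_stereo_pairs(cams, 0.06))
# [(0, 3), (1, 4), (2, 5), (3, 5), (4, 5)]
```

The later steps in the abstract (interpolating virtual frames, rotation correction, left/right assignment) would refine these pairs before rendering.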
-
Patent number: 10645371
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Grant
Filed: October 11, 2019
Date of Patent: May 5, 2020
Assignee: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
-
Publication number: 20200133462
Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
Type: Application
Filed: December 23, 2019
Publication date: April 30, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
-
Patent number: 10628675
Abstract: Provided are mechanisms and processes for performing skeleton detection and tracking via client-server communication. In one example, a server transmits a skeleton detection message that includes position data for a skeleton representing the structure of an object depicted in a first video frame in a raw video stream at a client device. Based on the initial position data, a processor identifies intervening position data for the skeleton in one or more intervening video frames that are temporally located after the first video frame in the raw video stream. A filtered video stream is then presented by altering the raw video stream based at least in part on the first position data and the intervening position data.
Type: Grant
Filed: February 7, 2017
Date of Patent: April 21, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
-
Patent number: 10608900
Abstract: Examples of the disclosure enable one or more operations to be executed or implemented while managing computational resources. In some examples, an instruction to implement a first operation is received. The first operation is associated with a first node of a plurality of nodes. The plurality of nodes are arranged in a plurality of regions. A second node of the plurality of nodes that is related to the first node is identified. On condition that the second node is arranged in an active region of the plurality of regions, a second operation associated with the second node is implemented within a period of time. On condition that the second node is not arranged in the active region, the second operation is not implemented within the period of time. Aspects of the disclosure enable a computing device to defer the implementation of an operation to facilitate managing computational resources.
Type: Grant
Filed: November 4, 2015
Date of Patent: March 31, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Olivier Colle, Jaideep Sarkar, Muralidhar Sathsahayaraman, Radu Bogdan Gruian
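The conditional deferral described above can be sketched as follows. All names here are hypothetical; the sketch simply shows related operations running immediately only when their node sits in an active region, and being deferred otherwise:

```python
def handle_instruction(first_node, related_nodes, active_regions, region_of):
    """Execute the first operation, then execute related operations only
    for nodes arranged in an active region; defer the rest.

    region_of: maps node -> region name.
    Returns (executed, deferred) lists of nodes.
    """
    executed, deferred = [first_node], []
    for node in related_nodes:
        if region_of[node] in active_regions:
            executed.append(node)   # second operation runs within the period
        else:
            deferred.append(node)   # implementation deferred to save resources
    return executed, deferred

# Node "b" sits in the active region "r1"; node "c" does not
region_of = {"a": "r1", "b": "r1", "c": "r2"}
print(handle_instruction("a", ["b", "c"], {"r1"}, region_of))
# (['a', 'b'], ['c'])
```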