Patents by Inventor Abhishek Kar
Abhishek Kar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10950033
Abstract: A plurality of images may be analyzed to determine an object model. The object model may have a plurality of components, and each of the images may correspond with one or more of the components. Component condition information may be determined for one or more of the components based on the images. The component condition information may indicate damage incurred by the object portion corresponding with the component.
Type: Grant
Filed: November 22, 2019
Date of Patent: March 16, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
-
Patent number: 10949978
Abstract: A segmentation of an object depicted in a first visual representation may be determined. The segmentation may include for each image a first respective image portion that includes the object, a second respective image portion that includes a respective ground area located beneath the object, and a third respective image portion that includes a background area located above the second respective portion and behind the object. A second visual representation may be constructed that includes the first respective image portion and a target background image portion that replaces the third respective image portion and that is selected from a target background image based on an area of the third respective image portion relative to the respective image.
Type: Grant
Filed: July 22, 2019
Date of Patent: March 16, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matthias Reso, Abhishek Kar, Julius Santiago, Pavel Hanchar, Radu Bogdan Rusu
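As an illustration of the segmentation-driven background replacement this abstract describes, here is a minimal Python sketch. All names, the per-pixel label scheme, and the compositing rule are illustrative assumptions, not taken from the patent:

```python
def replace_background(image, segmentation, target_background):
    """Composite an image over a new backdrop, swapping in pixels from
    `target_background` only where `segmentation` labels the pixel as
    'background'; object and ground pixels are kept unchanged.

    `image` and `target_background` are 2D grids of pixel values;
    `segmentation` holds one of 'object', 'ground', or 'background'
    per pixel.
    """
    result = []
    for y, row in enumerate(image):
        out_row = []
        for x, pixel in enumerate(row):
            if segmentation[y][x] == "background":
                out_row.append(target_background[y][x])  # new backdrop pixel
            else:
                out_row.append(pixel)  # keep object and ground pixels
        result.append(out_row)
    return result
```

In practice the patent's method also selects *which* portion of the target background image to use, based on the area of the original background region; this sketch only shows the final per-pixel composite.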
-
Patent number: 10950032
Abstract: Pixels in a visual representation of an object that includes one or more perspective view images may be mapped to a standard view of the object. Based on the mapping, a portion of the object captured in the visual representation of the object may be identified. A user interface on a display device may indicate the identified object portion.
Type: Grant
Filed: July 22, 2019
Date of Patent: March 16, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Aidas Liaudanskas, Matthias Reso, Alexander Jay Bruen Trevor, Radu Bogdan Rusu
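The identification step this abstract describes can be sketched as a lookup: once perspective-view pixels are mapped into standard-view coordinates, a pixel's component is whichever labeled standard-view region it falls in. The mapping and the region table are assumed inputs here (producing them, e.g. with a learned model, is the substantive part of the invention); all names are hypothetical:

```python
def identify_component(pixel_to_standard, component_regions, pixel):
    """Report which labeled object component a perspective-view pixel
    belongs to, given a precomputed mapping to standard-view
    coordinates and axis-aligned component regions in that view.

    `pixel_to_standard` maps (x, y) -> (sx, sy);
    `component_regions` maps a component name -> (x0, y0, x1, y1).
    Returns the component name, or None if the pixel maps outside
    every region.
    """
    sx, sy = pixel_to_standard[pixel]
    for name, (x0, y0, x1, y1) in component_regions.items():
        if x0 <= sx < x1 and y0 <= sy < y1:
            return name
    return None
```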
-
Patent number: 10911732
Abstract: An estimated camera pose may be determined for each of a plurality of single plane images of a designated three-dimensional scene. The sampling density of the single plane images may be below the Nyquist rate. However, the sampling density may be sufficiently high that the single plane images may be promoted to multiplane images and used to generate novel viewpoints in a light field reconstruction framework. Scene depth information, identifying a respective depth value for each of a plurality of pixels, may be determined for each single plane image. A respective multiplane image including a respective plurality of depth planes may be determined for each single plane image. Each of the depth planes may include a respective plurality of pixels from the respective single plane image.
Type: Grant
Filed: September 18, 2019
Date of Patent: February 2, 2021
Assignee: Fyusion, Inc.
Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
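The "promotion" step this abstract describes (turning a single plane image plus per-pixel depth into a multiplane image) can be sketched as binning each pixel into the nearest of a fixed set of fronto-parallel depth planes. The function signature and data layout are illustrative assumptions; the patent describes the idea, not this API:

```python
def promote_to_multiplane(pixels, depths, plane_depths):
    """Assign each pixel of a single plane image to the nearest of a
    fixed set of depth planes, yielding a simple multiplane image
    represented as {plane_depth: [(x, y, value), ...]}.

    `pixels` is a 2D grid of pixel values, `depths` the matching 2D
    grid of per-pixel depth values, `plane_depths` the depths of the
    planes making up the multiplane image.
    """
    planes = {d: [] for d in plane_depths}
    for y, row in enumerate(pixels):
        for x, value in enumerate(row):
            d = depths[y][x]
            # Snap the pixel to the depth plane closest to its depth.
            nearest = min(plane_depths, key=lambda p: abs(p - d))
            planes[nearest].append((x, y, value))
    return planes
```

Real multiplane-image pipelines store each plane as an RGBA layer and blend softly across neighboring planes rather than hard-assigning pixels; the hard assignment here is just the simplest version of the same idea.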
-
Patent number: 10893250
Abstract: A respective target viewpoint may be rendered for each of a plurality of multiplane images of a three-dimensional scene. Each of the multiplane images may be associated with a respective single plane image of the three-dimensional scene captured from a respective viewpoint. Each of the multiplane images may include a respective plurality of depth planes. Each of the depth planes may include a respective plurality of pixels from the respective single plane image. Each of the pixels in the depth plane may be positioned at approximately the same distance from the respective viewpoint. A weighted combination of the target viewpoint renderings may be determined, where the sampling density of the single plane images is sufficiently high that the weighted combination satisfies the inequality in Equation (7). The weighted combination of the target viewpoint renderings may be transmitted as a novel viewpoint image.
Type: Grant
Filed: September 18, 2019
Date of Patent: January 12, 2021
Assignee: Fyusion, Inc.
Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
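The final step of this abstract, forming a weighted combination of per-source-view renderings of the same target viewpoint, reduces to a normalized per-pixel blend. A minimal sketch, assuming each rendering is a 2D grid of scalar pixel values and the weights (in practice typically based on proximity of each source view to the target view) are given:

```python
def blend_renderings(renderings, weights):
    """Blend several renderings of the same target viewpoint into one
    image using normalized weights: out = sum_i (w_i / W) * image_i,
    where W is the sum of all weights."""
    total = sum(weights)
    height, width = len(renderings[0]), len(renderings[0][0])
    out = [[0.0] * width for _ in range(height)]
    for rendering, w in zip(renderings, weights):
        share = w / total  # normalized contribution of this source view
        for y in range(height):
            for x in range(width):
                out[y][x] += rendering[y][x] * share
    return out
```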
-
Patent number: 10887582
Abstract: Images of an object may be analyzed to determine individual damage maps of the object. Each damage map may represent damage to an object depicted in one of the images. The damage may be represented in a standard view of the object. An aggregated damage map for the object may be determined based on the individual damage maps.
Type: Grant
Filed: October 8, 2019
Date of Patent: January 5, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pavel Hanchar, Matteo Munaro, Aidas Liaudanskas, Radu Bogdan Rusu
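Once every per-image damage map has been warped into the same standard view, aggregation becomes a per-cell fusion over aligned grids. A minimal sketch; the per-cell maximum used here is one plausible fusion rule, not necessarily the patented one:

```python
def aggregate_damage_maps(damage_maps):
    """Fuse per-image damage maps, each already expressed in the same
    standard view of the object, into one aggregated map by taking
    the per-cell maximum damage score.

    `damage_maps` is a non-empty list of equally sized 2D grids of
    damage scores (e.g. 0.0 = undamaged, 1.0 = severe).
    """
    height, width = len(damage_maps[0]), len(damage_maps[0][0])
    aggregated = [[0.0] * width for _ in range(height)]
    for damage in damage_maps:
        for y in range(height):
            for x in range(width):
                aggregated[y][x] = max(aggregated[y][x], damage[y][x])
    return aggregated
```

Taking the maximum means damage seen clearly in any single image survives into the aggregate, at the cost of also keeping any per-image false positives; an averaging rule would trade those properties the other way.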
-
Patent number: 10863210
Abstract: Provided are mechanisms and processes for performing live filtering in a camera view via client-server communication. In one example, a first video frame in a raw video stream is transmitted from a client device to a server. The client device receives a filter processing message associated with the first video frame that includes filter data for applying a filter to the first video frame. A processor at the client device creates a filtered video stream by applying the filter to a second video frame that occurs in the video stream later than the first video frame.
Type: Grant
Filed: August 7, 2018
Date of Patent: December 8, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
-
Publication number: 20200349757
Abstract: Pixels in a visual representation of an object that includes one or more perspective view images may be mapped to a standard view of the object. Based on the mapping, a portion of the object captured in the visual representation of the object may be identified. A user interface on a display device may indicate the identified object portion.
Type: Application
Filed: July 22, 2019
Publication date: November 5, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Aidas Liaudanskas, Matthias Reso, Alexander Jay Bruen Trevor, Radu Bogdan Rusu
-
Publication number: 20200257862
Abstract: A tag characterizing a portion of a multi-view interactive digital media representation (MVIDMR) may be determined by applying a grammar to natural language data. The MVIDMR may include images of an object and may be navigable in one or more dimensions. An object model location for the tag identifying a location within a three-dimensional object model may be determined by applying the grammar to the natural language data. The tag may then be applied to the MVIDMR by associating it with two or more of the images at positions determined based on the object model location.
Type: Application
Filed: April 28, 2020
Publication date: August 13, 2020
Applicant: Fyusion, Inc.
Inventors: Abhishek Kar, Martin Markus Hubert Wawro, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
-
Patent number: 10726560
Abstract: Various embodiments describe systems and processes for generating AR/VR content. In one aspect, a method for generating a 3D projection of an object in a virtual reality or augmented reality environment comprises obtaining a sequence of images along a camera translation using a single lens camera. Each image contains a portion of overlapping subject matter, including the object. The object is segmented from the sequence of images using a trained segmenting neural network to form a sequence of segmented object images, to which an art-style transfer is applied using a trained transfer neural network. On-the-fly interpolation parameters are computed and stereoscopic pairs are generated for points along the camera translation from the refined sequence of segmented object images for displaying the object as a 3D projection in a virtual reality or augmented reality environment. Segmented image indices are mapped to a rotation range for display in the virtual reality or augmented reality environment.
Type: Grant
Filed: February 7, 2017
Date of Patent: July 28, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Yuheng Ren, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Martin Josef Nikolaus Saelzle, Radu Bogdan Rusu
-
Publication number: 20200234466
Abstract: The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton.
Type: Application
Filed: July 22, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Pavel Hanchar, Abhishek Kar, Matteo Munaro, Krunal Ketan Chande, Radu Bogdan Rusu
-
Publication number: 20200234397
Abstract: A three-dimensional (3D) skeleton may be determined based on a plurality of vertices and a plurality of faces in a two-dimensional (2D) mesh in a top-down image of an object. A correspondence mapping between a designated perspective view image and the top-down object image may be determined based on the 3D skeleton. The correspondence mapping may link a respective first location in the top-down object image to a respective second location in the designated perspective view image for each of a plurality of points in the designated perspective view image. A top-down mapped image of the object may be created by determining a first respective pixel value for each of the first locations, with each first respective pixel value being determined based on a second respective pixel value for the respective second location linked with the respective first location via the correspondence mapping.
Type: Application
Filed: July 22, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Aidas Liaudanskas, Abhishek Kar, Krunal Ketan Chande, Radu Bogdan Rusu
-
Publication number: 20200234424
Abstract: Reference images of an object may be mapped to an object model to create a reference object model representation. Evaluation images of the object may also be mapped to the object model via the processor to create an evaluation object model representation. Object condition information may be determined by comparing the reference object model representation with the evaluation object model representation. The object condition information may indicate one or more differences between the reference object model representation and the evaluation object model representation. A graphical representation of the object model that includes the object condition information may be displayed on a display screen.
Type: Application
Filed: November 22, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
-
Publication number: 20200234488
Abstract: A plurality of images may be analyzed to determine an object model. The object model may have a plurality of components, and each of the images may correspond with one or more of the components. Component condition information may be determined for one or more of the components based on the images. The component condition information may indicate damage incurred by the object portion corresponding with the component.
Type: Application
Filed: November 22, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
-
Publication number: 20200236343
Abstract: Images of an object may be analyzed to determine individual damage maps of the object. Each damage map may represent damage to an object depicted in one of the images. The damage may be represented in a standard view of the object. An aggregated damage map for the object may be determined based on the individual damage maps.
Type: Application
Filed: October 8, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pavel Hanchar, Matteo Munaro, Aidas Liaudanskas, Radu Bogdan Rusu
-
Publication number: 20200236296
Abstract: One or more images of an object, each from a respective viewpoint, may be captured at a camera at a mobile computing device. The images may be compared to reference data to identify a difference between the images and the reference data. Image capture guidance may be provided on a display screen for capturing another one or more images of the object that includes the identified difference.
Type: Application
Filed: November 22, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
-
Publication number: 20200234451
Abstract: A segmentation of an object depicted in a first visual representation may be determined. The segmentation may include for each image a first respective image portion that includes the object, a second respective image portion that includes a respective ground area located beneath the object, and a third respective image portion that includes a background area located above the second respective portion and behind the object. A second visual representation may be constructed that includes the first respective image portion and a target background image portion that replaces the third respective image portion and that is selected from a target background image based on an area of the third respective image portion relative to the respective image.
Type: Application
Filed: July 22, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matthias Reso, Abhishek Kar, Julius Santiago, Pavel Hanchar, Radu Bogdan Rusu
-
Publication number: 20200234398
Abstract: According to various embodiments, component information may be identified for each input image of an object. The component information may indicate a portion of the input image in which a particular component of the object is depicted. A viewpoint may be determined for each input image that indicates a camera pose for the input image relative to the object. A three-dimensional skeleton of the object may be determined based on the viewpoints and the component information. A multi-view panel corresponding to the designated component of the object, navigable in three dimensions and including the portions of the input images in which the designated component is depicted, may be stored on a storage device.
Type: Application
Filed: July 22, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Martin Markus Hubert Wawro, Abhishek Kar, Pavel Hanchar, Krunal Ketan Chande, Radu Bogdan Rusu
-
Patent number: 10719939
Abstract: Various embodiments describe systems and processes for generating AR/VR content. In one aspect, a method for generating a three-dimensional (3D) projection of an object is provided. A sequence of images along a camera translation may be obtained using a single lens camera. Each image contains at least a portion of overlapping subject matter, which includes the object. The object is semantically segmented from the sequence of images using a trained neural network to form a sequence of segmented object images, which are then refined using fine-grained segmentation. On-the-fly interpolation parameters are computed and stereoscopic pairs are generated for points along the camera translation from the refined sequence of segmented object images for displaying the object as a 3D projection in a virtual reality or augmented reality environment. Segmented image indices are then mapped to a rotation range for display in the virtual reality or augmented reality environment.
Type: Grant
Filed: February 8, 2017
Date of Patent: July 21, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Yuheng Ren, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Martin Josef Nikolaus Saelzle, Radu Bogdan Rusu
-
Publication number: 20200226736
Abstract: A sampling density for capturing a plurality of two-dimensional images of a three-dimensional scene may be determined. The sampling density may be below the Nyquist rate. However, the sampling density may be sufficiently high such that captured images may be promoted to multiplane images and used to generate novel viewpoints in a light field reconstruction framework. Recording guidance may be provided at a display screen on a mobile computing device based on the determined sampling density. The recording guidance may identify a plurality of camera poses at which to position a camera to capture images of the three-dimensional scene. A plurality of images captured via the camera based on the recording guidance may be stored on a storage device.
Type: Application
Filed: September 18, 2019
Publication date: July 16, 2020
Applicant: Fyusion, Inc.
Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
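To give a flavor of the sampling-density determination this abstract describes: in plenoptic-sampling analyses of light fields, representing the scene with D depth planes relaxes the Nyquist bound on view spacing by roughly a factor of D, so the tolerated disparity between adjacent captures grows with the number of planes in the multiplane image. The formula and parameter names below are an illustrative approximation of that intuition, not the bound claimed in the patent:

```python
def max_camera_spacing(focal_px, num_planes, min_depth):
    """Rough upper bound on the camera baseline between adjacent
    captures, chosen so the maximum pixel disparity between adjacent
    views stays within `num_planes` pixels:

        disparity = focal_px * spacing / min_depth <= num_planes
        =>  spacing <= num_planes * min_depth / focal_px

    `focal_px` is the focal length in pixels, `num_planes` the number
    of depth planes in the multiplane image, and `min_depth` the
    nearest scene depth (same length units as the returned spacing).
    """
    return num_planes * min_depth / focal_px
```

Recording guidance would then place target camera poses no farther apart than this spacing, so that every captured view can be promoted to a multiplane image and blended into novel viewpoints without aliasing.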