Patents by Inventor Radu Bogdan Rusu
Radu Bogdan Rusu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190332866
Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR, and a structure-from-motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
Type: Application
Filed: November 2, 2018
Publication date: October 31, 2019
Applicant: Fyusion, Inc.
Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
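The last step of this abstract, projecting the 3-D skeleton into each frame to obtain per-frame tag locations, can be sketched as a standard pinhole projection. This is a minimal illustration, not the patent's actual implementation; the intrinsics and pose values below are hypothetical.

```python
import numpy as np

def project_landmarks(points_3d, rotation, translation, intrinsics):
    """Project 3-D landmark positions into a frame to get 2-D tag locations.

    points_3d:   (N, 3) landmark positions from the structure-from-motion step
    rotation:    (3, 3) camera rotation for this frame
    translation: (3,)   camera translation for this frame
    intrinsics:  (3, 3) pinhole camera matrix K
    """
    cam = points_3d @ rotation.T + translation  # world -> camera coordinates
    pix = cam @ intrinsics.T                    # apply K
    return pix[:, :2] / pix[:, 2:3]             # perspective divide -> (N, 2)

# Hypothetical example: identity pose and a simple 500-pixel focal length.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
skeleton = np.array([[0.0, 0.0, 2.0],   # e.g. a landmark 2 m straight ahead
                     [0.5, 0.0, 2.0]])
tags = project_landmarks(skeleton, np.eye(3), np.zeros(3), K)
print(tags)  # first landmark lands at the principal point (320, 240)
```

Re-running the projection with each frame's pose is what lets the tag track the landmark as the view of the object changes.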
-
Publication number: 20190313085
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of the object captured from a hand-held camera. Methods are described where image data associated with the images captured from the hand-held camera is manipulated to generate a more desirable MVIDMR of the object. In particular, the image data can be manipulated so that it appears as if the camera traveled a smoother trajectory during the capture of the images, which can provide a smoother output of the MVIDMR. In one embodiment, key point matching within the image data and, optionally, IMU data from a sensor package on the camera can be used to generate constraints used in a factor graph optimization that is used to generate a smoother trajectory of the camera.
Type: Application
Filed: November 1, 2018
Publication date: October 10, 2019
Applicant: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Krunal Ketan Chande
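The idea of replacing a jittery hand-held trajectory with a smoother one can be illustrated with a drastically simplified stand-in for the factor-graph optimization the abstract describes: a closed-form least squares that trades fidelity to the raw path against a second-difference (curvature) penalty. The weight and the synthetic path below are illustrative assumptions.

```python
import numpy as np

def smooth_trajectory(positions, weight=10.0):
    """Smooth a camera path by penalizing second differences.

    Minimizes ||x - p||^2 + weight * ||D x||^2 in closed form, where D is the
    second-difference operator. positions: (N, d) raw per-frame positions.
    """
    n = len(positions)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + weight * D.T @ D
    return np.linalg.solve(A, positions)

# Hypothetical jittery hand-held path: steady motion in x, noise in y.
rng = np.random.default_rng(0)
raw = np.stack([np.linspace(0, 1, 30), rng.normal(0, 0.05, 30)], axis=1)
smooth = smooth_trajectory(raw)

def roughness(path):
    """Total absolute second difference: how much the path bends."""
    return np.abs(np.diff(path, n=2, axis=0)).sum()

print(roughness(smooth) < roughness(raw))  # True: the smoothed path bends less
```

A real factor-graph solver would additionally fuse key point reprojection constraints and IMU measurements rather than fitting positions alone.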
-
Patent number: 10437879
Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
Type: Grant
Filed: January 18, 2017
Date of Patent: October 8, 2019
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
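The correspondence measure described here can be sketched as a weighted combination of spatial-information similarity and scale similarity. The dictionary fields, Gaussian weighting, and 50/50 weights below are illustrative guesses, not the patent's actual formulation.

```python
import math

def correspondence_measure(query, stored, w_spatial=0.5, w_scale=0.5):
    """Score the similarity between two surround views by comparing their
    spatial information and scale information (field names are hypothetical)."""
    spatial = math.exp(-sum((a - b) ** 2
                            for a, b in zip(query["spatial"], stored["spatial"])))
    scale = math.exp(-abs(query["scale"] - stored["scale"]))
    return w_spatial * spatial + w_scale * scale

query = {"spatial": (0.1, 0.2, 0.3), "scale": 1.0}
stored = [{"spatial": (0.1, 0.2, 0.3), "scale": 1.0},   # identical view
          {"spatial": (0.9, 0.1, 0.4), "scale": 2.5}]   # dissimilar view
scores = [correspondence_measure(query, s) for s in stored]
best = max(range(len(stored)), key=lambda i: scores[i])
print(best)  # 0: the identical stored view is the closest match
```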
-
Patent number: 10440351
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as via tilting of the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation.
Type: Grant
Filed: March 3, 2017
Date of Patent: October 8, 2019
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber
-
Patent number: 10430995
Abstract: The present disclosure relates to systems and processes for interpolating images of an object from a multi-directional structured image array. In particular embodiments, a plurality of images corresponding to a light field is obtained using a camera. Each image contains at least a portion of overlapping subject matter with another image. First, second, and third images are determined, which are the closest three images in the plurality of images to a desired image location in the light field. A first set of candidate transformations is identified between the first and second images, and a second set of candidate transformations is identified between the first and third images. For each pixel location in the desired image location, first and second best pixel values are calculated using the first and second set of candidate transformations, respectively, and the first and second best pixel values are blended to form an interpolated pixel.
Type: Grant
Filed: April 29, 2019
Date of Patent: October 1, 2019
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
-
Publication number: 20190297258
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera as the camera moves along a path. Then, a sequence of the images can be selected based upon sensor data from an inertial measurement unit and upon image data such that one of the live images is selected for each of a plurality of poses along the path. A multi-view interactive digital media representation may be created from the sequence of images, and the images may be encoded as a video via a designated encoding format.
Type: Application
Filed: March 23, 2018
Publication date: September 26, 2019
Applicant: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen Miller
-
Publication number: 20190281271
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Application
Filed: April 19, 2019
Publication date: September 12, 2019
Applicant: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
-
Publication number: 20190278434
Abstract: Various embodiments of the present disclosure relate generally to systems and methods for automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. These multi-view interactive digital media representations correspond to representations of dynamic objects against backgrounds. A first multi-view interactive digital media representation of a dynamic object is obtained. Next, the dynamic object is tagged. Then, a second multi-view interactive digital media representation of the dynamic object is generated. Finally, the dynamic object in the second multi-view interactive digital media representation is automatically identified and tagged.
Type: Application
Filed: May 30, 2019
Publication date: September 12, 2019
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
-
Publication number: 20190251738
Abstract: The present disclosure relates to systems and processes for interpolating images of an object from a multi-directional structured image array. In particular embodiments, a plurality of images corresponding to a light field is obtained using a camera. Each image contains at least a portion of overlapping subject matter with another image. First, second, and third images are determined, which are the closest three images in the plurality of images to a desired image location in the light field. A first set of candidate transformations is identified between the first and second images, and a second set of candidate transformations is identified between the first and third images. For each pixel location in the desired image location, first and second best pixel values are calculated using the first and second set of candidate transformations, respectively, and the first and second best pixel values are blended to form an interpolated pixel.
Type: Application
Filed: April 29, 2019
Publication date: August 15, 2019
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
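The final blending step, combining the two best pixel values into one interpolated pixel, can be sketched as a distance-weighted average. The inverse-distance weighting is an illustrative choice; the abstract does not specify the actual blending weights.

```python
def blend_pixels(value_a, value_b, dist_a, dist_b, eps=1e-9):
    """Blend the two best candidate pixel values, weighting each by the
    inverse distance of its source image to the desired image location."""
    w_a = 1.0 / (dist_a + eps)
    w_b = 1.0 / (dist_b + eps)
    return (w_a * value_a + w_b * value_b) / (w_a + w_b)

# A pixel value of 100 from a nearby view and 200 from a farther one:
print(blend_pixels(100.0, 200.0, dist_a=1.0, dist_b=3.0))  # ~125: nearer view dominates
```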
-
Patent number: 10382739
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. After the MVIDMR of the object is generated, a tag can be placed at a location on the object in the MVIDMR. The locations of the tag in the frames of the MVIDMR can vary from frame to frame as the view of the object changes. When the tag is selected, media content can be output which shows details of the object at the location where the tag is placed. In one embodiment, the object can be a car, and tags can be used to link to media content showing details of the car at the locations where the tags are placed.
Type: Grant
Filed: April 26, 2018
Date of Patent: August 13, 2019
Assignee: Fyusion, Inc.
Inventors: Radu Bogdan Rusu, Dave Morrison, Keith Martin, Stephen David Miller, Pantelis Kalogiros, Mike Penz, Martin Markus Hubert Wawro, Bojana Dumeljic, Jai Chaudhry, Luke Parham, Julius Santiago, Stefan Johannes Josef Holzer
-
Publication number: 20190244372
Abstract: Various embodiments of the present invention relate generally to systems and processes for interpolating images of an object. According to particular embodiments, a sequence of images is obtained using a camera which captures the sequence of images along a camera translation. Each image contains at least a portion of overlapping subject matter. A plurality of keypoints is identified on a first image of the sequence of images. Each keypoint from the first image is tracked to a second image. Using a predetermined algorithm, a plurality of transformations are computed using two randomly sampled keypoint correspondences, each of which includes a keypoint on the first image and a corresponding keypoint on the second image. An optimal subset of transformations is determined from the plurality of transformations based on predetermined criteria, and transformation parameters corresponding to the optimal subset of transformations are calculated and stored for on-the-fly interpolation.
Type: Application
Filed: April 15, 2019
Publication date: August 8, 2019
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Yuheng Ren
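Computing a transformation from two randomly sampled keypoint correspondences works because two point pairs fully determine a 4-DOF similarity transform (rotation, uniform scale, translation). A plain RANSAC loop, shown below as a sketch of the abstract's unspecified "predetermined algorithm", repeats the sampling and keeps the transform with the most inliers.

```python
import random

def similarity_from_two(p1, p2, q1, q2):
    """4-DOF similarity mapping p1->q1 and p2->q2. Points are complex
    numbers x + yj, which reduces the solve to a single division."""
    a = (q2 - q1) / (p2 - p1)   # rotation + scale
    return a, q1 - a * p1       # (a, translation b): z -> a*z + b

def ransac_similarity(src, dst, iters=200, tol=2.0, seed=0):
    """Sample two correspondences per iteration; keep the transform
    agreeing with the most correspondences within tol pixels."""
    rng = random.Random(seed)
    best_count, best_tf = 0, None
    for _ in range(iters):
        i, j = rng.sample(range(len(src)), 2)
        if src[i] == src[j]:
            continue   # degenerate sample
        a, b = similarity_from_two(src[i], src[j], dst[i], dst[j])
        count = sum(abs(a * s + b - d) < tol for s, d in zip(src, dst))
        if count > best_count:
            best_count, best_tf = count, (a, b)
    return best_count, best_tf

# Synthetic correspondences: rotate+scale by (1+1j), translate by 5, one outlier.
src = [complex(x, y) for x, y in [(0, 0), (1, 0), (0, 1), (2, 2), (3, 1)]]
dst = [(1 + 1j) * p + 5 for p in src]
dst[-1] += 40   # corrupt one correspondence
count, (a, b) = ransac_similarity(src, dst)
print(count)  # 4 of 5 correspondences are inliers
```

Storing only `(a, b)` for the winning transforms is what makes the later on-the-fly interpolation cheap.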
-
Publication number: 20190220991
Abstract: The present disclosure relates to systems and processes for capturing an unstructured light field in a plurality of images. In particular embodiments, a plurality of keypoints are identified on a first keyframe in a plurality of captured images. A first convex hull is computed from all keypoints in the first keyframe and merged with previous convex hulls corresponding to previous keyframes to form a convex hull union. Each keypoint is tracked from the first keyframe to a second image. The second image is adjusted to compensate for camera rotation during capture, and a second convex hull is computed from all keypoints in the second image. If the overlapping region between the second convex hull and the convex hull union is equal to, or less than, a predetermined size, the second image is designated as a new keyframe, and the convex hull union is augmented with the second convex hull.
Type: Application
Filed: January 4, 2019
Publication date: July 18, 2019
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
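The hull machinery behind this keyframe test can be sketched with Andrew's monotone chain and the shoelace formula. Note one simplification: the patent compares the overlap region against a predetermined size, while the proxy below designates a new keyframe when the frame's keypoints grow the union hull's area by a threshold. That substitution, and the threshold value, are illustrative assumptions.

```python
def cross(o, a, b):
    """2-D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(hull):
    """Shoelace formula for polygon area."""
    n = len(hull)
    return abs(sum(hull[i][0] * hull[(i + 1) % n][1]
                   - hull[(i + 1) % n][0] * hull[i][1]
                   for i in range(n))) / 2.0

def is_new_keyframe(union_pts, frame_pts, min_growth=0.2):
    """Designate the frame a new keyframe when its keypoints grow the
    convex hull union's area by at least min_growth (a proxy for the
    patent's direct overlap-region comparison)."""
    before = hull_area(convex_hull(union_pts))
    after = hull_area(convex_hull(union_pts + frame_pts))
    return after >= before * (1.0 + min_growth)

union = [(0, 0), (1, 0), (1, 1), (0, 1)]         # unit-square hull union
print(is_new_keyframe(union, [(2, 0), (2, 1)]))  # True: hull area doubles
print(is_new_keyframe(union, [(0.5, 0.5)]))      # False: interior point adds nothing
```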
-
Patent number: 10356395
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as via tilting of the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation.
Type: Grant
Filed: March 3, 2017
Date of Patent: July 16, 2019
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber
-
Patent number: 10356341
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of a person can be generated from live images of a person captured from a hand-held camera. Using the image data from the live images, a skeleton of the person and a boundary between the person and a background can be determined from different viewing angles and across multiple images. Using the skeleton and the boundary data, effects can be added to the person, such as wings. The effects can change from image to image to account for the different viewing angles of the person captured in each image.
Type: Grant
Filed: March 26, 2018
Date of Patent: July 16, 2019
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, Matteo Munaro, Alexander Jay Bruen Trevor, Nico Gregor Sebastian Blodow, Luo Yi Tan, Mike Penz, Martin Markus Hubert Wawro, Matthias Reso, Chris Beall, Yusuke Tomoto, Krunal Ketan Chande
-
Patent number: 10353946
Abstract: Provided are mechanisms and processes for performing live search using multi-view digital media representations. In one example, a process includes receiving a visual search query from a device for an object to be searched, where the visual search query includes a first set of viewpoints of the object obtained during capture of a first surround view of the object during a live search session. Next, additional recommended viewpoints of the object are identified for the device to capture, where the additional recommended viewpoints are chosen to provide more information about the object. A first set of search results based on the first set of viewpoints and additional recommended viewpoints of the object are transmitted to the device. In response, a second set of viewpoints of the object captured using image capture capabilities of the device are received. A second set of search results with enhanced matches for the object based on the first and second sets of viewpoints are then transmitted to the device.
Type: Grant
Filed: January 18, 2017
Date of Patent: July 16, 2019
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pantelis Kalogiros, Ioannis Spanos, Luke Parham, Radu Bogdan Rusu
-
Patent number: 10313651
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Grant
Filed: May 22, 2017
Date of Patent: June 4, 2019
Assignee: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
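Using IMU-estimated angular views to select from among the live images can be sketched as binning frames by swept yaw angle. Uniform 10-degree steps are an assumption for illustration; the patent does not prescribe a specific increment.

```python
def select_frames_by_angle(frames, yaws_deg, step_deg=10.0):
    """Keep one live image per angular increment of the sweep.

    frames:   per-frame identifiers, in capture order
    yaws_deg: per-frame yaw angles (degrees) estimated from IMU data
    """
    selected = []
    start, target = yaws_deg[0], 0.0
    for frame, yaw in zip(frames, yaws_deg):
        if abs(yaw - start) >= target:   # swept far enough for the next view
            selected.append(frame)
            target += step_deg
    return selected

# Hypothetical capture sweeping 35 degrees in 5-degree increments:
yaws = [0, 5, 10, 15, 20, 25, 30, 35]
print(select_frames_by_angle(list(range(8)), yaws))  # [0, 2, 4, 6]
```

Playing back the selected images in order is what produces the apparent 3-D rotation without any polygon model.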
-
Publication number: 20190156559
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
Type: Application
Filed: December 31, 2018
Publication date: May 23, 2019
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
-
Publication number: 20190158741
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images and determine when a three hundred sixty degree view of the object has been captured. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation, such as a three hundred sixty degree rotation, through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Application
Filed: January 25, 2019
Publication date: May 23, 2019
Applicant: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
-
Publication number: 20190149806
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Application
Filed: December 19, 2018
Publication date: May 16, 2019
Applicant: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
-
Publication number: 20190139310
Abstract: Various embodiments of the present invention relate generally to systems and methods for collecting, analyzing, and manipulating images and video. According to particular embodiments, live images captured by a camera on a mobile device may be analyzed as the mobile device moves along a path. The live images may be compared with a target view. A visual indicator may be provided to guide the alteration of the positioning of the mobile device to more closely align with the target view.
Type: Application
Filed: July 5, 2018
Publication date: May 9, 2019
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
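The visual-indicator idea, guiding the user to reposition the device toward the target view, can be sketched by mapping the offset between the current and target views to direction hints. Representing each view as a single (x, y) point in a normalized space is a simplifying assumption; the patent compares full camera views.

```python
def guidance_hint(current, target, tol=0.05):
    """Turn the offset between the live view and the target view into
    simple on-screen direction hints. Poses are hypothetical (x, y)
    points in a normalized image space."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    hints = []
    if dx > tol:
        hints.append("move right")
    elif dx < -tol:
        hints.append("move left")
    if dy > tol:
        hints.append("move up")
    elif dy < -tol:
        hints.append("move down")
    return hints or ["hold steady"]

print(guidance_hint((0.2, 0.5), (0.6, 0.5)))  # ['move right']
print(guidance_hint((0.5, 0.5), (0.5, 0.5)))  # ['hold steady']
```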