Patents by Inventor Elena Dotsenko
Elena Dotsenko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240169585
Abstract: An illustrative camera calibration system may identify a first configuration of a set of differentiable point targets as depicted in a first image of a scene and a second configuration of the set of differentiable point targets as depicted in a second image of the scene. The differentiable point targets may each be independently positionable within the scene, and the first and second images may be respectively captured by first and second image capture devices in accordance with first and second poses with respect to the scene. Based on the identified first and second configurations of the differentiable point targets, the camera calibration system may generate calibration parameters for a camera array that includes the first and second image capture devices. The calibration parameters may be configured to represent the first and second poses of the first and second image capture devices. Corresponding methods and systems are also disclosed.
Type: Application
Filed: November 21, 2022
Publication date: May 23, 2024
Inventors: Tom Hsi Hao Shang, Elena Dotsenko
-
METHODS AND SYSTEMS FOR PRODUCING VOLUMETRIC CONTENT USING COMBINED FIXED AND DYNAMIC CAMERA SYSTEMS
Publication number: 20240144606
Abstract: An illustrative volumetric content production system may access first capture data and second capture data. The first capture data may represent an entirety of a capture target and may be captured by a fixed camera system including a first plurality of image capture devices having respective fixed viewpoints with respect to the capture target. The second capture data may represent a portion of the capture target less than the entirety of the capture target and may be captured by a dynamic camera system including a second plurality of image capture devices having respective dynamic viewpoints with respect to the capture target. Based on the first and second capture data, the volumetric content production system may generate a volumetric representation of an object included in the portion of the capture target in accordance with principles described herein. Corresponding methods and systems are also disclosed.
Type: Application
Filed: October 28, 2022
Publication date: May 2, 2024
Inventors: Liang Luo, Tom Hsi Hao Shang, Vidhya Seran, Elena Dotsenko
-
Publication number: 20240143263
Abstract: An illustrative content server system associated with a server domain may transmit, to a client device associated with a client domain, a data stream representing media content. The media content is represented at a first quality level for presentation within a presentation environment as the data stream is transmitted. Based on environment data captured in the client domain to represent real-time conditions of the presentation environment as the media content is presented, the content server system may determine an engagement parameter representing a real-time engagement level of a user with respect to the media content. Based on this engagement parameter, the content server system may adjust, in real time during the transmitting of the data stream and presentation of the media content, the data stream to represent the media content at a second quality level that is different from the first quality level. Corresponding methods and systems are also disclosed.
Type: Application
Filed: October 27, 2022
Publication date: May 2, 2024
Inventors: Liang Luo, Tom Hsi Hao Shang, Elena Dotsenko, Vidhya Seran
-
Patent number: 11854228
Abstract: An illustrative image processing system determines that an object depicted in an image is an instance of an object type for which a machine learning model is available to the image processing system. In response to determining that the object is an instance of the object type, the image processing system obtains pose data generated based on the machine learning model to represent how objects of the object type are capable of being posed. The image processing system generates a volumetric representation of the object in an estimated pose. The pose is estimated, based on the image and the pose data, independently of depth data for the object. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: April 5, 2022
Date of Patent: December 26, 2023
Assignee: Verizon Patent and Licensing Inc.
Inventors: Tom Hsi Hao Shang, Elena Dotsenko, Liang Luo
-
Patent number: 11830140
Abstract: An illustrative 3D modeling system generates a first voxelized representation of an object with respect to a voxel space and a second voxelized representation of the object with respect to the voxel space. Based on a first normal of a first voxel included in the first voxelized representation and a second normal of a second voxel included in the second voxelized representation, the 3D modeling system identifies a mergeable intersection between the first and second voxels. Based on the first and second voxelized representations, the 3D modeling system generates a merged voxelized representation of the object with respect to the voxel space. The merged voxelized representation includes a single merged voxel generated, based on the identified mergeable intersection, to represent both the first and second voxels. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: September 29, 2021
Date of Patent: November 28, 2023
Assignee: Verizon Patent and Licensing Inc.
Inventors: Elena Dotsenko, Vidhya Seran
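The mergeable-intersection idea in this abstract can be illustrated with a small sketch. This is a simplified, hypothetical illustration, not the patented method: the function name `merge_voxels`, the dictionary-based voxel space, and the cosine threshold are all assumptions. Two voxelized representations share an integer voxel grid, and a voxel present in both is collapsed into a single merged voxel when its two normals agree.

```python
import numpy as np

def merge_voxels(voxels_a, voxels_b, normals_a, normals_b, cos_threshold=0.9):
    """Merge two voxelized representations of the same object.

    Voxels are integer (i, j, k) coordinates in a shared voxel space.
    Where a voxel appears in both representations and its normals agree
    (dot product above the threshold), the pair is collapsed into a
    single merged voxel whose normal is the normalized average.
    """
    index_b = {tuple(v): n for v, n in zip(voxels_b, normals_b)}
    merged = {}
    for v, n_a in zip(voxels_a, normals_a):
        key = tuple(v)
        if key in index_b:
            n_b = index_b[key]
            if float(np.dot(n_a, n_b)) >= cos_threshold:  # mergeable intersection
                n = n_a + n_b
                merged[key] = n / np.linalg.norm(n)
                continue
        merged[key] = n_a
    # keep voxels that exist only in the second representation
    for key, n_b in index_b.items():
        merged.setdefault(key, n_b)
    return merged
```

Under this sketch, two overlapping representations of two voxels each would merge into three voxels, with the shared voxel carried once.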
-
Publication number: 20230326075
Abstract: An illustrative camera calibration system accesses a set of images captured by one or more cameras at a scene. The set of images may each depict a same element of image content present at the scene, but may depict the element of image content differently in each image so as to show an apparent movement of the element from one image to another. The camera calibration system applies a structure-from-motion algorithm to the set of images to generate calibration parameters for the one or more cameras based on the apparent movement of the element of image content shown in the set of images. Additionally, the camera calibration system provides the calibration parameters for the one or more cameras to a 3D modeling system configured to model the scene based on the set of images. Corresponding methods and systems are also disclosed.
Type: Application
Filed: April 7, 2022
Publication date: October 12, 2023
Inventors: Liang Luo, Igor Gomon, Lonnie Souder, Vidhya Seran, Timothy Atchley, Elena Dotsenko
-
Patent number: 11715270
Abstract: An illustrative content augmentation system identifies a presentation context dataset indicating how an extended reality (XR) presentation device is to present XR content in connection with a presentation of primary content. Based on the presentation context dataset, the content augmentation system selects a subset of secondary content items from a set of secondary content items each configured for presentation as XR content that augments the presentation of the primary content. The content augmentation system also provides the selected subset of secondary content items for presentation by the XR presentation device in connection with the presentation of the primary content. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: September 14, 2021
Date of Patent: August 1, 2023
Assignee: Verizon Patent and Licensing Inc.
Inventors: Duane Burton Molitor, Denny Breitenfeld, Vidhya Seran, Elena Dotsenko, Eric Norman Azares, Kevin M. Byrd, Jeremy E. Hampsten, Robert Strelec, Timothy Atchley
-
Publication number: 20230098187
Abstract: An illustrative 3D modeling system generates a first voxelized representation of an object with respect to a voxel space and a second voxelized representation of the object with respect to the voxel space. Based on a first normal of a first voxel included in the first voxelized representation and a second normal of a second voxel included in the second voxelized representation, the 3D modeling system identifies a mergeable intersection between the first and second voxels. Based on the first and second voxelized representations, the 3D modeling system generates a merged voxelized representation of the object with respect to the voxel space. The merged voxelized representation includes a single merged voxel generated, based on the identified mergeable intersection, to represent both the first and second voxels. Corresponding methods and systems are also disclosed.
Type: Application
Filed: September 29, 2021
Publication date: March 30, 2023
Inventors: Elena Dotsenko, Vidhya Seran
-
Publication number: 20230083884
Abstract: An illustrative content augmentation system identifies a presentation context dataset indicating how an extended reality (XR) presentation device is to present XR content in connection with a presentation of primary content. Based on the presentation context dataset, the content augmentation system selects a subset of secondary content items from a set of secondary content items each configured for presentation as XR content that augments the presentation of the primary content. The content augmentation system also provides the selected subset of secondary content items for presentation by the XR presentation device in connection with the presentation of the primary content. Corresponding methods and systems are also disclosed.
Type: Application
Filed: September 14, 2021
Publication date: March 16, 2023
Inventors: Duane Burton Molitor, Denny Breitenfeld, Vidhya Seran, Elena Dotsenko, Eric Norman Azares, Kevin M. Byrd, Jeremy E. Hampsten, Robert Strelec, Timothy Atchley
-
Publication number: 20230064963
Abstract: An illustrative image processing system extracts a first color-field image from an original color image associated with a set of color-field components. The first color-field image is associated with a first subset of the set of color-field components. The image processing system also extracts a second color-field image from the original color image. The second color-field image is associated with a second subset of the set of color-field components that is different from the first subset. The image processing system detects a first set of features within the first color-field image and a second set of features within the second color-field image. At least one feature is detected within the first color-field image and included in the first set of features while not being detected within the second color-field image or included in the second set of features. Corresponding methods and systems are also disclosed.
Type: Application
Filed: August 25, 2021
Publication date: March 2, 2023
Inventors: Tom Hsi Hao Shang, Elena Dotsenko
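The color-field idea above can be sketched in a few lines. This is a toy illustration under stated assumptions (the helper names `extract_color_field` and `detect_features`, and the trivial local-maximum "detector", are inventions for this sketch, not the claimed system): a feature visible in one color field may be absent from a field built from a different subset of color components.

```python
import numpy as np

def extract_color_field(image, channels):
    """Build a grayscale color-field image from a subset of the RGB
    color-field components, e.g. channels=(0,) for the red field."""
    return image[..., list(channels)].mean(axis=-1)

def detect_features(field, threshold=0.5):
    """Toy detector: a pixel counts as a feature if it exceeds the
    threshold and is a strict local maximum along its row."""
    feats = []
    for y in range(field.shape[0]):
        for x in range(1, field.shape[1] - 1):
            v = field[y, x]
            if v > threshold and v > field[y, x - 1] and v > field[y, x + 1]:
                feats.append((y, x))
    return feats

# A single red pixel on a green background: it is a bright feature in the
# red color-field image but invisible (a dark spot) in the green one.
img = np.zeros((5, 5, 3))
img[..., 1] = 1.0          # green background
img[2, 2] = [1.0, 0.0, 0.0]  # one red pixel
red_feats = detect_features(extract_color_field(img, (0,)))
green_feats = detect_features(extract_color_field(img, (1,)))
```

Here `red_feats` contains the red pixel's location while `green_feats` is empty, matching the abstract's point that different color-field subsets expose different features.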
-
Patent number: 11527014
Abstract: An illustrative scene capture system determines a set of two-dimensional (2D) feature pairs each representing a respective correspondence between particular features depicted in both a first intensity image from a first vantage point and a second intensity image from a second vantage point. Based on the set of 2D feature pairs, the system determines a set of candidate three-dimensional (3D) feature pairs for a first depth image from the first vantage point and a second depth image from the second vantage point. The system selects a subset of selected 3D feature pairs from the set of candidate 3D feature pairs in a manner configured to minimize an error associated with a transformation between the first depth image and the second depth image. Based on the subset of selected 3D feature pairs, the system manages calibration parameters for surface data capture devices that captured the intensity and depth images.
Type: Grant
Filed: November 24, 2020
Date of Patent: December 13, 2022
Assignee: Verizon Patent and Licensing Inc.
Inventors: Elena Dotsenko, Liang Luo, Tom Hsi Hao Shang, Vidhya Seran
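Minimizing the error of a transformation between paired 3D features is, in spirit, a rigid-alignment problem. As a hedged illustration (not the patented selection procedure), the classic Kabsch algorithm gives the least-squares rotation and translation between paired 3D points, after which pairs with large residuals can be rejected; the function names and tolerance below are assumptions for this sketch.

```python
import numpy as np

def estimate_rigid_transform(points_a, points_b):
    """Least-squares rotation R and translation t with R @ a + t = b
    for each 3D feature pair (Kabsch algorithm)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

def select_inlier_pairs(pairs_a, pairs_b, tol=1e-2):
    """Keep candidate 3D feature pairs whose residual under the fitted
    transform is below tol, discarding pairs that inflate the error."""
    R, t = estimate_rigid_transform(pairs_a, pairs_b)
    resid = np.linalg.norm((np.asarray(pairs_a) @ R.T + t) - np.asarray(pairs_b), axis=1)
    return [i for i, keep in enumerate(resid < tol) if keep]
```

With noiseless correspondences the fitted transform reproduces the true rotation and translation, and every pair is retained as an inlier.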
-
Patent number: 11403781
Abstract: An illustrative image processing system identifies an object depicted in an image captured by a camera. The image processing system locates, within the image, a set of calibration points corresponding to features of the object. Based on the set of calibration points, the image processing system performs a calibration operation with respect to the camera. Additionally, the image processing system generates model data based on the image. The model data is representative of a model of the object depicted in the image. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: August 25, 2020
Date of Patent: August 2, 2022
Assignee: Verizon Patent and Licensing Inc.
Inventors: Liang Luo, Elena Dotsenko, Tom Hsi Hao Shang
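A calibration operation driven by located calibration points typically amounts to minimizing reprojection error under a camera model. The sketch below is a generic pinhole-model illustration, not the patented operation; the function names, the single focal length, and the omission of lens distortion are all simplifying assumptions.

```python
import numpy as np

def project_points(points_3d, focal, principal_point, R, t):
    """Pinhole projection of 3D calibration points into the image plane."""
    p_cam = points_3d @ R.T + t                      # world -> camera frame
    x = focal * p_cam[:, 0] / p_cam[:, 2] + principal_point[0]
    y = focal * p_cam[:, 1] / p_cam[:, 2] + principal_point[1]
    return np.stack([x, y], axis=1)

def reprojection_error(points_3d, points_2d, focal, principal_point, R, t):
    """Mean pixel distance between observed calibration points and their
    projections; a calibration operation searches for the camera
    parameters that minimize this quantity."""
    proj = project_points(points_3d, focal, principal_point, R, t)
    return float(np.linalg.norm(proj - points_2d, axis=1).mean())
```

When the camera parameters used for projection match those that produced the observations, the reprojection error is zero.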
-
Publication number: 20220230356
Abstract: An illustrative image processing system determines that an object depicted in an image is an instance of an object type for which a machine learning model is available to the image processing system. In response to determining that the object is an instance of the object type, the image processing system obtains pose data generated based on the machine learning model to represent how objects of the object type are capable of being posed. The image processing system generates a volumetric representation of the object in an estimated pose. The pose is estimated, based on the image and the pose data, independently of depth data for the object. Corresponding methods and systems are also disclosed.
Type: Application
Filed: April 5, 2022
Publication date: July 21, 2022
Inventors: Tom Hsi Hao Shang, Elena Dotsenko, Liang Luo
-
Publication number: 20220164988
Abstract: An illustrative scene capture system determines a set of two-dimensional (2D) feature pairs each representing a respective correspondence between particular features depicted in both a first intensity image from a first vantage point and a second intensity image from a second vantage point. Based on the set of 2D feature pairs, the system determines a set of candidate three-dimensional (3D) feature pairs for a first depth image from the first vantage point and a second depth image from the second vantage point. The system selects a subset of selected 3D feature pairs from the set of candidate 3D feature pairs in a manner configured to minimize an error associated with a transformation between the first depth image and the second depth image. Based on the subset of selected 3D feature pairs, the system manages calibration parameters for surface data capture devices that captured the intensity and depth images.
Type: Application
Filed: November 24, 2020
Publication date: May 26, 2022
Inventors: Elena Dotsenko, Liang Luo, Tom Hsi Hao Shang, Vidhya Seran
-
Patent number: 11328445
Abstract: An illustrative image processing system determines calibration parameters for a set of cameras including a first camera configured to capture a scene from a first vantage point and a second camera configured to capture the scene from a second vantage point. The image processing system obtains pose data for an object included in the scene and depicted by first and second images captured, respectively, by the first and second cameras. The pose data is representative of how the object is capable of being posed. Based on the calibration parameters, the pose data, and the first and second images, the image processing system estimates a pose of the object in the scene independently of depth data for the object. The image processing system also generates model data of the scene that includes a volumetric representation of the object in the estimated pose. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: October 16, 2020
Date of Patent: May 10, 2022
Assignee: Verizon Patent and Licensing Inc.
Inventors: Tom Hsi Hao Shang, Elena Dotsenko, Liang Luo
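Recovering 3D structure from two calibrated vantage points without a depth sensor, as the abstract describes, rests on multi-view geometry. As a hedged, textbook-style sketch (classic linear DLT triangulation, not the patented pose-estimation method; the function name is an assumption), a 3D point can be recovered from its two image projections given each camera's 3x4 projection matrix.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from its projections x1
    and x2 in two cameras with 3x4 projection matrices P1 and P2.
    Depth comes from the two calibrated views alone, no depth sensor."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null space of A gives the point
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize
```

For example, with identity intrinsics, a first camera at the origin, and a second camera translated one unit along x, the point (0, 0, 5) projects to (0, 0) and (-0.2, 0), and triangulation recovers it exactly.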
-
Publication number: 20220122289
Abstract: An illustrative image processing system determines calibration parameters for a set of cameras including a first camera configured to capture a scene from a first vantage point and a second camera configured to capture the scene from a second vantage point. The image processing system obtains pose data for an object included in the scene and depicted by first and second images captured, respectively, by the first and second cameras. The pose data is representative of how the object is capable of being posed. Based on the calibration parameters, the pose data, and the first and second images, the image processing system estimates a pose of the object in the scene independently of depth data for the object. The image processing system also generates model data of the scene that includes a volumetric representation of the object in the estimated pose. Corresponding methods and systems are also disclosed.
Type: Application
Filed: October 16, 2020
Publication date: April 21, 2022
Inventors: Tom Hsi Hao Shang, Elena Dotsenko, Liang Luo
-
Publication number: 20220067967
Abstract: An illustrative image processing system identifies an object depicted in an image captured by a camera. The image processing system locates, within the image, a set of calibration points corresponding to features of the object. Based on the set of calibration points, the image processing system performs a calibration operation with respect to the camera. Additionally, the image processing system generates model data based on the image. The model data is representative of a model of the object depicted in the image. Corresponding methods and systems are also disclosed.
Type: Application
Filed: August 25, 2020
Publication date: March 3, 2022
Inventors: Liang Luo, Elena Dotsenko, Tom Hsi Hao Shang
-
Patent number: 7643671
Abstract: A facial identification system corrects lighting and pose in images prior to comparison with stored images. A three-dimensional image is created from an original two-dimensional image by combining the image with shape information. An iterative process is used to adjust the shape in order to match the original two-dimensional image. A final image is rendered, with adjustments for lighting and pose, from the shape information.
Type: Grant
Filed: January 21, 2004
Date of Patent: January 5, 2010
Assignee: Animetrics Inc.
Inventors: Kenneth Dong, Elena Dotsenko
-
Publication number: 20050008199
Abstract: A facial identification system corrects lighting and pose in images prior to comparison with stored images. A three-dimensional image is created from an original two-dimensional image by combining the image with shape information. An iterative process is used to adjust the shape in order to match the original two-dimensional image. A final image is rendered, with adjustments for lighting and pose, from the shape information.
Type: Application
Filed: January 21, 2004
Publication date: January 13, 2005
Inventors: Kenneth Dong, Elena Dotsenko