Patents by Inventor Vidhya Seran
Vidhya Seran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11830140
Abstract: An illustrative 3D modeling system generates a first voxelized representation of an object with respect to a voxel space and a second voxelized representation of the object with respect to the voxel space. Based on a first normal of a first voxel included in the first voxelized representation and a second normal of a second voxel included in the second voxelized representation, the 3D modeling system identifies a mergeable intersection between the first and second voxels. Based on the first and second voxelized representations, the 3D modeling system generates a merged voxelized representation of the object with respect to the voxel space. The merged voxelized representation includes a single merged voxel generated, based on the identified mergeable intersection, to represent both the first and second voxels. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: September 29, 2021
Date of Patent: November 28, 2023
Assignee: Verizon Patent and Licensing Inc.
Inventors: Elena Dotsenko, Vidhya Seran
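The abstract above does not specify how a "mergeable intersection" is detected, so the following is only a minimal sketch of the idea: two voxelizations are stored as dictionaries mapping grid coordinates to unit normals, and voxels occupying the same cell are merged into one when their normals agree (dot product above a threshold). The data layout, threshold, and tie-breaking rule are all assumptions.

```python
import math

def merge_voxelizations(vox_a, vox_b, cos_threshold=0.9):
    """Merge two voxelized representations of the same object.

    Each input maps a voxel-grid coordinate (i, j, k) to a unit surface
    normal (nx, ny, nz). Where both representations occupy the same cell
    and the normals agree (dot product >= cos_threshold), the two voxels
    form a mergeable intersection and are replaced by a single voxel
    carrying an averaged, re-normalized normal.
    """
    merged = {}
    for coord in set(vox_a) | set(vox_b):
        if coord in vox_a and coord in vox_b:
            na, nb = vox_a[coord], vox_b[coord]
            dot = sum(a * b for a, b in zip(na, nb))
            if dot >= cos_threshold:
                # Mergeable intersection: average and re-normalize.
                avg = [(a + b) / 2 for a, b in zip(na, nb)]
                length = math.sqrt(sum(c * c for c in avg)) or 1.0
                merged[coord] = tuple(c / length for c in avg)
            else:
                # Normals disagree: keep the first representation's voxel
                # (an arbitrary choice for this sketch).
                merged[coord] = na
        else:
            merged[coord] = vox_a.get(coord, vox_b.get(coord))
    return merged
```

Keying the grid by integer coordinates keeps the merge a single dictionary pass; a production voxelizer would more likely use a dense grid or an octree.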
-
Publication number: 20230326075
Abstract: An illustrative camera calibration system accesses a set of images captured by one or more cameras at a scene. The set of images may each depict a same element of image content present at the scene, but may depict the element of image content differently in each image so as to show an apparent movement of the element from one image to another. The camera calibration system applies a structure-from-motion algorithm to the set of images to generate calibration parameters for the one or more cameras based on the apparent movement of the element of image content shown in the set of images. Additionally, the camera calibration system provides the calibration parameters for the one or more cameras to a 3D modeling system configured to model the scene based on the set of images. Corresponding methods and systems are also disclosed.
Type: Application
Filed: April 7, 2022
Publication date: October 12, 2023
Inventors: Liang Luo, Igor Gomon, Lonnie Souder, Vidhya Seran, Timothy Atchley, Elena Dotsenko
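A full structure-from-motion solver jointly estimates camera poses and 3D structure from many feature tracks; the toy function below illustrates only the core geometric relationship the abstract relies on — apparent pixel movement of one tracked element maps to camera motion under a pinhole model. The fronto-parallel-motion and known-depth assumptions are mine, not the publication's.

```python
def estimate_translation_from_motion(track, focal_px, depth_m):
    """Estimate per-frame camera translation from the apparent movement
    of a single tracked scene element.

    `track` lists the element's x pixel coordinate in consecutive images.
    Under a pinhole model with focal length `focal_px` (pixels) and the
    element at depth `depth_m` (metres), a pixel displacement dx implies
    a lateral camera translation of dx * depth / focal (similar triangles).
    """
    translations = []
    for x0, x1 in zip(track, track[1:]):
        dx = x1 - x0                                # apparent movement, pixels
        translations.append(dx * depth_m / focal_px)  # translation, metres
    return translations
```

A real structure-from-motion pipeline would aggregate thousands of such constraints across many tracks and solve for full rotations and translations via bundle adjustment.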
-
Patent number: 11715270
Abstract: An illustrative content augmentation system identifies a presentation context dataset indicating how an extended reality (XR) presentation device is to present XR content in connection with a presentation of primary content. Based on the presentation context dataset, the content augmentation system selects a subset of secondary content items from a set of secondary content items each configured for presentation as XR content that augments the presentation of the primary content. The content augmentation system also provides the selected subset of secondary content items for presentation by the XR presentation device in connection with the presentation of the primary content. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: September 14, 2021
Date of Patent: August 1, 2023
Assignee: Verizon Patent and Licensing Inc.
Inventors: Duane Burton Molitor, Denny Breitenfeld, Vidhya Seran, Elena Dotsenko, Eric Norman Azares, Kevin M. Byrd, Jeremy E. Hampsten, Robert Strelec, Timothy Atchley
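The patent does not define the schema of the "presentation context dataset," so the following sketch invents a minimal one: the context is a flat dict of key/value facts, and each secondary content item declares the context values it requires. The field names (`requires`, `device`, `scene`) are hypothetical.

```python
def select_secondary_items(context, items):
    """Select the subset of secondary XR content items compatible with a
    presentation context.

    `context` is a dict such as {"device": "headset", "scene": "sports"};
    each item carries a `requires` dict of context keys it must match.
    An item with an empty `requires` dict matches every context.
    """
    return [
        item for item in items
        if all(context.get(k) == v for k, v in item["requires"].items())
    ]
```

A production system would likely score and rank items rather than hard-filter, but the subset-selection shape matches the abstract.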
-
Publication number: 20230169690
Abstract: An illustrative point cloud compression system accesses an input point cloud dataset representative of a point cloud comprising a plurality of points. The point cloud compression system identifies a first attribute dataset and a second attribute dataset within the input point cloud dataset. Based on an application of a transform algorithm to the first and second attribute datasets, respectively, the point cloud compression system generates 1) a first low-frequency component and a first high-frequency component of the first attribute dataset, and 2) a second low-frequency component and a second high-frequency component of the second attribute dataset. The point cloud compression system then generates an output point cloud dataset that prioritizes both the first and second low-frequency components above both the first and second high-frequency components. Corresponding methods and systems are also disclosed.
Type: Application
Filed: November 30, 2021
Publication date: June 1, 2023
Inventors: Vidhya Seran, Syed B Aziz
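The publication names only "a transform algorithm," not a specific one; a one-level Haar transform stands in here as the simplest transform that splits a signal into low- and high-frequency components. The sketch shows the ordering idea from the abstract: both attributes' low-frequency components are emitted ahead of both high-frequency components, so a truncated stream still carries a coarse version of every attribute.

```python
def haar_split(values):
    """One level of a Haar transform over an even-length attribute list:
    pairwise averages (low-frequency) and pairwise half-differences
    (high-frequency)."""
    low = [(a + b) / 2 for a, b in zip(values[0::2], values[1::2])]
    high = [(a - b) / 2 for a, b in zip(values[0::2], values[1::2])]
    return low, high

def prioritized_stream(attr1, attr2):
    """Order both low-frequency components before both high-frequency
    components, as described in the abstract."""
    low1, high1 = haar_split(attr1)
    low2, high2 = haar_split(attr2)
    return low1 + low2 + high1 + high2
```

The original attributes are exactly recoverable (low + high and low - high reconstruct each pair), so the prioritization is lossless until the stream is truncated.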
-
Publication number: 20230098187
Abstract: An illustrative 3D modeling system generates a first voxelized representation of an object with respect to a voxel space and a second voxelized representation of the object with respect to the voxel space. Based on a first normal of a first voxel included in the first voxelized representation and a second normal of a second voxel included in the second voxelized representation, the 3D modeling system identifies a mergeable intersection between the first and second voxels. Based on the first and second voxelized representations, the 3D modeling system generates a merged voxelized representation of the object with respect to the voxel space. The merged voxelized representation includes a single merged voxel generated, based on the identified mergeable intersection, to represent both the first and second voxels. Corresponding methods and systems are also disclosed.
Type: Application
Filed: September 29, 2021
Publication date: March 30, 2023
Inventors: Elena Dotsenko, Vidhya Seran
-
Publication number: 20230083884
Abstract: An illustrative content augmentation system identifies a presentation context dataset indicating how an extended reality (XR) presentation device is to present XR content in connection with a presentation of primary content. Based on the presentation context dataset, the content augmentation system selects a subset of secondary content items from a set of secondary content items each configured for presentation as XR content that augments the presentation of the primary content. The content augmentation system also provides the selected subset of secondary content items for presentation by the XR presentation device in connection with the presentation of the primary content. Corresponding methods and systems are also disclosed.
Type: Application
Filed: September 14, 2021
Publication date: March 16, 2023
Inventors: Duane Burton Molitor, Denny Breitenfeld, Vidhya Seran, Elena Dotsenko, Eric Norman Azares, Kevin M. Byrd, Jeremy E. Hampsten, Robert Strelec, Timothy Atchley
-
Patent number: 11527014
Abstract: An illustrative scene capture system determines a set of two-dimensional (2D) feature pairs each representing a respective correspondence between particular features depicted in both a first intensity image from a first vantage point and a second intensity image from a second vantage point. Based on the set of 2D feature pairs, the system determines a set of candidate three-dimensional (3D) feature pairs for a first depth image from the first vantage point and a second depth image from the second vantage point. The system selects a subset of selected 3D feature pairs from the set of candidate 3D feature pairs in a manner configured to minimize an error associated with a transformation between the first depth image and the second depth image. Based on the subset of selected 3D feature pairs, the system manages calibration parameters for surface data capture devices that captured the intensity and depth images.
Type: Grant
Filed: November 24, 2020
Date of Patent: December 13, 2022
Assignee: Verizon Patent and Licensing Inc.
Inventors: Elena Dotsenko, Liang Luo, Tom Hsi Hao Shang, Vidhya Seran
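The patent does not disclose the selection procedure, only its goal: choose the subset of candidate 3D feature pairs that minimizes the error of the inter-view transformation. The sketch below is a stripped-down consensus (RANSAC-style) selection under the simplifying assumption that the transformation is a pure translation; a real calibration pipeline would estimate a full rigid transform (rotation plus translation).

```python
def select_consistent_pairs(candidates, tol=0.1):
    """Pick the subset of candidate 3D feature pairs consistent with a
    single rigid translation between two views.

    Each candidate is a pair ((x, y, z) in view 1, (x, y, z) in view 2).
    The translation implied by each candidate is scored by how many other
    candidates agree with it within `tol`; the largest consensus set wins,
    which discards outlier correspondences that would inflate the
    transformation error.
    """
    def implied_t(pair):
        a, b = pair
        return tuple(q - p for p, q in zip(a, b))

    def agrees(t1, t2):
        return all(abs(u - v) <= tol for u, v in zip(t1, t2))

    best = []
    for pivot in candidates:
        t = implied_t(pivot)
        inliers = [c for c in candidates if agrees(implied_t(c), t)]
        if len(inliers) > len(best):
            best = inliers
    return best
```

The surviving inliers would then feed the calibration-parameter update for the capture devices.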
-
Publication number: 20220164988
Abstract: An illustrative scene capture system determines a set of two-dimensional (2D) feature pairs each representing a respective correspondence between particular features depicted in both a first intensity image from a first vantage point and a second intensity image from a second vantage point. Based on the set of 2D feature pairs, the system determines a set of candidate three-dimensional (3D) feature pairs for a first depth image from the first vantage point and a second depth image from the second vantage point. The system selects a subset of selected 3D feature pairs from the set of candidate 3D feature pairs in a manner configured to minimize an error associated with a transformation between the first depth image and the second depth image. Based on the subset of selected 3D feature pairs, the system manages calibration parameters for surface data capture devices that captured the intensity and depth images.
Type: Application
Filed: November 24, 2020
Publication date: May 26, 2022
Inventors: Elena Dotsenko, Liang Luo, Tom Hsi Hao Shang, Vidhya Seran
-
Patent number: 11315306
Abstract: An illustrative volumetric processing system generates a plurality of point clouds each representing an object from a different vantage point. Based on the plurality of point clouds, the volumetric processing system consolidates point cloud data corresponding to a surface of the object. Based on the consolidated point cloud data for the object, the volumetric processing system generates a voxel grid representative of the object. Based on the voxel grid, the volumetric processing system generates a set of rendered patches each depicting at least a part of the surface of the object. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: March 12, 2021
Date of Patent: April 26, 2022
Assignee: Verizon Patent and Licensing Inc.
Inventors: Denny Breitenfeld, Vidhya Seran, Nazneen Khan, Richard J. Kern, II
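The first two stages of the pipeline in this abstract — consolidating per-vantage-point point clouds and quantizing them into a voxel grid — can be sketched in a few lines. This is a minimal occupancy-grid version; the patent's voxel grid and the downstream patch rendering are far richer, and the cubic-cell quantization here is an assumption.

```python
def consolidate(point_clouds):
    """Merge per-vantage-point point clouds into one list of points
    (a stand-in for consolidating surface data across capture devices)."""
    return [p for cloud in point_clouds for p in cloud]

def voxelize(points, voxel_size):
    """Quantize consolidated (x, y, z) points into an occupancy voxel
    grid: each occupied cell is keyed by its integer grid coordinate."""
    grid = set()
    for x, y, z in points:
        grid.add((int(x // voxel_size),
                  int(y // voxel_size),
                  int(z // voxel_size)))
    return grid
```

From such a grid, a patch renderer would carve out surface regions and rasterize each into a rendered patch; that stage is omitted here.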
-
Patent number: 11263167
Abstract: An exemplary method includes establishing, by a sending device, a communication session with a receiving device, the communication session configured to transfer an array of related, sequenced data blocks from the sending device to the receiving device. The method includes sending one or more parameter messages including parameters associated with the array of data blocks. The method includes receiving one or more parameter acknowledgement messages responsive to the one or more parameter messages, the one or more parameter acknowledgement messages including a plurality of memory addresses of the receiving device, the plurality of memory addresses including a respective memory address for each data block of the array of data blocks. The method includes sending the array of data blocks to the receiving device, each data block of the array of data blocks sent to the respective memory address using a remote direct memory access (RDMA) protocol.
Type: Grant
Filed: March 30, 2020
Date of Patent: March 1, 2022
Assignee: Verizon Patent and Licensing Inc.
Inventors: Richard John Kern, II, Vidhya Seran
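The message flow in the abstract — parameter message, acknowledgement carrying one destination address per block, then direct per-block writes — can be modeled without real RDMA hardware. Below, a toy receiver's dict stands in for registered memory and `rdma_write` stands in for an RDMA write verb; all class and method names are hypothetical.

```python
class Receiver:
    """Toy receiver that pre-registers one memory address per expected
    data block and accepts direct writes to those addresses, standing in
    for RDMA write semantics."""

    def __init__(self):
        self.memory = {}

    def ack_parameters(self, num_blocks, block_size):
        # The parameter-acknowledgement message: one destination
        # address per block of the announced array.
        self.addresses = [i * block_size for i in range(num_blocks)]
        return self.addresses

    def rdma_write(self, address, data):
        # Direct placement: no per-block request/response round trip.
        self.memory[address] = data

def send_array(blocks, block_size, receiver):
    """Sender side: announce the array's parameters, collect the
    per-block addresses from the acknowledgement, then write each block
    straight to its assigned address."""
    addresses = receiver.ack_parameters(len(blocks), block_size)
    for addr, block in zip(addresses, blocks):
        receiver.rdma_write(addr, block)
    return addresses
```

The point of pre-exchanging addresses is that the bulk transfer then needs no per-block coordination, which is what makes RDMA-style placement fast.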
-
Publication number: 20210303507
Abstract: An exemplary method includes establishing, by a sending device, a communication session with a receiving device, the communication session configured to transfer an array of related, sequenced data blocks from the sending device to the receiving device. The method includes sending one or more parameter messages including parameters associated with the array of data blocks. The method includes receiving one or more parameter acknowledgement messages responsive to the one or more parameter messages, the one or more parameter acknowledgement messages including a plurality of memory addresses of the receiving device, the plurality of memory addresses including a respective memory address for each data block of the array of data blocks. The method includes sending the array of data blocks to the receiving device, each data block of the array of data blocks sent to the respective memory address using a remote direct memory access (RDMA) protocol.
Type: Application
Filed: March 30, 2020
Publication date: September 30, 2021
Inventors: Richard John Kern, II, Vidhya Seran
-
Publication number: 20210201563
Abstract: An illustrative volumetric processing system generates a plurality of point clouds each representing an object from a different vantage point. Based on the plurality of point clouds, the volumetric processing system consolidates point cloud data corresponding to a surface of the object. Based on the consolidated point cloud data for the object, the volumetric processing system generates a voxel grid representative of the object. Based on the voxel grid, the volumetric processing system generates a set of rendered patches each depicting at least a part of the surface of the object. Corresponding methods and systems are also disclosed.
Type: Application
Filed: March 12, 2021
Publication date: July 1, 2021
Inventors: Denny Breitenfeld, Vidhya Seran, Nazneen Khan, Richard J. Kern, II
-
Patent number: 11006141
Abstract: An exemplary image generation system accesses a full atlas frame sequence that incorporates a set of image sequences combined within the full atlas frame sequence as atlas tiles. The system generates a first partial atlas frame sequence that incorporates a first subset of image sequences selected from the set of image sequences incorporated in the full atlas frame sequence, as well as a second partial atlas frame sequence that incorporates a second subset of image sequences selected from the set of image sequences. The second subset includes a different combination of image sequences than the first subset and includes at least one image sequence in common with the first subset. The system provides the first partial atlas frame sequence to a first video encoder and the second partial atlas frame sequence to a second video encoder communicatively coupled with the first video encoder. Corresponding methods and systems are also disclosed.
Type: Grant
Filed: March 19, 2020
Date of Patent: May 11, 2021
Assignee: Verizon Patent and Licensing Inc.
Inventors: Oliver S. Castaneda, Nazneen Khan, Denny Breitenfeld, Dan Sun, Vidhya Seran
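The partial-atlas construction described here is essentially subset extraction over named tiles, with the constraint that the two subsets differ yet overlap in at least one tile. A minimal sketch, representing the full atlas as a dict from tile name to its frame sequence (the tile-name keys and dict layout are assumptions):

```python
def partial_atlas(full_atlas, tile_names):
    """Extract a partial atlas frame sequence: the subset of tiled
    image sequences named in `tile_names`, drawn from the full atlas."""
    return {name: full_atlas[name] for name in tile_names}

def overlapping_partials(full_atlas, subset_a, subset_b):
    """Build two partial atlases destined for two coordinated encoders,
    enforcing the abstract's constraint: different tile combinations
    that share at least one tile."""
    assert set(subset_a) != set(subset_b), "subsets must differ"
    assert set(subset_a) & set(subset_b), "subsets must share a tile"
    return partial_atlas(full_atlas, subset_a), partial_atlas(full_atlas, subset_b)
```

The shared tile is what lets the two communicatively coupled encoders coordinate their outputs while each handles only part of the full atlas.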
-
Patent number: 10977855
Abstract: An exemplary volumetric processing system includes a set of point cloud generators, a point cloud organizer, a voxelizer, and a set of patch renderers. The point cloud generators correspond to image capture systems disposed at different vantage points to capture color and depth data for an object located within a capture area. The point cloud generators generate respective point clouds for the different vantage points based on the captured surface data. The point cloud organizer consolidates point cloud data that corresponds to a surface of the object. The voxelizer generates a voxel grid representative of the object based on the consolidated point cloud data. The set of patch renderers generates, based on the voxel grid, a set of rendered patches each depicting at least a part of the surface of the object. Corresponding systems and methods are also disclosed.
Type: Grant
Filed: September 30, 2019
Date of Patent: April 13, 2021
Assignee: Verizon Patent and Licensing Inc.
Inventors: Denny Breitenfeld, Vidhya Seran, Nazneen Khan, Richard J. Kern, II
-
Publication number: 20210097752
Abstract: An exemplary volumetric processing system includes a set of point cloud generators, a point cloud organizer, a voxelizer, and a set of patch renderers. The point cloud generators correspond to image capture systems disposed at different vantage points to capture color and depth data for an object located within a capture area. The point cloud generators generate respective point clouds for the different vantage points based on the captured surface data. The point cloud organizer consolidates point cloud data that corresponds to a surface of the object. The voxelizer generates a voxel grid representative of the object based on the consolidated point cloud data. The set of patch renderers generates, based on the voxel grid, a set of rendered patches each depicting at least a part of the surface of the object. Corresponding systems and methods are also disclosed.
Type: Application
Filed: September 30, 2019
Publication date: April 1, 2021
Inventors: Denny Breitenfeld, Vidhya Seran, Nazneen Khan, Richard J. Kern, II
-
Publication number: 20200221114
Abstract: An exemplary image generation system accesses a full atlas frame sequence that incorporates a set of image sequences combined within the full atlas frame sequence as atlas tiles. The system generates a first partial atlas frame sequence that incorporates a first subset of image sequences selected from the set of image sequences incorporated in the full atlas frame sequence, as well as a second partial atlas frame sequence that incorporates a second subset of image sequences selected from the set of image sequences. The second subset includes a different combination of image sequences than the first subset and includes at least one image sequence in common with the first subset. The system provides the first partial atlas frame sequence to a first video encoder and the second partial atlas frame sequence to a second video encoder communicatively coupled with the first video encoder. Corresponding methods and systems are also disclosed.
Type: Application
Filed: March 19, 2020
Publication date: July 9, 2020
Inventors: Oliver S. Castaneda, Nazneen Khan, Denny Breitenfeld, Dan Sun, Vidhya Seran
-
Patent number: 10638151
Abstract: An exemplary video encoding system accesses an image set that includes first and second consecutive color data images depicting a virtual reality scene from a particular vantage point, and first and second consecutive depth data images corresponding to the color data images. The system performs a first-pass video encoding of the image set by identifying motion vector data associated with a transformation from the first to the second color data image, and abstaining from analyzing a transformation from the first to the second depth data image. The system then performs a second-pass video encoding of the image set based on the identified motion vector data by encoding the first and second color data images into a color video stream to be rendered by a media player device, and the first and second depth data images into a depth video stream to be rendered by the media player device.
Type: Grant
Filed: May 31, 2018
Date of Patent: April 28, 2020
Assignee: Verizon Patent and Licensing Inc.
Inventors: Oliver S. Castaneda, Nazneen Khan, Denny Breitenfeld, Dan Sun, Vidhya Seran
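The key insight in this abstract — color and depth images of the same scene share the same motion, so motion search done once on the color pass can be reused for the depth pass — can be shown with a toy one-dimensional "frame." The exhaustive-shift search and residual coding below are stand-ins for a real block-based encoder's motion estimation and prediction.

```python
def find_shift(prev, curr, max_shift=3):
    """First pass: exhaustive 1-D motion search on the color signal.
    Returns the shift s minimizing the mean absolute difference between
    prev[i] and curr[i + s] over the overlapping samples."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(prev[i], curr[i + s]) for i in range(len(prev))
                 if 0 <= i + s < len(curr)]
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

def encode_with_shared_vectors(color0, color1, depth0, depth1):
    """Second pass: reuse the motion vector found on the color pair to
    predict the depth pair, abstaining from a separate depth motion
    search. Returns the shared shift and the depth prediction residual
    over the overlapping samples."""
    shift = find_shift(color0, color1)
    residual = [depth1[i] - depth0[i - shift]
                for i in range(len(depth1)) if 0 <= i - shift < len(depth0)]
    return shift, residual
```

When the depth really does move with the color (the common case for a rigid scene), the residual is near zero and the depth stream compresses almost for free.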
-
Publication number: 20190373278
Abstract: An exemplary video encoding system accesses an image set that includes first and second consecutive color data images depicting a virtual reality scene from a particular vantage point, and first and second consecutive depth data images corresponding to the color data images. The system performs a first-pass video encoding of the image set by identifying motion vector data associated with a transformation from the first to the second color data image, and abstaining from analyzing a transformation from the first to the second depth data image. The system then performs a second-pass video encoding of the image set based on the identified motion vector data by encoding the first and second color data images into a color video stream to be rendered by a media player device, and the first and second depth data images into a depth video stream to be rendered by the media player device.
Type: Application
Filed: May 31, 2018
Publication date: December 5, 2019
Inventors: Oliver S. Castaneda, Nazneen Khan, Denny Breitenfeld, Dan Sun, Vidhya Seran
-
Publication number: 20160344700
Abstract: A proxy server may receive, from a user endpoint, a secure connection request to a second server. The secure connection request may be matched to a globally unique identifier registered for the user endpoint by employing a device-specified identifier associated with the globally unique identifier. The proxy server may respond with an acknowledgement to the user endpoint. The proxy server may intercept, from the user endpoint, a first secure handshake with the second server. The proxy server may initiate a second secure handshake with the second server based on the intercepted first secure handshake. The proxy server may intercept from the second server a second secure handshake response comprising a server certificate with metadata. The proxy server may generate a second certificate using the metadata and signed by a first certificate authority associated with the globally unique identifier registered for the user endpoint.
Type: Application
Filed: October 14, 2015
Publication date: November 24, 2016
Inventors: William L. Gaddy, Vidhya Seran, Stephen Andrew Norwalk, John Galluzzo, Vincent James Spinella
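The final step of this flow — re-issuing the intercepted server certificate under a per-endpoint certificate authority — can be modeled without real cryptography. Certificates are plain dicts below and the "signature" is a stand-in string; a real proxy would build and sign X.509 structures. All field names are hypothetical.

```python
def reissue_certificate(server_cert, ca_for_guid, guid):
    """Model of the proxy's re-signing step: copy the subject metadata
    from the intercepted server certificate into a new certificate
    signed by the certificate authority registered for the endpoint's
    globally unique identifier (GUID)."""
    ca = ca_for_guid[guid]  # CA associated with this endpoint's GUID
    return {
        "subject": server_cert["subject"],      # metadata carried over
        "san": server_cert["san"],              # metadata carried over
        "issuer": ca["name"],                   # re-signed by the per-GUID CA
        "signature": f'{ca["name"]}:{server_cert["subject"]}',  # stand-in
    }
```

Because the endpoint trusts its registered CA, the re-issued certificate lets the proxy terminate the endpoint-side handshake while maintaining its own handshake with the second server.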
-
Patent number: 9479681
Abstract: A computer implemented method for automatically identifying shot changes in a video sequence in real-time or near-real-time is disclosed. Optical flow energy change differences between frames, sum-of-square differences between optical-flow-compensated frames, and hue histogram changes within frames are analyzed and stored in frame buffers. A feature vector formed from a combination of these measurements is compared to a feature vector formed from thresholds based on tunable recall and precision to declare the presence or absence of a shot change.
Type: Grant
Filed: April 4, 2013
Date of Patent: October 25, 2016
Assignee: A2ZLOGIX, INC.
Inventors: William L. Gaddy, Vidhya Seran
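The decision step in this abstract reduces to comparing a per-frame feature vector against a threshold vector. The sketch below assumes the three measurements are already computed and uses a simple "all components exceed their thresholds" rule; the patent's actual comparison and the recall/precision tuning of the thresholds are not specified in the abstract.

```python
def hue_histogram_change(hist_a, hist_b):
    """Total-variation (half-L1) distance between two normalized hue
    histograms; 0 means identical hue content, 1 means fully disjoint."""
    return sum(abs(a - b) for a, b in zip(hist_a, hist_b)) / 2

def detect_shot_change(flow_energy_delta, ssd, hue_delta,
                       thresholds=(0.5, 0.3, 0.4)):
    """Compare the per-frame feature vector (optical-flow energy change,
    sum-of-squared differences of flow-compensated frames, hue histogram
    change) against a threshold vector. Declaring a cut only when every
    component exceeds its threshold is this sketch's rule, not
    necessarily the patent's."""
    features = (flow_energy_delta, ssd, hue_delta)
    return all(f > t for f, t in zip(features, thresholds))
```

Requiring all three cues to fire keeps precision high (a flash of light moves SSD but not hue structure), while the tunable thresholds trade that precision against recall.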