Patents by Inventor Matthew A. Shreve

Matthew A. Shreve has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12367656
    Abstract: A system determines an input video and a first annotated image from the input video which identifies an object of interest. The system initiates a tracker based on the first annotated image and the input video. The tracker generates, based on the first annotated image and the input video, information including: a sliding window for false positives; a first set of unlabeled images from the input video; and at least two images with corresponding labeled states. A semi-supervised classifier classifies, based on the information, the first set of unlabeled images from the input video. If a first unlabeled image is classified as a false positive, the system reinitiates the tracker based on a second annotated image occurring in a frame prior to a frame with the false positive. The system generates an output video comprising the input video displayed with tracking on the object of interest.
    Type: Grant
    Filed: September 8, 2022
    Date of Patent: July 22, 2025
    Assignee: Xerox Corporation
    Inventors: Matthew A. Shreve, Robert R. Price, Jeyasri Subramanian, Sumeet Menon
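The re-initialization loop described in this abstract can be illustrated with a minimal sketch. The names here (`track_with_reinit`, `is_false_positive`) are hypothetical, and the tracker and semi-supervised classifier are reduced to stand-in callables; only the control flow is shown: track forward, and on a false positive, restart from the most recent trusted annotation in an earlier frame.

```python
def track_with_reinit(frames, first_frame, is_false_positive):
    """Track forward from the first annotated frame; when the classifier
    flags frame i as a false positive, re-initialize the tracker from the
    most recent trusted annotation occurring before frame i."""
    trusted = [first_frame]   # the first annotated frame is assumed trusted
    corrected = set()         # frames already fixed by re-annotation
    tracked = []              # order in which frames were (re)tracked
    i = first_frame
    while i < len(frames):
        if is_false_positive(i) and i not in corrected:
            anchor = max(f for f in trusted if f < i)  # prior annotated frame
            corrected.add(i)       # the new annotation corrects frame i
            trusted.append(anchor)
            i = anchor             # resume tracking from the earlier frame
        else:
            tracked.append(i)
            i += 1
    return tracked
```

Re-tracked frames appear twice in the output here, mirroring the abstract's idea that the stretch after the prior annotation is tracked again once the tracker is re-initiated.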
  • Patent number: 12223595
    Abstract: A system is provided which mixes static scene and live annotations for labeled dataset collection. A first recording device obtains a 3D mesh of a scene with physical objects. The first recording device marks, while in a first mode, first annotations for a physical object displayed in the 3D mesh. The system switches to a second mode. The system displays, on the first recording device while in the second mode, the 3D mesh including a first projection indicating a 2D bounding area corresponding to the marked first annotations. The first recording device marks, while in the second mode, second annotations for the physical object or another physical object displayed in the 3D mesh. The system switches to the first mode. The first recording device displays, while in the first mode, the 3D mesh including a second projection indicating a 2D bounding area corresponding to the marked second annotations.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: February 11, 2025
    Assignee: Xerox Corporation
    Inventors: Matthew A. Shreve, Jeyasri Subramanian
  • Patent number: 12081450
    Abstract: A system and method provide a combination of a modular message structure, a priority-based message packing scheme, and a data packet queue management system to optimize the information content of a transmitted message in, for example, the Ocean of Things (OoT) environment. The modular message structure starts with a header that provides critical information and reference points for time and location. The rest of the message is composed of modular data packets, each of which has a data ID section that the message decoder uses for reference when reconstructing the message contents, an optional size section that specifies the length of the following data section if it can contain data of variable length, and a data section that can be compressed in a manner unique to that data type.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: September 3, 2024
    Assignee: XEROX CORPORATION
    Inventors: Eric D. Cocker, Matthew A. Shreve, Francisco E. Torres
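The modular structure described above, a header followed by packets of data ID, optional size, and data, resembles a simple TLV-style encoding. The field widths, data IDs, and header layout below are illustrative assumptions, not the patented format:

```python
import struct

FIXED_SIZES = {0x01: 4, 0x02: 8}   # data IDs with a known payload length
VARIABLE_IDS = {0x10}              # data IDs whose packets carry a size byte

def pack_message(timestamp, lat, lon, packets):
    """Header (time + location reference points) followed by modular packets."""
    msg = struct.pack("<Iff", timestamp, lat, lon)
    for data_id, payload in packets:
        msg += struct.pack("<B", data_id)
        if data_id in VARIABLE_IDS:
            msg += struct.pack("<B", len(payload))   # optional size section
        msg += payload                               # data section
    return msg

def unpack_message(msg):
    """Decoder: the data ID tells it how to interpret each packet."""
    timestamp, lat, lon = struct.unpack_from("<Iff", msg)
    offset, packets = struct.calcsize("<Iff"), []
    while offset < len(msg):
        data_id = msg[offset]; offset += 1
        if data_id in VARIABLE_IDS:
            size = msg[offset]; offset += 1
        else:
            size = FIXED_SIZES[data_id]
        packets.append((data_id, msg[offset:offset + size]))
        offset += size
    return timestamp, (lat, lon), packets
```

Fixed-size data types omit the length byte entirely, which is where a scheme like this saves bandwidth: the decoder recovers each packet's extent from its data ID alone.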
  • Publication number: 20240249476
    Abstract: A system captures, by a recording device, a scene with physical objects, the scene displayed as a three-dimensional (3D) mesh. The system marks 3D annotations for a physical object and identifies a mask. The mask indicates background pixels corresponding to a region behind the physical object. Each background pixel is associated with a value. The system captures a plurality of images of the scene with varying features, wherein a respective image includes: two-dimensional (2D) projections corresponding to the marked 3D annotations for the physical object; and the mask based on the associated value for each background pixel. The system updates the value of each background pixel with a new value. The system trains a machine model using the respective image as generated labeled data, thereby obtaining the generated labeled data in an automated manner based on a minimal amount of marked annotations.
    Type: Application
    Filed: January 19, 2023
    Publication date: July 25, 2024
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Robert R. Price
  • Patent number: 11983394
    Abstract: Embodiments described herein provide a system for generating semantically accurate synthetic images. During operation, the system generates a first synthetic image using a first artificial intelligence (AI) model and presents the first synthetic image in a user interface. The user interface allows a user to identify image units of the first synthetic image that are semantically irregular. The system then obtains semantic information for the semantically irregular image units from the user via the user interface and generates a second synthetic image using a second AI model based on the semantic information. The second synthetic image can be an improved image compared to the first synthetic image.
    Type: Grant
    Filed: November 23, 2022
    Date of Patent: May 14, 2024
    Assignee: Xerox Corporation
    Inventors: Raja Bala, Sricharan Kallur Palli Kumar, Matthew A. Shreve
  • Patent number: 11978243
    Abstract: One embodiment provides a system that facilitates efficient collection of training data. During operation, the system obtains, by a recording device, a first image of a physical object in a scene which is associated with a three-dimensional (3D) world coordinate frame. The system marks, on the first image, a plurality of vertices associated with the physical object, wherein a vertex has 3D coordinates based on the 3D world coordinate frame. The system obtains a plurality of second images of the physical object in the scene while changing one or more characteristics of the scene. The system projects the marked vertices on to a respective second image to indicate a two-dimensional (2D) bounding area associated with the physical object.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: May 7, 2024
    Assignee: Xerox Corporation
    Inventors: Matthew A. Shreve, Sricharan Kallur Palli Kumar, Jin Sun, Gaurang R. Gavai, Robert R. Price, Hoda M. A. Eldardiry
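The projection step in this abstract, re-projecting marked 3D vertices into each new image to form a 2D bounding area, reduces to the standard pinhole camera model. A minimal sketch, assuming the vertices have already been transformed into the camera frame and using hypothetical intrinsics `fx, fy, cx, cy`:

```python
def project_bbox(vertices_cam, fx, fy, cx, cy):
    """Project camera-frame 3D vertices with a pinhole model and return the
    enclosing 2D bounding box (x_min, y_min, x_max, y_max) in pixels."""
    us = [fx * x / z + cx for x, y, z in vertices_cam]
    vs = [fy * y / z + cy for x, y, z in vertices_cam]
    return min(us), min(vs), max(us), max(vs)
```

Because the vertices stay fixed in the 3D world coordinate frame, re-running this projection for each new camera pose labels every captured image without further manual marking.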
  • Publication number: 20240087287
    Abstract: A system determines an input video and a first annotated image from the input video which identifies an object of interest. The system initiates a tracker based on the first annotated image and the input video. The tracker generates, based on the first annotated image and the input video, information including: a sliding window for false positives; a first set of unlabeled images from the input video; and at least two images with corresponding labeled states. A semi-supervised classifier classifies, based on the information, the first set of unlabeled images from the input video. If a first unlabeled image is classified as a false positive, the system reinitiates the tracker based on a second annotated image occurring in a frame prior to a frame with the false positive. The system generates an output video comprising the input video displayed with tracking on the object of interest.
    Type: Application
    Filed: September 8, 2022
    Publication date: March 14, 2024
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Robert R. Price, Jeyasri Subramanian, Sumeet Menon
  • Publication number: 20240073152
    Abstract: A system and method provide a combination of a modular message structure, a priority-based message packing scheme, and a data packet queue management system to optimize the information content of a transmitted message in, for example, the Ocean of Things (OoT) environment. The modular message structure starts with a header that provides critical information and reference points for time and location. The rest of the message is composed of modular data packets, each of which has a data ID section that the message decoder uses for reference when reconstructing the message contents, an optional size section that specifies the length of the following data section if it can contain data of variable length, and a data section that can be compressed in a manner unique to that data type.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Eric D. Cocker, Matthew A. Shreve, Francisco E. Torres
  • Publication number: 20240069962
    Abstract: A method and system for implementing a task scheduler are provided in a resource-constrained computation system. The scheduler uses metadata provided for each task (e.g., a data analysis algorithm or sensor sampling protocol) to determine which tasks should be run in a particular wake cycle, the order in which the tasks are run, and how the tasks are distributed across the available compute resources. When a task successfully completes, its time of execution is logged in order to provide a reference for when that task should be run again. Task metadata is formatted in a manner that allows for simple integration of new tasks into the processing architecture.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Eric D. Cocker
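The scheduler described above can be sketched as a priority-ordered, budget-limited selection per wake cycle. The `Task` fields and the selection rule are assumptions for illustration, not the patent's metadata format:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period_s: int          # desired seconds between runs (task metadata)
    priority: int          # lower value = more urgent
    cost: int              # estimated compute budget units per run
    last_run: int = -10**9 # logged execution time; "never run" by default

def schedule_wake_cycle(tasks, now, budget):
    """Pick the tasks due this wake cycle, highest priority first, without
    exceeding the cycle's compute budget; log completion times so each
    task's next run is referenced against its last successful one."""
    due = [t for t in tasks if now - t.last_run >= t.period_s]
    due.sort(key=lambda t: t.priority)
    ran = []
    for t in due:
        if t.cost <= budget:
            budget -= t.cost
            t.last_run = now   # logged on successful completion
            ran.append(t.name)
    return ran
```

New tasks integrate by supplying the same metadata fields, which is the point the abstract makes about the metadata format.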
  • Patent number: 11917337
    Abstract: The present specification relates to image capture. More specifically, it relates to selective image capture for sensor-carrying devices or floats deployed, for example, on the open sea. In one form, data is generated on the sensor-carrying devices or floats by an on-board Inertial Measurement Unit (IMU) and is used to automatically predict the wave motion of the sea. These predictions are then used to determine an acceptable set of motion parameters that are used to trigger the on-board camera(s). The camera(s) then capture images. One consideration is that images captured at or near the peak of a wave crest with minimal pitch and roll will contain fewer obstructions (such as other waves). Such images provide a view further toward the horizon to, for example, monitor maritime traffic and other phenomena. Therefore, the likelihood of capturing interesting objects such as ships, boats, garbage, and birds is increased.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: February 27, 2024
    Assignee: Xerox Corporation
    Inventors: Matthew A. Shreve, Eric Cocker
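The trigger condition the abstract describes, fire the camera near a wave crest while the platform is nearly level, can be written as a simple predicate over IMU-derived quantities. The function name, parameters, and thresholds below are assumptions, not values from the patent:

```python
def should_trigger(heave, heave_rate, pitch, roll,
                   crest_rate_eps=0.05, max_tilt_deg=5.0):
    """Fire the camera near a wave crest: vertical displacement positive
    with vertical velocity close to zero (the top of the wave cycle),
    while pitch and roll are both nearly level."""
    at_crest = heave > 0 and abs(heave_rate) < crest_rate_eps
    level = abs(pitch) < max_tilt_deg and abs(roll) < max_tilt_deg
    return at_crest and level
```

In practice the heave and heave-rate inputs would come from the IMU-based wave-motion prediction, so the capture can be scheduled slightly ahead of the predicted crest rather than reacting to it.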
  • Patent number: 11917289
    Abstract: A system is provided which obtains images of a physical object captured by an AR recording device in a 3D scene. The system measures a level of diversity of the obtained images, for a respective image, based on at least: a distance and angle; a lighting condition; and a percentage of occlusion. The system generates, based on the level of diversity, a first visualization of additional images to be captured by projecting, on a display of the recording device, first instructions for capturing the additional images using the AR recording device. The system trains a model based on the collected data. The system performs an error analysis on the collected data to estimate an error rate for each image of the collected data. The system generates, based on the error analysis, a second visualization of further images to be captured. The model is further trained based on the collected data.
    Type: Grant
    Filed: June 14, 2022
    Date of Patent: February 27, 2024
    Assignee: Xerox Corporation
    Inventors: Matthew A. Shreve, Robert R. Price
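One way to make the "level of diversity" in this abstract concrete is a coverage score over binned capture conditions. This sketch bins only distance, viewing angle, and lighting (occlusion is omitted for brevity), and the bin counts and dictionary keys are illustrative assumptions:

```python
def coverage_score(images, distance_bins=3, angle_bins=8, light_bins=3):
    """Fraction of (distance, angle, lighting) bins covered by at least
    one captured image; low coverage suggests which additional images
    the AR guidance should ask the user to capture."""
    covered = set()
    for img in images:
        covered.add((
            min(int(img["distance"]), distance_bins - 1),        # meters -> bin
            int(img["angle_deg"] // (360 / angle_bins)) % angle_bins,
            min(int(img["lighting"] * light_bins), light_bins - 1),  # 0..1 -> bin
        ))
    return len(covered) / (distance_bins * angle_bins * light_bins)
```

The uncovered bins are exactly the conditions a visualization like the one described could project onto the AR display as capture instructions.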
  • Publication number: 20240046568
    Abstract: A system is provided which mixes static scene and live annotations for labeled dataset collection. A first recording device obtains a 3D mesh of a scene with physical objects. The first recording device marks, while in a first mode, first annotations for a physical object displayed in the 3D mesh. The system switches to a second mode. The system displays, on the first recording device while in the second mode, the 3D mesh including a first projection indicating a 2D bounding area corresponding to the marked first annotations. The first recording device marks, while in the second mode, second annotations for the physical object or another physical object displayed in the 3D mesh. The system switches to the first mode. The first recording device displays, while in the first mode, the 3D mesh including a second projection indicating a 2D bounding area corresponding to the marked second annotations.
    Type: Application
    Filed: August 2, 2022
    Publication date: February 8, 2024
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Jeyasri Subramanian
  • Publication number: 20230403459
    Abstract: A system is provided which obtains images of a physical object captured by an AR recording device in a 3D scene. The system measures a level of diversity of the obtained images, for a respective image, based on at least: a distance and angle; a lighting condition; and a percentage of occlusion. The system generates, based on the level of diversity, a first visualization of additional images to be captured by projecting, on a display of the recording device, first instructions for capturing the additional images using the AR recording device. The system trains a model based on the collected data. The system performs an error analysis on the collected data to estimate an error rate for each image of the collected data. The system generates, based on the error analysis, a second visualization of further images to be captured. The model is further trained based on the collected data.
    Type: Application
    Filed: June 14, 2022
    Publication date: December 14, 2023
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Robert R. Price
  • Patent number: 11741693
    Abstract: One embodiment facilitates generating synthetic data objects using a semi-supervised GAN. During operation, a generator module synthesizes a data object derived from a noise vector and an attribute label. The system passes, to an unsupervised discriminator module, the data object and a set of training objects which are obtained from a training data set. The unsupervised discriminator module calculates: a value indicating a probability that the data object is real; and a latent feature representation of the data object. The system passes the latent feature representation and the attribute label to a supervised discriminator module. The supervised discriminator module calculates a value indicating the probability that the attribute label, given the data object, is real. The system performs the aforementioned steps iteratively until the generator module produces data objects with a given attribute label which the unsupervised and supervised discriminator modules can no longer identify as fake.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: August 29, 2023
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Sricharan Kallur Palli Kumar, Raja Bala, Jin Sun, Hui Ding, Matthew A. Shreve
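The three-module data flow in this abstract (conditional generator, unsupervised discriminator yielding a realness score plus latent features, supervised discriminator scoring the label given those features) can be sketched structurally. This shows only the wiring, with toy arithmetic in place of neural networks and no training loop; every function body is an assumption:

```python
import math
import random

def generator(noise, attribute_label):
    # synthesize a "data object" from a noise vector conditioned on the label
    return [n + attribute_label for n in noise]

def unsupervised_discriminator(obj):
    # returns (probability the object is real, latent feature representation)
    features = [x * 0.5 for x in obj]
    p_real = 1.0 / (1.0 + math.exp(-sum(features)))
    return p_real, features

def supervised_discriminator(features, attribute_label):
    # probability that the attribute label, given the object's features, is real
    return 1.0 / (1.0 + math.exp(attribute_label - sum(features)))
```

In the actual scheme these would be trained adversarially until the generator's labeled outputs fool both discriminators; here the sigmoid outputs merely mark where those probabilities sit in the pipeline.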
  • Publication number: 20230090801
    Abstract: Embodiments described herein provide a system for generating semantically accurate synthetic images. During operation, the system generates a first synthetic image using a first artificial intelligence (AI) model and presents the first synthetic image in a user interface. The user interface allows a user to identify image units of the first synthetic image that are semantically irregular. The system then obtains semantic information for the semantically irregular image units from the user via the user interface and generates a second synthetic image using a second AI model based on the semantic information. The second synthetic image can be an improved image compared to the first synthetic image.
    Type: Application
    Filed: November 23, 2022
    Publication date: March 23, 2023
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Raja Bala, Sricharan Kallur Palli Kumar, Matthew A. Shreve
  • Publication number: 20230060417
    Abstract: The present specification relates to image capture. More specifically, it relates to selective image capture for sensor-carrying devices or floats deployed, for example, on the open sea. In one form, data is generated on the sensor-carrying devices or floats by an on-board Inertial Measurement Unit (IMU) and is used to automatically predict the wave motion of the sea. These predictions are then used to determine an acceptable set of motion parameters that are used to trigger the on-board camera(s). The camera(s) then capture images. One consideration is that images captured at or near the peak of a wave crest with minimal pitch and roll will contain fewer obstructions (such as other waves). Such images provide a view further toward the horizon to, for example, monitor maritime traffic and other phenomena. Therefore, the likelihood of capturing interesting objects such as ships, boats, garbage, and birds is increased.
    Type: Application
    Filed: August 31, 2021
    Publication date: March 2, 2023
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Eric Cocker
  • Patent number: 11580450
    Abstract: Embodiments described herein provide a system for facilitating efficient dataset management. During operation, the system obtains a first dataset comprising a plurality of elements. The system then determines a set of categories for a respective element of the plurality of elements by applying a plurality of AI models to the first dataset. A respective category can correspond to an AI model. Subsequently, the system selects a set of sample elements associated with a respective category of a respective AI model and determines a second dataset based on the selected sample elements.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: February 14, 2023
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Robert R. Price, Matthew A. Shreve
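The dataset-distillation idea above, where several models each assign a category to every element and a few samples are drawn per category, can be sketched in a few lines. The function name, the per-category sample count, and the use of plain callables as "AI models" are all illustrative assumptions:

```python
def distill_dataset(elements, models, per_category=2):
    """Each model maps an element to a category; take a few elements from
    every (model, category) bucket to form a smaller second dataset."""
    buckets = {}
    for e in elements:
        for m_idx, model in enumerate(models):
            buckets.setdefault((m_idx, model(e)), []).append(e)
    sampled = []
    for members in buckets.values():
        sampled.extend(members[:per_category])   # sample per category
    # keep original order and drop duplicates picked by multiple models
    seen, second = set(), []
    for e in elements:
        if e in sampled and e not in seen:
            seen.add(e)
            second.append(e)
    return second
```

Because every model's categories are represented, the second dataset stays small while still exercising each model's notion of the data's variety.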
  • Patent number: 11537277
    Abstract: Embodiments described herein provide a system for generating semantically accurate synthetic images. During operation, the system generates a first synthetic image using a first artificial intelligence (AI) model and presents the first synthetic image in a user interface. The user interface allows a user to identify image units of the first synthetic image that are semantically irregular. The system then obtains semantic information for the semantically irregular image units from the user via the user interface and generates a second synthetic image using a second AI model based on the semantic information. The second synthetic image can be an improved image compared to the first synthetic image.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: December 27, 2022
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Raja Bala, Sricharan Kallur Palli Kumar, Matthew A. Shreve
  • Patent number: 11431894
    Abstract: One embodiment can include a system for providing an image-capturing recommendation. During operation the system receives, from a mobile computing device, one or more images. The one or more images are captured by one or more cameras associated with the mobile computing device. The system analyzes the received images to obtain image-capturing conditions for capturing images of a target within a physical space; determines, based on the obtained image-capturing conditions and a predetermined image-quality requirement, one or more image-capturing settings; and recommends the determined one or more image-capturing settings to a user.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: August 30, 2022
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Raja Bala, Jeyasri Subramanian
  • Patent number: 11288792
    Abstract: One embodiment can provide a system for detecting a difference between a physical object and a reference model corresponding to the physical object. During operation, the system obtains a real-world image of the physical object and generates an augmented reality (AR) image by projecting a three-dimensional overlay generated from the reference model onto the physical object in the real-world image. While generating the AR image, the system aligns a pose of the reference model to a pose of the physical object. The system can detect the difference between the physical object and the reference model based on the generated AR image and the real-world image.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: March 29, 2022
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Lisa S. E. Rythen Larsson