Patents by Inventor Matthew A. Shreve
Matthew A. Shreve has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12367656
Abstract: A system determines an input video and a first annotated image from the input video which identifies an object of interest. The system initiates a tracker based on the first annotated image and the input video. The tracker generates, based on the first annotated image and the input video, information including: a sliding window for false positives; a first set of unlabeled images from the input video; and at least two images with corresponding labeled states. A semi-supervised classifier classifies, based on the information, the first set of unlabeled images from the input video. If a first unlabeled image is classified as a false positive, the system reinitiates the tracker based on a second annotated image occurring in a frame prior to a frame with the false positive. The system generates an output video comprising the input video displayed with tracking on the object of interest.
Type: Grant
Filed: September 8, 2022
Date of Patent: July 22, 2025
Assignee: Xerox Corporation
Inventors: Matthew A. Shreve, Robert R. Price, Jeyasri Subramanian, Sumeet Menon
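The track-then-verify loop in the abstract above can be sketched as follows. This is a minimal illustration, not the patented method: `track_step` and `classify` are hypothetical stand-ins for the tracker and the semi-supervised classifier, and the re-initialization simply falls back to the most recent trusted annotation in an earlier frame.

```python
# Hypothetical sketch of the tracker/classifier loop described above.
# track_step(anchor, frame) -> proposed box; classify(frame, box) -> label.

def track_with_reinit(frames, first_annotation, classify, track_step):
    """Run a tracker, re-initializing when the classifier flags a false positive."""
    annotations = {0: first_annotation}   # frame index -> trusted annotation
    output = []
    anchor = first_annotation
    for i in range(1, len(frames)):
        box = track_step(anchor, frames[i])
        if classify(frames[i], box) == "false_positive":
            # Re-initialize from an annotation in a frame prior to the
            # false positive, per the abstract.
            prior = max(k for k in annotations if k < i)
            anchor = annotations[prior]
            output.append((i, None))   # drop the rejected track result
        else:
            annotations[i] = box
            anchor = box
            output.append((i, box))
    return output
```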
-
Patent number: 12223595
Abstract: A system is provided which mixes static scene and live annotations for labeled dataset collection. A first recording device obtains a 3D mesh of a scene with physical objects. The first recording device marks, while in a first mode, first annotations for a physical object displayed in the 3D mesh. The system switches to a second mode. The system displays, on the first recording device while in the second mode, the 3D mesh including a first projection indicating a 2D bounding area corresponding to the marked first annotations. The first recording device marks, while in the second mode, second annotations for the physical object or another physical object displayed in the 3D mesh. The system switches to the first mode. The first recording device displays, while in the first mode, the 3D mesh including a second projection indicating a 2D bounding area corresponding to the marked second annotations.
Type: Grant
Filed: August 2, 2022
Date of Patent: February 11, 2025
Assignee: Xerox Corporation
Inventors: Matthew A. Shreve, Jeyasri Subramanian
-
Patent number: 12081450
Abstract: A system and method provide a combination of a modular message structure, a priority-based message packing scheme, and a data packet queue management system to optimize the information content of a transmitted message in, for example, the Ocean of Things (OoT) environment. The modular message structure starts with a header that provides critical information and reference points for time and location. The rest of the message is composed of modular data packets, each of which has a data ID section that the message decoder uses for reference when reconstructing the message contents, an optional size section that specifies the length of the following data section if it can contain data of variable length, and a data section that can be compressed in a manner unique to that data type.
Type: Grant
Filed: August 30, 2022
Date of Patent: September 3, 2024
Assignee: XEROX CORPORATION
Inventors: Eric D. Cocker, Matthew A. Shreve, Francisco E. Torres
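The header-plus-modular-packets layout described above can be illustrated with a small encoder/decoder. The field widths and IDs here are assumptions for the sketch (a 4-byte timestamp, two float32 coordinates, 1-byte data ID, 1-byte size), not the patented wire format, and the compression step is omitted.

```python
import struct

# Illustrative encoding of a modular message: a header carrying time and
# location, followed by packets of [1-byte data ID][1-byte size][payload].
# All field widths are invented for this sketch.

HEADER_FMT = "<Iff"  # timestamp, latitude, longitude

def pack_message(timestamp, lat, lon, packets):
    """packets: list of (data_id, payload_bytes) with variable-length payloads."""
    msg = struct.pack(HEADER_FMT, timestamp, lat, lon)
    for data_id, payload in packets:
        msg += struct.pack("<BB", data_id, len(payload)) + payload
    return msg

def unpack_message(msg):
    """Walk the packet list using each data ID's size field."""
    timestamp, lat, lon = struct.unpack_from(HEADER_FMT, msg, 0)
    offset = struct.calcsize(HEADER_FMT)
    packets = []
    while offset < len(msg):
        data_id, size = struct.unpack_from("<BB", msg, offset)
        offset += 2
        packets.append((data_id, msg[offset:offset + size]))
        offset += size
    return timestamp, lat, lon, packets
```

A decoder built this way can skip unknown data IDs by honoring the size field, which is one practical benefit of the self-describing packet structure.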
-
Publication number: 20240249476
Abstract: A system captures, by a recording device, a scene with physical objects, the scene displayed as a three-dimensional (3D) mesh. The system marks 3D annotations for a physical object and identifies a mask. The mask indicates background pixels corresponding to a region behind the physical object. Each background pixel is associated with a value. The system captures a plurality of images of the scene with varying features, wherein a respective image includes: two-dimensional (2D) projections corresponding to the marked 3D annotations for the physical object; and the mask based on the associated value for each background pixel. The system updates the value of each background pixel with a new value. The system trains a machine model using the respective image as generated labeled data, thereby obtaining the generated labeled data in an automated manner based on a minimal amount of marked annotations.
Type: Application
Filed: January 19, 2023
Publication date: July 25, 2024
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price
-
Patent number: 11983394
Abstract: Embodiments described herein provide a system for generating semantically accurate synthetic images. During operation, the system generates a first synthetic image using a first artificial intelligence (AI) model and presents the first synthetic image in a user interface. The user interface allows a user to identify image units of the first synthetic image that are semantically irregular. The system then obtains semantic information for the semantically irregular image units from the user via the user interface and generates a second synthetic image using a second AI model based on the semantic information. The second synthetic image can be an improved image compared to the first synthetic image.
Type: Grant
Filed: November 23, 2022
Date of Patent: May 14, 2024
Assignee: Xerox Corporation
Inventors: Raja Bala, Sricharan Kallur Palli Kumar, Matthew A. Shreve
-
Patent number: 11978243
Abstract: One embodiment provides a system that facilitates efficient collection of training data. During operation, the system obtains, by a recording device, a first image of a physical object in a scene which is associated with a three-dimensional (3D) world coordinate frame. The system marks, on the first image, a plurality of vertices associated with the physical object, wherein a vertex has 3D coordinates based on the 3D world coordinate frame. The system obtains a plurality of second images of the physical object in the scene while changing one or more characteristics of the scene. The system projects the marked vertices on to a respective second image to indicate a two-dimensional (2D) bounding area associated with the physical object.
Type: Grant
Filed: November 16, 2021
Date of Patent: May 7, 2024
Assignee: Xerox Corporation
Inventors: Matthew A. Shreve, Sricharan Kallur Palli Kumar, Jin Sun, Gaurang R. Gavai, Robert R. Price, Hoda M. A. Eldardiry
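The projection step above (marked 3D vertices to a 2D bounding area) can be sketched with a plain pinhole-camera model. The intrinsics (focal length, principal point) and the assumption that vertices are already in camera coordinates are inventions for the sketch; the patent does not specify a camera model here.

```python
# Minimal pinhole-camera sketch: project marked 3D vertices to pixels and
# take the axis-aligned box that encloses them. Intrinsics are assumed.

def project_vertex(v, focal=500.0, cx=320.0, cy=240.0):
    """Project a 3D point (x, y, z) in camera coordinates to pixel (u, v)."""
    x, y, z = v
    return (focal * x / z + cx, focal * y / z + cy)

def bounding_area(vertices_3d):
    """2D bounding box (u_min, v_min, u_max, v_max) of the projected vertices."""
    pts = [project_vertex(v) for v in vertices_3d]
    us = [p[0] for p in pts]
    vs = [p[1] for p in pts]
    return (min(us), min(vs), max(us), max(vs))
```

Re-running this projection for each second image (with that image's camera pose) is what lets one set of marked vertices label many captures.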
-
Publication number: 20240087287
Abstract: A system determines an input video and a first annotated image from the input video which identifies an object of interest. The system initiates a tracker based on the first annotated image and the input video. The tracker generates, based on the first annotated image and the input video, information including: a sliding window for false positives; a first set of unlabeled images from the input video; and at least two images with corresponding labeled states. A semi-supervised classifier classifies, based on the information, the first set of unlabeled images from the input video. If a first unlabeled image is classified as a false positive, the system reinitiates the tracker based on a second annotated image occurring in a frame prior to a frame with the false positive. The system generates an output video comprising the input video displayed with tracking on the object of interest.
Type: Application
Filed: September 8, 2022
Publication date: March 14, 2024
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price, Jeyasri Subramanian, Sumeet Menon
-
Publication number: 20240073152
Abstract: A system and method provide a combination of a modular message structure, a priority-based message packing scheme, and a data packet queue management system to optimize the information content of a transmitted message in, for example, the Ocean of Things (OoT) environment. The modular message structure starts with a header that provides critical information and reference points for time and location. The rest of the message is composed of modular data packets, each of which has a data ID section that the message decoder uses for reference when reconstructing the message contents, an optional size section that specifies the length of the following data section if it can contain data of variable length, and a data section that can be compressed in a manner unique to that data type.
Type: Application
Filed: August 30, 2022
Publication date: February 29, 2024
Applicant: Palo Alto Research Center Incorporated
Inventors: Eric D. COCKER, Matthew A. SHREVE, Francisco E. TORRES
-
Publication number: 20240069962
Abstract: A method and system for implementing a task scheduler are provided in a resource-constrained computation system that uses metadata provided for each task (e.g., a data analysis algorithm or sensor sampling protocol) to determine which tasks should be run in a particular wake cycle, the order in which the tasks are run, and how the tasks are distributed across the available compute resources. When a task successfully completes, its time of execution is logged in order to provide a reference for when that task should be run again. Task metadata is formatted in a manner that allows for simple integration of new tasks into the processing architecture.
Type: Application
Filed: August 30, 2022
Publication date: February 29, 2024
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. SHREVE, Eric D. COCKER
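The metadata-driven selection described above can be sketched as a single pure function: given each task's metadata and its last logged completion time, decide what runs this wake cycle and in what order. The metadata fields (`period_s`, `priority`) are assumptions for illustration, not the publication's schema.

```python
# Hypothetical sketch of a metadata-driven wake-cycle scheduler.
# A task is due when its period has elapsed since its logged completion;
# due tasks run in priority order (lower number = run sooner).

def due_tasks(tasks, last_run, now):
    """Select and order tasks for this wake cycle.

    tasks:    dict name -> {"period_s": int, "priority": int}
    last_run: dict name -> last successful completion time (seconds);
              a task never run before is always due.
    """
    ready = [
        name for name, meta in tasks.items()
        if now - last_run.get(name, float("-inf")) >= meta["period_s"]
    ]
    return sorted(ready, key=lambda n: (tasks[n]["priority"], n))
```

Because the decision is driven entirely by per-task metadata, adding a new task is just adding a new entry to `tasks`, which mirrors the "simple integration of new tasks" point in the abstract.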
-
Patent number: 11917337
Abstract: The present specification relates to image capture. More specifically, it relates to selective image capture for sensor-carrying devices or floats deployed, for example, on the open sea. In one form, data is generated on the sensor-carrying devices or floats by an on-board Inertial Measurement Unit (IMU) and is used to automatically predict the wave motion of the sea. These predictions are then used to determine an acceptable set of motion parameters that are used to trigger the on-board camera(s). The camera(s) then capture images. One consideration is that images captured at or near the peak of a wave crest with minimal pitch and roll will contain fewer obstructions (such as other waves). Such images provide a view further into the horizon to, for example, monitor maritime sea traffic and other phenomena. Therefore, the likelihood of capturing interesting objects such as ships, boats, garbage, and birds is increased.
Type: Grant
Filed: August 31, 2021
Date of Patent: February 27, 2024
Assignee: XEROX CORPORATION
Inventors: Matthew A. Shreve, Eric Cocker
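The crest-detection idea above lends itself to a simple trigger rule: fire the camera when the predicted vertical velocity is near zero while the float is above its rest height (i.e., at a crest rather than a trough) and pitch and roll are small. The thresholds and the specific rule below are invented for illustration; the patent describes predicting wave motion from IMU data, which this sketch does not attempt.

```python
# Illustrative capture trigger for a wave-riding float. Thresholds are
# assumptions; real values would come from the predicted wave motion.

def should_capture(heave_m, heave_rate_mps, pitch_deg, roll_deg,
                   rate_tol=0.05, tilt_tol_deg=5.0):
    """True near a wave crest: vertical velocity ~0, float above rest, level."""
    near_crest = abs(heave_rate_mps) < rate_tol and heave_m > 0
    level = abs(pitch_deg) < tilt_tol_deg and abs(roll_deg) < tilt_tol_deg
    return near_crest and level
```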
-
Patent number: 11917289
Abstract: A system is provided which obtains images of a physical object captured by an AR recording device in a 3D scene. The system measures a level of diversity of the obtained images, for a respective image, based on at least: a distance and angle; a lighting condition; and a percentage of occlusion. The system generates, based on the level of diversity, a first visualization of additional images to be captured by projecting, on a display of the recording device, first instructions for capturing the additional images using the AR recording device. The system trains a model based on the collected data. The system performs an error analysis on the collected data to estimate an error rate for each image of the collected data. The system generates, based on the error analysis, a second visualization of further images to be captured. The model is further trained based on the collected data.
Type: Grant
Filed: June 14, 2022
Date of Patent: February 27, 2024
Assignee: Xerox Corporation
Inventors: Matthew A. Shreve, Robert R. Price
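One simple way to read "level of diversity" over distance, angle, lighting, and occlusion is as coverage of a binned capture space: bucket each image by those four attributes and measure what fraction of buckets have been filled. The binning scheme below is entirely an assumption for the sketch, not the patented measure.

```python
# Toy diversity score: fraction of (distance, angle, lighting, occlusion)
# bins covered by the captured images. Bin counts are invented.

def diversity(images, bins=(3, 8, 2, 2)):
    """images: list of (distance_m, angle_deg, lighting 0-1, occlusion 0-1)."""
    seen = set()
    for d, a, light, occ in images:
        seen.add((
            min(int(d), bins[0] - 1),                  # coarse distance bin
            int((a % 360) / (360 / bins[1])),          # angular sector
            min(int(light * bins[2]), bins[2] - 1),    # dark vs bright
            min(int(occ * bins[3]), bins[3] - 1),      # clear vs occluded
        ))
    total = 1
    for b in bins:
        total *= b
    return len(seen) / total
```

Under this reading, the "first visualization" would highlight the empty bins, telling the user which distances, angles, and conditions still need captures.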
-
Publication number: 20240046568
Abstract: A system is provided which mixes static scene and live annotations for labeled dataset collection. A first recording device obtains a 3D mesh of a scene with physical objects. The first recording device marks, while in a first mode, first annotations for a physical object displayed in the 3D mesh. The system switches to a second mode. The system displays, on the first recording device while in the second mode, the 3D mesh including a first projection indicating a 2D bounding area corresponding to the marked first annotations. The first recording device marks, while in the second mode, second annotations for the physical object or another physical object displayed in the 3D mesh. The system switches to the first mode. The first recording device displays, while in the first mode, the 3D mesh including a second projection indicating a 2D bounding area corresponding to the marked second annotations.
Type: Application
Filed: August 2, 2022
Publication date: February 8, 2024
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Jeyasri Subramanian
-
Publication number: 20230403459
Abstract: A system is provided which obtains images of a physical object captured by an AR recording device in a 3D scene. The system measures a level of diversity of the obtained images, for a respective image, based on at least: a distance and angle; a lighting condition; and a percentage of occlusion. The system generates, based on the level of diversity, a first visualization of additional images to be captured by projecting, on a display of the recording device, first instructions for capturing the additional images using the AR recording device. The system trains a model based on the collected data. The system performs an error analysis on the collected data to estimate an error rate for each image of the collected data. The system generates, based on the error analysis, a second visualization of further images to be captured. The model is further trained based on the collected data.
Type: Application
Filed: June 14, 2022
Publication date: December 14, 2023
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price
-
Patent number: 11741693
Abstract: One embodiment facilitates generating synthetic data objects using a semi-supervised GAN. During operation, a generator module synthesizes a data object derived from a noise vector and an attribute label. The system passes, to an unsupervised discriminator module, the data object and a set of training objects which are obtained from a training data set. The unsupervised discriminator module calculates: a value indicating a probability that the data object is real; and a latent feature representation of the data object. The system passes the latent feature representation and the attribute label to a supervised discriminator module. The supervised discriminator module calculates a value indicating a probability that the attribute label given the data object is real. The system performs the aforementioned steps iteratively until the generator module produces data objects with a given attribute label which the unsupervised and supervised discriminator modules can no longer identify as fake.
Type: Grant
Filed: November 29, 2017
Date of Patent: August 29, 2023
Assignee: Palo Alto Research Center Incorporated
Inventors: Sricharan Kallur Palli Kumar, Raja Bala, Jin Sun, Hui Ding, Matthew A. Shreve
-
Publication number: 20230090801
Abstract: Embodiments described herein provide a system for generating semantically accurate synthetic images. During operation, the system generates a first synthetic image using a first artificial intelligence (AI) model and presents the first synthetic image in a user interface. The user interface allows a user to identify image units of the first synthetic image that are semantically irregular. The system then obtains semantic information for the semantically irregular image units from the user via the user interface and generates a second synthetic image using a second AI model based on the semantic information. The second synthetic image can be an improved image compared to the first synthetic image.
Type: Application
Filed: November 23, 2022
Publication date: March 23, 2023
Applicant: Palo Alto Research Center Incorporated
Inventors: Raja Bala, Sricharan Kallur Palli Kumar, Matthew A. Shreve
-
Publication number: 20230060417
Abstract: The present specification relates to image capture. More specifically, it relates to selective image capture for sensor-carrying devices or floats deployed, for example, on the open sea. In one form, data is generated on the sensor-carrying devices or floats by an on-board Inertial Measurement Unit (IMU) and is used to automatically predict the wave motion of the sea. These predictions are then used to determine an acceptable set of motion parameters that are used to trigger the on-board camera(s). The camera(s) then capture images. One consideration is that images captured at or near the peak of a wave crest with minimal pitch and roll will contain fewer obstructions (such as other waves). Such images provide a view further into the horizon to, for example, monitor maritime sea traffic and other phenomena. Therefore, the likelihood of capturing interesting objects such as ships, boats, garbage, and birds is increased.
Type: Application
Filed: August 31, 2021
Publication date: March 2, 2023
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. SHREVE, Eric COCKER
-
Patent number: 11580450
Abstract: Embodiments described herein provide a system for facilitating efficient dataset management. During operation, the system obtains a first dataset comprising a plurality of elements. The system then determines a set of categories for a respective element of the plurality of elements by applying a plurality of AI models to the first dataset. A respective category can correspond to an AI model. Subsequently, the system selects a set of sample elements associated with a respective category of a respective AI model and determines a second dataset based on the selected sample elements.
Type: Grant
Filed: January 16, 2020
Date of Patent: February 14, 2023
Assignee: Palo Alto Research Center Incorporated
Inventors: Robert R. Price, Matthew A. Shreve
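One minimal reading of the sampling step above: each model assigns every element a category, and the second dataset is built by taking up to k representatives per (model, category) pair. The per-category cap and the callable-model interface are assumptions for the sketch.

```python
# Hypothetical sketch of category-based sampling across several models.
# models: list of callables, each mapping an element to a category label.

def sample_by_category(elements, models, k=2):
    """Build a reduced dataset with up to k elements per model/category pair."""
    selected = []
    for model in models:
        by_cat = {}
        for e in elements:
            by_cat.setdefault(model(e), []).append(e)
        for cat in sorted(by_cat):          # deterministic category order
            selected.extend(by_cat[cat][:k])
    return selected
```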
-
Patent number: 11537277
Abstract: Embodiments described herein provide a system for generating semantically accurate synthetic images. During operation, the system generates a first synthetic image using a first artificial intelligence (AI) model and presents the first synthetic image in a user interface. The user interface allows a user to identify image units of the first synthetic image that are semantically irregular. The system then obtains semantic information for the semantically irregular image units from the user via the user interface and generates a second synthetic image using a second AI model based on the semantic information. The second synthetic image can be an improved image compared to the first synthetic image.
Type: Grant
Filed: July 19, 2018
Date of Patent: December 27, 2022
Assignee: Palo Alto Research Center Incorporated
Inventors: Raja Bala, Sricharan Kallur Palli Kumar, Matthew A. Shreve
-
Patent number: 11431894
Abstract: One embodiment can include a system for providing an image-capturing recommendation. During operation, the system receives, from a mobile computing device, one or more images. The one or more images are captured by one or more cameras associated with the mobile computing device. The system analyzes the received images to obtain image-capturing conditions for capturing images of a target within a physical space; determines, based on the obtained image-capturing conditions and a predetermined image-quality requirement, one or more image-capturing settings; and recommends the determined one or more image-capturing settings to a user.
Type: Grant
Filed: February 6, 2020
Date of Patent: August 30, 2022
Assignee: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Raja Bala, Jeyasri Subramanian
-
Patent number: 11288792
Abstract: One embodiment can provide a system for detecting a difference between a physical object and a reference model corresponding to the physical object. During operation, the system obtains a real-world image of the physical object and generates an augmented reality (AR) image by projecting a three-dimensional overlay generated from the reference model onto the physical object in the real-world image. While generating the AR image, the system aligns a pose of the reference model to a pose of the physical object. The system can detect the difference between the physical object and the reference model based on the generated AR image and the real-world image.
Type: Grant
Filed: February 19, 2020
Date of Patent: March 29, 2022
Assignee: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Lisa S. E. Rythen Larsson