Patents by Inventor Matthew A. Shreve
Matthew A. Shreve has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11093798
Abstract: A method includes receiving a user object specified by a user. A similarity score is computed using a similarity function between the user object and one or more candidate objects in a database based on respective feature vectors. A first subset of the one or more candidate objects is presented to the user based on the respective computed similarity scores. First feedback is received from the user about the first subset of candidate objects. The similarity function is adjusted based on the received first feedback.
Type: Grant
Filed: December 28, 2018
Date of Patent: August 17, 2021
Assignee: Palo Alto Research Center Incorporated
Inventors: Francisco E. Torres, Hoda Eldardiry, Matthew Shreve, Gaurang Gavai, Chad Ramos
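As a rough illustration of the feedback loop this abstract describes, the sketch below scores candidates with a weighted cosine similarity and nudges the per-feature weights after each piece of user feedback. The weighting scheme, update rule, and learning rate are assumptions made for illustration, not details taken from the patent.

```python
import numpy as np

def weighted_similarity(a, b, w):
    """Weighted cosine similarity between two feature vectors."""
    aw, bw = a * w, b * w
    return float(aw @ bw / (np.linalg.norm(aw) * np.linalg.norm(bw) + 1e-9))

def adjust_weights(w, user_vec, candidate_vec, liked, lr=0.1):
    """Raise weights of features the user object shares with liked candidates,
    lower them for disliked ones (illustrative update rule)."""
    agreement = 1.0 - np.abs(user_vec - candidate_vec)
    w = w + (lr if liked else -lr) * agreement
    return np.clip(w, 0.01, None)

# Toy usage: score two candidates, then apply one round of feedback.
rng = np.random.default_rng(0)
user = rng.random(8)
candidates = rng.random((2, 8))
w = np.ones(8)
scores = [weighted_similarity(user, c, w) for c in candidates]
w = adjust_weights(w, user, candidates[0], liked=True)
```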
-
Publication number: 20210250492
Abstract: One embodiment can include a system for providing an image-capturing recommendation. During operation, the system receives, from a mobile computing device, one or more images. The one or more images are captured by one or more cameras associated with the mobile computing device. The system analyzes the received images to obtain image-capturing conditions for capturing images of a target within a physical space; determines, based on the obtained image-capturing conditions and a predetermined image-quality requirement, one or more image-capturing settings; and recommends the determined one or more image-capturing settings to a user.
Type: Application
Filed: February 6, 2020
Publication date: August 12, 2021
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Raja Bala, Jeyasri Subramanian
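A minimal sketch of how such a recommendation step could look, assuming (hypothetically) that mean frame brightness is the only image-capturing condition considered and that exposure compensation and ISO are the only recommended settings; the thresholds and rules are illustrative, not from the publication.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CaptureSettings:
    exposure_compensation: float  # EV steps (hypothetical setting)
    iso: int

def recommend_settings(images, min_mean_brightness=0.4):
    """Derive a capture recommendation from grayscale frames scaled to [0, 1]."""
    brightness = float(np.mean([img.mean() for img in images]))
    if brightness < min_mean_brightness:
        # Scene is dark: suggest longer exposure and higher ISO.
        return CaptureSettings(exposure_compensation=+1.0, iso=800)
    return CaptureSettings(exposure_compensation=0.0, iso=100)

# Toy usage with synthetic dark frames.
frames = [np.full((64, 64), 0.2), np.full((64, 64), 0.3)]
print(recommend_settings(frames))
```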
-
Publication number: 20210224683
Abstract: Embodiments described herein provide a system for facilitating efficient dataset management. During operation, the system obtains a first dataset comprising a plurality of elements. The system then determines a set of categories for a respective element of the plurality of elements by applying a plurality of AI models to the first dataset. A respective category can correspond to an AI model. Subsequently, the system selects a set of sample elements associated with a respective category of a respective AI model and determines a second dataset based on the selected sample elements.
Type: Application
Filed: January 16, 2020
Publication date: July 22, 2021
Applicant: Palo Alto Research Center Incorporated
Inventors: Robert R. Price, Matthew A. Shreve
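One way to read the categorize-then-sample step is sketched below, with each AI model treated as a simple callable that returns a category label; the per-category sample size and the toy models are assumptions.

```python
import random

def build_second_dataset(elements, models, samples_per_category=2, seed=0):
    """Categorize each element with every model, then sample a few elements
    per (model, category) pair to form a reduced second dataset."""
    by_category = {}
    for element in elements:
        for name, model in models.items():
            by_category.setdefault((name, model(element)), []).append(element)
    rng = random.Random(seed)
    sampled = set()
    for bucket in by_category.values():
        sampled.update(rng.sample(bucket, min(samples_per_category, len(bucket))))
    return sorted(sampled)

# Toy usage: two stand-in "models" classifying integers.
models = {"parity": lambda x: x % 2, "magnitude": lambda x: x > 50}
print(build_second_dataset(range(100), models))
```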
-
Patent number: 11068746
Abstract: A method for predicting the realism of an object within an image includes generating a training image set for a predetermined object type. The training image set comprises one or more training images at least partially generated using a computer. A pixel-level training spatial realism map is generated for each training image of the one or more training images. Each training spatial realism map is configured to represent a perceptual realism of the corresponding training image. A predictor is trained using the training image set and the corresponding training spatial realism maps. An image of the predetermined object is received. A spatial realism map of the received image is produced using the trained predictor.
Type: Grant
Filed: December 28, 2018
Date of Patent: July 20, 2021
Assignee: Palo Alto Research Center Incorporated
Inventors: Raja Bala, Matthew Shreve, Jeyasri Subramanian, Pei Li
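The training pipeline described here could be prototyped along the lines of the sketch below: a tiny fully convolutional network (an assumption; the patent does not specify an architecture) regressing a per-pixel realism map from partially computer-generated images.

```python
import torch
import torch.nn as nn

# Tiny fully convolutional predictor: image in, per-pixel realism map out.
predictor = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training pairs: synthetic images and their pixel-level realism targets.
images = torch.rand(4, 3, 64, 64)         # partially computer-generated images
realism_maps = torch.rand(4, 1, 64, 64)   # per-pixel perceptual realism maps

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(predictor(images), realism_maps)
    loss.backward()
    optimizer.step()
```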
-
Publication number: 20210142039
Abstract: An apparatus comprises an input interface configured to receive a first 3D point cloud associated with a physical object prior to articulation of an articulatable part, and a second 3D point cloud after articulation of the articulatable part. A processor is operably coupled to the input interface, an output interface, and memory. Program code, when executed by the processor, causes the processor to align the first and second point clouds, find nearest neighbors of points in the first point cloud to points in the second point cloud, eliminate the nearest neighbors of points in the second point cloud such that remaining points in the second point cloud comprise points associated with the articulatable part and points associated with noise, generate an output comprising at least the remaining points of the second point cloud associated with the articulatable part without the noise points, and communicate the output to the output interface.
Type: Application
Filed: January 18, 2021
Publication date: May 13, 2021
Inventors: Matthew Shreve, Sreenivas Venkobarao
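A compact sketch of the nearest-neighbor elimination step, assuming the two point clouds are already registered (the alignment step is omitted) and using a distance threshold plus a crude k-nearest-neighbor density filter as the noise-removal heuristic; both parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_articulated_part(cloud_before, cloud_after, match_dist=0.02, k_noise=5):
    """Return points of cloud_after that moved between the two scans.
    Assumes the clouds were already aligned by a prior registration step."""
    dist, _ = cKDTree(cloud_before).query(cloud_after)
    # Points with a close counterpart in the "before" cloud are static: drop them.
    moved = cloud_after[dist > match_dist]
    # Crude noise removal: keep points whose k-th neighbor is still nearby.
    if len(moved) > k_noise:
        d_k, _ = cKDTree(moved).query(moved, k=k_noise + 1)
        moved = moved[d_k[:, -1] < 5 * match_dist]
    return moved

# Toy usage: a tight cluster of points "articulates" between the two scans.
before = np.random.rand(500, 3)
before[:50] = 0.05 + 0.05 * np.random.rand(50, 3)   # the articulatable part
after = before.copy()
after[:50] += np.array([0.3, 0.0, 0.0])             # articulate the part
print(extract_articulated_part(before, after).shape)
```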
-
Patent number: 10984355
Abstract: When monitoring a workspace to determine whether scheduled tasks or chores are completed according to a predetermined schedule, a video monitoring system monitors a region of interest (ROI) to identify employee-generated signals representing completion of a scheduled task. An employee makes a mark or gesture in the ROI monitored by the video monitoring system, and the system analyzes pixels in each captured frame of the ROI to identify an employee signal, map the signal to a corresponding scheduled task, update the task as having been completed upon receipt of the employee signal, and alert a manager of the facility as to whether the task has been completed or not.
Type: Grant
Filed: April 17, 2015
Date of Patent: April 20, 2021
Assignee: Xerox Corporation
Inventors: Michael C. Mongeon, Robert Loce, Matthew Shreve
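A hypothetical sketch of the ROI-based signal detection: each task is mapped to a fixed ROI and marked complete when enough pixels in that ROI differ from a reference frame. The task-to-ROI table and thresholds are stand-ins, not details from the patent, and the alerting step is omitted.

```python
import numpy as np

TASKS = {"wipe_counter": (10, 10, 50, 50)}   # task -> ROI (x, y, w, h), illustrative

def detect_completed_tasks(reference_frame, current_frame, change_threshold=0.1):
    """Mark a task complete when enough pixels in its ROI differ from the
    reference frame (i.e. an employee made a mark or gesture there)."""
    completed = []
    for task, (x, y, w, h) in TASKS.items():
        ref = reference_frame[y:y + h, x:x + w].astype(float)
        cur = current_frame[y:y + h, x:x + w].astype(float)
        changed = np.mean(np.abs(cur - ref) > 30)   # fraction of changed pixels
        if changed > change_threshold:
            completed.append(task)
    return completed

# Toy usage: a mark appears inside the monitored ROI.
ref = np.zeros((100, 100), dtype=np.uint8)
cur = ref.copy()
cur[20:40, 20:40] = 255
print(detect_completed_tasks(ref, cur))
```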
-
Patent number: 10896317
Abstract: An apparatus comprises an input interface configured to receive a first 3D point cloud associated with a physical object prior to articulation of an articulatable part, and a second 3D point cloud after articulation of the articulatable part. A processor is operably coupled to the input interface, an output interface, and memory. Program code, when executed by the processor, causes the processor to align the first and second point clouds, find nearest neighbors of points in the first point cloud to points in the second point cloud, eliminate the nearest neighbors of points in the second point cloud such that remaining points in the second point cloud comprise points associated with the articulatable part and points associated with noise, generate an output comprising at least the remaining points of the second point cloud associated with the articulatable part without the noise points, and communicate the output to the output interface.
Type: Grant
Filed: December 28, 2018
Date of Patent: January 19, 2021
Assignee: Palo Alto Research Center Incorporated
Inventors: Matthew Shreve, Sreenivas Venkobarao
-
Patent number: 10854006
Abstract: One embodiment provides a system that facilitates efficient collection of training data for training an image-detection artificial intelligence (AI) engine. During operation, the system obtains a three-dimensional (3D) model of a physical object placed in a scene, generates a virtual object corresponding to the physical object based on the 3D model, and substantially superimposes, in a view of an augmented reality (AR) camera, the virtual object over the physical object. The system can further configure the AR camera to capture a physical image comprising the physical object in the scene and a corresponding AR image comprising the virtual object superimposed over the physical object, and create an annotation for the physical image based on the AR image.
Type: Grant
Filed: November 15, 2018
Date of Patent: December 1, 2020
Assignee: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price
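A minimal sketch of the annotation step, assuming the AR renderer can provide a binary mask of where the virtual object was drawn; deriving a bounding box from that mask is one plausible way to annotate the corresponding physical image, not the patent's prescribed method.

```python
import numpy as np

def annotation_from_ar_mask(ar_mask):
    """Derive a 2D bounding-box annotation from the AR frame's virtual-object
    mask (nonzero where the virtual object was rendered). Because the virtual
    object is superimposed over the physical object, the same box is assumed
    to annotate the physical image."""
    ys, xs = np.nonzero(ar_mask)
    if len(xs) == 0:
        return None
    return {"x_min": int(xs.min()), "y_min": int(ys.min()),
            "x_max": int(xs.max()), "y_max": int(ys.max())}

# Toy usage: the renderer drew the virtual object in this region.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 300:450] = 1
print(annotation_from_ar_mask(mask))
```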
-
Patent number: 10810416
Abstract: One embodiment provides a method for facilitating real-world interaction with virtual reality. During operation, the system receives, by a computing device from a virtual reality device associated with a user, instructions to configure physical components, wherein for a first physical component at a first location, the instructions indicate a type and an orientation, and wherein for a second physical component located at a second location, the instructions indicate a type, a length of extension, and an angle. The system executes, by a pose-adjusting unit, the instructions, which involves: physically moving the first physical component to the indicated orientation at the first location; physically extending the second physical component from the second location by the indicated length; and physically rotating the extended second physical component by the indicated angle. The system renders, on the virtual reality device, the configured physical components.
Type: Grant
Filed: December 14, 2018
Date of Patent: October 20, 2020
Assignee: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price, Lester D. Nelson, James D. Glasnapp
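The instruction format and pose-adjusting unit might be modeled as in the sketch below, with print statements standing in for actuator commands; the component types and field names are hypothetical and only mirror the fields named in the abstract.

```python
from dataclasses import dataclass

@dataclass
class ComponentInstruction:
    component_id: str
    kind: str                    # e.g. "platform", "rail" (illustrative types)
    orientation_deg: float = 0.0
    extension_m: float = 0.0
    angle_deg: float = 0.0

class PoseAdjustingUnit:
    """Stub that stands in for the hardware actuators."""
    def execute(self, instr: ComponentInstruction):
        if instr.orientation_deg:
            print(f"move {instr.component_id} to orientation {instr.orientation_deg} deg")
        if instr.extension_m:
            print(f"extend {instr.component_id} by {instr.extension_m} m")
        if instr.angle_deg:
            print(f"rotate extended {instr.component_id} by {instr.angle_deg} deg")

# Toy usage: the two instruction shapes described in the abstract.
unit = PoseAdjustingUnit()
unit.execute(ComponentInstruction("c1", "platform", orientation_deg=90.0))
unit.execute(ComponentInstruction("c2", "rail", extension_m=0.5, angle_deg=30.0))
```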
-
Publication number: 20200250484
Abstract: One embodiment provides a system that facilitates efficient collection of training data. During operation, the system obtains, by a recording device, a first image of a physical object in a scene which is associated with a three-dimensional (3D) world coordinate frame. The system marks, on the first image, a plurality of vertices associated with the physical object, wherein a vertex has 3D coordinates based on the 3D world coordinate frame. The system obtains a plurality of second images of the physical object in the scene while changing one or more characteristics of the scene. The system projects the marked vertices onto a respective second image to indicate a two-dimensional (2D) bounding area associated with the physical object.
Type: Application
Filed: April 23, 2020
Publication date: August 6, 2020
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Sricharan Kallur Palli Kumar, Jin Sun, Gaurang R. Gavai, Robert R. Price, Hoda M. A. Eldardiry
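A sketch of the projection step, assuming a standard pinhole camera model with known intrinsics and a world-to-camera pose for each second image; the publication does not prescribe this particular camera model, and the cube and intrinsics below are toy values.

```python
import numpy as np

def project_vertices(vertices_3d, K, R, t):
    """Project 3D world-frame vertices into an image using a pinhole model
    (K: 3x3 intrinsics; R, t: world-to-camera rotation and translation)."""
    cam = R @ vertices_3d.T + t.reshape(3, 1)   # 3 x N camera-frame points
    uv = K @ cam
    uv = uv[:2] / uv[2]                         # perspective divide
    return uv.T

def bounding_area(vertices_3d, K, R, t):
    """2D bounding area of the projected vertices."""
    uv = project_vertices(vertices_3d, K, R, t)
    (x_min, y_min), (x_max, y_max) = uv.min(axis=0), uv.max(axis=0)
    return x_min, y_min, x_max, y_max

# Toy usage: a unit cube a few meters in front of the camera.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (4, 5)], float)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
print(bounding_area(cube, K, np.eye(3), np.zeros(3)))
```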
-
Publication number: 20200210780
Abstract: A method includes receiving a user object specified by a user. A similarity score is computed using a similarity function between the user object and one or more candidate objects in a database based on respective feature vectors. A first subset of the one or more candidate objects is presented to the user based on the respective computed similarity scores. First feedback is received from the user about the first subset of candidate objects. The similarity function is adjusted based on the received first feedback.
Type: Application
Filed: December 28, 2018
Publication date: July 2, 2020
Inventors: Francisco E. Torres, Hoda Eldardiry, Matthew Shreve, Gaurang Gavai, Chad Ramos
-
Publication number: 20200210770
Abstract: A method for predicting the realism of an object within an image includes generating a training image set for a predetermined object type. The training image set comprises one or more training images at least partially generated using a computer. A pixel-level training spatial realism map is generated for each training image of the one or more training images. Each training spatial realism map is configured to represent a perceptual realism of the corresponding training image. A predictor is trained using the training image set and the corresponding training spatial realism maps. An image of the predetermined object is received. A spatial realism map of the received image is produced using the trained predictor.
Type: Application
Filed: December 28, 2018
Publication date: July 2, 2020
Inventors: Raja Bala, Matthew Shreve, Jeyasri Subramanian, Pei Li
-
Publication number: 20200207616
Abstract: Disclosed are methods and systems of controlling the placement of micro-objects on the surface of a micro-assembler. Control patterns may be used to cause phototransistors or electrodes of the micro-assembler to generate dielectrophoretic (DEP) and electrophoretic (EP) forces which may be used to manipulate, move, position, or orient one or more micro-objects on the surface of the micro-assembler. A set of micro-objects may be analyzed. Geometric properties of the set of micro-objects may be identified. The set of micro-objects may be divided into multiple sub-sets of micro-objects based on the one or more geometric properties and one or more control patterns.
Type: Application
Filed: December 31, 2018
Publication date: July 2, 2020
Inventors: Anne Plochowietz, Matthew Shreve
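The geometric grouping step could look roughly like the sketch below, which splits micro-objects by area and aspect ratio so each sub-set can be driven with its own control pattern; the specific properties and cut-off values are assumptions, and the DEP/EP actuation itself is out of scope here.

```python
from dataclasses import dataclass

@dataclass
class MicroObject:
    width_um: float
    height_um: float

    @property
    def area(self):
        return self.width_um * self.height_um

    @property
    def aspect_ratio(self):
        return max(self.width_um, self.height_um) / min(self.width_um, self.height_um)

def divide_by_geometry(objects, area_cut=100.0, aspect_cut=1.5):
    """Split micro-objects into sub-sets so each sub-set can be assigned
    its own control pattern (illustrative grouping rules)."""
    subsets = {"small_round": [], "small_elongated": [], "large": []}
    for obj in objects:
        if obj.area >= area_cut:
            subsets["large"].append(obj)
        elif obj.aspect_ratio >= aspect_cut:
            subsets["small_elongated"].append(obj)
        else:
            subsets["small_round"].append(obj)
    return subsets

objs = [MicroObject(5, 5), MicroObject(3, 9), MicroObject(12, 12)]
print({k: len(v) for k, v in divide_by_geometry(objs).items()})
```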
-
Publication number: 20200210680
Abstract: An apparatus comprises an input interface configured to receive a first 3D point cloud associated with a physical object prior to articulation of an articulatable part, and a second 3D point cloud after articulation of the articulatable part. A processor is operably coupled to the input interface, an output interface, and memory. Program code, when executed by the processor, causes the processor to align the first and second point clouds, find nearest neighbors of points in the first point cloud to points in the second point cloud, eliminate the nearest neighbors of points in the second point cloud such that remaining points in the second point cloud comprise points associated with the articulatable part and points associated with noise, generate an output comprising at least the remaining points of the second point cloud associated with the articulatable part without the noise points, and communicate the output to the output interface.
Type: Application
Filed: December 28, 2018
Publication date: July 2, 2020
Inventors: Matthew Shreve, Sreenivas Venkobarao
-
Publication number: 20200207617
Abstract: Disclosed are methods and systems of controlling the placement of micro-objects on the surface of a micro-assembler. Control patterns may be used to cause electrodes of the micro-assembler to generate dielectrophoretic (DEP) and electrophoretic (EP) forces which may be used to manipulate, move, position, or orient one or more micro-objects on the surface of the micro-assembler. The control patterns may be part of a library of control patterns.
Type: Application
Filed: December 31, 2018
Publication date: July 2, 2020
Inventors: Anne Plochowietz, Bradley Rupp, Jengping Lu, Julie A. Bert, Lara S. Crawford, Sourobh Raychaudhuri, Eugene M. Chow, Matthew Shreve, Sergey Butylkov
-
Patent number: 10699165
Abstract: One embodiment provides a system that facilitates efficient collection of training data. During operation, the system obtains, by a recording device, a first image of a physical object in a scene which is associated with a three-dimensional (3D) world coordinate frame. The system marks, on the first image, a plurality of vertices associated with the physical object, wherein a vertex has 3D coordinates based on the 3D world coordinate frame. The system obtains a plurality of second images of the physical object in the scene while changing one or more characteristics of the scene. The system projects the marked vertices onto a respective second image to indicate a two-dimensional (2D) bounding area associated with the physical object.
Type: Grant
Filed: November 29, 2017
Date of Patent: June 30, 2020
Assignee: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Sricharan Kallur Palli Kumar, Jin Sun, Gaurang R. Gavai, Robert R. Price, Hoda M. A. Eldardiry
-
Publication number: 20200193150
Abstract: One embodiment provides a method for facilitating real-world interaction with virtual reality. During operation, the system receives, by a computing device from a virtual reality device associated with a user, instructions to configure physical components, wherein for a first physical component at a first location, the instructions indicate a type and an orientation, and wherein for a second physical component located at a second location, the instructions indicate a type, a length of extension, and an angle. The system executes, by a pose-adjusting unit, the instructions, which involves: physically moving the first physical component to the indicated orientation at the first location; physically extending the second physical component from the second location by the indicated length; and physically rotating the extended second physical component by the indicated angle. The system renders, on the virtual reality device, the configured physical components.
Type: Application
Filed: December 14, 2018
Publication date: June 18, 2020
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price, Lester D. Nelson, James D. Glasnapp
-
Publication number: 20200160601
Abstract: One embodiment provides a system that facilitates efficient collection of training data for training an image-detection artificial intelligence (AI) engine. During operation, the system obtains a three-dimensional (3D) model of a physical object placed in a scene, generates a virtual object corresponding to the physical object based on the 3D model, and substantially superimposes, in a view of an augmented reality (AR) camera, the virtual object over the physical object. The system can further configure the AR camera to capture a physical image comprising the physical object in the scene and a corresponding AR image comprising the virtual object superimposed over the physical object, and create an annotation for the physical image based on the AR image.
Type: Application
Filed: November 15, 2018
Publication date: May 21, 2020
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price
-
Publication number: 20200026416
Abstract: Embodiments described herein provide a system for generating semantically accurate synthetic images. During operation, the system generates a first synthetic image using a first artificial intelligence (AI) model and presents the first synthetic image in a user interface. The user interface allows a user to identify image units of the first synthetic image that are semantically irregular. The system then obtains semantic information for the semantically irregular image units from the user via the user interface and generates a second synthetic image using a second AI model based on the semantic information. The second synthetic image can be an improved image compared to the first synthetic image.
Type: Application
Filed: July 19, 2018
Publication date: January 23, 2020
Applicant: Palo Alto Research Center Incorporated
Inventors: Raja Bala, Sricharan Kallur Palli Kumar, Matthew A. Shreve
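A skeleton of the feedback loop only, with the two AI models and the user interface reduced to placeholder callables; it shows the data flow described in the abstract, not the models themselves, and all names below are hypothetical.

```python
def refine_synthetic_image(first_model, second_model, get_user_feedback, noise):
    """One pass of the loop: generate, collect per-unit semantic feedback
    from the user interface, regenerate conditioned on that feedback."""
    first_image = first_model(noise)
    # Feedback maps semantically irregular image units to semantic labels.
    semantic_info = get_user_feedback(first_image)
    second_image = second_model(noise, semantic_info)
    return second_image

# Toy usage with trivial stand-ins for the models and the UI.
refined = refine_synthetic_image(
    first_model=lambda z: {"pixels": z},
    second_model=lambda z, info: {"pixels": z, "semantics": info},
    get_user_feedback=lambda img: {"unit_3": "sky"},
    noise=[0.1, 0.2],
)
print(refined)
```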
-
Publication number: 20190147333
Abstract: One embodiment facilitates generating synthetic data objects using a semi-supervised GAN. During operation, a generator module synthesizes a data object derived from a noise vector and an attribute label. The system passes, to an unsupervised discriminator module, the data object and a set of training objects which are obtained from a training data set. The unsupervised discriminator module calculates: a value indicating a probability that the data object is real; and a latent feature representation of the data object. The system passes the latent feature representation and the attribute label to a supervised discriminator module. The supervised discriminator module calculates a value indicating a probability that the attribute label, given the data object, is real. The system performs the aforementioned steps iteratively until the generator module produces data objects with a given attribute label which the unsupervised and supervised discriminator modules can no longer identify as fake.
Type: Application
Filed: November 29, 2017
Publication date: May 16, 2019
Applicant: Palo Alto Research Center Incorporated
Inventors: Sricharan Kallur Palli Kumar, Raja Bala, Jin Sun, Hui Ding, Matthew A. Shreve
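A structural sketch of the generator / unsupervised-discriminator / supervised-discriminator arrangement in PyTorch, showing the three outputs named in the abstract; all layer sizes and architectures are assumptions, and the iterative training loop and losses are omitted.

```python
import torch
import torch.nn as nn

NOISE_DIM, LABEL_DIM, DATA_DIM, FEAT_DIM = 16, 4, 32, 8

# Generator: noise vector + attribute label -> synthetic data object.
generator = nn.Sequential(nn.Linear(NOISE_DIM + LABEL_DIM, 64), nn.ReLU(),
                          nn.Linear(64, DATA_DIM))

class UnsupervisedDiscriminator(nn.Module):
    """Outputs a real/fake probability and a latent feature representation."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(DATA_DIM, FEAT_DIM), nn.ReLU())
        self.real_head = nn.Sequential(nn.Linear(FEAT_DIM, 1), nn.Sigmoid())

    def forward(self, x):
        feat = self.features(x)
        return self.real_head(feat), feat

# Supervised discriminator: latent feature + attribute label -> probability
# that the attribute label, given the data object, is real.
supervised_discriminator = nn.Sequential(
    nn.Linear(FEAT_DIM + LABEL_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid())

# One forward pass of the pipeline described in the abstract.
noise = torch.randn(5, NOISE_DIM)
labels = torch.eye(LABEL_DIM)[torch.randint(0, LABEL_DIM, (5,))]
fake = generator(torch.cat([noise, labels], dim=1))
p_real, latent = UnsupervisedDiscriminator()(fake)
p_label = supervised_discriminator(torch.cat([latent, labels], dim=1))
```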