Patents by Inventor Matthew A. Shreve
Matthew A. Shreve has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200250484
Abstract: One embodiment provides a system that facilitates efficient collection of training data. During operation, the system obtains, by a recording device, a first image of a physical object in a scene which is associated with a three-dimensional (3D) world coordinate frame. The system marks, on the first image, a plurality of vertices associated with the physical object, wherein a vertex has 3D coordinates based on the 3D world coordinate frame. The system obtains a plurality of second images of the physical object in the scene while changing one or more characteristics of the scene. The system projects the marked vertices onto a respective second image to indicate a two-dimensional (2D) bounding area associated with the physical object.
Type: Application
Filed: April 23, 2020
Publication date: August 6, 2020
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Sricharan Kallur Palli Kumar, Jin Sun, Gaurang R. Gavai, Robert R. Price, Hoda M. A. Eldardiry
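A minimal sketch of the projection step this abstract describes: 3D vertices marked on the first image are projected into a new view, and the extremes of the projections give the 2D bounding area. The pinhole-camera intrinsics (focal, cx, cy) below are illustrative placeholders, not values from the patent.

```python
def project_vertex(vertex, focal=1000.0, cx=640.0, cy=360.0):
    """Project a 3D point (camera coordinates, z > 0) through a pinhole model."""
    x, y, z = vertex
    return (focal * x / z + cx, focal * y / z + cy)

def bounding_area(vertices_3d):
    """Axis-aligned 2D box (xmin, ymin, xmax, ymax) enclosing the projections."""
    points = [project_vertex(v) for v in vertices_3d]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))
```

In practice the projection would use the full camera pose recovered for each second image; the sketch assumes the vertices are already in camera coordinates.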
-
Publication number: 20200210780
Abstract: A method includes receiving a user object specified by a user. A similarity score is computed using a similarity function between the user object and one or more candidate objects in a database based on respective feature vectors. A first subset of the one or more candidate objects is presented to the user based on the respective computed similarity scores. First feedback is received from the user about the first subset of candidate objects. The similarity function is adjusted based on the received first feedback.
Type: Application
Filed: December 28, 2018
Publication date: July 2, 2020
Inventors: Francisco E. Torres, Hoda Eldardiry, Matthew Shreve, Gaurang Gavai, Chad Ramos
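A hedged sketch of the feedback loop in this abstract: similarity between feature vectors is a weighted dot product, and per-feature weights are nudged up or down by user feedback. The update rule and learning rate are illustrative assumptions, not the claimed adjustment method.

```python
def similarity(u, v, weights):
    """Weighted dot-product similarity between two feature vectors."""
    return sum(w * a * b for w, a, b in zip(weights, u, v))

def adjust_weights(weights, user_vec, candidate_vec, liked, rate=0.1):
    """Raise the weight of features the two vectors share when feedback is
    positive; lower it when feedback is negative."""
    sign = 1.0 if liked else -1.0
    return [w + sign * rate * a * b
            for w, a, b in zip(weights, user_vec, candidate_vec)]
```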
-
Publication number: 20200210770
Abstract: A method for predicting the realism of an object within an image includes generating a training image set for a predetermined object type. The training image set comprises one or more training images at least partially generated using a computer. A pixel-level training spatial realism map is generated for each training image of the one or more training images. Each training spatial realism map is configured to represent a perceptual realism of the corresponding training image. A predictor is trained using the training image set and the corresponding training spatial realism maps. An image of the predetermined object is received. A spatial realism map of the received image is produced using the trained predictor.
Type: Application
Filed: December 28, 2018
Publication date: July 2, 2020
Inventors: Raja Bala, Matthew Shreve, Jeyasri Subramanian, Pei Li
-
Publication number: 20200207616
Abstract: Disclosed are methods and systems of controlling the placement of micro-objects on the surface of a micro-assembler. Control patterns may be used to cause phototransistors or electrodes of the micro-assembler to generate dielectrophoretic (DEP) and electrophoretic (EP) forces which may be used to manipulate, move, position, or orient one or more micro-objects on the surface of the micro-assembler. A set of micro-objects may be analyzed. Geometric properties of the set of micro-objects may be identified. The set of micro-objects may be divided into multiple sub-sets of micro-objects based on one or more of the identified geometric properties and one or more control patterns.
Type: Application
Filed: December 31, 2018
Publication date: July 2, 2020
Inventors: Anne Plochowietz, Matthew Shreve
-
Publication number: 20200210680
Abstract: An apparatus comprises an input interface configured to receive a first 3D point cloud associated with a physical object prior to articulation of an articulatable part, and a second 3D point cloud after articulation of the articulatable part. A processor is operably coupled to the input interface, an output interface, and memory. Program code, when executed by the processor, causes the processor to align the first and second point clouds, find nearest neighbors of points in the first point cloud to points in the second point cloud, eliminate the nearest neighbors of points in the second point cloud such that remaining points in the second point cloud comprise points associated with the articulatable part and points associated with noise, generate an output comprising at least the remaining points of the second point cloud associated with the articulatable part without the noise points, and communicate the output to the output interface.
Type: Application
Filed: December 28, 2018
Publication date: July 2, 2020
Inventors: Matthew Shreve, Sreenivas Venkobarao
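A brute-force sketch of the nearest-neighbor elimination this abstract describes: after the two clouds are aligned, points of the "after" cloud that have a close neighbor in the "before" cloud are discarded, leaving the points that moved with the articulatable part (plus noise, which a later filtering step would remove). The tolerance value is an assumption; a real implementation would use a spatial index such as a k-d tree rather than this O(n·m) scan.

```python
def distance(p, q):
    """Euclidean distance between two 3D points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def moved_points(cloud_before, cloud_after, tol=0.05):
    """Points of cloud_after with no neighbor in cloud_before within tol."""
    return [q for q in cloud_after
            if min(distance(p, q) for p in cloud_before) > tol]
```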
-
Publication number: 20200207617
Abstract: Disclosed are methods and systems of controlling the placement of micro-objects on the surface of a micro-assembler. Control patterns may be used to cause electrodes of the micro-assembler to generate dielectrophoretic (DEP) and electrophoretic (EP) forces which may be used to manipulate, move, position, or orient one or more micro-objects on the surface of the micro-assembler. The control patterns may be part of a library of control patterns.
Type: Application
Filed: December 31, 2018
Publication date: July 2, 2020
Inventors: Anne Plochowietz, Bradley Rupp, Jengping Lu, Julie A. Bert, Lara S. Crawford, Sourobh Raychaudhuri, Eugene M. Chow, Matthew Shreve, Sergey Butylkov
-
Patent number: 10699165
Abstract: One embodiment provides a system that facilitates efficient collection of training data. During operation, the system obtains, by a recording device, a first image of a physical object in a scene which is associated with a three-dimensional (3D) world coordinate frame. The system marks, on the first image, a plurality of vertices associated with the physical object, wherein a vertex has 3D coordinates based on the 3D world coordinate frame. The system obtains a plurality of second images of the physical object in the scene while changing one or more characteristics of the scene. The system projects the marked vertices onto a respective second image to indicate a two-dimensional (2D) bounding area associated with the physical object.
Type: Grant
Filed: November 29, 2017
Date of Patent: June 30, 2020
Assignee: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Sricharan Kallur Palli Kumar, Jin Sun, Gaurang R. Gavai, Robert R. Price, Hoda M. A. Eldardiry
-
Publication number: 20200193150
Abstract: One embodiment provides a method for facilitating real-world interaction with virtual reality. During operation, the system receives, by a computing device from a virtual reality device associated with a user, instructions to configure physical components, wherein for a first physical component at a first location, the instructions indicate a type and an orientation, and wherein for a second physical component located at a second location, the instructions indicate a type, a length of extension, and an angle. The system executes, by a pose-adjusting unit, the instructions, which involves: physically moving the first physical component to the indicated orientation at the first location; physically extending the second physical component from the second location by the indicated length; and physically rotating the extended second physical component by the indicated angle. The system renders, on the virtual reality device, the configured physical components.
Type: Application
Filed: December 14, 2018
Publication date: June 18, 2020
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price, Lester D. Nelson, James D. Glasnapp
-
Publication number: 20200160601
Abstract: One embodiment provides a system that facilitates efficient collection of training data for training an image-detection artificial intelligence (AI) engine. During operation, the system obtains a three-dimensional (3D) model of a physical object placed in a scene, generates a virtual object corresponding to the physical object based on the 3D model, and substantially superimposes, in a view of an augmented reality (AR) camera, the virtual object over the physical object. The system can further configure the AR camera to capture a physical image comprising the physical object in the scene and a corresponding AR image comprising the virtual object superimposed over the physical object, and create an annotation for the physical image based on the AR image.
Type: Application
Filed: November 15, 2018
Publication date: May 21, 2020
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Robert R. Price
-
Publication number: 20200026416
Abstract: Embodiments described herein provide a system for generating semantically accurate synthetic images. During operation, the system generates a first synthetic image using a first artificial intelligence (AI) model and presents the first synthetic image in a user interface. The user interface allows a user to identify image units of the first synthetic image that are semantically irregular. The system then obtains semantic information for the semantically irregular image units from the user via the user interface and generates a second synthetic image using a second AI model based on the semantic information. The second synthetic image can be an improved image compared to the first synthetic image.
Type: Application
Filed: July 19, 2018
Publication date: January 23, 2020
Applicant: Palo Alto Research Center Incorporated
Inventors: Raja Bala, Sricharan Kallur Palli Kumar, Matthew A. Shreve
-
Publication number: 20190147333
Abstract: One embodiment facilitates generating synthetic data objects using a semi-supervised GAN. During operation, a generator module synthesizes a data object derived from a noise vector and an attribute label. The system passes, to an unsupervised discriminator module, the data object and a set of training objects which are obtained from a training data set. The unsupervised discriminator module calculates: a value indicating a probability that the data object is real; and a latent feature representation of the data object. The system passes the latent feature representation and the attribute label to a supervised discriminator module. The supervised discriminator module calculates a value indicating the probability that the attribute label is real given the data object. The system performs the aforementioned steps iteratively until the generator module produces data objects with a given attribute label which the unsupervised and supervised discriminator modules can no longer identify as fake.
Type: Application
Filed: November 29, 2017
Publication date: May 16, 2019
Applicant: Palo Alto Research Center Incorporated
Inventors: Sricharan Kallur Palli Kumar, Raja Bala, Jin Sun, Hui Ding, Matthew A. Shreve
-
Publication number: 20190130219
Abstract: One embodiment provides a system that facilitates efficient collection of training data. During operation, the system obtains, by a recording device, a first image of a physical object in a scene which is associated with a three-dimensional (3D) world coordinate frame. The system marks, on the first image, a plurality of vertices associated with the physical object, wherein a vertex has 3D coordinates based on the 3D world coordinate frame. The system obtains a plurality of second images of the physical object in the scene while changing one or more characteristics of the scene. The system projects the marked vertices onto a respective second image to indicate a two-dimensional (2D) bounding area associated with the physical object.
Type: Application
Filed: November 29, 2017
Publication date: May 2, 2019
Applicant: Palo Alto Research Center Incorporated
Inventors: Matthew A. Shreve, Sricharan Kallur Palli Kumar, Jin Sun, Gaurang R. Gavai, Robert R. Price, Hoda M. A. Eldardiry
-
Patent number: 9911055
Abstract: Methods, systems, and processor-readable media for the detection and classification of license plates. In an example embodiment, an image of a vehicle can be captured with an image-capturing unit. A license plate region can then be located in the captured image of the vehicle by extracting a set of candidate regions from the image utilizing a weak classifier. A set of candidate regions can be ranked utilizing a secondary strong classifier. The captured image can then be classified according to a confidence driven classification based on classification criteria determined by the weak classifier and the secondary strong classifier.
Type: Grant
Filed: March 8, 2016
Date of Patent: March 6, 2018
Assignee: Conduent Business Services, LLC
Inventors: Vladimir Kozitsky, Matthew Shreve, Orhan Bulan
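A schematic sketch of the two-stage cascade in this abstract: a cheap weak classifier filters candidate regions, then a stronger classifier ranks the survivors. Both scoring functions and the threshold are hypothetical stand-ins for the trained classifiers.

```python
def rank_candidates(regions, weak_score, strong_score, weak_threshold=0.3):
    """Keep regions the weak classifier accepts, ranked by the strong classifier."""
    survivors = [r for r in regions if weak_score(r) >= weak_threshold]
    return sorted(survivors, key=strong_score, reverse=True)
```

In the patent's setting the weak classifier proposes license-plate regions and the strong classifier re-scores them; the final confidence-driven classification draws on criteria from both stages.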
-
Patent number: 9875903
Abstract: A computer vision system (100) operates to monitor an environment (e.g., such as a restaurant, store or other retail establishment) including a resource located therein (e.g., such as a restroom, a dining table, a drink, condiment or supply dispenser, a trash receptacle or a tray collection rack). The system includes: an image source or camera (104) that supplies image data (130) representative of at least a portion of the environment monitored by the system, the portion including the resource therein; and an event detection device (102) including a data processor (112) and operative to detect an event involving the resource. Suitably, the event detection device is arranged to: (i) be selectively configurable by a user to define the event involving the resource; (ii) receive the image data supplied by the image source; (iii) analyze the received image data to detect the defined event; and (iv) output a notification in response to detecting the defined event.
Type: Grant
Filed: March 30, 2015
Date of Patent: January 23, 2018
Assignee: Conduent Business Services, LLC
Inventors: Matthew A. Shreve, Michael C. Mongeon, Edgar A. Bernal, Robert P. Loce
-
Publication number: 20170262723
Abstract: Methods, systems, and processor-readable media for the detection and classification of license plates. In an example embodiment, an image of a vehicle can be captured with an image-capturing unit. A license plate region can then be located in the captured image of the vehicle by extracting a set of candidate regions from the image utilizing a weak classifier. A set of candidate regions can be ranked utilizing a secondary strong classifier. The captured image can then be classified according to a confidence driven classification based on classification criteria determined by the weak classifier and the secondary strong classifier.
Type: Application
Filed: March 8, 2016
Publication date: September 14, 2017
Inventors: Vladimir Kozitsky, Matthew Shreve, Orhan Bulan
-
Publication number: 20170169297
Abstract: A system and method of monitoring a region of interest comprises obtaining visual data comprising image frames of the region of interest over a period of time, analyzing individual subjects within the region of interest, the analyzing including at least one of tracking movement of individual subjects over time within the region of interest or extracting an appearance attribute of the individual subjects, and defining a group to include individual subjects having at least one of similar movement profiles or similar appearance attributes. The tracking movement includes detecting at least one of a trajectory of an individual subject within the region of interest, a dwell of an individual subject in at least one location within the region of interest, or an entrance or exit location within the region of interest.
Type: Application
Filed: December 9, 2015
Publication date: June 15, 2017
Applicant: Xerox Corporation
Inventors: Edgar A. Bernal, Aaron M. Burry, Matthew A. Shreve, Michael C. Mongeon, Robert P. Loce, Peter Paul, Wencheng Wu
-
Patent number: 9563914
Abstract: A system for delivering one of a good and service to a customer in a retail environment includes a computer located at an order station. The computer is configured to receive an order for the one good and service. The system includes a first image capture device in communication with the computer. The first image capture device captures a first image of a customer ordering the one good and service in response to the order being submitted. The system further includes a wearable computer peripheral device configured to acquire the first image from the first image capture device and electronically display the first image to a user tasked with delivering the one good and service while carrying the wearable computer peripheral device. In this manner, an identity of the customer can be compared against the first image upon a delivery of the one good and service.
Type: Grant
Filed: April 15, 2014
Date of Patent: February 7, 2017
Assignee: XEROX CORPORATION
Inventors: Matthew A. Shreve, Michael C. Mongeon, Robert P. Loce
-
Publication number: 20160307143
Abstract: When monitoring a workspace to determine whether scheduled tasks or chores are completed according to a predetermined schedule, a video monitoring system monitors a region of interest (ROI) to identify employee-generated signals representing completion of a scheduled task. An employee makes a mark or gesture in the ROI monitored by the video monitoring system and the system analyzes pixels in each captured frame of the ROI to identify an employee signal, map the signal to a corresponding scheduled task, update the task as having been completed upon receipt of the employee signal, and alert a manager of the facility as to whether the task has been completed or not.
Type: Application
Filed: April 17, 2015
Publication date: October 20, 2016
Inventors: Michael C. Mongeon, Robert Loce, Matthew Shreve
-
Patent number: 9460367
Abstract: Systems and methods for automating an image rejection process. Features including texture, spatial structure, and image quality characteristics can be extracted from one or more images to train a classifier. Features can be calculated with respect to a test image for submission of the features to the classifier, given an operating point corresponding to a desired false positive rate. One or more inputs can be generated from the classifier as a confidence value corresponding to a likelihood of, for example: a license plate being absent in the image, the license plate being unreadable, or the license plate being obstructed. The confidence value can be compared against a threshold to determine if the image(s) should be removed from a human review pipeline, thereby reducing images requiring human review.
Type: Grant
Filed: December 5, 2014
Date of Patent: October 4, 2016
Assignee: Xerox Corporation
Inventors: Vladimir Kozitsky, Matthew Shreve, Aaron M. Burry
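A minimal sketch of the thresholding rule described above: the classifier emits a confidence for each rejection reason (plate absent, unreadable, obstructed), and an image is pulled from the human-review queue when any confidence clears the threshold. The threshold value here is an illustrative assumption standing in for the operating point tied to a desired false-positive rate.

```python
def auto_reject(confidences, threshold=0.8):
    """Return True when the image can be removed from human review,
    i.e. when any rejection-reason confidence reaches the threshold."""
    return max(confidences.values()) >= threshold
```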
-
Publication number: 20160148076
Abstract: Systems and methods for automating an image rejection process. Features including texture, spatial structure, and image quality characteristics can be extracted from one or more images to train a classifier. Features can be calculated with respect to a test image for submission of the features to the classifier, given an operating point corresponding to a desired false positive rate. One or more inputs can be generated from the classifier as a confidence value corresponding to a likelihood of, for example: a license plate being absent in the image, the license plate being unreadable, or the license plate being obstructed. The confidence value can be compared against a threshold to determine if the image(s) should be removed from a human review pipeline, thereby reducing images requiring human review.
Type: Application
Filed: December 5, 2014
Publication date: May 26, 2016
Inventors: Vladimir Kozitsky, Matthew Shreve, Aaron M. Burry