Trainable Classifiers Or Pattern Recognizers (e.g., Adaline, Perceptron) Patents (Class 382/159)
  • Patent number: 10339369
    Abstract: Facial expressions are recognized using relations determined by class-to-class comparisons. In one example, descriptors are determined for each of a plurality of facial expression classes. Pair-wise facial expression class-to-class tasks are defined. A set of discriminative image patches is learned for each task using labelled training images. Each image patch is a portion of an image. Differences in the learned image patches in each training image are determined for each task. A relation graph is defined for each image for each task using the differences. A final descriptor is determined for each image by stacking and concatenating the relation graphs for each task. Finally, the final descriptors of the images are fed into a training algorithm to learn a final facial expression model.
    Type: Grant
    Filed: September 16, 2015
    Date of Patent: July 2, 2019
    Assignee: INTEL CORPORATION
    Inventors: Anbang Yao, Junchao Shao, Yurong Chen
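    Illustrative sketch for the abstract above: a minimal Python example of building a per-image descriptor by stacking and concatenating per-task relation graphs of patch differences, then training a classifier on the result. The helper names, toy data, and the LinearSVC trainer are assumptions, not the patented implementation.
```python
# Hypothetical sketch only; names and the SVM trainer are assumptions, not the patent's code.
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def relation_graph(patches_a, patches_b):
    """Pairwise differences between learned patch features for one class-to-class task."""
    return (patches_a[:, None, :] - patches_b[None, :, :]).reshape(-1)

def final_descriptor(per_task_patches):
    """Stack and concatenate the per-task relation graphs into one descriptor."""
    return np.concatenate([relation_graph(a, b) for a, b in per_task_patches])

# Toy setup: 7 expression classes -> C(7,2) = 21 pair-wise tasks, each contributing
# a small set of learned patch features (n_patches x d) per image.
n_classes, n_patches, d = 7, 4, 8
tasks = list(combinations(range(n_classes), 2))
rng = np.random.default_rng(0)
X = np.stack([final_descriptor([(rng.normal(size=(n_patches, d)),
                                 rng.normal(size=(n_patches, d))) for _ in tasks])
              for _ in range(60)])
y = rng.integers(0, n_classes, size=60)
expression_model = LinearSVC(max_iter=5000).fit(X, y)   # final facial expression model
```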
  • Patent number: 10331975
    Abstract: Systems, methods, and computer readable media related to training and/or using a neural network model. The trained neural network model can be utilized to generate (e.g., over a hidden layer) a spectral image based on a regular image, and to generate output indicative of one or more features present in the generated spectral image (and present in the regular image since the spectral image is generated based on the regular image). As one example, a regular image may be applied as input to the trained neural network model, a spectral image generated over multiple layers of the trained neural network model based on the regular image, and output generated over a plurality of additional layers based on the spectral image. The generated output may be indicative of various features, depending on the training of the additional layers of the trained neural network model.
    Type: Grant
    Filed: November 29, 2016
    Date of Patent: June 25, 2019
    Assignee: GOOGLE LLC
    Inventor: Alexander Gorban
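    Minimal PyTorch sketch of the architecture outlined in the abstract above: early layers map a regular RGB image to a hyperspectral-like intermediate representation, and additional layers produce the feature output. Layer sizes, channel counts, and module names are illustrative assumptions.
```python
import torch
import torch.nn as nn

class SpectralFeatureNet(nn.Module):
    def __init__(self, spectral_bands=16, num_features=10):
        super().__init__()
        # RGB input -> generated "spectral image" (more channels than the input)
        self.to_spectral = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, spectral_bands, 3, padding=1),
        )
        # additional layers: spectral image -> output indicative of features
        self.head = nn.Sequential(
            nn.Conv2d(spectral_bands, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_features),
        )

    def forward(self, rgb):
        spectral = self.to_spectral(rgb)   # intermediate spectral image
        return self.head(spectral), spectral

net = SpectralFeatureNet()
features, spectral = net(torch.randn(1, 3, 64, 64))
```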
  • Patent number: 10332091
    Abstract: A tax-exempt sale document creating system including a recognition unit configured to recognize, based on an obtained image, a described content of a passport or a qualification document that indicates a qualification for entering a country; an information obtaining unit configured to obtain price information on a selling price of a commodity; a printing unit configured to print a tax document for a tax-exempt sale for the commodity using the described content recognized by the recognition unit; and a determination unit configured to determine, based on the price information obtained by the information obtaining unit, whether an image of the passport or the qualification document in the obtained image should be printed with the tax document for the tax-exempt sale.
    Type: Grant
    Filed: May 24, 2016
    Date of Patent: June 25, 2019
    Assignee: Ricoh Company, Ltd.
    Inventors: Toshiki Takai, Toshinori Takaki
  • Patent number: 10331976
    Abstract: In image classification, each class of a set of classes is embedded in an attribute space where each dimension of the attribute space corresponds to a class attribute. The embedding generates a class attribute vector for each class of the set of classes. A set of parameters of a prediction function operating in the attribute space respective to a set of training images annotated with classes of the set of classes is optimized such that the prediction function with the optimized set of parameters optimally predicts the annotated classes for the set of training images. The prediction function with the optimized set of parameters is applied to an input image to generate at least one class label for the input image. The image classification does not include applying a class attribute classifier to the input image.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: June 25, 2019
    Assignee: XEROX CORPORATION
    Inventors: Zeynep Akata, Florent C. Perronnin, Zaid Harchaoui, Cordelia L. Schmid
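    Sketch of label-embedding classification in an attribute space, as the abstract above describes it: each class is embedded as a class-attribute vector, and a bilinear prediction function scores image features against those embeddings without applying any per-attribute classifier to the image. The parameter matrix W, the ranking-style update rule, and the toy data are illustrative assumptions.
```python
import numpy as np

def predict(W, x, class_attr):
    """argmax_y  x^T W phi(y)  over the class-attribute vectors phi(y)."""
    scores = class_attr @ (W.T @ x)
    return int(np.argmax(scores))

def train(X, y, class_attr, lr=0.01, epochs=20):
    """Simple ranking-style updates so annotated classes score highest (illustrative)."""
    d, a = X.shape[1], class_attr.shape[1]
    W = np.zeros((d, a))
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            yhat = predict(W, xi, class_attr)
            if yhat != yi:  # push the annotated class above the current prediction
                W += lr * np.outer(xi, class_attr[yi] - class_attr[yhat])
    return W

rng = np.random.default_rng(1)
class_attr = rng.integers(0, 2, size=(5, 12)).astype(float)  # 5 classes, 12 attributes
X, y = rng.normal(size=(100, 20)), rng.integers(0, 5, size=100)
W = train(X, y, class_attr)
label = predict(W, X[0], class_attr)   # class label for an input image's features
```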
  • Patent number: 10325163
    Abstract: A computer is programmed to detect an object based on vehicle camera image data. The computer determines a light source and determines, based in part on a light source position, that the detected object is a shadow. The computer then navigates the vehicle without avoiding the object upon determining that the detected object is a shadow.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: June 18, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Oswaldo Perez Barrera, Alvaro Jimenez Hernandez
  • Patent number: 10325371
    Abstract: A method for segmenting an image by using each of a plurality of weighted convolution filters for each of grid cells to be used for converting modes according to classes of areas is provided to satisfy level 4 of an autonomous vehicle. The method includes steps of: a learning device (a) instructing (i) an encoding layer to generate an encoded feature map and (ii) a decoding layer to generate a decoded feature map; (b) if a specific decoded feature map is divided into the grid cells, instructing a weight convolution layer to set weighted convolution filters therein to correspond to the grid cells, and to apply a weight convolution operation to the specific decoded feature map; and (c) backpropagating a loss. The method is applicable to CCTV for surveillance as the neural network may have respective optimum parameters to be applied to respective regions with respective distances.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: June 18, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
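    Illustrative PyTorch sketch of the grid-cell-wise weighted convolution the abstract above describes: a decoded feature map is divided into grid cells and each cell gets its own convolution filter, so parameters can differ by image region. The grid size, kernel size, and channel counts are assumptions for the example.
```python
import torch
import torch.nn as nn

class GridWeightedConv(nn.Module):
    def __init__(self, channels, grid=(2, 2)):
        super().__init__()
        self.grid = grid
        # one 3x3 filter bank per grid cell
        self.cell_convs = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(grid[0] * grid[1])
        )

    def forward(self, fmap):
        gh, gw = self.grid
        h, w = fmap.shape[-2] // gh, fmap.shape[-1] // gw
        rows = []
        for i in range(gh):
            cols = []
            for j in range(gw):
                cell = fmap[..., i * h:(i + 1) * h, j * w:(j + 1) * w]
                cols.append(self.cell_convs[i * gw + j](cell))   # per-cell weights
            rows.append(torch.cat(cols, dim=-1))
        return torch.cat(rows, dim=-2)

layer = GridWeightedConv(channels=8, grid=(2, 2))
out = layer(torch.randn(1, 8, 32, 32))   # same spatial size as the input feature map
```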
  • Patent number: 10318796
    Abstract: Age progression of a test facial image is facilitated by compiling training data, including a training set(s) having selected initial images of subjects by gender and age-group. In addition, the age progression includes manipulating the training data, including: for a given age-group of a training set, substantially aligning respective face shapes; determining a common frame based on the aligned shapes; substantially aligning respective face appearances to generate a shape-free form corresponding to the face appearance of each subject, using the substantially aligned shapes to generate an age-specific shape-dictionary for each age-group, and a common shape-dictionary for the age-groups of the training set, and using the aligned appearances to generate at least an age-specific appearance-dictionary for each age-group, and a common appearance-dictionary for the age-groups of the training set.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: June 11, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Saritha Arunkumar, Nalini K. Ratha, Christos Sagonas, Peter W. Waggett
  • Patent number: 10318892
    Abstract: Application of inter-class and intra-class filtering, based on aggregate point-to-point distances, to vector data for purposes of filtering the vector data for purposes of pattern recognition. In some embodiments: (i) the inter-class filtering is based on Euclidean distance, in all dimensions, between vector data points in vector space; and/or (ii) the intra-class filtering is based on a distance, in all dimensions, between vector data points in vector space.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: June 11, 2019
    Assignee: International Business Machines Corporation
    Inventors: Saritha Arunkumar, Su Yang
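    Minimal numpy sketch of inter-/intra-class filtering driven by aggregate point-to-point Euclidean distances, as in the abstract above. The specific rule used here (keep points far from other classes and close to their own class, with quantile cutoffs) is an assumed reading for illustration, not the patented criterion.
```python
import numpy as np

def filter_vectors(X, y, inter_q=0.25, intra_q=0.75):
    keep = np.ones(len(X), dtype=bool)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # all pairwise distances
    same = y[:, None] == y[None, :]
    inter = np.array([D[i, ~same[i]].mean() for i in range(len(X))])
    intra = np.array([D[i, same[i] & (np.arange(len(X)) != i)].mean() for i in range(len(X))])
    keep &= inter >= np.quantile(inter, inter_q)   # drop points crowding other classes
    keep &= intra <= np.quantile(intra, intra_q)   # drop within-class outliers
    return X[keep], y[keep]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
y = np.repeat([0, 1], 30)
Xf, yf = filter_vectors(X, y)   # filtered vector data for pattern recognition
```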
  • Patent number: 10318848
    Abstract: A method of training for image classification includes labelling a crop from an image including an object of interest. The crop may be labelled with an indication of whether the object of interest is framed, partially framed or not present in the crop. The method may also include assigning a fully framed class to the labelled crop, including the object of interest, if the object of interest is framed. A labelled crop may be assigned a partially framed class if the object of interest is partially framed. A background class may be assigned to a labelled crop if the object of interest is not present in the crop.
    Type: Grant
    Filed: August 25, 2016
    Date of Patent: June 11, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Daniel Hendricus Franciscus Dijkman, David Jonathan Julian
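    Hypothetical sketch of the crop-labelling rule in the abstract above: a crop is assigned the fully framed class when the object of interest lies entirely inside it, the partially framed class when the object only overlaps it, and the background class otherwise. The box-geometry test is an assumption for illustration.
```python
FRAMED, PARTIALLY_FRAMED, BACKGROUND = 0, 1, 2

def label_crop(crop_box, object_box):
    cx0, cy0, cx1, cy1 = crop_box
    ox0, oy0, ox1, oy1 = object_box
    inside = cx0 <= ox0 and cy0 <= oy0 and ox1 <= cx1 and oy1 <= cy1
    overlaps = ox0 < cx1 and cx0 < ox1 and oy0 < cy1 and cy0 < oy1
    if inside:
        return FRAMED            # object fully framed by the crop
    if overlaps:
        return PARTIALLY_FRAMED  # object only partly inside the crop
    return BACKGROUND            # object not present in the crop

assert label_crop((0, 0, 100, 100), (10, 10, 50, 50)) == FRAMED
assert label_crop((0, 0, 100, 100), (90, 90, 150, 150)) == PARTIALLY_FRAMED
assert label_crop((0, 0, 100, 100), (200, 200, 250, 250)) == BACKGROUND
```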
  • Patent number: 10307125
    Abstract: According to one embodiment, an image processing apparatus includes processing circuitry. The processing circuitry obtains a parameter value representing temporal information on blood flow for each pixel of multiple image data items, and generates a parameter image by determining a pixel value of each pixel according to the parameter value representing the temporal information on blood flow. The processing circuitry further generates a composite image of an X-ray fluoroscopic image of an object obtained in real time and a road map image using at least a part of the parameter image, and causes a display to display the composite image.
    Type: Grant
    Filed: April 15, 2016
    Date of Patent: June 4, 2019
    Assignee: Toshiba Medical Systems Corporation
    Inventors: Ryoichi Nagae, Yasuto Hayatsu, Yoshiaki Iijima, Naoki Uchida, Yuichiro Watanabe, Takuya Sakaguchi
  • Patent number: 10296982
    Abstract: A system and method for evaluating an insurance applicant as part of an underwriting process to determine one or more appropriate terms of life or other insurance coverage, such as premiums. A processing element employing a neural network is trained to correlate aspects of appearance and/or voice with personal and/or health-related characteristics. A database of images and/or voice recordings of individuals with known personal and/or health-related characteristics is provided for this purpose. The processing element is then provided with an image and/or voice recording of the insurance applicant. The image may be an otherwise non-diagnostic image, such as an ordinary “selfie.”
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: May 21, 2019
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventors: Michael L. Bernico, Jeffrey Myers
  • Patent number: 10296848
    Abstract: Systems and methods for intelligently training a machine learning model include: configuring a machine learning (ML) training data request for a pre-existing machine learning classification model; transmitting the machine learning training data request to each of a plurality of external training data sources, wherein each of the plurality of external training data sources is different; collecting and storing the machine learning training data from each of the plurality of external training data sources; processing the collected machine learning training data using a predefined training data processing algorithm; and in response to processing the collected machine learning training data, deploying a subset of the collected machine learning training data into a live machine learning model.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: May 21, 2019
    Assignee: Clinc, Inc.
    Inventors: Jason Mars, Lingjia Tang, Michael Laurenzano, Johann Hauswald
  • Patent number: 10289963
    Abstract: One embodiment provides a method for developing a text analytics program for extracting at least one target concept including: utilizing at least one processor to execute computer code that performs the steps of: initiating a development tool that accepts user input to develop rules for extraction of features of the at least one target concept within a dataset comprising textual information; developing, using the rules for feature extraction, an evaluation dataset comprising at least one document annotated with the at least one target concept to be extracted by the text analytics program; creating, using the rules for feature extraction, a rule-based annotator to extract the at least one target concept; training, using the evaluation dataset, a machine-learning annotator to extract the at least one target concept within the dataset; combining the rule-based annotator and the machine learning annotator to form a combined annotator; evaluating, using the evaluation dataset, extraction performance of the combined annotator.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: May 14, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Laura Chiticariu, Jeffrey Thomas Kreulen, Rajasekar Krishnamurthy, Prithviraj Sen, Shivakumar Vaithyanathan
  • Patent number: 10282615
    Abstract: A system and methodologies for neuromorphic (NM) vision simulate conventional analog NM system functionality and generate digital NM image data that facilitate improved object detection, classification, and tracking.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: May 7, 2019
    Assignees: Volkswagen AG, Audi AG, Porsche AG
    Inventors: Edmund Dawes Zink, Douglas Allen Hauger, Lutz Junge, Jerramy L. Gipson, Allen Khorasani, Nils Kuepper, Stefan T. Hilligardt
  • Patent number: 10282606
    Abstract: In an example embodiment, a web page is obtained using a web page address stored in a first record and is parsed to extract one or more images from the web page along with a first plurality of features for each of the one or more images from the web page. Information about each image of the web page and the extracted first plurality of features for the web page are input into a supervised machine learning classifier to calculate a logo confidence score for each image of the web page, the logo confidence score indicating the probability that the image is an organization logo. In response to a particular image in the web page having a logo confidence score transgressing a first threshold, the particular image is injected into an organization logo field of the first record.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: May 7, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Songtao Guo, Christopher Matthew Degiere, Jingjing Huang, Aarti Kumar, Alex Ching Lai, Xian Li
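    Illustrative sketch of the thresholding step described in the abstract above: a supervised classifier scores each image extracted from a web page, and the image whose logo confidence exceeds the threshold is injected into the record's logo field. The feature extractor, classifier, field names, and threshold value are hypothetical placeholders.
```python
from typing import Callable, List, Optional, Tuple
from dataclasses import dataclass

LOGO_THRESHOLD = 0.8   # assumed value of the first threshold

@dataclass
class Record:
    page_url: str
    organization_logo: Optional[str] = None

def update_logo_field(record: Record,
                      images: List[Tuple[str, bytes]],
                      classifier: Callable[[list], float],
                      extract_features: Callable[[bytes], list]) -> Record:
    """Score every image on the page and inject the best one if it clears the threshold."""
    best_url, best_score = None, 0.0
    for image_url, raw in images:
        score = classifier(extract_features(raw))   # logo confidence: P(image is a logo)
        if score > best_score:
            best_url, best_score = image_url, score
    if best_score > LOGO_THRESHOLD:                  # confidence transgresses the threshold
        record.organization_logo = best_url
    return record
```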
  • Patent number: 10275819
    Abstract: Incompatible item pairings may be eliminated or at least reduced when multiple items are presented. A pairwise approach is taken to train a machine learning model to return an incompatibility score for any given pair of items, which indicates a degree of incompatibility between the pair of items. Once trained, the machine learning model may be used to determine an incompatibility score for each unique pairing of items in a set of multiple items. In some embodiments, a graph is generated having nodes that correspond to the multiple items and undirected edges between pairs of the nodes. Scores are generated for each edge of the graph, a minimum spanning tree in the graph is determined, and the items are ranked based at least in part on the minimum spanning tree so that the items can be presented according to the ranking.
    Type: Grant
    Filed: May 13, 2015
    Date of Patent: April 30, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Joseph Edwin Johnson, Mohamed Mostafa Ibrahim Elshenawy, Shiblee Imtiaz Hasan, Nathan Eugene Masters, JaeHa Oh, Benjamin Schwartz
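    Sketch of the pairwise ranking flow in the abstract above, assuming networkx for the graph step: a model scores every unique item pair for incompatibility, the scores become edge weights of an undirected graph, a minimum spanning tree is computed, and the items are ordered by a traversal of that tree. The traversal choice (BFS from the item with the lowest total incompatibility) is an assumption.
```python
from itertools import combinations
import networkx as nx

def rank_items(items, incompatibility):
    """incompatibility(a, b) -> score; higher means the pair clashes more."""
    g = nx.Graph()
    g.add_nodes_from(items)
    for a, b in combinations(items, 2):
        g.add_edge(a, b, weight=incompatibility(a, b))
    mst = nx.minimum_spanning_tree(g)
    root = min(items, key=lambda i: sum(d["weight"] for _, _, d in g.edges(i, data=True)))
    return list(nx.bfs_tree(mst, root))    # presentation order

items = ["hat", "scarf", "sandals", "coat"]
scores = {frozenset(p): s for p, s in [
    (("hat", "scarf"), 0.1), (("hat", "sandals"), 0.9), (("hat", "coat"), 0.2),
    (("scarf", "sandals"), 0.8), (("scarf", "coat"), 0.1), (("sandals", "coat"), 0.7)]}
print(rank_items(items, lambda a, b: scores[frozenset((a, b))]))
```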
  • Patent number: 10268923
    Abstract: A system and methods are provided for dynamically classifying, in real time, cases received in a stream of big data. The system comprises multiple remote autonomous classifiers. Each autonomous classifier generates a classification scheme comprising a plurality of classifier parameters. Upon receiving a case, the system determines from among the plurality of classifier parameters a most similar classifier parameter, and the case may be added to a buffer of cases represented by the most similar classifier parameter. When a measure of error between the case and the most similar classifier parameter is greater than a threshold, the buffer is dynamically regrouped into one or more new buffers, according to a criterion of segmentation quality. One or more new classifier parameters, representing the one or more regrouped case buffers, are added to the classification scheme.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: April 23, 2019
    Assignee: BAR-ILAN UNIVERSITY
    Inventor: Roy Gelbard
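    Hedged sketch of the streaming scheme in the abstract above: each incoming case is assigned to the most similar classifier parameter (a centroid here); when the error between the case and that parameter exceeds a threshold, the parameter's buffer is regrouped into new groups whose representatives are added to the scheme. Using k-means with k=2 as the regrouping criterion is an assumption for the example.
```python
import numpy as np
from sklearn.cluster import KMeans

class StreamClassifier:
    def __init__(self, first_case, threshold=2.0):
        self.params = [np.array(first_case, dtype=float)]    # classifier parameters
        self.buffers = [[np.array(first_case, dtype=float)]]
        self.threshold = threshold

    def add_case(self, case):
        case = np.array(case, dtype=float)
        errs = [np.linalg.norm(case - p) for p in self.params]
        i = int(np.argmin(errs))                 # most similar classifier parameter
        self.buffers[i].append(case)
        if errs[i] > self.threshold:             # error too large: regroup this buffer
            data = np.vstack(self.buffers[i])
            km = KMeans(n_clusters=min(2, len(data)), n_init=10).fit(data)
            del self.params[i], self.buffers[i]
            for c in km.cluster_centers_:        # new classifier parameters
                self.params.append(c)
                self.buffers.append([c])
        return i

clf = StreamClassifier(first_case=[0.0, 0.0])
for case in np.random.default_rng(3).normal(size=(20, 2)) * 3:
    clf.add_case(case)
```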
  • Patent number: 10248860
    Abstract: A method of identifying, with a camera, an object in an image of a scene, by determining the distinctiveness of each of a number of attributes of an object of interest, independent of the camera viewpoint, determining the detectability of each of the attributes based on the relative orientation of a candidate object in the image of the scene, determining a camera setting for viewing the candidate object based on the distinctiveness of an attribute, so as to increase the detectability of the attribute, and capturing an image of the candidate object with the camera setting to determine the confidence that the candidate object is the object of interest.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: April 2, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Fei Mai, Geoffrey Richard Taylor
  • Patent number: 10242295
    Abstract: The present invention relates to a method and apparatus for generating and updating a classifier, detecting objects, and an image processing device. A method for generating a multi-class classifier comprises the following steps: generating at least one one-class background classifier by using a one-class object classifier and background image regions obtained from a sequence of images; and assembling the one-class object classifier and the at least one one-class background classifier into a multi-class classifier.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: March 26, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Long Jiang, Yong Jiang, Wenwen Zhang
  • Patent number: 10242292
    Abstract: A set of virtual images can be generated based on one or more real images and target rendering specifications, such that the set of virtual images correspond to (for example) different rendering specifications (or combinations thereof) than do the real images. A machine-learning model can be trained using the set of virtual images. Another real image can then be processed using the trained machine-learning model. The processing can include segmenting the other real image to detect whether and/or which objects are represented (and/or a state of the object). The object data can then be used to identify (for example) a state of a procedure.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: March 26, 2019
    Assignee: Digital Surgery Limited
    Inventors: Odysseas Zisimopoulos, Evangello Flouty, Imanol Luengo Muntion, Mark Stacey, Sam Muscroft, Petros Giataganas, Andre Chow, Jean Nehme, Danail Stoyanov
  • Patent number: 10235771
    Abstract: Techniques are provided for estimating a three-dimensional pose of an object. An image including the object can be obtained, and a plurality of two-dimensional (2D) projections of a three-dimensional (3D) bounding box of the object in the image can be determined. The plurality of 2D projections of the 3D bounding box can be determined by applying a trained regressor to the image. The trained regressor is trained to predict two-dimensional projections of the 3D bounding box of the object in a plurality of poses, based on a plurality of training images. The three-dimensional pose of the object is estimated using the plurality of 2D projections of the 3D bounding box.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: March 19, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Mahdi Rad, Markus Oberweger, Vincent Lepetit
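    Sketch of the pose-recovery step implied by the abstract above, assuming OpenCV: a trained regressor predicts the 2D projections of the object's 3D bounding-box corners, and a PnP solve between those 2D points and the known 3D corner coordinates yields the 3D pose. The regressor itself is mocked out here; camera intrinsics and box dimensions are illustrative.
```python
import numpy as np
import cv2

def bbox_corners(w, h, d):
    """Eight corners of an axis-aligned 3D bounding box centred at the origin."""
    return np.array([[x, y, z] for x in (-w/2, w/2) for y in (-h/2, h/2) for z in (-d/2, d/2)],
                    dtype=np.float32)

def estimate_pose(predicted_2d, corners_3d, camera_matrix):
    """Recover rotation (Rodrigues vector) and translation from the 2D corner projections."""
    ok, rvec, tvec = cv2.solvePnP(corners_3d, predicted_2d.astype(np.float32),
                                  camera_matrix, None)
    return rvec, tvec

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
corners = bbox_corners(0.2, 0.1, 0.3)
# stand-in for regressor output: project the corners from a known pose, then recover it
rvec_gt, tvec_gt = np.array([[0.1], [0.2], [0.0]]), np.array([[0.0], [0.0], [1.0]])
proj, _ = cv2.projectPoints(corners, rvec_gt, tvec_gt, K, None)
rvec, tvec = estimate_pose(proj.reshape(-1, 2), corners, K)
```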
  • Patent number: 10235603
    Abstract: Method, device and computer-readable medium for sensitive picture recognition are provided in the disclosure. Aspects of the disclosure provide a method for sensitive picture recognition. The method includes receiving a picture to be processed from a picture library associated with a user account, applying a sensitive picture recognition model to the picture to determine whether the picture is a sensitive picture or not, and providing a privacy protection associated with the user account to the picture when the picture is the sensitive picture. In an example, the method includes storing the picture in a private album under the user account with access security protection.
    Type: Grant
    Filed: July 11, 2016
    Date of Patent: March 19, 2019
    Assignee: Xiaomi Inc.
    Inventors: Tao Zhang, Fei Long, Zhijun Chen
  • Patent number: 10234554
    Abstract: Embodiments described herein simplify the recognition of objects in synthetic aperture radar (SAR) imagery. This may be achieved by processing the image in order to make the shadow caused by the object appear more similar to the object. Alternatively, this may be achieved by processing the image in order to make the layover caused by the object appear more similar to the object. Manipulation of the shadow caused by the object and the layover caused by the object may comprise altering the aspect ratio of the image, and in the case of manipulating the shadow caused by the object, may further comprise transforming the image by reflection or rotation. The aspect ratio of the image may be altered based on information about the image collection geometry, obtained by the SAR.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: March 19, 2019
    Assignee: THALES HOLDINGS UK PLC
    Inventor: Malcolm Stevens
  • Patent number: 10235600
    Abstract: The present invention provides a system and method for structured low-rank matrix factorization of data. The system and method involve solving an optimization problem that is not convex, but theoretical results show that a rank-deficient local minimum gives a global minimum. The system and method also involve an optimization strategy that is highly parallelizable and can be performed using a highly reduced set of variables. The present invention can be used for many large scale problems, with examples in biomedical video segmentation and hyperspectral compressed recovery.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: March 19, 2019
    Assignee: The Johns Hopkins University
    Inventors: Rene Vidal, Benjamin Haeffele, Eric D. Young
  • Patent number: 10229503
    Abstract: Methods, apparatuses, and computer-readable media are provided for splitting one or more merged blobs for one or more video frames. A blob detected for a current video frame is identified. The identified blob includes pixels of at least a portion of a foreground object in the current video frame. The identified blob is determined to be associated with two or more blob trackers from a plurality of blob trackers. The plurality of blob trackers are received from an object tracking operation performed for a previous video frame. It is then determined whether one or more splitting conditions are met. The splitting conditions can be based on a spatial relationship between bounding regions of the two or more blob trackers and a bounding region of the identified blob. The identified blob can be split into a first blob and a second blob in response to determining the one or more splitting conditions are met.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: March 12, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Ying Chen, Ning Bi, Zhongmin Wang
  • Patent number: 10223583
    Abstract: In an object detection apparatus 1, a window definer 11 defines a window relative to the location of a pixel in an input image 20. A classification value calculator 13 calculates a classification value indicative of the likelihood that a detection target is present in the window image contained in the window based on the feature data of the detection target. A classification image generator 14 arranges the classification value calculated from the window image according to the pixel location to generate a classification image. An integrator 15 integrates the classification image and a past classification image 42 generated from a past input image input prior to the input image 20 to generate an integrated image 45. A determiner 16 determines whether the detection target is present in the input image 20 based on the integrated image 45.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: March 5, 2019
    Assignee: MegaChips Corporation
    Inventors: Yuki Haraguchi, Hiromu Hasegawa
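    Hedged numpy sketch of the flow in the abstract above: a window around each pixel is scored by a classifier to form a classification image, that image is integrated with the classification image from a past input, and the integrated image determines whether the target is present. Exponential averaging as the integrator and a simple max test as the determiner are assumptions.
```python
import numpy as np

def classification_image(image, classify_window, win=8):
    """Slide a window over the image and arrange the classification values by pixel location."""
    h, w = image.shape
    out = np.zeros((h - win, w - win))
    for i in range(h - win):
        for j in range(w - win):
            out[i, j] = classify_window(image[i:i + win, j:j + win])
    return out

def integrate(current, past, decay=0.5):
    """Integrate the current classification image with the past one (assumed rule)."""
    return current if past is None else decay * past + (1 - decay) * current

def target_present(integrated, threshold=0.7):
    return bool(integrated.max() >= threshold)

classify = lambda patch: float(patch.mean())   # toy classifier: bright windows look like the target
rng = np.random.default_rng(7)
past = None
for frame in rng.uniform(size=(3, 32, 32)):
    past = integrate(classification_image(frame, classify), past)
print(target_present(past))
```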
  • Patent number: 10216983
    Abstract: A security monitoring technique includes receiving data related to one or more individuals from one or more cameras in an environment. Based on the input data from the cameras, agent-based simulators are executed that each operate to generate a model of behavior of a respective individual, wherein an output of each model is symbolic sequences representative of internal experiences of the respective individual during simulation. Based on the symbolic sequences, a subsequent behavior for each of the respective individuals is predicted when the symbolic sequences match a query symbolic sequence for a query behavior.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: February 26, 2019
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Peter Henry Tu, Tao Gao, Jilin Tu
  • Patent number: 10210179
    Abstract: This disclosure describes systems and methods for identifying and ranking relevant and diverse image search results in response to a query. An optimum set of features is identified for every query. The optimum set of features can be selected from a predefined set of features and can be selected based on a variance across features derived from an initial set of objects returned in response to the query. The optimum set of features can then be used to re-rank the initial set of objects or to search for a second set of objects and rank the second set of objects.
    Type: Grant
    Filed: November 18, 2008
    Date of Patent: February 19, 2019
    Assignee: EXCALIBUR IP, LLC
    Inventors: Roelof van Zwol, Reinier H. van Leuken
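    Rough sketch of the per-query feature selection the abstract above describes: the variance of each candidate feature is measured over the initial result set, a high-variance subset is kept as the optimum features, and the results are re-ranked in that subspace. Re-ranking greedily for diversity (farthest-first) is an assumed choice, not necessarily the patented one.
```python
import numpy as np

def rerank(initial_results, k_features=5):
    """initial_results: (n_objects, n_features) feature matrix in initial rank order."""
    variances = initial_results.var(axis=0)
    selected = np.argsort(variances)[-k_features:]        # optimum feature subset for this query
    X = initial_results[:, selected]
    order = [0]                                           # keep the top result first
    remaining = set(range(1, len(X)))
    while remaining:                                      # farthest-first traversal for diversity
        dists = {i: min(np.linalg.norm(X[i] - X[j]) for j in order) for i in remaining}
        nxt = max(dists, key=dists.get)
        order.append(nxt)
        remaining.remove(nxt)
    return order

rng = np.random.default_rng(4)
print(rerank(rng.normal(size=(8, 20))))
```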
  • Patent number: 10205871
    Abstract: An imaging device may be configured to monitor a field of view for various objects or events occurring therein. The imaging device may capture a plurality of images at various focal lengths, identify a region of interest including one or more semantic objects therein, and determine measures of the levels of blur or sharpness within the regions of interest of the images. Based on their respective focal lengths and measures of their respective levels of blur or sharpness, a focal length for capturing subsequent images with sufficient clarity may be predicted. The imaging device may be adjusted to capture images at the predicted focal length, and such images may be captured. Feedback for further adjustments to the imaging device may be identified by determining measures of the levels of blur or sharpness within the subsequently captured images.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: February 12, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Dushyant Goyal, Pragyana K. Mishra
  • Patent number: 10192129
    Abstract: Systems and methods are disclosed for selecting target objects within digital images. In particular, in one or more embodiments, the disclosed systems and methods generate a trained neural network based on training digital images and training indicators. Moreover, one or more embodiments of the disclosed systems and methods utilize a trained neural network and iterative user indicators to select targeted objects in digital images. Specifically, the disclosed systems and methods can transform user indicators into distance maps that can be utilized in conjunction with color channels and a trained neural network to identify pixels that reflect the target object.
    Type: Grant
    Filed: November 18, 2015
    Date of Patent: January 29, 2019
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Brian Price, Scott Cohen, Ning Xu
  • Patent number: 10186081
    Abstract: A tracker is described which comprises an input configured to receive captured sensor data depicting an object. The tracker has a processor configured to access a rigged, smooth-surface model of the object and to compute values of pose parameters of the model by calculating an optimization to fit the model to data related to the captured sensor data. Variables representing correspondences between the data and the model are included in the optimization jointly with the pose parameters.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: January 22, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jonathan James Taylor, Thomas Joseph Cashman, Andrew William Fitzgibbon, Toby Sharp, Jamie Daniel Joseph Shotton
  • Patent number: 10186060
    Abstract: A computer generates connection matrixes corresponding to subgraphs extracted from source graphs. The connection matrixes include a plurality of elements each describing a connection between nodes in a corresponding subgraph or between a node in the corresponding subgraph and a neighboring node connected to one of the nodes in the corresponding subgraph. Based on the connection matrixes, the computer then generates a reference matrix that indicates a characteristic pattern of connections of nodes in the subgraphs, taking into consideration an order in which these nodes are arranged. The computer further performs a node-ordering swap operation on individual subgraphs, such that a submatrix representing node-to-node connections in a subgraph will be more similar to the reference matrix. The node-ordering swap operation includes changing the order of two nodes in a subgraph or swapping one node in a subgraph with a neighboring node connected to that subgraph.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: January 22, 2019
    Assignee: Fujitsu Limited
    Inventor: Koji Maruhashi
  • Patent number: 10180932
    Abstract: Systems and methods are provided for creating tables using auto-generated templates. Reports including lines of text to be extracted into tables are received. An auto define input is received to auto-generate the tables corresponding to the reports. Groups of lines are identified from among the lines of text in the reports. A detail group and relevant groups are selected and identified from among the groups of lines. A final detail group is created by merging the detail group with at least a portion of the relevant groups. Append groups are identified from among the groups of lines not included in the final detail group. Templates corresponding to the final detail group and the append groups are generated. Text is extracted from the reports based on the templates. Tables are generated using the text extracted from the reports, by assigning the text from the text fragments to entries in the tables.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: January 15, 2019
    Assignee: Datawatch Corporation
    Inventor: Mark Stephen Kyre
  • Patent number: 10176388
    Abstract: Systems and methods for segmenting an image using a convolutional neural network are described herein. A convolutional neural network (CNN) comprises an encoder-decoder architecture, and may comprise one or more Long Short Term Memory (LSTM) layers between the encoder and decoder layers. The LSTM layers provide temporal information in addition to the spatial information of the encoder-decoder layers. A subset of a sequence of images is input into the encoder layer of the CNN and a corresponding sequence of segmented images is output from the decoder layer. In some embodiments, the one or more LSTM layers may be combined in such a way that the CNN is predictive, providing predicted output of segmented images. Though the CNN provides multiple outputs, the CNN may be trained from single images or by generation of noisy ground truth datasets. Segmenting may be performed for object segmentation or free space segmentation.
    Type: Grant
    Filed: January 19, 2017
    Date of Patent: January 8, 2019
    Assignee: Zoox, Inc.
    Inventors: Mahsa Ghafarianzadeh, James William Vaisey Philbin
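    Simplified PyTorch sketch of the encoder-LSTM-decoder idea in the abstract above: an encoder compresses each frame, an LSTM carries temporal information across the frame sequence, and a decoder expands each timestep back to a segmentation map. Layer sizes and the use of a plain (non-convolutional) LSTM are simplifications, not the patented architecture.
```python
import torch
import torch.nn as nn

class SeqSegNet(nn.Module):
    def __init__(self, hidden=256, classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU(), nn.Flatten())
        self.lstm = nn.LSTM(16 * 16 * 16, hidden, batch_first=True)   # temporal information
        self.decoder = nn.Sequential(
            nn.Linear(hidden, 16 * 16 * 16), nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, classes, 4, stride=2, padding=1))

    def forward(self, frames):                        # frames: (B, T, 3, 64, 64)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        temporal, _ = self.lstm(feats)                # spatial + temporal context per timestep
        return self.decoder(temporal.flatten(0, 1)).view(b, t, -1, 64, 64)

net = SeqSegNet()
masks = net(torch.randn(2, 4, 3, 64, 64))             # (2, 4, classes, 64, 64) segmented sequence
```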
  • Patent number: 10162070
    Abstract: Acquired data that corresponds at least in part to a target structure is received. One or more subsets of a first type are formed from the acquired data. The one or more subsets of the first type are converted to one or more subsets of a second, different type.
    Type: Grant
    Filed: November 1, 2012
    Date of Patent: December 25, 2018
    Assignee: WESTERNGECO L.L.C.
    Inventors: David Nichols, Everett Mobley, Jr.
  • Patent number: 10163028
    Abstract: A computer-implemented method (40) of reducing processing time of an application for visualizing image data, the application being one of a plurality of applications selectable by a user and each of the plurality of applications comprising a pre-processing algorithm for pre-processing the image data.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: December 25, 2018
    Assignee: Koninklijke Philips N.V.
    Inventors: Eran Rubens, Bella Fadida-Specktor, Eliahu Zino, Menachem Melamed, Eran Meir, Netanel Weinberg, Maria Kaplinsky
  • Patent number: 10157352
    Abstract: Systems, methods, and computer program products for modifying a configuration of a document management system are described. In some implementations, document data are received as machine learning inputs, where the document data represent one or more documents. Then, a pattern is recognized in the one or more documents using machine learning. Based on the recognized pattern, a configuration of a document management system is modified.
    Type: Grant
    Filed: February 11, 2016
    Date of Patent: December 18, 2018
    Assignee: DataNovo, Inc.
    Inventors: Alex H Chan, Oleksandr Loginov, Eric Dew
  • Patent number: 10157178
    Abstract: A computer-implemented method according to one embodiment includes identifying a plurality of documents associated with a predetermined subject, where each of the plurality of documents contains textual data, analyzing the textual data of each of the plurality of documents to identify one or more categories within the plurality of the documents, and returning the one or more categories identified within the plurality of the documents.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: December 18, 2018
    Assignee: International Business Machines Corporation
    Inventors: Charles E. McManis, Jr., Douglas A. Smith
  • Patent number: 10146954
    Abstract: In one embodiment, a method includes managing and controlling a plurality of data-access credentials. The method further includes accessing data from a plurality of sources in a plurality of data formats. The accessing includes using one or more data-access credentials of the plurality of data-access credentials. The one or more data-access credentials are associated with at least a portion of the plurality of data sources. The method also includes abstracting the data into a standardized format for further analysis. The abstracting includes selecting the standardized format based on a type of the data. In addition, the method includes applying a security policy to the data. The applying includes identifying at least a portion of the data for exclusion from storage based on the security policy. Additionally, the method includes filtering from storage any data identified for exclusion. Further, the method includes storing the data in the standardized format.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: December 4, 2018
    Assignee: Quest Software Inc.
    Inventors: Michel Albert Brisebois, Jason Aylesworth, Curtis T. Johnstone, Andrew John Leach, Elena V. Vinogradov, Joel Stacy Blaiberg, Stephen Pope, Shawn Donald Holmesdale, GuangNing Hu
  • Patent number: 10140511
    Abstract: According to one embodiment, a computer-implemented method is configured for building a classification and/or data extraction knowledge base using an electronic form. The method includes: receiving an electronic form having associated therewith a plurality of metadata labels, each metadata label corresponding to at least one element of interest represented within the electronic form; parsing the plurality of metadata labels to determine characteristic features of the element(s) of interest; building a representation of the electronic form based on the plurality of metadata labels; generating a plurality of permutations of the representation of the electronic form by applying a predetermined set of variations to the representation; and training either a classification model, an extraction model, or both using: the representation of the electronic form, and the plurality of permutations of the representation of the electronic form. Corresponding systems and computer program products are also disclosed.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: November 27, 2018
    Assignee: KOFAX, INC.
    Inventors: Anthony Macciola, Jan W. Amtrup, Stephen Michael Thompson
  • Patent number: 10140555
    Abstract: A processing system includes an input unit and a generation unit. The input unit receives an input of a plurality of sample images obtained by capturing an object associated with a category in different conditions with respect to a plurality of the categories. The generation unit generates likelihood distribution information in which each value representing a possible feature of each pixel or each pixel block, and each value representing a likelihood belonging to each of the categories are associated with each other, based on a feature of each pixel or each pixel block included in an area relating to each of the objects within the plurality of sample images associated with a category.
    Type: Grant
    Filed: October 2, 2014
    Date of Patent: November 27, 2018
    Assignee: NEC Corporation
    Inventor: Yusuke Takahashi
  • Patent number: 10129608
    Abstract: A solution is provided for detecting video highlights of a sports video. A video highlight of a sports video is a portion of the sports video and represents a semantically important event captured in the sports video. An audio stream associated with the sports video is evaluated, e.g., the loudness and length of the loudness of the portions of the audio stream. Video segments of the sports video are selected based on the evaluation of the audio stream. Each selected video segment represents a video highlight candidate of the sports video. A trained audio classification model is used to recognize the voice patterns in the audio stream associated with each selected video segment. Based on the comparison of the recognized voice patterns with a set of desired voice patterns, one or more video segments are selected as the video highlights of the sports video.
    Type: Grant
    Filed: February 24, 2015
    Date of Patent: November 13, 2018
    Inventors: Zheng Han, Xiaowei Dai, Jiangyu Liu
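    Illustrative sketch of the audio-driven candidate selection the abstract above describes: loudness of the audio stream is measured in short windows, and runs of loud windows that last long enough become highlight candidates; a separate trained voice classifier (not shown) would then filter those candidates. The window length and thresholds are assumptions.
```python
import numpy as np

def highlight_candidates(audio, sr, win_s=1.0, loud_db=-20.0, min_len_s=3.0):
    win = int(sr * win_s)
    n = len(audio) // win
    rms = np.sqrt((audio[:n * win].reshape(n, win) ** 2).mean(axis=1))
    loud = 20 * np.log10(rms + 1e-9) > loud_db         # loud windows
    candidates, start = [], None
    for i, flag in enumerate(np.append(loud, False)):   # trailing False closes the last run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * win_s >= min_len_s:         # loudness lasted long enough
                candidates.append((start * win_s, i * win_s))
            start = None
    return candidates                                    # (start_sec, end_sec) highlight candidates

sr = 16000
audio = np.concatenate([0.01 * np.random.randn(sr * 5), 0.5 * np.random.randn(sr * 6),
                        0.01 * np.random.randn(sr * 4)])
print(highlight_candidates(audio, sr))
```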
  • Patent number: 10127476
    Abstract: A system, method and computer program product is provided. An input signal for classification and a set of pre-classified signals are received, each comprising a vector representation of an object having a plurality of vector elements. A sparse vector comprising a plurality of sparse vector coefficients is determined. Each sparse vector coefficient corresponds to a signal in the set of pre-classified signals and represents the likelihood of a match between the object represented in the input signal and the object represented in the corresponding signal. A largest sparse vector coefficient is compared with a predetermined threshold. If the largest sparse vector coefficient is less than the predetermined threshold, the corresponding signal is removed from the set of pre-classified signals. The determining and comparing are repeated using the input signal and the reduced set of pre-classified signals.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: November 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Cecilia J. Aas, Raymond S. Glover
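    Minimal sketch of the loop described in the abstract above, assuming scikit-learn's Lasso as the sparse solver: sparse coefficients over the set of pre-classified signals are computed, and if the largest coefficient stays below the threshold, the corresponding signal is removed and the solve is repeated on the reduced set. Solver choice, threshold, and data are illustrative.
```python
import numpy as np
from sklearn.linear_model import Lasso

def classify(input_signal, signals, labels, threshold=0.3, alpha=0.01):
    signals, labels = np.array(signals, dtype=float), list(labels)
    while len(labels) > 0:
        # sparse vector of coefficients, one per pre-classified signal
        coef = Lasso(alpha=alpha, positive=True).fit(signals.T, input_signal).coef_
        best = int(np.argmax(coef))
        if coef[best] >= threshold:
            return labels[best]                     # confident match
        signals = np.delete(signals, best, axis=0)  # drop the weak candidate and repeat
        labels.pop(best)
    return None

rng = np.random.default_rng(5)
gallery = rng.normal(size=(10, 64))                 # pre-classified signals (vector representations)
labels = [f"class_{i % 3}" for i in range(10)]
query = gallery[4] + 0.05 * rng.normal(size=64)
print(classify(query, gallery, labels))
```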
  • Patent number: 10129476
    Abstract: In some embodiments, the present invention provides for an exemplary computer system that may include: a camera component configured to acquire a visual content, wherein the visual content having a plurality of frames with a visual representation of a face of a person; a processor configured to: apply, for each frame, a multi-dimensional face detection regressor for fitting at least one meta-parameter to detect or to track a plurality of multi-dimensional landmarks representative of a face; apply a face movement detection algorithm to identify each displacement of each respective multi-dimensional landmark between frames; and apply a face movement compensation algorithm to generate a face movement compensated output that stabilizes the visual representation of the face.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: November 13, 2018
    Assignee: Banuba Limited
    Inventors: Yury Hushchyn, Aliaksei Sakolski
  • Patent number: 10121055
    Abstract: This invention describes methods and systems for the automated facial landmark localization. Our approach proceeds from sparse to dense landmarking steps using a set of models to best account for the shape and texture variation manifested by facial landmarks across pose and expression. We also describe the use of an l1-regularized least squares approach that we incorporate into our shape model, which is an improvement over the shape model used by several prior Active Shape Model (ASM) based facial landmark localization algorithms.
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: November 6, 2018
    Assignee: CARNEGIE MELLON UNIVERSITY
    Inventors: Marios Savvides, Keshav Thirumalai Seshadri
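    Sketch of an l1-regularized least-squares shape fit of the kind the abstract above refers to: observed landmark coordinates are explained as the mean shape plus a sparse combination of shape-basis vectors, with the l1 penalty keeping the coefficient vector sparse. The PCA-style basis and the Lasso solver here are illustrative choices, not the patented model.
```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_shape(observed, mean_shape, basis, alpha=0.05):
    """observed, mean_shape: (2L,) flattened landmarks; basis: (2L, k) shape modes."""
    coeffs = Lasso(alpha=alpha, fit_intercept=False).fit(basis, observed - mean_shape).coef_
    return mean_shape + basis @ coeffs, coeffs       # regularized shape and sparse coefficients

rng = np.random.default_rng(6)
L, k = 68, 10                                        # 68 landmarks, 10 shape modes
mean_shape = rng.normal(size=2 * L)
basis = np.linalg.qr(rng.normal(size=(2 * L, k)))[0]
true_coeffs = np.zeros(k)
true_coeffs[[1, 4]] = [2.0, -1.5]
observed = mean_shape + basis @ true_coeffs + 0.01 * rng.normal(size=2 * L)
fitted, coeffs = fit_shape(observed, mean_shape, basis)
```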
  • Patent number: 10121076
    Abstract: An entity interaction recognition system algorithmically recognizes a variety of different types of entity interactions that may be captured in two-dimensional images. In some embodiments, the system estimates the three-dimensional spatial configuration or arrangement of entities depicted in the image. In some embodiments, the system applies a proxemics-based analysis to determine an interaction type. In some embodiments, the system infers, from a characteristic of an entity detected in an image, an area or entity of interest in the image.
    Type: Grant
    Filed: May 2, 2016
    Date of Patent: November 6, 2018
    Assignee: SRI International
    Inventors: Ishani Chakraborty, Hui Cheng, Omar Javed
  • Patent number: 10121094
    Abstract: A system, method and computer program product is provided. An input signal for classification and a set of pre-classified signals are received, each comprising a vector representation of an object having a plurality of vector elements. A sparse vector comprising a plurality of sparse vector coefficients is determined. Each sparse vector coefficient corresponds to a signal in the set of pre-classified signals and represents the likelihood of a match between the object represented in the input signal and the object represented in the corresponding signal. A largest sparse vector coefficient is compared with a predetermined threshold. If the largest sparse vector coefficient is less than the predetermined threshold, the corresponding signal is removed from the set of pre-classified signals. The determining and comparing are repeated using the input signal and the reduced set of pre-classified signals.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: November 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Cecilia J. Aas, Raymond S. Glover
  • Patent number: 10111632
    Abstract: For breast cancer detection with an x-ray scanner, a cascade of multiple classifiers is trained or used. One or more of the classifiers uses a deep-learnt network trained on non-x-ray data, at least initially, to extract features. Alternatively or additionally, one or more of the classifiers is trained using classification of patches rather than pixels and/or classification with regression to create additional cancer-positive partial samples.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: October 30, 2018
    Assignee: Siemens Healthcare GmbH
    Inventors: Yaron Anavi, Atilla Peter Kiraly, David Liu, Shaohua Kevin Zhou, Zhoubing Xu, Dorin Comaniciu
  • Patent number: 10108848
    Abstract: This invention relates to a method of analyzing a factor of an attribute based on a case sample set containing combinations of image data and attribute data associated with the image data. The attribute factor analysis method includes: a division step of dividing an image region of the image data forming each element of the case sample set into parts in a mesh shape of a predetermined sample size; a reconstruction step of reconstructing, based on the case sample set, the case sample sets for the respective parts to obtain reconstructed case sample sets; an analysis step of analyzing, for each of the reconstructed case sample sets, a dependency between an explanatory variable representing a feature value of image data on each part and an objective variable representing the attribute data, to thereby obtain an attribute factor analysis result; and a visualization step of visualizing the attribute factor analysis result to produce the visualized attribute factor analysis result.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: October 23, 2018
    Assignee: NEC SOLUTION INNOVATORS, LTD.
    Inventors: Yasuyuki Ihara, Masashi Sugiyama
  • Patent number: 10102256
    Abstract: A method and system for improving an Internet based search is provided. The method includes generating an intent domain associated with a subject based intent classification. An unstructured data analysis process is executed with respect to a content corpus being associated with the subject based intent classification and a search phrase entered in a search field of a graphical user interface with respect to a domain specific search query for specified subject matter. In response, the subject based intent classification is determined to be associated with the search query and the subject based intent classification is compared to search results data. A subset of search results of the search results data correlating to the subject based intent classification is determined and ranked resulting in a ranked list. The subject based intent classification and the ranked list are presented to a user.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: October 16, 2018
    Assignee: International Business Machines Corporation
    Inventors: Gilbert Barron, Jasmine S. Basrai, Michael J. Bordash, Lisa Seacat DeLuca