Neural Networks Patents (Class 382/156)
  • Patent number: 10460485
    Abstract: Disclosed herein are system, method, and computer program product embodiments for generating and adjusting multi-dimensional data visualizations. An embodiment operates by a computer implemented method that includes evaluating, by at least one processor, data to be displayed on a multi-dimensional data visualization and information associated with the multi-dimensional data visualization. The method further includes determining one or more parameters for the multi-dimensional data visualization based on the evaluated data and the evaluated information. The method further includes generating the multi-dimensional data visualization based on the determined one or more parameters, where the multi-dimensional data visualization comprises at least four dimensions. The method also includes graphically displaying the multi-dimensional data visualization.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: October 29, 2019
    Assignee: SAP SE
    Inventors: Malin Wittkopf, Anca Maria Florescu, Christina Hall, Tatjana Borovikov, Guido Wagner, Klaus Herter, Felix Harling, Christian Knirsch, Christian Grail, Bogdan Alexander, Joachim Fiess, Hergen Siefken, Hee Tatt Ooi, Hans-Juergen Richstein, Marita Kruempelmann, Ingo Rues
  • Patent number: 10460440
    Abstract: Systems and techniques for facilitating a deep convolutional neural network with self-transfer learning are presented. In one example, a system includes a machine learning component, a medical imaging diagnosis component and a visualization component. The machine learning component generates learned medical imaging output regarding an anatomical region based on a convolutional neural network that receives medical imaging data. The machine learning component also performs a plurality of sequential downsampling and upsampling of the medical imaging data associated with convolutional layers of the convolutional neural network. The medical imaging diagnosis component determines a classification and an associated localization for a portion of the anatomical region based on the learned medical imaging output associated with the convolutional neural network.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: October 29, 2019
    Assignee: General Electric Company
    Inventors: Min Zhang, Gopal Biligeri Avinash
  • Patent number: 10452976
    Abstract: A neural network recognition method includes obtaining a first neural network that includes layers and a second neural network that includes a layer connected to the first neural network, actuating a processor to compute a first feature map from input data based on a layer of the first neural network, compute a second feature map from the input data based on the layer connected to the first neural network in the second neural network, and generate a recognition result based on the first neural network from an intermediate feature map computed by applying an element-wise operation to the first feature map and the second feature map.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: October 22, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Byungin Yoo, Youngsung Kim, Youngjun Kwak, Chang Kyu Choi
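The element-wise fusion step this abstract describes (combining a feature map from each network into an intermediate feature map) can be sketched as below. The choice of addition or multiplication as the element-wise operation, and the use of plain numpy arrays for feature maps, are illustrative assumptions, not the patent's fixed implementation.

```python
import numpy as np

def fuse_feature_maps(first_map, second_map, op="sum"):
    """Apply an element-wise operation to two same-shaped feature maps,
    producing the intermediate feature map used for recognition."""
    if op == "sum":
        return first_map + second_map
    if op == "mul":
        return first_map * second_map
    raise ValueError(f"unsupported op: {op}")

# Toy feature maps shaped (channels, height, width)
a = np.ones((4, 8, 8))
b = np.full((4, 8, 8), 2.0)
fused = fuse_feature_maps(a, b, op="sum")
```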
  • Patent number: 10445611
    Abstract: A method for detecting at least one pseudo-3D bounding box based on a CNN capable of converting modes according to conditions of objects in an image is provided. The method includes steps of: a learning device (a) instructing a pooling layer to generate a pooled feature map corresponding to a 2D bounding box, and instructing a type-classifying layer to determine whether objects in the pooled feature map are truncated or non-truncated; (b) instructing FC layers to generate box pattern information corresponding to the pseudo-3D bounding box; (c) instructing classification layers to generate orientation class information on the objects, and regression layers to generate regression information on coordinates of the pseudo-3D bounding box; and (d) backpropagating class losses and regression losses generated from FC loss layers. Through the method, rendering of truncated objects can be performed during virtual driving, which is useful for mobile devices and also for military purposes.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: October 15, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho

  • Patent number: 10423861
    Abstract: The technology disclosed relates to constructing a convolutional neural network-based classifier for variant classification. In particular, it relates to training a convolutional neural network-based classifier on training data using a backpropagation-based gradient update technique that progressively matches outputs of the convolutional neural network-based classifier with corresponding ground truth labels. The convolutional neural network-based classifier comprises groups of residual blocks, where each group of residual blocks is parameterized by a number of convolution filters in the residual blocks, a convolution window size of the residual blocks, and an atrous convolution rate of the residual blocks; the convolution window size and the atrous convolution rate vary between groups of residual blocks. The training data includes benign training examples and pathogenic training examples of translated sequence pairs generated from benign variants and pathogenic variants.
    Type: Grant
    Filed: October 15, 2018
    Date of Patent: September 24, 2019
    Assignee: Illumina, Inc.
    Inventors: Hong Gao, Kai-How Farh, Laksshman Sundaram, Jeremy Francis McRae
  • Patent number: 10410108
    Abstract: A method for assigning a personalized aesthetic score to an image is provided. The method includes providing a base neural network for generating learned features. The base neural network is trained on a first set of training images and the base neural network includes two or more layers comprising one or more initial layers and one or more final layers. The method further includes receiving a second set of training images and updating the base neural network to generate a personalized neural network based on the received second set of training images. Updating the base neural network comprises re-training the final layers of the base neural network with the second set of images and keeping the initial layers of the base neural network, such that the personalized neural network includes two or more layers comprising one or more initial layers and one or more final layers.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: September 10, 2019
    Inventors: Appu Shaji, Ramzi Rizk, Gökhan Yildirim
  • Patent number: 10402686
    Abstract: A method for an object detector to be used for surveillance based on a convolutional neural network capable of converting modes according to scales of objects is provided. The method includes steps of: a learning device (a) instructing convolutional layers to output a feature map by applying convolution operations to an image and instructing an RPN to output ROIs in the image; (b) instructing pooling layers to output first feature vectors by pooling each of ROI areas on the feature map per each of their scales, instructing first FC layers to output second feature vectors, and instructing second FC layers to output class information and regression information; and (c) instructing loss layers to generate class losses and regression losses by referring to the class information, the regression information, and their corresponding GTs.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 3, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402745
    Abstract: The present system and method for analyzing sentiment are based on fuzzy set theory and clustering to classify text as positive, negative, or objective. The method for training and testing a document collection to analyze sentiment comprises computing a frequency matrix comprising at least one row vector and at least one column vector, executing term reduction of the terms, enumerating categories, computing centroids of the enumerated categories and computing a fuzzy polarity map. The row vectors may correspond to terms and the column vectors may correspond to documents. The frequencies of terms in a document indicate the relevance of the document to a query.
    Type: Grant
    Filed: September 30, 2013
    Date of Patent: September 3, 2019
    Assignee: SEMEON ANALYTICS INC.
    Inventors: Alkis Papadopoullos, Jocelyn Desbiens
  • Patent number: 10387755
    Abstract: A method of classifying substrates with a metrology tool is herein disclosed. The method begins by training a deep learning framework using convolutional neural networks with a training dataset for classifying image datasets. A new image is then obtained from the metrology tool and run through the deep learning framework to classify it.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: August 20, 2019
    Assignee: Applied Materials, Inc.
    Inventors: Sreekar Bhaviripudi, Shreekant Gayaka
  • Patent number: 10380124
    Abstract: A method of searching a plurality of data sets with a search query may include receiving the search query, where the search query may include one or more tokens. The method may also include accessing the plurality of data sets, and calculating maximum possible search scores for each of the plurality of data sets. The method may additionally include identifying a subset of the plurality of data sets for which the corresponding maximum possible search scores exceed a threshold score. The method may further include calculating search scores for the subset of the plurality of data sets, and providing a result list based on the search scores.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: August 13, 2019
    Assignee: Oracle International Corporation
    Inventors: Jason Gage, Timothy Eager, Qian Jiang, Gerhard Brugger
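The two-stage scoring in this abstract (a cheap upper bound prunes data sets before full scoring) can be sketched as below. The specific bound and scoring functions, the `token_weights` representation, and the default weight of 1.0 for unseen tokens are illustrative assumptions, not the patent's formulas.

```python
def max_possible_score(tokens, data_set):
    # Optimistic upper bound: assume every query token matches at full weight.
    return sum(data_set["token_weights"].get(t, 1.0) for t in tokens)

def actual_score(tokens, data_set):
    # Real score counts only tokens actually present in the data set.
    return sum(data_set["token_weights"].get(t, 0.0) for t in tokens)

def search(tokens, data_sets, threshold):
    # Full scoring runs only on data sets whose upper bound clears the threshold.
    subset = [d for d in data_sets if max_possible_score(tokens, d) > threshold]
    scored = [(d["name"], actual_score(tokens, d)) for d in subset]
    return sorted(scored, key=lambda s: s[1], reverse=True)

data_sets = [
    {"name": "a", "token_weights": {"cat": 2.0, "dog": 1.0}},
    {"name": "b", "token_weights": {"fish": 0.5}},
]
results = search(["cat", "dog"], data_sets, threshold=1.5)
```

Because the bound never underestimates the true score, pruning by it cannot drop a data set that would have cleared the threshold on its full score.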
  • Patent number: 10380264
    Abstract: A machine translation method and apparatus are provided. The machine translation apparatus generates a feature vector of a source sentence from the source sentence, where the source sentence is written in a first language, and converts the generated feature vector of the source sentence to a feature vector of a normalized sentence. The machine translation apparatus generates a target sentence from the feature vector of the normalized sentence, where the target sentence corresponds to the source sentence and is written in a second language.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: August 13, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Hodong Lee
  • Patent number: 10373022
    Abstract: In an optical character recognition (OCR) method for digitizing printed text images using a long-short term memory (LSTM) network, text images are pre-processed using a stroke-aware max-min pooling method before being fed into the network, for both network training and OCR prediction. During training, an average stroke thickness is computed from the training dataset. Stroke-aware max-min pooling is applied to each text line image, where minimum pooling is applied if the stroke thickness of the line is greater than the average stroke thickness, while max pooling is applied if the stroke thickness is less than or equal to the average stroke thickness. The pooled images are used for network training. During prediction, stroke-aware max-min pooling is applied to each input text line image, and the pooled image is fed to the trained LSTM network to perform character recognition.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: August 6, 2019
    Assignee: KONICA MINOLTA LABORATORY U.S.A., INC.
    Inventors: Yongmian Zhang, Shubham Agarwal
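The stroke-aware max-min pooling rule above (min pooling for lines thicker than the training average, max pooling otherwise) can be sketched as below. The thickness estimator (ink area over horizontal transitions) and the fixed 2x2 pooling window are illustrative assumptions; the patent does not specify these exact choices.

```python
import numpy as np

def estimate_stroke_thickness(binary_img):
    # Crude proxy: total ink pixels divided by horizontal on/off transitions.
    ink = binary_img.sum()
    transitions = np.abs(np.diff(binary_img.astype(int), axis=1)).sum()
    return ink / max(transitions, 1)

def pool2x2(img, mode):
    # Non-overlapping 2x2 pooling; trims odd trailing rows/columns.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.min(axis=(1, 3))

def stroke_aware_pool(line_img, avg_thickness):
    # Min pooling thins strokes thicker than average; max pooling preserves
    # strokes at or below the average thickness.
    mode = "min" if estimate_stroke_thickness(line_img) > avg_thickness else "max"
    return pool2x2(line_img, mode)

img = np.arange(16).reshape(4, 4)
pooled_max = pool2x2(img, "max")
```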
  • Patent number: 10354168
    Abstract: Methods and systems are provided for end-to-end text recognition in digitized documents of handwritten characters over multiple lines without explicit line segmentation. An image is received. Based on the image, one or more feature maps are determined. Each of the one or more feature maps include one or more feature vectors. Based at least in part on the one or more feature maps, one or more scalar scores are determined. Based on the one or more scalar scores, one or more attention weights are determined. By applying the one or more attention weights to each of the one or more feature vectors, one or more image summary vectors are determined. Based at least in part on the one or more image summary vectors, one or more handwritten characters are determined.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: July 16, 2019
    Assignee: A2IA S.A.S.
    Inventor: Theodore Damien Christian Bluche
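The chain in this abstract (feature vectors, then scalar scores, then attention weights, then an image summary vector) can be sketched as below. The linear scorer standing in for the learned scoring function, and the softmax normalization of scores into weights, are illustrative assumptions.

```python
import numpy as np

def attention_summary(feature_vectors, score_weights):
    """feature_vectors: (N, D) array, one feature vector per location;
    score_weights: (D,) scoring vector (stand-in for a learned scorer)."""
    scores = feature_vectors @ score_weights       # one scalar score per vector
    w = np.exp(scores - scores.max())
    w /= w.sum()                                   # attention weights sum to 1
    return w @ feature_vectors                     # image summary vector (D,)

feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
summary = attention_summary(feats, np.array([1.0, 0.0]))
```

In the full pipeline, a decoder would consume such summary vectors to emit handwritten characters without an explicit line-segmentation step.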
  • Patent number: 10325173
    Abstract: A method for analyzing images to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: June 18, 2019
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Mihir Narendra Mody, Manu Mathew, Chaitanya Satish Ghone
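The dataflow in this abstract (FFT each input feature, FFT each kernel on the fly, multiply, accumulate one sum per output feature, then 2-D inverse FFT) can be sketched as below. Circular convolution on same-sized grids is assumed for brevity; the patent's hardware pipeline and padding scheme are not reproduced.

```python
import numpy as np

def fft_conv_layer(inputs, kernels):
    """inputs: list of (H, W) input features. kernels: kernels[o][i] is the
    spatial kernel mapping input feature i to output feature o."""
    in_ffts = [np.fft.fft2(x) for x in inputs]        # FFT each input once
    outputs = []
    for out_kernels in kernels:                       # one kernel row per output
        # On-the-fly FFT of each kernel, multiply, and accumulate the products.
        acc = sum(in_ffts[i] * np.fft.fft2(k, s=inputs[i].shape)
                  for i, k in enumerate(out_kernels))
        outputs.append(np.real(np.fft.ifft2(acc)))    # 2-D inverse FFT of the sum
    return outputs

# Sanity check: a delta (identity) kernel returns the input unchanged.
x = np.arange(16.0).reshape(4, 4)
delta = np.zeros((4, 4)); delta[0, 0] = 1.0
out = fft_conv_layer([x], [[delta]])
```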
  • Patent number: 10325018
    Abstract: A first handwriting input is received comprising strokes corresponding to a set of first characters comprising one or more first characters forming a first language model unit. A set of candidate first characters and a set of candidate first language model units with corresponding probability scores are determined based on an analysis of the one or more sets of candidate first characters using the first language model and a corresponding first character recognition model. When no first probability score satisfies a threshold, one or more sets of candidate second characters and a set of candidate second language model units are determined based on an analysis of the first handwriting input using a second language model and a corresponding second character recognition model. A first candidate list is then output comprising at least one of the set of candidate second language model units.
    Type: Grant
    Filed: October 17, 2016
    Date of Patent: June 18, 2019
    Assignee: Google LLC
    Inventors: Marcos Calvo, Victor Carbune, Henry Rowley, Thomas Deselaers
  • Patent number: 10325178
    Abstract: The present disclosure relates to image preprocessing to improve object recognition. In one implementation, a system for preprocessing an image for object recognition may include at least one memory storing instructions and at least one processor configured to execute the instructions to perform operations. The operations may include receiving the image, detecting a plurality of bounding boxes within the image, grouping the plurality of bounding boxes into a plurality of groups such that bounding boxes within a group have shared areas exceeding an area threshold, deriving a first subset of the plurality of bounding boxes by selecting bounding boxes having highest class confidence scores from at least one group, selecting a bounding box from the first subset having a highest score based on area and class confidence score, and outputting the selected bounding box.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: June 18, 2019
    Assignee: Capital One Services, LLC
    Inventors: Qiaochu Tang, Sunil Subrahmanyam Vasisht, Stephen Michael Wylie, Geoffrey Dagley, Micah Price, Jason Richard Hoover
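The grouping-and-selection pipeline in this abstract can be sketched as below. The greedy grouping strategy, the `(x1, y1, x2, y2)` box format, and scoring by `area * confidence` at the final step are illustrative assumptions based on the abstract's wording.

```python
def shared_area(b1, b2):
    # Boxes as (x1, y1, x2, y2); returns intersection area.
    w = max(0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    h = max(0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    return w * h

def area(b):
    return (b[2] - b[0]) * (b[3] - b[1])

def select_box(boxes, confs, area_threshold):
    # Group boxes whose pairwise shared areas exceed the threshold.
    groups = []
    for i, b in enumerate(boxes):
        for g in groups:
            if all(shared_area(b, boxes[j]) > area_threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    # Keep the most confident box per group, then pick by area * confidence.
    best_per_group = [max(g, key=lambda j: confs[j]) for g in groups]
    return max(best_per_group, key=lambda j: area(boxes[j]) * confs[j])

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (50, 50, 60, 60)]
best = select_box(boxes, confs=[0.9, 0.8, 0.5], area_threshold=40)
```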
  • Patent number: 10325201
    Abstract: A method for generating a deceivable composite image by using a GAN (Generative Adversarial Network) including a generating and a discriminating neural network to allow a surveillance system to recognize surroundings and detect a rare event, such as hazardous situations, more accurately by using a heterogeneous sensor fusion is provided. The method includes steps of: a computing device, generating location candidates of a rare object on a background image, and selecting a specific location candidate among the location candidates as an optimal location of the rare object by referring to candidate scores; inserting a rare object image into the optimal location, generating an initial composite image; and adjusting color values corresponding to each of pixels in the initial composite image, generating the deceivable composite image. Further, the method may be applicable to a pedestrian assistant system and a route planning by using 3D maps, GPS, smartphones, V2X communications, etc.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: June 18, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10311324
    Abstract: A method for learning parameters of CNNs capable of identifying objectnesses by detecting bottom lines and top lines of nearest obstacles in an input image is provided. The method includes steps of: a learning device, (a) instructing a first CNN to generate first encoded feature maps and first decoded feature maps, and instructing a second CNN to generate second encoded feature maps and second decoded feature maps; (b) generating first and second obstacle segmentation results respectively representing where the bottom lines and the top lines are estimated as being located per each column, by referring to the first and the second decoded feature maps respectively; (c) estimating the objectnesses by referring to the first and the second obstacle segmentation results; (d) generating losses by referring to the objectnesses and their corresponding GTs; and (e) backpropagating the losses, to thereby learn the parameters of the CNNs.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: June 4, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10304208
    Abstract: Disclosed are methods, apparatus and systems for gesture recognition based on neural network processing. One exemplary method for identifying a gesture communicated by a subject includes receiving a plurality of images associated with the gesture, providing the plurality of images to a first 3-dimensional convolutional neural network (3D CNN) and a second 3D CNN, where the first 3D CNN is operable to produce motion information, where the second 3D CNN is operable to produce pose and color information, and where the first 3D CNN is operable to implement an optical flow algorithm to detect the gesture, fusing the motion information and the pose and color information to produce an identification of the gesture, and determining whether the identification corresponds to a singular gesture across the plurality of images using a recurrent neural network that comprises one or more long short-term memory units.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: May 28, 2019
    Assignee: Avodah Labs, Inc.
    Inventors: Trevor Chandler, Dallas Nash, Michael Menefee
  • Patent number: 10303743
    Abstract: An online system stores online documents, where each online document has a layout. The system creates augmented online documents by combining the online documents with one or more content items. The system stores client interactions with the content items, responsive to presenting the augmented online documents via a client device. The system receives a new online document. The system creates new augmented online documents by combining the new online document with one or more new content items. For each new augmented online document, the system generates a score based on one or more features describing the layout of the new augmented online document. The system selects a new augmented online document based on the generated scores and sends the selected new augmented online document for presentation via a client device.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: May 28, 2019
    Assignee: Facebook, Inc.
    Inventors: Dan Zhang, Xiongjun Liang, Chin Lung Fong, Maria Angelidou, Harshit Agarwal, Shiyang Liu
  • Patent number: 10304133
    Abstract: Systems and methods are provided for communicating and processing market data. The market data may comprise quotes, orders, trades and/or statistics. A messaging structure allows for adding, re-ordering and/or expanding data, within the printable character set of any language. One or more delimiters are defined and used to delimit data elements within the message structure. The data is interpreted based on templates which may be disseminated prior to the sending of messages and used as an abstraction so that the meaning of data need not be conveyed in the message.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: May 28, 2019
    Assignee: Chicago Mercantile Exchange Inc.
    Inventors: Ron Newell, Vijay Menon, Fred Malabre, Joe Lobraco, Jim Northey
  • Patent number: 10289962
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a distilled machine learning model. One of the methods includes training a cumbersome machine learning model, wherein the cumbersome machine learning model is configured to receive an input and generate a respective score for each of a plurality of classes; and training a distilled machine learning model on a plurality of training inputs, wherein the distilled machine learning model is also configured to receive inputs and generate scores for the plurality of classes, comprising: processing each training input using the cumbersome machine learning model to generate a cumbersome target soft output for the training input; and training the distilled machine learning model to, for each of the training inputs, generate a soft output that matches the cumbersome target soft output for the training input.
    Type: Grant
    Filed: June 4, 2015
    Date of Patent: May 14, 2019
    Assignee: Google LLC
    Inventors: Oriol Vinyals, Jeffrey A. Dean, Geoffrey E. Hinton
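The core of this entry's training step, matching the distilled model's soft output to the cumbersome model's soft target, can be sketched as a cross-entropy between temperature-softened distributions. The temperature value and the exact loss form follow the well-known distillation recipe rather than a claim about this patent's precise equations.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the cumbersome
    # model's relative confidence across wrong classes.
    z = np.exp((logits - logits.max()) / T)
    return z / z.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's soft targets and the student's
    # softened predictions; minimized when the student matches the teacher.
    soft_targets = softmax(teacher_logits, T)
    student_log_probs = np.log(softmax(student_logits, T))
    return -np.sum(soft_targets * student_log_probs)

teacher = np.array([2.0, 0.0])
loss_matched = distillation_loss(teacher, teacher)          # student == teacher
loss_mismatched = distillation_loss(np.array([0.0, 2.0]), teacher)
```

When the student's logits equal the teacher's, the loss reduces to the entropy of the soft targets, which is its minimum over student outputs.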
  • Patent number: 10289964
    Abstract: Various embodiments train a prediction model for predicting a label to be allocated to a prediction target explanatory variable set. In one embodiment, one or more sets of training data are acquired. Each of the one or more sets of training data includes at least one set of explanatory variables and a label allocated to the at least one explanatory variable set. A plurality of explanatory variable subsets is extracted from the at least one set of explanatory variables. A prediction model is trained utilizing the training data. The plurality of explanatory variable subsets is reflected on a label predicted by the prediction model to be allocated to a prediction target explanatory variable set with each of the plurality of explanatory variable subsets weighted respectively.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: May 14, 2019
    Assignee: International Business Machines Corporation
    Inventors: Takayuki Katsuki, Yuma Shinohara
  • Patent number: 10282849
    Abstract: Systems and methods for predictive/reconstructive visual object tracking are disclosed. The visual object tracking has advanced abilities to track objects in scenes, which can have a variety of applications as discussed in this disclosure. In some exemplary implementations, a visual system can comprise a plurality of associative memory units, wherein each associative memory unit has a plurality of layers. The associative memory units can be communicatively coupled to each other in a hierarchical structure, wherein data in associative memory units in higher levels of the hierarchical structure are more abstract than lower associative memory units. The associative memory units can communicate to one another supplying contextual data.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: May 7, 2019
    Assignee: Brain Corporation
    Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher, Patryk Laurent, Csaba Petre
  • Patent number: 10275902
    Abstract: A user identification system includes an image recognition network to analyze image data and generate shape data based on the image data. The system also includes a generalist network to analyze the shape data and generate general category data based on the shape data. The system further includes a specialist network to compare the general category data with a characteristic to generate narrow category data. Moreover, the system includes a classifier layer including a plurality of nodes to represent a classification decision based on the narrow category data.
    Type: Grant
    Filed: May 9, 2016
    Date of Patent: April 30, 2019
    Assignee: MAGIC LEAP, INC.
    Inventor: Gary R. Bradski
  • Patent number: 10268925
    Abstract: The present disclosure relates to image preprocessing to improve object recognition. In one implementation, a system for preprocessing an image for object recognition may include at least one memory storing instructions and at least one processor configured to execute the instructions to perform operations. The operations may include receiving the image, detecting a plurality of bounding boxes within the image, grouping the plurality of bounding boxes into a plurality of groups such that bounding boxes within a group have shared areas exceeding an area threshold, deriving a first subset of the plurality of bounding boxes by selecting bounding boxes having highest class confidence scores from at least one group, selecting a bounding box from the first subset having a highest score based on area and class confidence score, and outputting the selected bounding box.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: April 23, 2019
    Assignee: Capital One Services, LLC
    Inventors: Qiaochu Tang, Sunil Subrahmanyam Vasisht, Stephen Michael Wylie, Geoffrey Dagley, Micah Price, Jason Richard Hoover
  • Patent number: 10262235
    Abstract: A method of identifying and recognizing characters using a dual-stage neural network pipeline, the method including: receiving, by a computing device, image data; providing the image data to a first convolutional layer of a convolutional neural network (CNN); applying, using the CNN, pattern recognition to the image data to identify a region of the image data containing text; providing sub-image data comprising the identified region of the image data to a convolutional recurrent neural network (CRNN); and recognizing, using the CRNN, the characters within the sub-image data.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: April 16, 2019
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Xi Chen, Mamadou Diallo, Qiang Xue
  • Patent number: 10242695
    Abstract: Techniques for enhancing an acoustic echo canceller based on visual cues are described herein. The techniques include changing adaptation of a filter of the acoustic echo canceller, calibrating the filter, or reducing background noise from an audio signal processed by the acoustic echo canceller. The changing, calibrating, and reducing are responsive to visual cues that describe acoustic characteristics of a location of a device that includes the acoustic echo canceller. Such visual cues may indicate that no human being is present at the location, that some subject(s) are engaged in speaking or sound generating activities, or that motion associated with an echo path change has occurred at the location.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: March 26, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Kavitha Velusamy, Wai C. Chu, Ramya Gopalan, Amit S. Chhetri
  • Patent number: 10231619
    Abstract: Methods and systems for suppressing shadowgraphic flow projection artifacts in OCT angiography images of a sample are disclosed. In one example approach, normalized OCT angiography data is analyzed at the level of individual A-scans to classify signals as either flow or projection artifact. This classification information is then used to suppress projection artifacts in the three dimensional OCT angiography dataset.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: March 19, 2019
    Assignee: Oregon Health & Science University
    Inventors: David Huang, Yali Jia, Miao Zhong
  • Patent number: 10216976
    Abstract: A method, device and medium for fingerprint identification are provided. The method for fingerprint identification includes that: it is detected whether the number of damaged pixel units in a fingerprint identification sensor reaches a preset threshold value, and the damaged pixel units are physically damaged pixel units in the fingerprint identification sensor; and if the number of the damaged pixel units reaches the preset threshold value, identifying a fingerprint image acquired by the fingerprint identification sensor is stopped.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: February 26, 2019
    Assignee: XIAOMI INC.
    Inventors: Changyu Sun, Zhijie Li, Wei Sun
  • Patent number: 10185870
    Abstract: An identification method includes: sensing movement data; capturing multiple feature data from the movement data; cutting the first feature data into a plurality of first feature segments, dividing the first feature segments into a plurality of first feature groups, and calculating multiple first similarity parameters of the first feature groups respectively corresponding to a plurality of channels; making the first feature groups correspond to the channels according to the first similarity parameters; simplifying the first feature groups corresponding to the channels respectively by a convolution algorithm to obtain a plurality of first convolution results corresponding to the first feature groups; simplifying the first convolution results corresponding to the first feature groups respectively by a pooling algorithm to obtain multiple first pooling results corresponding to the first feature groups; and combining the first pooling results corresponding to the first feature groups to generate a first feature m
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: January 22, 2019
    Assignee: INSTITUTE FOR INFORMATION INDUSTRY
    Inventors: Chen-Kuo Chiang, Chih-Hsiang Yu, Bo-Nian Chen
  • Patent number: 10181325
    Abstract: Aspects described herein are directed towards methods, computing devices, systems, and computer-readable media that apply scattering operations to extracted visual features of audiovisual input to generate predictions regarding the speech status of a subject. Visual scattering coefficients generated according to one or more aspects described herein may be used as input to a neural network operative to generate the predictions regarding the speech status of the subject. Predictions generated based on the visual features may be combined with predictions based on audio input associated with the visual features. In some embodiments, the extracted visual features may be combined with the audio input to generate a combined feature vector for use in generating predictions.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: January 15, 2019
    Assignee: Nuance Communications, Inc.
    Inventors: Etienne Marcheret, Josef Vopicka, Vaibhava Goel
  • Patent number: 10176580
    Abstract: A first interface for reading a medical patient image record is provided. Furthermore, provision is made of an encoding module for machine-based learning of data encodings of image patterns by an unsupervised deep learning and for establishing a deep-learning-reduced data encoding of a patient image pattern contained in the patient image record. Furthermore, provision is made of a comparison module for comparing the established data encoding with reference encodings of reference image patterns stored in a database and for selecting a reference image pattern with a reference encoding which is similar to the established data encoding. An assignment module serves to establish a key term assigned to the selected reference image pattern and to assign the established key term to the patient image pattern. A second interface is provided for outputting the established key term with assignment to the patient image pattern.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: January 8, 2019
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventor: Olivier Pauly
  • Patent number: 10163004
    Abstract: A method for character recognition. The method includes: obtaining a plurality of character segments extracted from an image; determining a first character bounding box including a first set of the plurality of character segments and a second character bounding box including a second set of the plurality of character segments; determining an ordering for the first set based on a plurality of texture properties for the first set; determining a plurality of directions of the first set based on a plurality of brush widths and a plurality of intensities for the first set; and executing character recognition for the first character bounding box by sending the first set, the plurality of directions for the first set, and the ordering for the first set to an intelligent character recognition (ICR) engine.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: December 25, 2018
    Assignee: Konica Minolta Laboratory U.S.A., Inc.
    Inventors: Stuart Guarnieri, Jason James Grams
  • Patent number: 10157331
    Abstract: The present disclosure relates to image preprocessing to improve object recognition. In one implementation, a system for preprocessing an image for object recognition may include at least one memory storing instructions and at least one processor configured to execute the instructions to perform operations. The operations may include receiving the image, detecting a plurality of bounding boxes within the image, grouping the plurality of bounding boxes into a plurality of groups such that bounding boxes within a group have shared areas exceeding an area threshold, deriving a first subset of the plurality of bounding boxes by selecting bounding boxes having highest class confidence scores from at least one group, selecting a bounding box from the first subset having a highest score based on area and class confidence score, and outputting the selected bounding box.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: December 18, 2018
    Assignee: Capital One Services, LLC
    Inventors: Qiaochu Tang, Sunil Subrahmanyam Vasisht, Stephen Michael Wylie, Geoffrey Dagley, Micah Price, Jason Richard Hoover
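    The grouping-and-selection logic described above can be sketched roughly as follows: group boxes whose shared (intersection) area exceeds a threshold, keep the highest-confidence box per group, then pick the overall winner by a combined area-and-confidence score. The box format and the exact scoring rule are assumptions made for this sketch.

    ```python
    # Rough sketch of the abstract's bounding-box selection. Boxes are
    # (x1, y1, x2, y2) tuples; scoring is area * confidence (an assumption).

    def intersection_area(a, b):
        """Overlap area of two axis-aligned boxes."""
        w = min(a[2], b[2]) - max(a[0], b[0])
        h = min(a[3], b[3]) - max(a[1], b[1])
        return max(w, 0) * max(h, 0)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    def select_box(boxes, confidences, area_threshold):
        """Return the index of the selected bounding box."""
        groups = []  # each group is a list of box indices
        for i in range(len(boxes)):
            for g in groups:
                if intersection_area(boxes[i], boxes[g[0]]) > area_threshold:
                    g.append(i)
                    break
            else:
                groups.append([i])
        # First subset: highest-confidence box from each group.
        subset = [max(g, key=lambda i: confidences[i]) for g in groups]
        # Final pick: highest combined area-and-confidence score.
        return max(subset, key=lambda i: area(boxes[i]) * confidences[i])

    boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
    conf = [0.9, 0.7, 0.8]
    print(select_box(boxes, conf, area_threshold=20))
    ```

    The first two boxes overlap heavily and form one group; the box with confidence 0.9 survives that group and wins the final area-times-confidence comparison.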
  • Patent number: 10157314
    Abstract: Methods for aerial image processing and object identification using an electronic computing device are presented, the methods including: causing the electronic computing device to build a deep learning model; receiving actual aerial image data; applying the deep learning model to the actual aerial image data to identify areas of interest; post-processing the areas of interest; and returning a number of classified objects corresponding with the areas of interest to a user. In some embodiments, methods further include: applying global positioning system (GPS) mapping to the number of classified objects with respect to the actual aerial image data.
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: December 18, 2018
    Assignee: PANTON, INC.
    Inventors: Saishi Frank Li, Guoyan Cao, Yang Liu, Jack Dong Wang, Mingyang Zhu
  • Patent number: 10147459
    Abstract: Techniques are disclosed herein for applying an artistic style extracted from one or more source images, e.g., paintings, to one or more target images. The extracted artistic style may then be stored as a plurality of layers in a neural network. In some embodiments, two or more stylized target images may be combined and stored as a stylized video sequence. The artistic style may be applied to the target images in the stylized video sequence using various optimization methods and/or pixel- and feature-based regularization techniques in a way that prevents excessive content pixel fluctuations between images and preserves smoothness in the assembled stylized video sequence. In other embodiments, a user may be able to semantically annotate locations of undesired artifacts in a target image, as well as portion(s) of a source image from which a style may be extracted and used to replace the undesired artifacts in the target image.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: December 4, 2018
    Assignee: Apple Inc.
    Inventors: Bartlomiej W. Rymkowski, Marco Zuliani
  • Patent number: 10115040
    Abstract: Systems and methods for classifying defects using hot scans and convolutional neural networks (CNNs) are disclosed. Primary scanning modes are identified by a processor and a hot scan of a wafer is performed. Defects of interest and nuisance data are selected and images of those areas are captured using one or more secondary scanning modes. Image sets are collected and divided into subsets. CNNs are trained using the image subsets. An ideal secondary scanning mode is determined and a final hot scan is performed. Defects are filtered and classified according to the final hot scan and the ideal secondary scanning mode. Disclosed systems for classifying defects utilize image data acquisition subsystems such as a scanning electron microscope as well as processors and electronic databases.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: October 30, 2018
    Assignee: KLA-Tencor Corporation
    Inventor: Bjorn Brauer
  • Patent number: 10102418
    Abstract: A method of segmenting images of biological specimens using adaptive classification to segment a biological specimen into different types of tissue regions. The segmentation is performed by, first, extracting features from the neighborhood of a grid of points (GPs) sampled on the whole-slide (WS) image and classifying them into different tissue types. Secondly, an adaptive classification procedure is performed where some or all of the GPs in a WS image are classified using a pre-built training database, and classification confidence scores for the GPs are generated. The classified GPs with high confidence scores are utilized to generate an adaptive training database, which is then used to re-classify the low confidence GPs. The motivation for the method is that strong variation in tissue appearance makes the classification problem more challenging, while good classification results are obtained when the training and test data originate from the same slide.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: October 16, 2018
    Assignee: VENTANA MEDICAL SYSTEMS, INC.
    Inventors: Joerg Bredno, Christophe Chefd'hotel, Ting Chen, Srinivas Chukka, Kien Nguyen
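    The adaptive-classification idea above can be sketched in a few lines: classify grid points with a pre-built model, treat the high-confidence results as slide-specific training data, and re-classify the low-confidence points against them. A 1-nearest-neighbour rule on 1-D features stands in for the real classifiers, and all feature values and labels below are invented for the example.

    ```python
    # Sketch of adaptive re-classification with a confidence threshold.
    # Features, labels, and the threshold are illustrative assumptions.

    def nearest_label(x, examples):
        """examples: list of (feature, label); returns label of nearest feature."""
        return min(examples, key=lambda e: abs(e[0] - x))[1]

    def adaptive_classify(points, pretrained, threshold):
        """pretrained: (feature, label, confidence) triples from the global model."""
        # High-confidence results become the slide-adapted training set.
        confident = [(f, l) for f, l, c in pretrained if c >= threshold]
        # Re-classify the remaining points against the adapted examples.
        return [nearest_label(p, confident) for p in points]

    # Hypothetical 1-D grid-point features pre-classified by the global model.
    pretrained = [(0.1, "stroma", 0.95), (0.9, "tumor", 0.9), (0.5, "tumor", 0.4)]
    print(adaptive_classify([0.2, 0.8], pretrained, threshold=0.8))
    ```

    The low-confidence (0.4) reference is excluded from the adapted set, so the two query points fall to the confident "stroma" and "tumor" neighbours respectively.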
  • Patent number: 10089574
    Abstract: Examples disclosed herein relate to neuron circuits and methods for generating neuron circuit outputs. In some of the disclosed examples, a neuron circuit includes a memristor and first and second current mirrors. The first current mirror may source a first current through the memristor and the second current mirror may sink a second current through the memristor. The memristor may generate a voltage output as a function of the sourced first current and the sunk second current through the memristor.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: October 2, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Brent Buchanan, Sity Lam, Le Zheng
  • Patent number: 10074042
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
    Type: Grant
    Filed: October 6, 2015
    Date of Patent: September 11, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin
  • Patent number: 10032072
    Abstract: Approaches provide for identifying text represented in image data as well as determining a location or region of the image data that includes the text represented in the image data. For example, a camera of a computing device can be used to capture a live camera view of one or more items. The live camera view can be presented to the user on a display screen of the computing device. An application executing on the computing device or at least in communication with the computing device can analyze the image data of the live camera view to identify text represented in the image data as well as determine locations or regions of the image that include the representations.
    Type: Grant
    Filed: June 21, 2016
    Date of Patent: July 24, 2018
    Assignee: A9.com, Inc.
    Inventors: Son Dinh Tran, R. Manmatha
  • Patent number: 10025976
    Abstract: Disclosed herein is a method of optimizing data normalization by selecting the best height normalization setting when training an RNN (Recurrent Neural Network) with one or more datasets comprising multiple sample images of handwriting data. The method comprises estimating a few top-ranked ratios for normalization by minimizing a cost function for any given sample image in the training dataset, and further determining the best ratio from the top-ranked ratios by validating the recognition results of sample images with each top-ranked ratio.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: July 17, 2018
    Assignee: KONICA MINOLTA LABORATORY U.S.A., INC.
    Inventors: Saman Sarraf, Duanduan Yang
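    The two-stage selection described above (shortlist ratios by cost, then pick the shortlist entry that validates best) can be sketched as follows. The cost and accuracy functions here are toy stand-ins for the patent's cost function and recognition validation, not the actual ones.

    ```python
    # Sketch: shortlist candidate height ratios by a cost function, then
    # choose the shortlisted ratio with the best validation accuracy.
    # Both scoring functions below are illustrative stand-ins.

    def best_ratio(ratios, cost, accuracy, shortlist_size=3):
        # Step 1: shortlist the ratios with the lowest cost.
        top = sorted(ratios, key=cost)[:shortlist_size]
        # Step 2: keep the shortlisted ratio that validates best.
        return max(top, key=accuracy)

    ratios = [0.5, 0.75, 1.0, 1.25, 1.5]
    cost = lambda r: (r - 1.0) ** 2           # toy cost: prefers ratios near 1.0
    accuracy = lambda r: 1.0 - abs(r - 0.75)  # toy validation: 0.75 is best
    print(best_ratio(ratios, cost, accuracy))
    ```

    Note that the cost shortlist and the validation winner can disagree: here 1.0 has the lowest cost, but 0.75 survives because it validates better.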
  • Patent number: 9977997
    Abstract: Disclosed are a training method and apparatus for a CNN model, which belong to the field of image recognition. The method comprises: performing a convolution operation, maximal pooling operation and horizontal pooling operation on training images, respectively, to obtain second feature images; determining feature vectors according to the second feature images; processing the feature vectors to obtain category probability vectors; according to the category probability vectors and an initial category, calculating a category error; based on the category error, adjusting model parameters; based on the adjusted model parameters, continuing the parameter adjustment process; and using the model parameters obtained when the number of iterations reaches a pre-set count as the model parameters of the trained CNN model. After the convolution operation and maximal pooling operation on the training images at each convolution layer, a horizontal pooling operation is performed.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: May 22, 2018
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiang Bai, Feiyue Huang, Xiaowei Guo, Cong Yao, Baoguang Shi
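    The "horizontal pooling" step named in the abstract above can be illustrated as pooling each row of a feature image across its width, so the output keeps one value per row. This reading, and the feature values below, are assumptions made for the sketch.

    ```python
    # Sketch of horizontal pooling: collapse each row of a 2-D feature
    # image to its maximum, keeping height and discarding width.
    # The feature image is invented for illustration.

    def horizontal_pool(feature_image):
        """Max over each row of a 2-D feature image (list of rows)."""
        return [max(row) for row in feature_image]

    feature_image = [[0.1, 0.7, 0.3],
                     [0.9, 0.2, 0.4]]
    print(horizontal_pool(feature_image))   # one value per row
    ```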
  • Patent number: 9978003
    Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
    Type: Grant
    Filed: August 17, 2017
    Date of Patent: May 22, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
  • Patent number: 9971954
    Abstract: In order to produce an image processing filter by utilizing genetic programming, a taught parameter acquiring unit acquires a taught parameter indicating a feature shape in an input image before processing. A data processing unit creates an output image by processing the input image with an image processing filter, and subsequently a feature extracting unit extracts a detected parameter indicating a feature shape in the output image. An automatic configuring unit evaluates the image processing filter by calculating cosine similarity between the taught parameter and the detected parameter.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: May 15, 2018
    Assignee: FUJITSU LIMITED
    Inventors: Tsuyoshi Nagato, Tetsuo Koezuka
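    The filter-evaluation step above scores a candidate image processing filter by the cosine similarity between the taught parameter vector and the parameter vector detected in the filter's output. The standard formula can be sketched directly; the vectors below are invented for illustration.

    ```python
    # Cosine similarity between a taught and a detected feature vector,
    # as used to evaluate a candidate filter. Vectors are illustrative.
    import math

    def cosine_similarity(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    taught = [1.0, 2.0, 3.0]     # parameters taught on the input image
    detected = [1.0, 2.0, 3.0]   # parameters extracted from the filtered output
    print(cosine_similarity(taught, detected))   # identical vectors score 1.0
    ```

    A score near 1.0 means the filter preserved the taught feature shape; orthogonal vectors score 0.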
  • Patent number: 9965175
    Abstract: A system, method and computer program product for use in digital note taking with handwriting input to a computing device are provided. The computing device is connected to an input device in the form of an input surface. A user is able to provide input by applying pressure to or gesturing above the input surface using either his or her finger or an instrument such as a stylus or pen. The present system and method monitors the input strokes. The computing device further has a processor and at least one application for recognizing the handwriting input under control of the processor. The at least one system application is configured to cause display of, on a display interface of a computing device, digital ink in a block layout in accordance with a layout of blocks of the handwriting input and a configuration of the computing device display interface.
    Type: Grant
    Filed: October 19, 2015
    Date of Patent: May 8, 2018
    Assignee: MyScript
    Inventors: Nicolas Rucine, Nathalie Delbecque, Robin Mélinand, Arnoud Boekhoorn, Cédric Coulon, François Bourlieux, Thomas Penin, Aristote Laval
  • Patent number: 9946959
    Abstract: In an example, high-dimensional data is projected to a multi-dimensional space to differentiate clusters of the high-dimensional data. A user selection of at least two of the clusters may be received and a plurality of dissimilar dimensions may be extracted from the at least two clusters. In addition, a user selection of a dissimilar dimension from the plurality of extracted dissimilar dimensions may be received. In response to receipt of the user selection of the dissimilar dimension from the plurality of dissimilar dimensions, a plurality of correlated dimensions to the dissimilar dimension may be determined. In addition, the plurality of dissimilar dimensions and the plurality of correlated dimensions may be displayed.
    Type: Grant
    Filed: April 30, 2014
    Date of Patent: April 17, 2018
    Assignee: ENTIT SOFTWARE LLC
    Inventors: Ming C. Hao, Wei-Nchih Lee, Alexander Jaeger, Nelson L. Chang, Daniel Keim
  • Patent number: 9940551
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for image generation using neural networks. In one of the methods, an initial image is received. Data defining an objective function is received, and the objective function is dependent on processing of a neural network trained to identify features of an image. The initial image is modified to generate a modified image by iteratively performing the following: a current version of the initial image is processed using the neural network to generate a current objective score for the current version of the initial image using the objective function; and the current version of the initial image is modified to increase the current objective score by enhancing a feature detected by the processing.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: April 10, 2018
    Assignee: Google LLC
    Inventor: Alexander Mordvintsev
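    The iterative loop described above (score the current image with an objective, then modify the image to increase the score) can be shown in a toy form. A single scalar stands in for the image and a simple quadratic for the network-derived objective; both are assumptions made purely for illustration.

    ```python
    # Toy version of the modify-to-increase-objective loop: a scalar "image"
    # is nudged uphill on a simple objective via a finite-difference gradient.

    def modify(image, objective, step=0.1, iterations=50):
        for _ in range(iterations):
            # Finite-difference estimate of the objective's gradient.
            grad = (objective(image + 1e-5) - objective(image)) / 1e-5
            image += step * grad   # enhance the "detected feature"
        return image

    objective = lambda x: -(x - 3.0) ** 2   # toy objective, maximized at x = 3
    print(round(modify(0.0, objective), 2))
    ```

    In the patent's setting the gradient comes from backpropagating through the trained network rather than from finite differences, but the ascend-the-objective structure is the same.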
  • Patent number: 9940548
    Abstract: An image recognition method includes: receiving an image; acquiring processing result information including values of processing results of convolution processing at positions of a plurality of pixels that constitute the image by performing the convolution processing on the image by using different convolution filters; determining one feature for each of the positions of the plurality of pixels on the basis of the values of the processing results of the convolution processing at the positions of the plurality of pixels included in the processing result information and outputting the determined feature for each of the positions of the plurality of pixels; performing recognition processing on the basis of the determined feature for each of the positions of the plurality of pixels; and outputting recognition processing result information obtained by performing the recognition processing.
    Type: Grant
    Filed: February 22, 2016
    Date of Patent: April 10, 2018
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Yasunori Ishii, Sotaro Tsukizawa, Reiko Hagawa