Trainable Classifiers Or Pattern Recognizers (e.g., Adaline, Perceptron) Patents (Class 382/159)
  • Patent number: 10805587
    Abstract: A method is disclosed for automatic white balance correction of color cameras. The method includes capturing a first image of a target object, the first image containing a calibration area defined by a symbology and a calibration zone within the calibration area. The symbology encodes data and has first-color elements and second-color elements. The calibration zone is defined by at least some of at least one of the first-color elements and second-color elements. The method includes obtaining a location of the calibration area in the first image, and capturing a second image of the target object, the second image being multicolor. The method includes locating the calibration area in the second image based on the location, and analyzing the calibration zone within the calibration area of the second image. The method includes calculating and applying at least one white balance compensation bias based on the analyzed calibration zone.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: October 13, 2020
    Assignee: Zebra Technologies Corporation
    Inventor: Sajan Wilfred
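    A minimal sketch of the final step the abstract describes: deriving per-channel white balance gains from the pixels of the calibration zone and applying them to the image. The assumption that the calibration zone is neutral gray, and the function names, are illustrative and not taken from the patent.
    ```python
    # Hedged sketch: compute white balance compensation from a calibration zone.
    import numpy as np

    def white_balance_bias(image: np.ndarray, zone: tuple) -> np.ndarray:
        """Per-channel gains that make the (assumed gray) calibration zone neutral.

        image: HxWx3 float array in [0, 1]; zone: (y0, y1, x0, x1) of the
        calibration zone located inside the calibration area.
        """
        y0, y1, x0, x1 = zone
        means = image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)  # per-channel mean
        return means.mean() / np.clip(means, 1e-6, None)         # gain per channel

    def apply_bias(image: np.ndarray, gains: np.ndarray) -> np.ndarray:
        return np.clip(image * gains, 0.0, 1.0)

    if __name__ == "__main__":
        img = np.random.rand(480, 640, 3)
        gains = white_balance_bias(img, zone=(100, 140, 200, 240))
        balanced = apply_bias(img, gains)
        print(gains, balanced.shape)
    ```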
  • Patent number: 10803585
    Abstract: The present disclosure relates to the classification of images, such as medical images, using machine learning techniques. In certain aspects, the technique may employ a distance metric for the purpose of classification, where the distance metric determined for a given image with respect to a homogeneous group or class of images is used to classify the image.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: October 13, 2020
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Andre de Almeida Maximo, Chitresh Bhushan, Thomas Kwok-Fah Foo, Desmond Teck Beng Yeo
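    A hedged sketch of distance-metric classification against a homogeneous reference group, as the abstract outlines: an image's feature vector is scored by a Mahalanobis-style distance to the group's distribution and thresholded. The feature dimensionality, the distance choice, and the threshold are illustrative assumptions.
    ```python
    # Hedged sketch: classify an image by its distance to a homogeneous class.
    import numpy as np

    def distance_to_group(feature: np.ndarray, group: np.ndarray) -> float:
        """Mahalanobis distance of `feature` to the distribution of `group` rows."""
        mu = group.mean(axis=0)
        cov = np.cov(group, rowvar=False) + 1e-6 * np.eye(group.shape[1])
        diff = feature - mu
        return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

    def classify(feature, group, threshold=3.0):
        """Label the image as in-class if its distance to the group is small."""
        return "in-class" if distance_to_group(feature, group) < threshold else "out-of-class"

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        group = rng.normal(size=(200, 16))           # features of a homogeneous class
        print(classify(rng.normal(size=16), group))  # probe image feature
    ```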
  • Patent number: 10803544
    Abstract: The disclosed technology includes systems and methods for enhancing machine vision object recognition based on a plurality of captured images and an accumulation of corresponding classification analysis scores. A method is provided for capturing, with a camera of a mobile computing device, a plurality of images, each image of the plurality of images comprising a first object. The method includes processing, with a classification module comprising a trained neural network processing engine, at least a portion of the plurality of images. The method includes generating, with the classification module and based on the processing, one or more object classification scores associated with the first object. The method includes accumulating, with an accumulating module, the one or more object classification scores. And responsive to a timeout or an accumulated score exceeding a predetermined threshold, the method includes outputting classification information of the first object.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: October 13, 2020
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Qiaochu Tang, Geoffrey Dagley, Micah Price, Sunil Vasisht, Stephen Wylie, Jason Hoover
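    A minimal sketch of the accumulate-then-decide loop in the abstract: per-image classification scores for the same object are summed, and a result is emitted when the accumulated score crosses a threshold or a timeout expires. The classifier stub and the constants are assumptions for illustration only.
    ```python
    # Hedged sketch: accumulate per-image classification scores until a decision.
    import time
    from collections import defaultdict

    def classify_stub(image):
        """Stand-in for the trained neural network classification module."""
        return {"sedan": 0.4, "truck": 0.1}

    def accumulate_and_decide(images, threshold=2.0, timeout_s=5.0):
        scores = defaultdict(float)
        start = time.monotonic()
        for image in images:
            for label, score in classify_stub(image).items():
                scores[label] += score
            best_label, best_score = max(scores.items(), key=lambda kv: kv[1])
            if best_score >= threshold or time.monotonic() - start >= timeout_s:
                return best_label, best_score            # output classification info
        return max(scores.items(), key=lambda kv: kv[1])

    if __name__ == "__main__":
        print(accumulate_and_decide([object()] * 10))
    ```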
  • Patent number: 10803619
    Abstract: A method for identifying a feature in a first image comprises establishing an initial database of image triplets, and in a pose estimation processor, training a deep learning neural network using the initial database of image triplets, calculating a pose for the first image using the deep learning neural network, comparing the calculated pose to a validation database populated with image data to identify an error case in the deep learning neural network, creating a new set of training data including a plurality of error cases identified in a plurality of input images, and retraining the deep learning neural network using the new set of training data. The deep learning neural network may be iteratively retrained with a series of new training data sets. Statistical analysis is performed on a plurality of error cases to select a subset of the error cases included in the new set of training data.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: October 13, 2020
    Assignee: Siemens Mobility GmbH
    Inventors: Kai Ma, Shanhui Sun, Stefan Kluckner, Ziyan Wu, Terrence Chen, Jan Ernst
  • Patent number: 10796450
    Abstract: A method for detecting and tracking a human head in an image by an electronic device is disclosed. The method may include segmenting the image into one or more sub-images; inputting each sub-image to a convolutional neural network trained according to training images having marked human head positions; outputting, by a preprocessing layer of the convolutional neural network comprising a first convolutional layer and a pooling layer, a first feature corresponding to each sub-image; mapping, through a second convolutional layer, the first feature corresponding to each sub-image to a second feature corresponding to each sub-image; mapping, through a regression layer, the second feature corresponding to each sub-image to a human head position corresponding to each sub-image and a corresponding confidence level of the human head position; and filtering, according to the corresponding confidence level, human head positions corresponding to the one or more sub-images, to acquire detected human head positions in the image.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: October 6, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Deqiang Jiang
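    A hedged PyTorch sketch of the per-sub-image pipeline the abstract describes: a preprocessing layer (convolution plus pooling), a second convolutional layer, and a regression layer producing a head position with a confidence, followed by confidence filtering. Layer sizes and the 0.5 confidence threshold are illustrative assumptions.
    ```python
    # Hedged sketch: conv+pool preprocessing, second conv, regression with confidence.
    import torch
    import torch.nn as nn

    class HeadDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.preprocess = nn.Sequential(            # first conv layer + pooling layer
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.second_conv = nn.Sequential(           # maps first feature to second feature
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
            self.regression = nn.Linear(32, 3)          # (x, y, confidence logit)

        def forward(self, sub_images):                  # sub_images: (N, 3, H, W)
            feat = self.second_conv(self.preprocess(sub_images)).flatten(1)
            out = self.regression(feat)
            xy, conf = out[:, :2], torch.sigmoid(out[:, 2])
            return xy, conf

    if __name__ == "__main__":
        model = HeadDetector()
        xy, conf = model(torch.rand(8, 3, 64, 64))      # 8 sub-images
        keep = conf > 0.5                               # filter by confidence level
        print(xy[keep].shape, conf[keep].shape)
    ```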
  • Patent number: 10796793
    Abstract: Example methods and systems for generating an aggregated artificial intelligence (AI) engine for radiotherapy treatment planning are provided. One example method may include obtaining multiple AI engines associated with respective multiple treatment planners; generating multiple sets of output data using the multiple AI engines associated with the respective multiple treatment planners; comparing the multiple AI engines associated with the respective multiple treatment planners based on the multiple sets of output data; and based on the comparison, aggregating at least some of the multiple AI engines to generate the aggregated AI engine for performing the particular treatment planning step. The multiple AI engines may be trained to perform a particular treatment planning step, and each of the multiple AI engines is trained to emulate one of the multiple treatment planners performing the particular treatment planning step.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: October 6, 2020
    Assignee: VARIAN MEDICAL SYSTEMS INTERNATIONAL AG
    Inventors: Corey Zankowski, Charles Adelsheim, Joakim Pyyry, Esa Kuusela
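    A hedged sketch of one possible compare-then-aggregate strategy consistent with the abstract: planner-specific engines are run on the same inputs, ranked by how closely their outputs track the consensus, and the most consistent ones are averaged into an aggregated engine. The disagreement metric and the keep fraction are assumptions, not the patent's method.
    ```python
    # Hedged sketch: compare engine outputs, then aggregate a consistent subset.
    import numpy as np

    def aggregate_engines(engine_fns, inputs, keep_fraction=0.5):
        """Return a callable that averages the most mutually consistent engines."""
        outputs = np.stack([np.asarray([fn(x) for x in inputs]) for fn in engine_fns])
        consensus = outputs.mean(axis=0)
        errors = ((outputs - consensus) ** 2).mean(axis=(1, 2))   # per-engine disagreement
        n_keep = max(1, int(len(engine_fns) * keep_fraction))
        kept = [engine_fns[i] for i in np.argsort(errors)[:n_keep]]
        return lambda x: np.mean([fn(x) for fn in kept], axis=0)

    if __name__ == "__main__":
        planners = [lambda x, b=b: x * 0.9 + b for b in (0.0, 0.05, 0.5)]
        aggregated = aggregate_engines(planners, [np.ones(4) * i for i in range(5)])
        print(aggregated(np.ones(4)))
    ```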
  • Patent number: 10798566
    Abstract: Facilitating secure conveyance of location information and other information in advanced networks (e.g., 4G, 5G, and beyond) is provided herein. Operations of a system can comprise transforming, at a chipset level of the device, information indicative of a location of the device into a binary representation of the information indicative of the location of the device. The operations can also comprise embedding the binary representation of the information indicative of the location of the device into a message. Further, the operations can comprise facilitating a transmission of the message and the binary representation of the information indicative of the location of the device to a network device of a communications network.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: October 6, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Mark D. Austin, Sheldon Meredith
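    A hedged sketch of transforming location information into a binary representation and embedding it in a message, as the abstract describes at a high level. The field layout (fixed-point latitude/longitude plus a timestamp) and the delimiter are illustrative assumptions, not the format the patent specifies.
    ```python
    # Hedged sketch: pack a device location into bytes and embed it in a message.
    import struct
    import time

    def encode_location(lat_deg: float, lon_deg: float) -> bytes:
        """Pack lat/lon as signed 32-bit fixed point (1e-7 degree units) + timestamp."""
        return struct.pack(">iiI", int(lat_deg * 1e7), int(lon_deg * 1e7), int(time.time()))

    def embed_in_message(payload: bytes, location_blob: bytes) -> bytes:
        return payload + b"|LOC|" + location_blob

    def decode_location(blob: bytes):
        lat, lon, ts = struct.unpack(">iiI", blob)
        return lat / 1e7, lon / 1e7, ts

    if __name__ == "__main__":
        blob = encode_location(37.7749, -122.4194)
        msg = embed_in_message(b"measurement-report", blob)
        print(len(blob), decode_location(msg.split(b"|LOC|")[1]))
    ```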
  • Patent number: 10796198
    Abstract: Some embodiments include a special-purpose hardware accelerator that can perform specialized machine learning tasks during both training and inference stages. For example, this hardware accelerator uses a systolic array having a number of data processing units (“DPUs”) that are each connected to a small number of other DPUs in a local region. Data from the many nodes of a neural network is pulsed through these DPUs with associated tags that identify where such data originated or was processed, such that each DPU has knowledge of where incoming data originated and thus is able to compute the data as specified by the architecture of the neural network. These tags enable the systolic neural network engine to perform computations during backpropagation, such that the systolic neural network engine is able to support training.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: October 6, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventor: Luiz M. Franca-Neto
  • Patent number: 10789482
    Abstract: In implementations of the subject matter described herein, an action detection scheme using a recurrent neural network (RNN) is proposed. Representation information of an incoming frame of a video and a predefined action label for the frame are obtained to train a learning network including RNN elements and a classification element. The representation information represents an observed entity in the frame. Specifically, parameters for the RNN elements are determined based on the representation information and the predefined action label. With the determined parameters, the RNN elements are caused to extract features for the frame based on the representation information and features for a preceding frame. Parameters for the classification element are determined based on the extracted features and the predefined action label. The classification element with the determined parameters generates a probability of the frame being associated with the predefined action label.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: September 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cuiling Lan, Wenjun Zeng, Yanghao Li, Junliang Xing
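    A hedged PyTorch sketch of the per-frame structure in the abstract: RNN elements extract features for each frame from its representation and the preceding frame's features, and a classification element produces a probability for the predefined action label. The GRU choice, dimensions, and loss are illustrative assumptions.
    ```python
    # Hedged sketch: per-frame action detection with RNN elements + classifier.
    import torch
    import torch.nn as nn

    class FrameActionDetector(nn.Module):
        def __init__(self, repr_dim=128, hidden_dim=64, num_actions=10):
            super().__init__()
            self.rnn = nn.GRU(repr_dim, hidden_dim, batch_first=True)  # RNN elements
            self.classifier = nn.Linear(hidden_dim, num_actions)       # classification element

        def forward(self, frame_reprs):                 # (batch, time, repr_dim)
            features, _ = self.rnn(frame_reprs)         # per-frame features
            return self.classifier(features)            # per-frame action logits

    if __name__ == "__main__":
        model = FrameActionDetector()
        logits = model(torch.rand(2, 30, 128))          # 2 clips, 30 frames each
        probs = torch.softmax(logits, dim=-1)           # probability per action label
        labels = torch.randint(0, 10, (2, 30))          # per-frame action labels
        loss = nn.functional.cross_entropy(logits.flatten(0, 1), labels.flatten())
        print(probs.shape, float(loss))
    ```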
  • Patent number: 10789484
    Abstract: A crowd type classification system of an aspect of the present invention includes: a staying crowd detection unit that detects a local region indicating a staying crowd from a plurality of local regions determined in an image acquired by an image acquisition device; a crowd direction estimation unit that estimates a direction of the crowd for an image of a part corresponding to the detected local region, and appends the direction of the crowd to the local region; and a crowd type classification unit that classifies a type of the crowd including a plurality of staying persons for the local region to which the direction is appended by using a relative vector indicating a relative positional relationship between two local regions and directions of crowds in the two local regions, and outputs the type and positions of the crowds.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: September 29, 2020
    Assignee: NEC CORPORATION
    Inventor: Hiroo Ikeda
  • Patent number: 10783404
    Abstract: A method and a device for verifying a recognition result in character recognition are provided. The device constructs a hidden Markov chain for a character string to be recognized, using recognition result output of a character recognition process. The recognition result includes candidate characters of each character in the character string. The device solves for an optimal path forming a candidate character string according to the hidden Markov chain and a pre-trained state transition matrix. The device recognizes non-Chinese characters in the character string according to state transition probabilities in the optimal path. The device verifies the recognition result according to the non-Chinese characters. The device feeds back a verification result to the character recognition process, wherein the character recognition process applied to the character string to be recognized is modified by the verification result.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: September 22, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Sheng Han, Hongfa Wang, Longsha Zhou, Hui Song
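    A hedged sketch of one standard way to solve the optimal path the abstract describes: candidate characters per position act as states of the hidden Markov chain, and the Viterbi algorithm combines recognition scores with a pre-trained transition matrix. The log-score values and the toy transition table are illustrative assumptions.
    ```python
    # Hedged sketch: Viterbi over per-position candidate characters.
    def viterbi(candidates, emit_scores, trans):
        """candidates[t]: list of state ids; emit_scores[t][i]: log score of state i
        at position t; trans[(a, b)]: log transition score. Returns the best path."""
        best = [{s: (emit_scores[0][i], [s]) for i, s in enumerate(candidates[0])}]
        for t in range(1, len(candidates)):
            layer = {}
            for i, s in enumerate(candidates[t]):
                prev_s, (score, path) = max(
                    best[t - 1].items(),
                    key=lambda kv: kv[1][0] + trans.get((kv[0], s), -10.0))
                layer[s] = (score + trans.get((prev_s, s), -10.0) + emit_scores[t][i],
                            path + [s])
            best.append(layer)
        return max(best[-1].values(), key=lambda v: v[0])[1]

    if __name__ == "__main__":
        cands = [["t", "f"], ["h", "n"], ["e", "c"]]          # per-position candidates
        emits = [[-0.1, -2.0], [-0.3, -1.0], [-0.2, -1.5]]    # recognition log-scores
        trans = {("t", "h"): -0.2, ("h", "e"): -0.1}          # pre-trained transitions
        print(viterbi(cands, emits, trans))                   # -> ['t', 'h', 'e']
    ```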
  • Patent number: 10776000
    Abstract: A method of and system for receiving, processing, converting and verifying digital ink input is carried out by receiving digital ink input, collecting data relating to the received digital ink input, and receiving a request to convert the received digital ink input. Upon receiving the request, the received digital ink input may be recognized as text characters based at least in part on an analysis of the digital ink input and the converted characters may be displayed on a screen adjacent to the received digital ink input, at which point a user may be able to compare the received digital ink input with the recognized characters to initiate any corrections needed.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: September 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Elise Leigh Livingston, Patrick Edgar Schreiber, Tracy ThuyDuyen Tran, Heather Strong Eden, Rachel Ann Keirouz
  • Patent number: 10776642
    Abstract: Systems and methods to efficiently and effectively train artificial intelligence and neural networks for an autonomous or semi-autonomous vehicle are disclosed. The systems and methods provide for the minimization of the labeling cost by sampling images from a raw video file which are mis-detected, i.e., false positive and false negative detections, or indicate abnormal or unexpected driver behavior. Supplemental information such as controller area network signals and data may be used to augment and further encapsulate desired images from video.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 15, 2020
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Mitsuhiro Mabuchi
  • Patent number: 10776918
    Abstract: The present application relates to a method and device for determining image similarity that includes: dividing a target image into multiple regions based on positions of pixels relative to a reference point in the target image, and dividing a reference image into multiple regions based on positions of pixels relative to a reference point in the reference image; determining, based on feature points in the target image and feature points in the reference image as well as the regions obtained by dividing the target image and the regions obtained by dividing the reference image, similarity between a distribution of the feature points in the target image and a distribution of the feature points in the reference image. According to the method of the present application, the similarity is described more reasonably.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: September 15, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Mengjiao Wang, Rujie Liu
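    A hedged sketch of the idea in the abstract: each image is divided into regions by the position of pixels relative to a reference point (here, angular sectors around a chosen point), the feature points are histogrammed per region, and the two distributions are compared. The sector count and the cosine comparison are assumptions.
    ```python
    # Hedged sketch: region-based comparison of feature-point distributions.
    import numpy as np

    def region_histogram(points, reference, n_sectors=8):
        """points: (N, 2) feature-point (x, y) coordinates; reference: (x, y)."""
        d = points - np.asarray(reference, dtype=float)
        angles = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
        sectors = (angles / (2 * np.pi) * n_sectors).astype(int).clip(0, n_sectors - 1)
        hist = np.bincount(sectors, minlength=n_sectors).astype(float)
        return hist / max(hist.sum(), 1.0)

    def distribution_similarity(target_pts, target_ref, ref_pts, ref_ref):
        h1 = region_histogram(target_pts, target_ref)
        h2 = region_histogram(ref_pts, ref_ref)
        return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-9))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        a, b = rng.uniform(0, 100, (50, 2)), rng.uniform(0, 100, (50, 2))
        print(distribution_similarity(a, (50, 50), b, (50, 50)))
    ```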
  • Patent number: 10769473
    Abstract: A plurality of recognition positions each recognized by a recognizer as a position of a target object on an input image are acquired. At least one representative position is obtained by performing clustering for the plurality of recognition positions. The representative position is edited in accordance with an editing instruction from a user for the representative position. The input image and the representative position are saved as learning data to be used for learning of the recognizer.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: September 8, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masaki Inaba
  • Patent number: 10766137
    Abstract: A machine learning system builds and uses computer models for identifying how to evaluate the level of success reflected in a recorded observation of a task. Such computer models may be used to generate a policy for controlling a robotic system performing the task. The computer models can also be used to evaluate robotic task performance and provide feedback for recalibrating the robotic control policy.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: September 8, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Brandon William Porter, Leonardo Ruggiero Bachega, Brian C. Beckman, Benjamin Lev Snyder, Michael Vogelsong, Corrinne Yu
  • Patent number: 10769530
    Abstract: Disclosed is a method of training at least a part of a neural network including a plurality of layers performed by a computing device according to an exemplary embodiment of the present disclosure. The method includes: inputting training data including normal data and abnormal data to an input layer of the neural network; making a feature value output from each of one or more hidden nodes of a hidden layer of the neural network for each training data into a histogram and generating a distribution of the feature value for each of the one or more hidden nodes; calculating an error between each distribution of the feature value and a predetermined probability distribution; and selecting at least one hidden node among the one or more hidden nodes of the hidden layer based on the error.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: September 8, 2020
    Assignee: SUALAB CO., LTD.
    Inventor: Kiyoung Song
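    A hedged sketch of the selection step in the abstract: each hidden node's feature values over the training data are made into a histogram, the error against a predetermined probability distribution is computed, and nodes are selected based on that error. The unit-Gaussian reference, the squared-error measure, and the bin count are illustrative assumptions.
    ```python
    # Hedged sketch: select hidden nodes by histogram-vs-reference-distribution error.
    import numpy as np

    def node_distribution_errors(activations, bins=30):
        """activations: (num_samples, num_hidden_nodes). Returns per-node error."""
        errors = []
        for node_vals in activations.T:
            hist, edges = np.histogram(node_vals, bins=bins, density=True)
            centers = 0.5 * (edges[:-1] + edges[1:])
            ref = np.exp(-0.5 * centers ** 2) / np.sqrt(2 * np.pi)   # reference N(0, 1)
            errors.append(float(np.mean((hist - ref) ** 2)))          # squared error
        return np.asarray(errors)

    def select_nodes(activations, k=1):
        errors = node_distribution_errors(activations)
        return np.argsort(errors)[-k:]            # nodes deviating most from the reference

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        acts = rng.normal(size=(1000, 8))
        acts[:, 3] = rng.exponential(size=1000)   # one node with a very different distribution
        print(select_nodes(acts, k=1))            # likely -> [3]
    ```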
  • Patent number: 10754035
    Abstract: A ground-classifier system that classifies ground-cover proximate to an automated vehicle includes a lidar, a camera, and a controller. The lidar detects a point-cloud of a field-of-view. The camera renders an image of the field-of-view. The controller is configured to define a lidar-grid that segregates the point-cloud into an array of patches, and define a camera-grid that segregates the image into an array of cells. The point-cloud and the image are aligned such that a patch is aligned with a cell. A patch is determined to be ground when its height is less than a height-threshold. The controller is configured to determine a lidar-characteristic of cloud-points within the patch, determine a camera-characteristic of pixels within the cell, and determine a classification of the patch when the patch is determined to be ground, wherein the classification of the patch is determined based on the lidar-characteristic and the camera-characteristic.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: August 25, 2020
    Assignee: Aptiv Technologies Limited
    Inventors: Ronald M. Taylor, Izzat H. Izzat
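    A hedged sketch of the patch/cell logic described above: lidar points are binned into grid patches, a patch whose height span stays below a threshold is treated as ground, and a lidar characteristic (height roughness) is combined with the aligned camera cell's pixel characteristic to classify the ground cover. The grid size, the 0.2 m threshold, the green-ratio rule, and the class names are all illustrative assumptions.
    ```python
    # Hedged sketch: classify ground patches from lidar + aligned camera cells.
    import numpy as np

    def classify_ground_patches(points, image_cells, cell_size=1.0, height_thresh=0.2):
        """points: (N, 3) x/y/z lidar points; image_cells: dict (i, j) -> HxWx3 pixels."""
        labels = {}
        ij = np.floor(points[:, :2] / cell_size).astype(int)
        for key in {tuple(k) for k in ij}:
            patch = points[np.all(ij == key, axis=1)]
            if patch[:, 2].max() - patch[:, 2].min() >= height_thresh:
                continue                                    # not ground
            lidar_char = patch[:, 2].std()                  # lidar-characteristic: roughness
            cell = image_cells.get(key)
            green = cell[..., 1].mean() / (cell.mean() + 1e-6) if cell is not None else 0.0
            labels[key] = "grass" if green > 1.05 and lidar_char > 0.02 else "pavement"
        return labels

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pts = np.c_[rng.uniform(0, 2, (500, 2)), rng.uniform(0, 0.05, 500)]
        cells = {k: rng.uniform(0, 1, (8, 8, 3)) for k in [(0, 0), (0, 1), (1, 0), (1, 1)]}
        print(classify_ground_patches(pts, cells))
    ```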
  • Patent number: 10755080
    Abstract: An information processing apparatus includes first and second acquisition units and first and second search units. The first acquisition unit acquires a first feature amount from a search source image including a search object. The first search unit searches for the search object from a plurality of video images based on the first feature amount acquired by the first acquisition unit. The second acquisition unit acquires a second feature amount from the search object searched by the first search unit. The second feature amount is different from the first feature amount. The second search unit searches, based on the second feature amount acquired by the second acquisition unit, the search object from a video image, among the plurality of video images, in which the search object is not searched by at least the first search unit.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: August 25, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Masahiro Matsushita, Hirotaka Shiiyama
  • Patent number: 10755143
    Abstract: Provided is an object detection device or the like which efficiently generates good-quality training data. This object detection device is provided with: a detection unit which uses a dictionary to detect objects from an input image; a reception unit which displays, on a display device, the input image accompanied by a display emphasizing partial areas of detected objects, and receives, from one operation of an input device, a selection of a partial area and an input of the class of the selected partial area; a generation unit which generates training data from the image of the selected partial area and the inputted class; and a learning unit which uses the training data to learn the dictionary.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: August 25, 2020
    Assignee: NEC CORPORATION
    Inventor: Yusuke Takahashi
  • Patent number: 10755039
    Abstract: A system and process for extracting information from filled form images is described. In one example, the claimed invention first extracts textual information and the hierarchy in a blank form. This information is then used to extract and understand the content of filled forms. In this way, the system does not have to analyze each filled form from the beginning. The system is designed so that it remains as generic as possible. The number of hard-coded rules in the whole pipeline is minimized to offer an adaptive solution able to address the largest number of forms, with various structures and typography. The system is also created to be integrated as a built-in function in a larger pipeline. The form understanding pipeline could be the starting point of any advanced Natural Language Processing application.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: August 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Antonio Foncubierta Rodriguez, Guillaume Jaume, Maria Gabrani
  • Patent number: 10755112
    Abstract: Methods and systems for reducing an amount of data storage and/or processing power necessary for training data for machine learning by replacing three-dimensional models with a plurality of two-dimensional images are disclosed. The method includes determining an object of interest from a three-dimensional model and cropping the object of interest into a plurality of two-dimensional images. The plurality of two-dimensional images are cropped such that only the object of interest remains. The plurality of two-dimensional images are cropped with respect to particular attributes, such as a road width, a road angle, or an angle with respect to an adjacent vehicle. An image capturing device captures the real time background images. The objects within the background images are categorized using associated attributes. The attributes are synthesized with the plurality of two-dimensional images such that a plurality of replica two-dimensional images of the 3D model with real-time backgrounds are generated.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: August 25, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventor: Mitsuhiro Mabuchi
  • Patent number: 10755420
    Abstract: An object tracking method and apparatus are provided. The object tracking method includes detecting a target object in a first-type input image that is based on light in a first wavelength band, tracking the target object in the first-type input image based on detection information of the target object, measuring a reliability of the first-type input image by comparing the first-type image to an image in a database, comparing the reliability of the first-type input image to a threshold, and tracking the target object in a second-type input image that is based on light in a second wavelength band.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: August 25, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jingu Heo, Dong Kyung Nam
  • Patent number: 10748034
    Abstract: A method for training a learning-based medical scanner including (a) obtaining training data from demonstrations of scanning sequences, and (b) learning the medical scanner's control policies using a deep reinforcement learning framework based on the training data.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: August 18, 2020
    Assignee: Siemens Healthcare GmbH
    Inventors: Vivek Kumar Singh, Klaus J. Kirchberg, Kai Ma, Yao-jen Chang, Terrence Chen
  • Patent number: 10747994
    Abstract: Disclosed are a method and apparatus for identifying versions of a form. In an example, clients of a medical company fill out many forms, and many of these forms have multiple versions. The medical company operates in 10 states, and each state has a different version of a client intake form, as well as of an insurance identification form. In order to automatically extract information from a particular filled out form, it may be helpful to identify a particular form template, as well as the version of the form template, of which the filled out form is an instance. A computer system evaluates images of filled out forms, and identifies various form templates and versions of form templates based on the images.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: August 18, 2020
    Assignee: Captricity, Inc.
    Inventor: Ramesh Sridharan
  • Patent number: 10748279
    Abstract: An image processing apparatus includes: an image acquiring unit configured to acquire a lumen image of a living body; an image generating unit configured to generate a plurality of new images by changing at least one of a hue and a brightness of the lumen image to predetermined values based on range information in which a range of at least one of a hue and a brightness is set in advance according to biometric information included in the lumen image; and a learning unit configured to learn identification criteria to perform identification of a subject based on the images.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: August 18, 2020
    Assignee: OLYMPUS CORPORATION
    Inventors: Makoto Kitamura, Mitsutaka Kimura
  • Patent number: 10747811
    Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: August 18, 2020
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
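    A hedged PyTorch sketch of the triplet training signal the abstract describes: a two-stream setup embeds foregrounds and backgrounds, and a triplet margin loss pulls together the foreground and background taken from the same image while pushing away the dissimilar (negative) one. The backbone sizes and margin are illustrative assumptions.
    ```python
    # Hedged sketch: two-stream embeddings trained with a triplet margin loss.
    import torch
    import torch.nn as nn

    class TwoStreamEmbedder(nn.Module):
        def __init__(self, dim=128):
            super().__init__()
            backbone = lambda: nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))
            self.fg_stream, self.bg_stream = backbone(), backbone()

        def forward(self, fg, bg):
            return self.fg_stream(fg), self.bg_stream(bg)

    if __name__ == "__main__":
        model = TwoStreamEmbedder()
        loss_fn = nn.TripletMarginLoss(margin=0.5)
        fg_pos, bg_pos, bg_neg = (torch.rand(4, 3, 64, 64) for _ in range(3))
        e_fg, e_bg_pos = model(fg_pos, bg_pos)           # pair from the same image
        _, e_bg_neg = model(fg_pos, bg_neg)              # dissimilar negative background
        loss = loss_fn(e_fg, e_bg_pos, e_bg_neg)         # anchor, positive, negative
        loss.backward()
        print(float(loss))
    ```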
  • Patent number: 10740652
    Abstract: There is provided an image processing apparatus, for example for image recognition such as object counting with machine learning. A generation unit, based on a first captured image, generates first training data that indicates a first training image and an image recognition result for the first training image. A training unit, by performing training using the first training data, generates a discriminator for image recognition based on both the first training data and second training data that is prepared in advance and that indicates a second training image and an image recognition result for the second training image.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: August 11, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yasuo Bamba
  • Patent number: 10740489
    Abstract: The invention relates to obfuscating data while maintaining local predictive relationships. An embodiment of the present invention is directed to cryptographically obfuscating a data set in a manner that hides personally identifiable information (PII) while allowing third parties to train classes of machine learning algorithms effectively. According to an embodiment of the present invention, the obfuscation acts as a symmetric encryption so that the original obfuscating party may relate the predictions on the obfuscated data to the original PII. The various features of the present invention enable third party prediction services to safely interact with PII.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: August 11, 2020
    Assignee: JPMorgan Chase Bank, N.A.
    Inventors: Carter Tazio Schonwald, Graham L. Giller
  • Patent number: 10733294
    Abstract: Systems and methods may be used to classify incoming testing data, such as binaries, function calls, an application package, or the like, to determine whether the testing data is contaminated using an adversarial attack or benign while training a machine learning system to detect malware. A method may include using a sparse coding technique or a semi-supervised learning technique to classify the testing data. Training data may be used to represent the testing data using the sparse coding technique or to train the supervised portion of the semi-supervised learning technique.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: August 4, 2020
    Assignee: Intel Corporation
    Inventor: Li Chen
  • Patent number: 10735446
    Abstract: Embodiments presented herein describe a method for processing streams of data of one or more networked computer systems. According to one embodiment of the present disclosure, an ordered stream of normalized vectors corresponding to information security data obtained from one or more sensors monitoring a computer network is received. A neuro-linguistic model of the information security data is generated by clustering the ordered stream of vectors and assigning a letter to each cluster, outputting an ordered sequence of letters based on a mapping of the ordered stream of normalized vectors to the clusters, building a dictionary of words from the ordered output of letters, outputting an ordered stream of words based on the ordered output of letters, and generating a plurality of phrases based on the ordered output of words.
    Type: Grant
    Filed: May 13, 2018
    Date of Patent: August 4, 2020
    Assignee: Intellective Ai, Inc.
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow, Curtis Edward Cole, Cody Shay Falcon, Benjamin A. Konosky, Charles Richard Morgan, Aaron Poffenberger, Thong Toan Nguyen
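    A hedged sketch of the first stages of the pipeline described above: normalized vectors are clustered, each cluster is assigned a letter, the vector stream becomes a letter stream, and frequent fixed-length letter sequences are promoted to dictionary words. The greedy online clustering, the cluster radius, and the bigram word length are all assumptions.
    ```python
    # Hedged sketch: vectors -> cluster letters -> dictionary of frequent "words".
    import string
    from collections import Counter
    import numpy as np

    def assign_letters(vectors, radius=1.0):
        """Greedy online clustering; returns one letter per input vector."""
        centers, letters = [], []
        for v in vectors:
            dists = [np.linalg.norm(v - c) for c in centers]
            if dists and min(dists) < radius:
                idx = int(np.argmin(dists))
            else:
                centers.append(np.array(v, dtype=float))
                idx = len(centers) - 1
            letters.append(string.ascii_lowercase[idx % 26])
        return letters

    def build_words(letters, word_len=2, min_count=3):
        grams = ["".join(letters[i:i + word_len]) for i in range(len(letters) - word_len + 1)]
        return {w for w, c in Counter(grams).items() if c >= min_count}

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        stream = rng.normal(loc=rng.integers(0, 3, size=200)[:, None], scale=0.1, size=(200, 4))
        letters = assign_letters(stream)
        print(sorted(build_words(letters))[:10])
    ```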
  • Patent number: 10733742
    Abstract: A method enables object label persistence between subsequent images captured by a camera. One or more processors receive a first image, which is captured by an image sensor on a camera, and which includes a depiction of an object. The processor(s) generate a label for the object, and display the first image on a display. The processor(s) subsequently receive movement data that describes a movement of the camera after the image sensor on the camera captures the first image and before the image sensor on the camera captures a second image. The processor(s) receive the second image. The processor(s) display the second image on the display, and then detect a pixel shift between the first image and the second image as displayed on the display. The processor(s) then label the object with the label on the second image as displayed on the display.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: August 4, 2020
    Assignee: International Business Machines Corporation
    Inventors: Eric Rozner, Chungkuk Yoo, Inseok Hwang
  • Patent number: 10733885
    Abstract: A method of operating an incident avoidance system for use in a vehicle comprises a gateway receiving a plurality of vehicular data samples from a plurality of data sources in a vicinity of a target vehicle. A stream processor, coupled to the gateway, categorizes a first plurality of low latency data samples from the plurality of vehicular data samples based on an allowable latency of each of the plurality of vehicular data samples. A rules engine, coupled to the stream processor, receives the plurality of low latency data samples. The rules engine produces a predictive model based on the plurality of low latency data samples. A notification service accesses the predictive model and situational data of the target vehicle to predict an incident. The notification service transmits a notification of the incident to the target vehicle.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: August 4, 2020
    Assignee: Complete Innovations Inc.
    Inventors: Jihyun Cho, Muhamad Samji, Alan Fong, Tony Lourakis, Victor Lan, Pantelis Chatzinikolis, Wei Zhou
  • Patent number: 10726833
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for generating domain-specific speech recognition models for a domain of interest by combining and tuning existing speech recognition models when a speech recognizer does not have access to a speech recognition model for that domain of interest and when available domain-specific data is below a minimum desired threshold to create a new domain-specific speech recognition model. A system configured to practice the method identifies a speech recognition domain and combines a set of speech recognition models, each speech recognition model of the set of speech recognition models being from a respective speech recognition domain. The system receives an amount of data specific to the speech recognition domain, wherein the amount of data is less than a minimum threshold to create a new domain-specific model, and tunes the combined speech recognition model for the speech recognition domain based on the data.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: July 28, 2020
    Assignee: NUANCE COMMUNICATIONS, INC.
    Inventors: Srinivas Bangalore, Robert Bell, Diamantino Antonio Caseiro, Mazin Gilbert, Patrick Haffner
  • Patent number: 10726558
    Abstract: Various image analysis techniques are disclosed herein that automatically assess the damage to a rooftop of a building or other object. In some aspects, the system may determine the extent of the damage, as well as the type of damage. Further aspects provide for the automatic detection of the roof type, roof geometry, shingle or tile count, or other features that can be extracted from images of the rooftop.
    Type: Grant
    Filed: February 22, 2018
    Date of Patent: July 28, 2020
    Assignee: Dolphin AI, Inc.
    Inventors: Harald Ruda, Nicholas Hughes, Alexander Hughes
  • Patent number: 10713538
    Abstract: Aspects of the present disclosure involve a system and method for learning from images of raw data, including transactional data. In one embodiment, a system is introduced that can learn from the images of such data. In particular, machine learning is implemented on images in order to classify information in a more accurate manner. The images are created from raw data deriving from data sources relating to user accounts, in various embodiments.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: July 14, 2020
    Assignee: PayPal, Inc.
    Inventors: Lian Liu, Hui-Min Chen
  • Patent number: 10713816
    Abstract: Disclosed in some examples, are methods, systems, and machine readable mediums that correct image color casts by utilizing a fully convolutional network (FCN), where the patches in an input image may differ in influence over the color constancy estimation. This influence is formulated as a confidence weight that reflects the value of a patch for inferring the illumination color. The confidence weights are integrated into a novel pooling layer where they are applied to local patch estimates in determining a global color constancy result.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: July 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yuanming Hu, Baoyuan Wang, Stephen S. Lin
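    A hedged sketch of the confidence-weighted pooling idea in the abstract: each patch yields a local illuminant estimate and a confidence weight, and the global color constancy result is the confidence-weighted average of the local estimates. The random patch estimates stand in for the FCN outputs and are purely illustrative.
    ```python
    # Hedged sketch: confidence-weighted pooling of per-patch illuminant estimates.
    import numpy as np

    def confidence_weighted_pooling(local_estimates, confidences):
        """local_estimates: (P, 3) per-patch RGB illuminant; confidences: (P,)."""
        w = np.clip(confidences, 0.0, None)
        pooled = (local_estimates * w[:, None]).sum(axis=0) / max(w.sum(), 1e-9)
        return pooled / np.linalg.norm(pooled)          # unit-norm global illuminant color

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        est = rng.uniform(0.2, 1.0, size=(64, 3))        # 64 local patch estimates
        conf = rng.uniform(0.0, 1.0, size=64)            # per-patch confidence weights
        print(confidence_weighted_pooling(est, conf))
    ```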
  • Patent number: 10706332
    Abstract: An analog circuit fault mode classification method comprises the following implementation steps: (1) collecting M groups of voltage signal sample vectors V_ij for each of the fault modes F_i of the analog circuit by using a data collection board; (2) sequentially extracting fault characteristic vectors V_ij^F of the voltage signal sample vectors V_ij by using subspace projection; (3) standardizing the extracted fault characteristic vectors V_ij^F to obtain standardized fault characteristic vectors; (4) constructing a fault mode classifier based on a support vector machine, inputting the standardized fault characteristic vectors, performing learning and training on the classifier, and determining structure parameters of the classifier; and (5) completing determination of fault modes according to fault mode determination rules. The fault mode classifier of the present invention is simple in learning and training and reliable in mode classification accuracy.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: July 7, 2020
    Assignee: HEFEI UNIVERSITY OF TECHNOLOGY
    Inventors: Lifen Yuan, Shuai Luo, Yigang He, Peng Chen, Chaolong Zhang, Ying Long, Zhen Cheng, Zhijie Yuan, Deqin Zhao
  • Patent number: 10699161
    Abstract: A method of tuning a generative model may be provided. A method may include receiving, at a first generative adversarial network (GAN), a first input identifying an item and at least one user-defined attribute for the item. The method may also include generating, via the first GAN, a first image of the item based on the first input. Further, the method may include receiving, at a second GAN, the first image and a second input indicative of a desire for more or less of the at least one user-defined attribute. Moreover, the method may include generating, via the second GAN, a second image of the item based on the first image and the second input.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: June 30, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Ramya Malur Srinivasan, Ajay Chander
  • Patent number: 10699163
    Abstract: A human expert may initially label a white light image of teeth, and computer vision may initially label a filtered fluorescent image of the same teeth. Each label may indicate presence or absence of dental plaque at a pixel. The images may be registered. For each pixel of the registered images, a union label may be calculated, which is the union of the expert label and computer vision label. The union labels may be applied to the white light image. This process may be repeated to create a training set of union-labeled white light images. A classifier may be trained on this training set. Once trained, the classifier may classify a previously unseen white light image, by predicting union labels for that image. Alternatively, the items that are initially labeled may comprise images captured by two different imaging modalities, or may comprise different types of sensor measurements.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: June 30, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: Pratik Shah, Gregory Yauney
  • Patent number: 10692217
    Abstract: An image processing method and an image processing system are provided. A plurality of image detections are performed on regions of the image data, such that the detections can adequately meet a variety of needs.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: June 23, 2020
    Assignee: SERCOMM CORPORATION
    Inventor: Shao-Hai Zhao
  • Patent number: 10684998
    Abstract: Mismatches between schema elements of a data set and a job are identified automatically. Furthermore, the mismatches can be presented visually in conjunction with an interactive visual workspace configured to support diagrammatic authoring of data transformation pipelines. After a data set is connected to a job, one or more mismatches can be determined and presented in context with the workspace. In addition, schema elements can be reconfigured by way of interaction with a visual representation of schema elements to resolve mismatches.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: June 16, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Pedro Ardila, Christina Storm, Andrew J. Peacock, Amir Netz, Cheryl Couris
  • Patent number: 10657544
    Abstract: Embodiments are directed to a computer implemented business campaign development system. The system includes an electronic tool configured to hold data of a user, and an analyzer circuit configured to derive a cognitive trait of the user based at least in part on the data of the user. The system further includes a targeted business strategy development system configured to derive a targeted business strategy based at least in part on the cognitive trait of the user.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: May 19, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guillermo A. Cecchi, James R. Kozloski, Clifford A. Pickover, Irina Rish
  • Patent number: 10657543
    Abstract: Embodiments are directed to a computer implemented business campaign development system. The system includes an electronic tool configured to hold data of a user, and an analyzer circuit configured to derive a cognitive trait of the user based at least in part on the data of the user. The system further includes a targeted business strategy development system configured to derive a targeted business strategy based at least in part on the cognitive trait of the user.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: May 19, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guillermo A. Cecchi, James R. Kozloski, Clifford A. Pickover, Irina Rish
  • Patent number: 10657574
    Abstract: Techniques disclosed herein provide more efficient and more relevant item recommendations to users in large-scale environments in which only positive interest information is known. The techniques use a rank-constrained formulation that generalizes relationships based on known user interests in items and/or use a randomized singular value decomposition (SVD) approximation technique to solve the formulation to identify items of interest to users in an efficient, scalable manner.
    Type: Grant
    Filed: September 13, 2016
    Date of Patent: May 19, 2020
    Assignee: Adobe Inc.
    Inventors: Hung Bui, Branislav Kveton, Suvash Sedhain, Nikolaos Vlassis, Jaya Kawale
  • Patent number: 10650290
    Abstract: Sketch completion using machine learning in a digital medium environment is described. Initially, a user sketches a digital image, e.g., using a stylus in a digital sketch application. A model trained using machine learning is leveraged to identify and describe visual characteristics of the user sketch. The visual characteristics describing the user sketch are compared to clusters of data generated by the model and that describe visual characteristics of a set of digital sketch images. Based on the comparison, digital sketch images having visual characteristics similar to the user sketch are identified from similar clusters. The similar images are returned for presentation as selectable suggestions for sketch completion of the sketched object.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: May 12, 2020
    Assignee: Adobe Inc.
    Inventors: Piyush Singh, Vikas Kumar, Sourabh Gupta, Nandan Jha, Nishchey Arya, Rachit Gupta
  • Patent number: 10650237
    Abstract: A computer implemented recognition process of an object in a query image provides a set of training images, each training image being defined by a plurality of pixels and comprising an object tag; determines for each training image of the set a plurality of first descriptors, each first descriptor being a vector that represents pixel properties in a corresponding subregion of the associated training image; and selects among the first descriptors a group of exemplar descriptors describing the set of training images, wherein selecting the exemplar descriptors includes determining the first descriptors having a number of repetitions in the set of training images higher than a certain value.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: May 12, 2020
    Assignee: LOGOGRAB LIMITED
    Inventor: Alessandro Prest
  • Patent number: 10650236
    Abstract: A road detection method and apparatus. A specific embodiment of the method includes: acquiring an image of a predetermined region; semantically segmenting the image to acquire a first probability that a region corresponding to each pixel in the image is a road region; acquiring a historical position information set of a target terminal; correcting, in response to historical position information existing in the historical position information set, the historical position information indicating a historical position located in the predetermined region, the first probability according to the historical position information to obtain a second probability; and determining a region corresponding to a pixel having the second probability greater than a preset threshold as a road region. Such an embodiment improves the road detection accuracy.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: May 12, 2020
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Yuan Xia, Yehui Yang, Haishan Wu, Jingbo Zhou, Chao Li
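    A hedged sketch of the correction step in the abstract: the per-pixel road probability from semantic segmentation is increased near historical positions of the target terminal that fall in the predetermined region, then thresholded to obtain road regions. The boost amount, the neighborhood radius, and the threshold are illustrative assumptions.
    ```python
    # Hedged sketch: correct segmentation probabilities with historical GPS positions.
    import numpy as np

    def correct_road_probability(prob_map, history_px, boost=0.3, radius=5):
        """prob_map: HxW first probabilities; history_px: list of (row, col) positions."""
        second = prob_map.copy()
        h, w = prob_map.shape
        for r, c in history_px:
            r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
            c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
            second[r0:r1, c0:c1] = np.clip(second[r0:r1, c0:c1] + boost, 0.0, 1.0)
        return second

    if __name__ == "__main__":
        prob = np.full((100, 100), 0.4)                  # first probability map
        corrected = correct_road_probability(prob, [(50, 50), (52, 60)])
        road_mask = corrected > 0.6                      # regions determined as road
        print(int(road_mask.sum()))
    ```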
  • Patent number: 10643066
    Abstract: A method and apparatus for training a character detector based on weak supervision, a character detection system and a computer readable storage medium are provided, wherein the method includes: inputting coarse-grained annotation information of a to-be-processed object, wherein the coarse-grained annotation information includes a whole bounding outline of a word, text bar or line of the to-be-processed object; dividing the whole bounding outline of the coarse-grained annotation information, to obtain a coarse bounding box of a character of the to-be-processed object; obtaining a predicted bounding box of the character of the to-be-processed object through a neural network model from the coarse-grained annotation information; and determining a fine bounding box of the character of the to-be-processed object as character-based annotation of the to-be-processed object, according to the coarse bounding box and the predicted bounding box.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: May 5, 2020
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Chengquan Zhang, Jiaming Liu, Junyu Han, Errui Ding
  • Patent number: 10643131
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a variational auto-encoder (VAE) to generate disentangled latent factors on unlabeled training images. In one aspect, a method includes receiving the plurality of unlabeled training images, and, for each unlabeled training image, processing the unlabeled training image using the VAE to determine the latent representation of the unlabeled training image and to generate a reconstruction of the unlabeled training image in accordance with current values of the parameters of the VAE, and adjusting current values of the parameters of the VAE by optimizing a loss function that depends on a quality of the reconstruction and also on a degree of independence between the latent factors in the latent representation of the unlabeled training image.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: May 5, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Loic Matthey-de-l'Endroit, Arka Tilak Pal, Shakir Mohamed, Xavier Glorot, Irina Higgins, Alexander Lerchner
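    A hedged PyTorch sketch of a VAE loss of the kind the abstract describes: a reconstruction term plus a term that pressures the latent factors toward independence (here the KL divergence to an isotropic Gaussian prior, scaled by a weight, in the spirit of beta-VAE). Network sizes and the weight value are illustrative assumptions, not the patent's exact objective.
    ```python
    # Hedged sketch: VAE loss balancing reconstruction and latent-factor independence.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyVAE(nn.Module):
        def __init__(self, in_dim=784, latent_dim=10):
            super().__init__()
            self.enc = nn.Linear(in_dim, 2 * latent_dim)     # outputs mean and log-variance
            self.dec = nn.Linear(latent_dim, in_dim)

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
            return self.dec(z), mu, logvar

    def vae_loss(x, recon, mu, logvar, beta=4.0):
        recon_term = F.mse_loss(recon, x, reduction="mean")            # reconstruction quality
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # independence pressure
        return recon_term + beta * kl

    if __name__ == "__main__":
        model = TinyVAE()
        x = torch.rand(16, 784)                    # a batch of unlabeled training images
        recon, mu, logvar = model(x)
        loss = vae_loss(x, recon, mu, logvar)
        loss.backward()                            # adjust VAE parameters by optimizing the loss
        print(float(loss))
    ```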