Trainable Classifiers Or Pattern Recognizers (e.g., Adaline, Perceptron) Patents (Class 382/159)
  • Patent number: 10915734
    Abstract: An image captured using a camera on a device (e.g., a mobile device) may be operated on by one or more processes to determine properties of a user's face in the image. A first process may determine one or more first properties of the user's face in the image. A second process operating downstream from the first process may determine at least one second property of the user's face in the image. The second process may use at least one of the first properties from the first process to determine the second property.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: February 9, 2021
    Assignee: Apple Inc.
    Inventors: Atulit Kumar, Joerg A. Liebelt, Onur C. Hamsici, Feng Tang
  • Patent number: 10909421
    Abstract: The training method for the phase image generator includes the following steps. Firstly, in each iteration, a loss value is generated, including: (1). the phase image generator generates a plurality of generated phase images using a phase image generation mode; (2). the phase image determiner determines a degree of difference between the generated phase images and original phase images; (3). the loss value of the generated phase images is generated according to the degree of difference. Then, a selector selects a stable loss value from the loss values, and uses the phase image generation mode in the iteration corresponding to the stable loss value as a selected phase image generation mode of the phase image generator.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: February 2, 2021
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Chun-Wei Liu, Luan-Ying Chen, Kao-Der Chang
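The selector in the entry above chooses a "stable loss value" across iterations, but the abstract does not define stability. As a hedged illustration only, the sketch below treats the most stable iteration as the one ending the trailing window of loss values with the lowest variance; the function name and the windowed-variance criterion are assumptions, not the patent's method.

```python
# Hypothetical sketch of the "selector" step: pick the training iteration
# whose loss is most stable, judged here by the variance of losses over a
# trailing window. "Stable" is not defined in the abstract; this criterion
# is an assumption for illustration.
def select_stable_iteration(losses, window=3):
    """Return the index of the iteration ending the lowest-variance window."""
    best_idx, best_var = None, float("inf")
    for end in range(window, len(losses) + 1):
        chunk = losses[end - window:end]
        mean = sum(chunk) / window
        var = sum((x - mean) ** 2 for x in chunk) / window
        if var < best_var:
            best_idx, best_var = end - 1, var
    return best_idx

# Toy loss curve: the plateau around iterations 2-4 is the stable region.
losses = [5.0, 2.0, 1.1, 1.0, 1.05, 3.0]
stable = select_stable_iteration(losses)
```

The phase image generation mode saved at `stable` would then serve as the selected generation mode.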
  • Patent number: 10909871
    Abstract: Disclosed is a method of providing user-customized learning content. The method includes: a step a of configuring a question database including one or more multiple-choice questions having one or more choice items and collecting choice item selection data of a user for the questions; a step b of calculating a modeling vector for the user based on the choice item data and generating modeling vectors for the questions according to each choice item; and a step c of calculating choice item selection probabilities of the user based on the modeling vectors of the user and the modeling vectors of the questions.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: February 2, 2021
    Inventors: Yeong Min Cha, Jae We Heo, Young Jun Jang
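Step c of the entry above computes choice-item selection probabilities from the user's modeling vector and the per-choice modeling vectors of a question. One common way to realize this, shown below as an assumption rather than the patent's exact formulation, is a softmax over the dot products between the user vector and each choice-item vector; all vector values are made-up toy numbers.

```python
import math

# Illustrative sketch (not the patent's exact formulation): the probability
# that a user selects each choice item is a softmax over dot products
# between the user's modeling vector and each choice item's modeling vector.
def choice_probabilities(user_vec, choice_vecs):
    scores = [sum(u * c for u, c in zip(user_vec, cv)) for cv in choice_vecs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

user = [0.5, -0.2, 0.8]                       # toy user modeling vector
choices = [[1.0, 0.0, 0.0],                   # toy per-choice vectors
           [0.0, 1.0, 0.0],
           [0.0, 0.0, 1.0]]
probs = choice_probabilities(user, choices)   # highest for the third choice
```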
  • Patent number: 10902573
    Abstract: A method, a computer program product, and a computer system for cognitive validation of date/time information based on weather information. A computer trains a machine learning model to determine weather properties in images, by utilizing training images with verified metadata and historical weather data. The computer receives an image taken at a specific location and at an alleged time. The computer generates summarized weather hypotheses with a probability distribution, based on probabilities and confidence levels of the one or more weather hypotheses. The computer verifies the alleged time, by using the machine learning model. The machine learning model is used to determine weather properties in the image and to compare the weather properties in the image to known weather information at the specific location and at the alleged time.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ana C. Biazetti, Jose A. Nativio
  • Patent number: 10902300
    Abstract: The present disclosure provides a method and apparatus for training a fine-grained image recognition model, a device and a storage medium. The method comprises: obtaining images as training samples, and respectively obtaining a tag corresponding to each image, the tag including a class to which the image belongs; training according to the training samples and corresponding tags to obtain a fine-grained image recognition model, and performing constraint at a feature level from two dimensions, namely, the class and object parts, during the training, so that the fine-grained image recognition model learns key object parts in the images; upon performing the fine-grained image recognition, inputting a to-be-recognized image to the fine-grained image recognition model, so that the fine-grained image recognition model positions key object parts in the image, and completes fine-grained image classification according to the key object parts, and outputs a classification result.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: January 26, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Ming Sun, Yuchen Yuan, Feng Zhou
  • Patent number: 10896598
    Abstract: An embodiment includes receiving, by a processor, a sensor signal from a monitoring sensor during a scheduled monitoring session for monitoring a first user. An embodiment includes processing, by the processor, the sensor signal using a machine learning (ML) model such that the ML model outputs an indication of whether the first user is experiencing a potential emergency. An embodiment includes performing, by the processor in response to the ML model indicating that the first user is experiencing a potential emergency, a verification routine that includes transmitting a verification request and, upon detecting a lack of response to the verification request within a predetermined amount of time, confirming the potential emergency as an actual emergency. An embodiment includes requesting, by the processor automatically in response to the verification routine confirming that the potential emergency is an actual emergency, dispatch of emergency services to a location of the first user.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: January 19, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gregory J. Boss, Rhonda L. Childress, Michael Bender, Matthew Johnson
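The verification routine in the entry above escalates a potential emergency to an actual emergency only when the monitored user fails to answer a verification request in time. A minimal sketch of that decision logic, with the function name and interface invented for illustration (the timeout itself is abstracted into whether a response arrived):

```python
# Hedged sketch of the verification routine: confirm the emergency and
# request dispatch only if the ML model flagged a potential emergency AND
# no response to the verification request arrived within the predetermined
# time. Names and the boolean interface are illustrative assumptions.
def confirm_emergency(ml_flags_emergency, response_to_verification):
    """Return True if the emergency is confirmed and dispatch is warranted.

    response_to_verification is the user's reply, or None if nothing
    arrived before the timeout expired.
    """
    if not ml_flags_emergency:
        return False                  # no potential emergency detected
    if response_to_verification is not None:
        return False                  # user responded: treat as false alarm
    return True                       # no response: confirm, dispatch

dispatch = confirm_emergency(True, None)   # flagged, no reply -> confirm
```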
  • Patent number: 10896495
    Abstract: The present application discloses a method performed by an electronic apparatus for detecting and tracking a target object. The method includes obtaining a first frame of scene; performing object detection and recognition of at least two portions of the target object respectively in at least two bounding boxes of the first frame of scene; obtaining a second frame of scene, the second frame of scene being later in time than the first frame of scene; and performing object tracking of the at least two portions of the target object respectively in the at least two bounding boxes of the first frame of scene.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: January 19, 2021
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Yu Gu
  • Patent number: 10891845
    Abstract: A mouth and nose occluded detecting method includes a detecting step and a warning step. The detecting step includes a facial detecting step, an image extracting step and an occluded determining step. In the facial detecting step, an image is captured by an image capturing device, wherein a facial portion image is obtained from the image. In the image extracting step, a mouth portion is extracted from the facial portion image so as to obtain a mouth portion image. In the occluded determining step, the mouth portion image is entered into an occluding convolutional neural network so as to produce a determining result, wherein the determining result is an occluding state or a normal state. In the warning step, a warning is provided according to the determining result.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: January 12, 2021
    Assignee: NATIONAL YUNLIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Chuan-Yu Chang, Fu-Jen Tsai
  • Patent number: 10891516
    Abstract: A learning apparatus causes a first supervised learning model, which receives feature data generated from input data having data items with which a first label and a second label are associated and outputs a first estimation result, to learn such that the first estimation result is close to the first label. The learning apparatus causes a second supervised learning model, which receives the feature data and outputs a second estimation result, to learn such that the second estimation result is close to the second label. The learning apparatus causes a feature extractor, which generates the feature data from the input data, to learn so as to facilitate recognition of the first label and suppress recognition of the second label.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: January 12, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Toshio Endoh, Kento Uemura
  • Patent number: 10891485
    Abstract: Implementations relate to removal of one or more images from a view of a plurality of images. In some implementations, a method includes obtaining a plurality of images, programmatically analyzing the plurality of images to determine a plurality of image features, and determining one or more image categories for the plurality of images based on the image features. The method further includes identifying a subset of the plurality of images based on the image categories, wherein each image of the subset is associated with an image category for archival. The method further includes causing a user interface to be displayed that includes one or more images of the subset, receiving user input to archive at least one of the one or more images, and in response to the user input, removing the at least one of the images from a view of the plurality of images.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: January 12, 2021
    Assignee: Google LLC
    Inventors: Juan Carlos Anorga, David Lieb, Madhur Khandelwal, Evan Millar, Timothy Novikoff, Mugdha Kulkarni, Leslie Ikemoto, Jorge Verdu, Jingyu Cui, Sharadh Ramaswamy, Raja Ratna Murthy Ayyagari, Marc Cannon, Alexander Roe, Shaun Tungseth, Songbo Jin, Matthew Bridges, Ruirui Jiang, Jeremy Selier, Austin Suszek, Gang Song
  • Patent number: 10885389
    Abstract: A first imaging simulation section in a learning device generates image data that indicates a learning image captured by the imaging section from image data on the learning image, and a second imaging simulation section generates image data that indicates the learning image captured by an imaging section from the image data on the learning image. A plurality of parameter generation sections each generate a characteristic difference correction parameter for making characteristics of student data identical to characteristics of teacher data by learning, assuming one of the generated image data as the teacher data and the other image data as the student data, and the parameter generation sections store the generated characteristic difference correction parameters in a plurality of database sections. A characteristic difference correction section corrects one image data having a lower performance to image data having a high performance, using the stored characteristic difference parameters.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: January 5, 2021
    Assignee: SONY CORPORATION
    Inventor: Hideyuki Ichihashi
  • Patent number: 10885388
    Abstract: A method for generating training data for a deep learning network is provided.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: January 5, 2021
    Assignee: Superb AI Co., Ltd.
    Inventor: Kye-Hyeon Kim
  • Patent number: 10885370
    Abstract: An example system includes a processor to receive detections or recognitions with confidence scores for an object in a medium from a plurality of trained detection or recognition models. The processor is to generate a probability of correctness for each of the detections or recognitions based on the confidence scores via correctness mappings generated for each of the trained detection or recognition models. The processor is to also select a detection or recognition with a higher probability of correctness from the detections or recognitions. The processor is to perform a detection or recognition task based on the selected detection or recognition.
    Type: Grant
    Filed: December 16, 2018
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventors: Tal Hakim, Dror Porat
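The entry above maps each model's raw confidence score to a probability of correctness via a per-model "correctness mapping", then selects the detection with the highest mapped probability. The sketch below assumes a piecewise-linear mapping over calibration points; that interpolation choice and all numbers are illustrative, not from the patent.

```python
# Illustrative sketch: each model has a correctness mapping given as
# (confidence, probability-of-correctness) points; raw confidences are
# mapped by linear interpolation, and the best mapped detection wins.
def map_confidence(mapping, conf):
    """Piecewise-linear interpolation over sorted calibration points."""
    pts = sorted(mapping)
    if conf <= pts[0][0]:
        return pts[0][1]
    if conf >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= conf <= x1:
            return y0 + (y1 - y0) * (conf - x0) / (x1 - x0)

def select_detection(detections):
    """detections: list of (label, confidence, mapping). Return best label."""
    return max(detections, key=lambda d: map_confidence(d[2], d[1]))[0]

cal_a = [(0.0, 0.1), (1.0, 0.7)]   # toy mapping: model A is over-confident
cal_b = [(0.0, 0.2), (1.0, 0.95)]  # toy mapping: model B tracks correctness
best = select_detection([("cat", 0.9, cal_a), ("dog", 0.8, cal_b)])
```

Note that "dog" wins despite its lower raw confidence, because model B's scores translate into a higher probability of correctness.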
  • Patent number: 10878269
    Abstract: Embodiments of the present disclosure pertain to extracting data corresponding to particular data types using neural networks. In one embodiment, a method includes receiving an image in a backend system, sending the image to an optical character recognition (OCR) component, and in accordance therewith, receiving a plurality of characters recognized in the image, sequentially processing the characters with a recurrent neural network to produce a plurality of outputs for each character, sequentially processing the plurality of outputs for each character with a masking neural network layer, and in accordance therewith, generating a first plurality of probabilities, wherein each probability corresponds to a particular character in the plurality of characters, selecting a second plurality of adjacent probabilities from the first plurality of probabilities that are above a threshold, and translating the second plurality of adjacent probabilities into output characters.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: December 29, 2020
    Assignee: SAP SE
    Inventor: Michael Stark
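The final steps in the entry above select adjacent per-character probabilities that exceed a threshold and translate them into output characters. A minimal sketch of that selection, under the assumption that the longest contiguous above-threshold run is kept (the abstract only requires adjacency):

```python
# Hedged sketch of the extraction step: after the masking layer assigns each
# OCR'd character a probability of belonging to the target data type, keep a
# contiguous run of characters whose probabilities exceed a threshold.
# Taking the LONGEST such run is an assumption for illustration.
def extract_span(chars, probs, threshold=0.5):
    best, cur = [], []
    for ch, p in zip(chars, probs):
        if p > threshold:
            cur.append(ch)
        else:
            if len(cur) > len(best):
                best = cur
            cur = []
    if len(cur) > len(best):
        best = cur
    return "".join(best)

chars = list("Total: 42 EUR")
# Toy masking-layer outputs: only the digits score above the threshold.
probs = [0.1] * 7 + [0.9, 0.9] + [0.2, 0.1, 0.1, 0.1]
amount = extract_span(chars, probs)
```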
  • Patent number: 10880274
    Abstract: A method for authorizing online sharing of content including a digital photograph or video, includes receiving, at an electronic device, the content, identifying an image of a person in the content, identifying authorization conditions associated with the person, identifying an image of an object or audio in the content, based on both the image of the person identified and the image of the object or audio identified, determining if the authorization conditions associated with the person are met, and in response to determining that the authorization conditions are met, providing online access to the digital photograph or video.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: December 29, 2020
    Assignee: BlackBerry Limited
    Inventor: Neil Patrick Adams
  • Patent number: 10878286
    Abstract: Provided is a learning device that includes at least one processing device configured to select image data as learning data based on classification confidences from a plurality of image data. Each of the classification confidences indicates a likelihood of accuracy of classification for a respective one of the plurality of image data. The at least one processing device is configured to: store the selected image data into a storage; perform learning using the stored image data and generating a learning model; perform classification of the image data used for the learning by using the learning model and update a classification confidence for the classified image data; and update the classification confidences each associated with a respective one of the plurality of image data when a count of image data on which the classification has been performed satisfies a predetermined condition.
    Type: Grant
    Filed: February 20, 2017
    Date of Patent: December 29, 2020
    Assignee: NEC CORPORATION
    Inventor: Daichi Hisada
  • Patent number: 10878576
    Abstract: Techniques for enhancing image segmentation with the integration of deep learning are disclosed herein. An example method for atlas-based segmentation using deep learning includes: applying a deep learning model to a subject image to identify an anatomical feature, registering an atlas image to the subject image, using the deep learning segmentation data to improve a registration result, generating a mapped atlas, and identifying the feature in the subject image using the mapped atlas. Another example method for training and use of a trained machine learning classifier, in an atlas-based segmentation process using deep learning, includes: applying a deep learning model to an atlas image, training a machine learning model classifier using data from applying the deep learning model, estimating structure labels of areas of the subject image, and defining structure labels by combining the estimated structure labels with labels produced from atlas-based segmentation on the subject image.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: December 29, 2020
    Assignee: Elekta, Inc.
    Inventors: Xiao Han, Nicolette Patricia Magro
  • Patent number: 10878298
    Abstract: The present disclosure relates to a tag-based font recognition system that utilizes a multi-learning framework to develop and improve tag-based font recognition using deep learning neural networks. In particular, the tag-based font recognition system jointly trains a font tag recognition neural network with an implicit font classification attention model to generate font tag probability vectors that are enhanced by implicit font classification information. Indeed, the font recognition system weights the hidden layers of the font tag recognition neural network with implicit font information to improve the accuracy and predictability of the font tag recognition neural network, which results in improved retrieval of fonts in response to a font tag query. Accordingly, using the enhanced tag probability vectors, the tag-based font recognition system can accurately identify and recommend one or more fonts in response to a font tag query.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: December 29, 2020
    Assignee: ADOBE INC.
    Inventors: Zhaowen Wang, Tianlang Chen, Ning Xu, Hailin Jin
  • Patent number: 10872236
    Abstract: Techniques for layout-agnostic clustering-based classification of document keys and values are described. A key-value differentiation unit generates feature vectors corresponding to text elements of a form represented within an electronic image using a machine learning (ML) model. The ML model was trained utilizing a loss function that separates keys from values. The feature vectors are clustered into at least two clusters, and a cluster is determined to include either keys of the form or values of the form via identifying neighbors between feature vectors of the cluster(s) with labeled feature vectors.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: December 22, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Hadar Averbuch Elor, Oron Anschel, Or Perel, Amit Adam, Shai Mazor, Rahul Bhotika, Stefano Soatto
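In the entry above, a cluster is determined to hold keys or values by finding neighbors between its feature vectors and labeled feature vectors. One hedged reading, sketched below, names a cluster after the labeled vector nearest its centroid; the toy 2-D features stand in for the ML model's embeddings and are not from the patent.

```python
# Toy sketch of naming a cluster "key" or "value": compute the cluster's
# centroid and adopt the label of the nearest labeled feature vector.
# Feature values are invented; a real system would use embeddings from the
# trained ML model described above.
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def label_cluster(cluster, labeled):
    """labeled: list of (vector, label). Return the label nearest the centroid."""
    n = len(cluster)
    centroid = [sum(xs) / n for xs in zip(*cluster)]
    return min(labeled, key=lambda lv: dist(lv[0], centroid))[1]

cluster = [[0.9, 0.1], [0.8, 0.2]]                  # unlabeled text elements
labeled = [([1.0, 0.0], "key"), ([0.0, 1.0], "value")]
role = label_cluster(cluster, labeled)              # cluster sits near "key"
```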
  • Patent number: 10867213
    Abstract: Provided is an object detection device for efficiently and simply selecting an image for creating instructor data on the basis of the number of detected objects. The object detection device is provided with: a detection unit for detecting an object from each of a plurality of input images using a dictionary; an acceptance unit for displaying, on a display device, a graph indicating the relationship between the input images and the number of subregions in which the objects are detected, and displaying, on the display device, in order to create instructor data, one input image among the plurality of input images in accordance with a position on the graph accepted by operation of an input device; a generation unit for generating the instructor data from the input image; and a learning unit for learning a dictionary from the instructor data.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: December 15, 2020
    Assignee: NEC CORPORATION
    Inventor: Tetsuo Inoshita
  • Patent number: 10867171
    Abstract: A method and apparatus for recognizing and extracting data from a form depicted within an image of a document are described. The method may include receiving the image of the document, the image depicting the form and data contained on the form. The method may also include transforming the image of the document to a set of one or more key, value pairs by processing the image of the document with a sequence of two or more trained machine learning based image analysis processes, wherein keys are relevant to forms of the type depicted in the form, and wherein each value is associated with a key. The method may also include generating a data output that comprises the set of key, value pairs for textual data recognized and extracted from the form depicted in the image.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: December 15, 2020
    Assignee: OMNISCIENCE CORPORATION
    Inventors: Alexander Wesley Contryman, Jacob Ryan van Gogh, Manu Shukla
  • Patent number: 10861187
    Abstract: There is provided a computer-implemented method of processing object detection data. The method includes receiving, from an object detection system, object detection data comprising a plurality of detection outputs associated with different respective regions of an image, wherein a first detection output of the plurality of detection outputs is associated with a first region of the image and comprises a plurality of received detection characteristics. The method includes processing the first detection output to determine one or more modified detection characteristics of said plurality of received detection characteristics. Processing the first detection output includes retrieving a mapping function and applying the mapping function, where the mapping is dependent upon at least one of the plurality of received detection characteristics.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: December 8, 2020
    Assignee: Apical Limited
    Inventors: David Packwood, Vladislav Terekhov
  • Patent number: 10861160
    Abstract: A device for assigning one of a plurality of predetermined classes to each pixel of an image, the device is configured to receive an image captured by a camera, the image comprising a plurality of pixels; use an encoder convolutional neural network to generate probability values for each pixel, each probability value indicating the probability that the respective pixel is associated with one of the plurality of predetermined classes; generate for each pixel a class prediction value from the probability values, the class prediction value predicting the class of the plurality of predetermined classes the respective pixel is associated with; use an edge detection algorithm to predict boundaries between objects shown in the image, the class prediction values of the pixels being used as input values of the edge detection algorithm; and assign a label of one of the plurality of predetermined classes to each pixel of the image.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: December 8, 2020
    Assignee: Aptiv Technologies Limited
    Inventors: Ido Freeman, Jan Siegemund
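The class-prediction step in the entry above reduces per-pixel, per-class probabilities to a single predicted class per pixel. That is an argmax over classes, sketched below on a toy 2x2 "image" with invented class names and probability values:

```python
# Minimal sketch of the class-prediction step: for each pixel, the predicted
# class is the argmax over the per-class probabilities produced by the
# encoder CNN. The 2x2 image and class names are illustrative only.
def per_pixel_argmax(prob_maps, classes):
    """prob_maps[c][y][x] holds P(pixel (x, y) belongs to classes[c])."""
    h, w = len(prob_maps[0]), len(prob_maps[0][0])
    return [[classes[max(range(len(classes)),
                         key=lambda c: prob_maps[c][y][x])]
             for x in range(w)] for y in range(h)]

classes = ["road", "car"]
probs = [
    [[0.9, 0.2], [0.7, 0.4]],   # P(road) per pixel
    [[0.1, 0.8], [0.3, 0.6]],   # P(car) per pixel
]
pred = per_pixel_argmax(probs, classes)
```

The resulting class-prediction map is what the edge-detection algorithm would consume to predict object boundaries.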
  • Patent number: 10861211
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of, (a) obtaining a 2D image and 3D depth map of the face of the user, (b) determining expression parameters for a user expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: December 8, 2020
    Assignee: Apple Inc.
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Patent number: 10863660
    Abstract: A component picked up by a suction nozzle of a component mounting machine is imaged by a camera, the captured image is processed by an image recognition system to recognize the component, and the image is determined to be normal or abnormal based on the recognition result; in addition to the image being classified as a normal image or an abnormal image and stored in a storage device, component mounting boards unloaded from the component mounting machine are inspected with an inspection device. A stored image reclassification computer acquires the inspection result from the inspection device, reclassifies each normal image stored in the storage device, based on the inspection result, as an image whose determination as a normal image is suspect or as an image whose determination as a normal image is not suspect, and then stores the normal image in the storage device.
    Type: Grant
    Filed: November 14, 2016
    Date of Patent: December 8, 2020
    Assignee: FUJI CORPORATION
    Inventors: Kenji Sugiyama, Hiroshi Oike, Shuichiro Kito
  • Patent number: 10855930
    Abstract: This video system includes a camera system and a video converter. The camera system includes an image pickup unit that captures a subject and obtains a pixel signal thereof, and a first processing circuit that generates two video signals from the pixel signal while respectively carrying out mutually-different adjustments and transmits transmission information obtained by adding information related to the respective adjustments of the video signals to one of the video signals.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: December 1, 2020
    Assignee: SONY CORPORATION
    Inventor: Koji Kamiya
  • Patent number: 10838968
    Abstract: Embodiments for recommending exemplars of a data-set by a processor. A selected number of exemplars may be labeled from one or more classes in a data-set. One or more class exemplars for each of the one or more classes in the data-set may be recommended according to similarities between the selected number of labeled exemplars and remaining data of the data-set.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: November 17, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Shrihari Vasudevan, Joydeep Mondal, Richard H. Zhou, Michael Peran, Michael W. Ticknor, Daniel Augenstern
  • Patent number: 10839269
    Abstract: In the field of computer vision, it is challenging to train an accurate model without sufficient labeled images, but through visual adaptation from source to target domains, a relevant labeled dataset can help solve this problem. Many methods apply adversarial learning to diminish cross-domain distribution differences and can greatly enhance performance on target classification tasks. GAN (Generative Adversarial Networks) loss is widely used in adversarial adaptation learning methods to reduce the cross-domain distribution difference. However, it becomes difficult to reduce this distribution difference if the generator or discriminator in the GAN fails to work as expected and degrades in performance. To solve such cross-domain classification problems, an adaptation algorithm and system called Generative Adversarial Distribution Matching (GADM) is implemented.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: November 17, 2020
    Assignee: King Abdulaziz University
    Inventors: Yusuf Al-Turki, Abdullah Abusorrah, Qi Kang, Siya Yao, Kai Zhang, MengChu Zhou
  • Patent number: 10836393
    Abstract: A smart traffic control device implements a method for detecting faulty sensors that result in false green requests and/or undetected true green requests. The faulty sensors may include inductive loops and pedestrian pushbuttons. The traffic control device may communicate with remote resources, including computer systems connected to human resources, to help identify the faulty sensors. The human resources may be crowdsourced. The traffic control device's communications with the remote resources may go through vehicle computers and smart phones of vehicles' occupants and pedestrians. The traffic control device may emit signals to enable vehicle computers to improve accuracy of position location. The traffic control device may emit its state (e.g., Green/Yellow/Red in various directions) and time to state changes, in real or substantially real time. A vehicle computer may receive the state and/or time to state changes and vary operation of the vehicle's power train in response thereto.
    Type: Grant
    Filed: August 26, 2018
    Date of Patent: November 17, 2020
    Inventor: Anatoly S. Weiser
  • Patent number: 10831996
    Abstract: A problem to be solved is to assist input of information by a user. An information processing device of one embodiment includes a keyword sequence generator generating a keyword sequence by performing morphological analysis on text data in natural language; a category information sequence generator acquiring category information corresponding to each keyword of the keyword sequence based on a database in which keywords are associated with category information and generating a category information sequence; a pattern extractor selecting a category information sequence pattern from category information sequence patterns according to correlation between the category information sequence and each category information sequence pattern; and a determiner comparing the selected category information sequence pattern with the category information sequence and generating presentation information according to a difference between the selected category information sequence pattern and the category information sequence.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: November 10, 2020
    Assignee: TEIJIN LIMITED
    Inventors: Kentaro Torii, Chikashi Sugiura, Shuichi Mitsuda, Satoshi Aida, Taro Ikezaki
  • Patent number: 10824775
    Abstract: Methods and apparatus provide for, at each of plural calculation timings, associating pieces of position information indicating the positions of objects in a virtual space at the calculation timing with leaves and creating a complete binary tree in which position information reflecting the position information of child nodes is associated with an internal node; shuffling, by a node shuffling section, 2·2^n (n≥1) child nodes regarding each group of 2^n nodes on the basis of the position information associated with each of the 2·2^n child nodes belonging to the 2^n nodes, in each layer sequentially from the layer immediately above the lowermost layer of the complete binary tree; and carrying out collision determination between objects by using the complete binary tree resulting from the shuffling by the node shuffling section.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: November 3, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Hitoshi Ishikawa, Hiroshi Matsuike, Koichi Yoshida
  • Patent number: 10819903
    Abstract: An imaging device, comprising an image sensor that images a shooting target and outputs image data, a memory that stores inference models for performing guidance display or automatic control when shooting the shooting target, an inference engine that performs inference for the control of the guidance display or automatic control using the inference models, and a controller that determines whether or not the inference model is suitable for shooting a shooting target that the user wants and, if it is determined that the inference model is not suitable for shooting, requests an external learning device to generate an inference model that is suitable for the shooting target the user wants.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: October 27, 2020
    Assignee: Olympus Corporation
    Inventors: Yoshiyuki Fukuya, Kazuhiko Shimura, Hisashi Yoneyama, Zhen Li, Atsushi Kohashi, Dai Ito, Nobuyuki Shima, Yoichi Yoshida, Kazuhiko Osa, Osamu Nonaka
  • Patent number: 10817752
    Abstract: A method for training a machine learning model includes receiving real data comprising a real element in a real environment. The training also includes annotating the real element with a first annotation based on predicted attributes of the real element. The first annotation has a first format. The training further includes converting the first format of the first annotation to a second format corresponding to a ground truth annotation of the real element. The training still further includes adjusting parameters of the machine learning model to minimize a difference between values of the ground truth annotation of the real element and the converted first annotation.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: October 27, 2020
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Wadim Kehl, German Ros Sanchez
  • Patent number: 10810782
    Abstract: A semantic texture map system generates a semantic texture map based on a 3D model that comprises a plurality of vertices that include coordinates indicating positions of the plurality of vertices, a UV map, and a semantic segmentation image that comprises a set of semantic labels.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: October 20, 2020
    Assignee: Snap Inc.
    Inventors: Piers Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan
  • Patent number: 10810411
    Abstract: A performing device of a gesture recognition system for reducing a false alarm rate executes a performing procedure of a gesture recognition method for reducing the false alarm rate. The gesture recognition system includes two neural networks: a first recognition neural network used to classify a gesture event, and a first noise neural network used to determine whether the sensing signal is noise. Since the first noise neural network can determine whether the sensing signal is noise, the gesture event is not executed when the sensing signal is noise. Therefore, the false alarm rate may be reduced.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: October 20, 2020
    Assignee: KAIKUTEK INC.
    Inventors: Tsung-Ming Tai, Yun-Jie Jhang, Wen-Jyi Hwang, Chun-Hsuan Kuo
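The two-network gating idea in this abstract reduces to a simple guard: run the noise classifier first and suppress the gesture event when the signal is judged to be noise. A minimal sketch, where `gesture_net` and `noise_net` are hypothetical callables standing in for the two trained networks:

```python
def recognize_gesture(signal, gesture_net, noise_net, noise_threshold=0.5):
    """Classify a gesture only when the signal is judged not to be noise.

    noise_net(signal) is assumed to return the probability that the signal
    is noise; gesture_net(signal) is assumed to return a gesture label.
    """
    if noise_net(signal) >= noise_threshold:
        return None                    # noise: suppress the gesture event
    return gesture_net(signal)
```

Gating on the noise network first means borderline signals never reach the gesture classifier, which is how the false alarm rate is reduced.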
  • Patent number: 10805587
    Abstract: A method is disclosed for automatic white balance correction of color cameras. The method includes capturing a first image of a target object, the first image containing a calibration area defined by a symbology and a calibration zone within the calibration area. The symbology encodes data and has first-color elements and second-color elements. The calibration zone is defined by at least some of at least one of the first-color elements and second-color elements. The method includes obtaining a location of the calibration area in the first image, and capturing a second image of the target object, the second image being multicolor. The method includes locating the calibration area in the second image based on the location, and analyzing the calibration zone within the calibration area of the second image. The method includes calculating and applying at least one white balance compensation bias based on the analyzed calibration zone.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: October 13, 2020
    Assignee: Zebra Technologies Corporation
    Inventor: Sajan Wilfred
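The white balance bias computation can be sketched with a simple gray-world rule applied only to the calibration zone. This is an illustrative approximation, not the patented method: per-channel gains map the calibration zone's mean color, assumed neutral (e.g., the light elements of the symbology), to gray.

```python
def white_balance_gains(calibration_pixels):
    """Per-channel gains mapping the calibration zone's mean color to gray.

    calibration_pixels: (r, g, b) tuples sampled from a zone assumed to be
    neutral, such as the light modules of the barcode symbology.
    """
    n = len(calibration_pixels)
    mean = [sum(p[c] for p in calibration_pixels) / n for c in range(3)]
    gray = sum(mean) / 3.0                 # target neutral level
    return [gray / m for m in mean]        # compensation bias per channel

def apply_gains(pixel, gains):
    """Apply the gains to one (r, g, b) pixel, clamping to 8-bit range."""
    return tuple(min(255, round(v * g)) for v, g in zip(pixel, gains))
```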
  • Patent number: 10803544
    Abstract: The disclosed technology includes systems and methods for enhancing machine vision object recognition based on a plurality of captured images and an accumulation of corresponding classification analysis scores. A method is provided for capturing, with a camera of a mobile computing device, a plurality of images, each image of the plurality of images comprising a first object. The method includes processing, with a classification module comprising a trained neural network processing engine, at least a portion of the plurality of images. The method includes generating, with the classification module and based on the processing, one or more object classification scores associated with the first object. The method includes accumulating, with an accumulating module, the one or more object classification scores. And responsive to a timeout or an accumulated score exceeding a predetermined threshold, the method includes outputting classification information of the first object.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: October 13, 2020
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Qiaochu Tang, Geoffrey Dagley, Micah Price, Sunil Vasisht, Stephen Wylie, Jason Hoover
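The accumulate-until-threshold-or-timeout loop described above might look like the following sketch, where `classify(frame)` is a hypothetical stand-in for the trained classification module returning per-label scores:

```python
import time

def classify_with_accumulation(frames, classify, threshold=5.0, timeout=2.0):
    """Accumulate per-frame classification scores across captured images
    until some class's accumulated score exceeds the threshold, or until
    the timeout elapses; then output the best classification."""
    totals = {}
    deadline = time.monotonic() + timeout
    for frame in frames:
        for label, score in classify(frame).items():
            totals[label] = totals.get(label, 0.0) + score
        best = max(totals, key=totals.get)
        if totals[best] >= threshold or time.monotonic() >= deadline:
            return best, totals[best]
    best = max(totals, key=totals.get)     # frames exhausted
    return best, totals[best]
```

Accumulating over several frames smooths out single-frame misclassifications, which is the stated motivation for this design.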
  • Patent number: 10803585
    Abstract: The present disclosure relates to the classification of images, such as medical images using machine learning techniques. In certain aspects, the technique may employ a distance metric for the purpose of classification, where the distance metric determined for a given image with respect to a homogenous group or class of images is used to classify the image.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: October 13, 2020
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Andre de Almeida Maximo, Chitresh Bhushan, Thomas Kwok-Fah Foo, Desmond Teck Beng Yeo
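A distance-to-homogeneous-class rule can be sketched with a per-dimension z-score distance (a diagonal approximation of the Mahalanobis distance). The abstract does not specify the metric, so this is purely illustrative:

```python
import math

def zscore_distance(features, class_samples):
    """Distance of a feature vector to a homogeneous class, measured in
    per-dimension standard deviations from the class mean."""
    dims = len(features)
    n = len(class_samples)
    means = [sum(s[d] for s in class_samples) / n for d in range(dims)]
    stds = [max(1e-9, math.sqrt(sum((s[d] - means[d]) ** 2
                                    for s in class_samples) / n))
            for d in range(dims)]
    return math.sqrt(sum(((features[d] - means[d]) / stds[d]) ** 2
                         for d in range(dims)))

def classify(features, classes, max_distance):
    """Assign the nearest class if within max_distance, else 'outlier'."""
    best = min(classes, key=lambda c: zscore_distance(features, classes[c]))
    if zscore_distance(features, classes[best]) <= max_distance:
        return best
    return "outlier"
```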
  • Patent number: 10803619
    Abstract: A method for identifying a feature in a first image comprises establishing an initial database of image triplets, and in a pose estimation processor, training a deep learning neural network using the initial database of image triplets, calculating a pose for the first image using the deep learning neural network, comparing the calculated pose to a validation database populated with images data to identify an error case in the deep learning neural network, creating a new set of training data including a plurality of error cases identified in a plurality of input images and retraining the deep learning neural network using the new set of training data. The deep learning neural network may be iteratively retrained with a series of new training data sets. Statistical analysis is performed on a plurality of error cases to select a subset of the error cases included in the new set of training data.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: October 13, 2020
    Assignee: Siemens Mobility GmbH
    Inventors: Kai Ma, Shanhui Sun, Stefan Kluckner, Ziyan Wu, Terrence Chen, Jan Ernst
  • Patent number: 10796450
    Abstract: A method for detecting and tracking human head in an image by an electronic device is disclosed. The method may include segmenting the image into one or more sub-images; inputting each sub-image to a convolutional neural network trained according to training images having marked human head positions; outputting by a preprocessing layer of the convolutional neural network comprising a first convolutional layer and a pooling layer, a first feature corresponding to each sub-image; mapping through a second convolutional layer the first feature corresponding to each sub-image to a second feature corresponding to each sub-image; mapping through a regression layer the second feature corresponding to each sub-image to a human head position corresponding to each sub-image and a corresponding confidence level of the human head position; and filtering, according to the corresponding confidence level, human head positions corresponding to the one or more sub-images, to acquire detected human head positions in the image.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: October 6, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Deqiang Jiang
  • Patent number: 10796198
    Abstract: Some embodiments include a special-purpose hardware accelerator that can perform specialized machine learning tasks during both training and inference stages. For example, this hardware accelerator uses a systolic array having a number of data processing units (“DPUs”) that are each connected to a small number of other DPUs in a local region. Data from the many nodes of a neural network is pulsed through these DPUs with associated tags that identify where such data was originated or processed, such that each DPU has knowledge of where incoming data originated and thus is able to compute the data as specified by the architecture of the neural network. These tags enable the systolic neural network engine to perform computations during backpropagation, such that the systolic neural network engine is able to support training.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: October 6, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventor: Luiz M. Franca-Neto
  • Patent number: 10798566
    Abstract: Facilitating secure conveyance of location information and other information in advanced networks (e.g., 4G, 5G, and beyond) is provided herein. Operations of a system can comprise transforming, at a chipset level of the device, information indicative of a location of the device into a binary representation of the information indicative of the location of the device. The operations can also comprise embedding the binary representation of the information indicative of the location of the device into a message. Further, the operations can comprise facilitating a transmission of the message and the binary representation of the information indicative of the location of the device to a network device of a communications network.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: October 6, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Mark D. Austin, Sheldon Meredith
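The step of transforming location information into a binary representation and embedding it in a message can be illustrated with Python's `struct` module. The two-double layout and length-tagged framing here are assumptions for illustration, not the patented format:

```python
import struct

def encode_location(lat, lon):
    """Pack latitude/longitude into a compact binary representation
    (two big-endian IEEE-754 doubles)."""
    return struct.pack(">dd", lat, lon)

def embed_in_message(payload: bytes, location: bytes) -> bytes:
    """Prefix the message payload with a length-tagged location field."""
    return struct.pack(">H", len(location)) + location + payload

def extract(message: bytes):
    """Recover the (lat, lon) pair and the original payload."""
    (length,) = struct.unpack_from(">H", message, 0)
    lat, lon = struct.unpack_from(">dd", message, 2)
    return (lat, lon), message[2 + length:]
```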
  • Patent number: 10796793
    Abstract: Example methods and systems for generating an aggregated artificial intelligence (AI) engine for radiotherapy treatment planning are provided. One example method may include obtaining multiple AI engines associated with respective multiple treatment planners; generating multiple sets of output data using the multiple AI engines associated with the respective multiple treatment planners; comparing the multiple AI engines associated with the respective multiple treatment planners based on the multiple sets of output data; and based on the comparison, aggregating at least some of the multiple AI engines to generate the aggregated AI engine for performing the particular treatment planning step. The multiple AI engines may be trained to perform a particular treatment planning step, and each of the multiple AI engines is trained to emulate one of the multiple treatment planners performing the particular treatment planning step.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: October 6, 2020
    Assignee: VARIAN MEDICAL SYSTEMS INTERNATIONAL AG
    Inventors: Corey Zankowski, Charles Adelsheim, Joakim Pyyry, Esa Kuusela
  • Patent number: 10789484
    Abstract: A crowd type classification system of an aspect of the present invention includes: a staying crowd detection unit that detects a local region indicating a staying crowd from among a plurality of local regions determined in an image acquired by an image acquisition device; a crowd direction estimation unit that estimates a direction of the crowd for an image of a part corresponding to the detected local region, and appends the direction of the crowd to the local region; and a crowd type classification unit that classifies a type of the crowd including a plurality of staying persons for the local region to which the direction is appended, by using a relative vector indicating a relative positional relationship between two local regions and the directions of the crowds in the two local regions, and outputs the type and positions of the crowds.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: September 29, 2020
    Assignee: NEC CORPORATION
    Inventor: Hiroo Ikeda
  • Patent number: 10789482
    Abstract: In implementations of the subject matter described herein, an action detection scheme using a recurrent neural network (RNN) is proposed. Representation information of an incoming frame of a video and a predefined action label for the frame are obtained to train a learning network including RNN elements and a classification element. The representation information represents an observed entity in the frame. Specifically, parameters for the RNN elements are determined based on the representation information and the predefined action label. With the determined parameters, the RNN elements are caused to extract features for the frame based on the representation information and features for a preceding frame. Parameters for the classification element are determined based on the extracted features and the predefined action label. The classification element with the determined parameters generates a probability of the frame being associated with the predefined action label.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: September 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cuiling Lan, Wenjun Zeng, Yanghao Li, Junliang Xing
  • Patent number: 10783404
    Abstract: A method and a device for verifying a recognition result in character recognition are provided. The device constructs a hidden Markov chain for a character string to be recognized, using recognition result output of a character recognition process. The recognition result includes candidate characters of each character in the character string. The device solves for an optimal path forming a candidate character string according to the hidden Markov chain and a pre-trained state transition matrix. The device recognizes non-Chinese characters in the character string according to state transition probabilities in the optimal path. The device verifies the recognition result according to the non-Chinese characters. The device feeds back a verification result to the character recognition process, wherein the character recognition process applied to the character string to be recognized is modified by the verification result.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: September 22, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Sheng Han, Hongfa Wang, Longsha Zhou, Hui Song
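The optimal-path step over candidate characters described above is a classic Viterbi decode. A minimal sketch, assuming per-position emission probabilities for each candidate character, a transition table, and initial probabilities (all toy values, not from the patent):

```python
def viterbi(candidates, trans, init):
    """Maximum-probability path through per-position candidate characters.

    candidates: list of {char: emission_prob} dicts, one per position.
    trans: {(prev_char, char): transition_prob}; missing pairs get `eps`.
    init: {char: initial_prob}.
    """
    eps = 1e-6
    # best[c] = (probability of the best path ending in c, that path)
    best = {c: (init.get(c, eps) * p, [c]) for c, p in candidates[0].items()}
    for col in candidates[1:]:
        nxt = {}
        for c, emit in col.items():
            prob, path = max(
                ((bp * trans.get((pc, c), eps) * emit, bpath + [c])
                 for pc, (bp, bpath) in best.items()),
                key=lambda t: t[0])
            nxt[c] = (prob, path)
        best = nxt
    prob, path = max(best.values(), key=lambda t: t[0])
    return "".join(path), prob
```

In the patent's scheme, transition probabilities learned from text statistics let the decoder flag characters whose optimal-path transitions look like non-Chinese characters, which drives the verification step.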
  • Patent number: 10776642
    Abstract: Systems and methods to efficiently and effectively train artificial intelligence and neural networks for an autonomous or semi-autonomous vehicle are disclosed. The systems and methods provide for the minimization of the labeling cost by sampling images from a raw video file which are mis-detected, i.e., false positive and false negative detections, or indicate abnormal or unexpected driver behavior. Supplemental information such as controller area network signals and data may be used to augment and further encapsulate desired images from video.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 15, 2020
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventor: Mitsuhiro Mabuchi
  • Patent number: 10776000
    Abstract: A method of and system for receiving, processing, converting and verifying digital ink input is carried out by receiving digital ink input, collecting data relating to the received digital ink input, and receiving a request to convert the received digital ink input. Upon receiving the request, the received digital ink input may be recognized as text characters based at least in part on an analysis of the digital ink input and the converted characters may be displayed on a screen adjacent to the received digital ink input, at which point a user may be able to compare the received digital ink input with the recognized characters to initiate any corrections needed.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: September 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Elise Leigh Livingston, Patrick Edgar Schreiber, Tracy ThuyDuyen Tran, Heather Strong Eden, Rachel Ann Keirouz
  • Patent number: 10776918
    Abstract: The present application relates to a method and a device for determining image similarity, including: dividing a target image into multiple regions based on positions of pixels relative to a reference point in the target image, and dividing a reference image into multiple regions based on positions of pixels relative to a reference point in the reference image; and determining, based on the feature points in the target image and the feature points in the reference image as well as the regions obtained by dividing the two images, the similarity between the distribution of the feature points in the target image and the distribution of the feature points in the reference image. According to the method of the present application, the similarity is described more reasonably.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: September 15, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Mengjiao Wang, Rujie Liu
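One way to compare feature-point distributions across regions defined relative to a reference point is an angular-sector histogram plus cosine similarity. The sector scheme below is an assumption for illustration; the patent does not specify this particular region division:

```python
import math

def sector_histogram(points, center, sectors=8):
    """Count feature points per angular sector around a reference point."""
    hist = [0] * sectors
    cx, cy = center
    for x, y in points:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * sectors) % sectors] += 1
    return hist

def distribution_similarity(pts_a, center_a, pts_b, center_b, sectors=8):
    """Cosine similarity between the two sector histograms (0..1)."""
    ha = sector_histogram(pts_a, center_a, sectors)
    hb = sector_histogram(pts_b, center_b, sectors)
    dot = sum(a * b for a, b in zip(ha, hb))
    na = math.sqrt(sum(a * a for a in ha))
    nb = math.sqrt(sum(b * b for b in hb))
    return dot / (na * nb) if na and nb else 0.0
```

Binning points by region before comparing makes the similarity sensitive to *where* feature points fall relative to the reference point, not just how many match.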
  • Patent number: 10766137
    Abstract: A machine learning system builds and uses computer models for identifying how to evaluate the level of success reflected in a recorded observation of a task. Such computer models may be used to generate a policy for controlling a robotic system performing the task. The computer models can also be used to evaluate robotic task performance and provide feedback for recalibrating the robotic control policy.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: September 8, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Brandon William Porter, Leonardo Ruggiero Bachega, Brian C. Beckman, Benjamin Lev Snyder, Michael Vogelsong, Corrinne Yu