Patents Examined by Juan A. Torres
  • Patent number: 11443134
    Abstract: A method of performing a convolutional operation in a convolutional neural network includes: obtaining input activation data quantized with a first bit from an input image; obtaining weight data quantized with a second bit representing a value of a parameter learned through the convolutional neural network; binarizing each of the input activation data and the weight data to obtain a binarization input activation vector and a binarization weight vector; performing an inner operation of the input activation data and weight data based on a binary operation with respect to the binarization input activation vector and the binarization weight vector and distance vectors having the same length as each of the first bit and the second bit, respectively; and storing a result obtained by the inner operation as output activation data.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: September 13, 2022
    Assignee: Hyperconnect Inc.
    Inventors: Sang Il Ahn, Sung Joo Ha, Dong Young Kim, Beom Su Kim, Martin Kersner
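    Illustrative sketch (not part of the patent): the bit-serial inner product described in the abstract can be mimicked in NumPy by decomposing each quantized vector into binary planes and combining popcounts with per-plane "distance" weights. Unsigned quantization and power-of-two distance vectors are assumptions here.
      import numpy as np

      def bit_planes(q, bits):
          # Decompose unsigned quantized integers into `bits` binary planes (LSB first).
          return np.stack([(q >> b) & 1 for b in range(bits)])

      def binary_inner_product(act_q, w_q, act_bits, w_bits):
          # Inner product computed only from binary vectors: each pair of planes
          # contributes popcount(AND) scaled by its distance weights.
          a_planes, w_planes = bit_planes(act_q, act_bits), bit_planes(w_q, w_bits)
          a_dist = 2.0 ** np.arange(act_bits)   # distance vector, length = first bit width
          w_dist = 2.0 ** np.arange(w_bits)     # distance vector, length = second bit width
          total = 0.0
          for i in range(act_bits):
              for j in range(w_bits):
                  total += a_dist[i] * w_dist[j] * np.sum(a_planes[i] & w_planes[j])
          return total

      act = np.random.randint(0, 2 ** 4, size=64)   # activations quantized with a first bit width of 4
      wgt = np.random.randint(0, 2 ** 2, size=64)   # weights quantized with a second bit width of 2
      assert binary_inner_product(act, wgt, 4, 2) == float(np.dot(act, wgt))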
  • Patent number: 11436432
    Abstract: An apparatus for an artificial neural network includes a format converter, a sampling unit, and a learning unit. The format converter generates a first format image and a second format image based on an input image. The sampling unit samples the first format image using a first sampling scheme to generate a first feature map, and samples the second format image using a second sampling scheme different from the first sampling scheme to generate a second feature map. The learning unit operates the artificial neural network using the first feature map and the second feature map.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: September 6, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sangmin Suh, Sangsoo Ko, Byeoungsu Kim, Sanghyuck Ha
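    Illustrative sketch (not from the patent): a NumPy stand-in for the format converter and the two different sampling schemes; the specific formats (luminance/chroma) and samplers (strided picking vs. block averaging) are placeholders.
      import numpy as np

      def to_luma(img):            # first format image (assumed format)
          return img.mean(axis=-1, keepdims=True)

      def to_chroma(img):          # second format image (assumed format)
          return img - img.mean(axis=-1, keepdims=True)

      def sample_stride(x, s=2):   # first sampling scheme: strided picking
          return x[::s, ::s]

      def sample_avg(x, s=2):      # second sampling scheme: block averaging
          h, w, c = x.shape
          return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s, c).mean(axis=(1, 3))

      img = np.random.rand(32, 32, 3)
      fmap1 = sample_stride(to_luma(img))     # first feature map
      fmap2 = sample_avg(to_chroma(img))      # second feature map
      features = np.concatenate([fmap1, fmap2], axis=-1)  # jointly used by the learning unit
      print(features.shape)                   # (16, 16, 4)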
  • Patent number: 11436438
    Abstract: (A) Conditional vectors are defined. (B) Latent observation vectors are generated using a predefined noise distribution function. (C) A forward propagation of a generator model is executed with the conditional vectors and the latent observation vectors as input to generate an output vector. (D) A forward propagation of a decoder model of a trained autoencoder model is executed with the generated output vector as input to generate a plurality of decoded vectors. (E) Transformed observation vectors are selected from transformed data based on the defined plurality of conditional vectors. (F) A forward propagation of a discriminator model is executed with the transformed observation vectors, the conditional vectors, and the decoded vectors as input to predict whether each transformed observation vector and each decoded vector is real or fake. (G) The discriminator and generator models are updated and (A) through (G) are repeated until training is complete.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: September 6, 2022
    Assignee: SAS Institute Inc.
    Inventors: Ruiwen Zhang, Weichen Wang, Jorge Manuel Gomes da Silva, Ye Liu, Hamoon Azizsoltani, Prathaban Mookiah
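    Illustrative sketch (not the SAS implementation): a minimal PyTorch skeleton of the (A)-(G) loop, with toy linear modules standing in for the generator, the trained autoencoder's decoder, and the conditional discriminator; all dimensions and architectures are assumptions.
      import torch, torch.nn as nn

      cond_dim, noise_dim, latent_dim, data_dim = 4, 8, 16, 32
      G = nn.Linear(cond_dim + noise_dim, latent_dim)        # generator producing an output vector
      decoder = nn.Linear(latent_dim, data_dim)              # decoder of a "trained" autoencoder (stand-in)
      decoder.requires_grad_(False)                          # kept frozen during GAN training
      D = nn.Linear(cond_dim + data_dim, 1)                  # conditional discriminator
      opt_g, opt_d = torch.optim.Adam(G.parameters(), 1e-3), torch.optim.Adam(D.parameters(), 1e-3)
      bce = nn.BCEWithLogitsLoss()

      real_data = torch.randn(256, data_dim)                 # transformed observation vectors
      real_cond = torch.randint(0, 2, (256, cond_dim)).float()

      for step in range(100):
          idx = torch.randint(0, 256, (64,))
          cond = real_cond[idx]                              # (A) conditional vectors
          z = torch.randn(64, noise_dim)                     # (B) latent observation vectors from noise
          fake = decoder(G(torch.cat([cond, z], dim=1)))     # (C)+(D) generator, then decoder
          real = real_data[idx]                              # (E) transformed observations for those conditions
          # (F) discriminator predicts real vs. fake, conditioned on the conditional vectors
          d_loss = (bce(D(torch.cat([cond, real], dim=1)), torch.ones(64, 1))
                    + bce(D(torch.cat([cond, fake.detach()], dim=1)), torch.zeros(64, 1)))
          opt_d.zero_grad(); d_loss.backward(); opt_d.step()
          # (G) generator update, then repeat until training is complete
          g_loss = bce(D(torch.cat([cond, fake], dim=1)), torch.ones(64, 1))
          opt_g.zero_grad(); g_loss.backward(); opt_g.step()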
  • Patent number: 11436447
    Abstract: A target detection method is provided, which relates to the fields of deep learning, computer vision, and artificial intelligence. The method comprises: classifying, by using a first classification model, a plurality of image patches comprised in an input image, to obtain one or more candidate image patches, in the plurality of image patches, that are preliminarily classified as comprising a target; extracting a corresponding salience area for each candidate image patch; constructing a corresponding target feature vector for each candidate image patch based on the corresponding salience area for each candidate image patch; and classifying, by using a second classification model, the target feature vector to determine whether each candidate image patch comprises the target.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: September 6, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Yehui Yang, Lei Wang, Yanwu Xu
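    Illustrative sketch (not from the patent): the two-stage patch pipeline with simple stand-ins for the two classification models and the salience extraction; thresholds and features are arbitrary.
      import numpy as np

      def split_patches(img, ps=16):
          h, w = img.shape
          return [img[r:r + ps, c:c + ps] for r in range(0, h, ps) for c in range(0, w, ps)]

      def first_classifier(patch):          # stand-in: preliminary "may contain a target" test
          return patch.mean() > 0.5

      def salience_area(patch):             # stand-in: brightest pixels as the salient region
          return patch > np.percentile(patch, 90)

      def feature_vector(patch, mask):      # target feature vector built from the salience area
          vals = patch[mask]
          return np.array([vals.mean(), vals.std(), mask.mean()])

      def second_classifier(feat):          # stand-in: final target / non-target decision
          return feat[0] > 0.8 and feat[2] > 0.05

      img = np.random.rand(64, 64)
      candidates = [p for p in split_patches(img) if first_classifier(p)]
      detections = [p for p in candidates
                    if second_classifier(feature_vector(p, salience_area(p)))]
      print(len(candidates), len(detections))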
  • Patent number: 11436436
    Abstract: Provided is a data augmentation system including at least one processor, the at least one processor being configured to: input, to a machine learning model configured to perform recognition, input data; identify a feature portion of the input data to serve as a basis for recognition by the machine learning model in which the input data is used as input; acquire processed data by processing at least a part of the feature portion; and perform data augmentation based on the processed data.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: September 6, 2022
    Assignee: RAKUTEN GROUP, INC.
    Inventor: Mitsuru Nakazawa
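    Illustrative sketch (not Rakuten's implementation): an occlusion-style stand-in for identifying the feature portion that the model relies on, processing only that portion, and using the result for data augmentation.
      import numpy as np

      def model_score(x):                   # stand-in for the recognition model's confidence
          return x[8:24, 8:24].mean()       # this toy model only looks at the image centre

      def feature_portion(x, block=8):
          # The block whose occlusion changes the score most serves as the basis for recognition.
          base = model_score(x)
          drops = {}
          for r in range(0, x.shape[0], block):
              for c in range(0, x.shape[1], block):
                  occluded = x.copy()
                  occluded[r:r + block, c:c + block] = 0
                  drops[(r, c)] = abs(base - model_score(occluded))
          r, c = max(drops, key=drops.get)
          mask = np.zeros_like(x, dtype=bool)
          mask[r:r + block, c:c + block] = True
          return mask

      def augment(x, mask):
          out = x.copy()
          out[mask] += np.random.normal(0, 0.1, size=mask.sum())   # process only the feature portion
          return np.clip(out, 0, 1)

      img = np.random.rand(32, 32)
      augmented = [augment(img, feature_portion(img)) for _ in range(4)]   # extra training samples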
  • Patent number: 11429815
    Abstract: Methods, systems and media for deep neural network interpretation via rule extraction. The interpretation of the deep neural network is based on extracting one or more rules approximating classification behavior of the network. Rules are defined by identifying a set of hyperplanes through the data space that collectively define a convex polytope that separates a target class of input samples from input samples of different classes. Each rule corresponds to a set of decision boundaries between two different decision outcomes. Human-understandable representations of rules may be generated. One or more rules may be used to generate a classifier. The representations and interpretations exhibit faithfulness, robustness, and comprehensiveness relative to other known approaches.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: August 30, 2022
    Assignee: HUAWEI CLOUD COMPUTING TECHNOLOGIES CO., LTD.
    Inventors: Cho Ho Lam, Lingyang Chu, Yong Zhang, Lanjun Wang
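    Illustrative sketch (not from the patent): a rule as a conjunction of half-space conditions; points inside the convex polytope defined by the hyperplanes are assigned the target class, and a human-readable form of the rule is printed. The specific hyperplanes are arbitrary.
      import numpy as np

      # Each hyperplane w.x + b = 0 is one decision boundary; the rule keeps the side w.x + b >= 0.
      hyperplanes = [(np.array([1.0, 0.0]), -0.2),    # x0 >= 0.2
                     (np.array([-1.0, 0.0]), 0.8),    # x0 <= 0.8
                     (np.array([0.0, 1.0]), -0.1)]    # x1 >= 0.1

      def rule_covers(x):
          # True when x lies inside the convex polytope (conjunction of half-spaces).
          return all(w @ x + b >= 0 for w, b in hyperplanes)

      def rule_text():
          # Human-understandable representation of the extracted rule.
          return " AND ".join(f"({w[0]:+g}*x0 {w[1]:+g}*x1 {b:+g} >= 0)" for w, b in hyperplanes)

      samples = np.random.rand(5, 2)
      print(rule_text())
      print([rule_covers(x) for x in samples])   # rule-based classifier: inside polytope -> target class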
  • Patent number: 11423598
    Abstract: A method for generating a synthetic image with predefined properties. The method includes the steps of providing first values, which characterize the predefined properties of the image that is to be generated, and attention weights, which characterize a weighting of one of the first values, and feeding sequentially the first values and assigned attention weights as input value pairs into a generative automated learning system that includes at least a recurrent connection. An image generation system and a computer program that are configured to carry out the method are also described.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: August 23, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Wenling Shang, Kihyuk Sohn
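    Illustrative sketch (not the Bosch design): a toy recurrent generator that consumes (first value, attention weight) pairs sequentially and decodes its final state into an image; the GRU core, sizes, and decoder are assumptions.
      import torch, torch.nn as nn

      class RecurrentImageGenerator(nn.Module):
          def __init__(self, hidden=64, img_size=16):
              super().__init__()
              self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
              self.decode = nn.Linear(hidden, img_size * img_size)
              self.img_size = img_size

          def forward(self, values, weights):
              pairs = torch.stack([values, weights], dim=-1)   # (batch, n_properties, 2) input value pairs
              _, h = self.rnn(pairs)                           # the recurrent connection over the sequence
              img = torch.sigmoid(self.decode(h[-1]))
              return img.view(-1, self.img_size, self.img_size)

      gen = RecurrentImageGenerator()
      values = torch.rand(4, 5)                                # first values: the predefined properties
      weights = torch.softmax(torch.rand(4, 5), dim=1)         # attention weights over those values
      print(gen(values, weights).shape)                        # torch.Size([4, 16, 16])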
  • Patent number: 11417148
    Abstract: A human face image classification method includes: acquiring a human face image to be classified; inputting the human face image into a pre-set convolutional neural network model, and according to intermediate data output by a convolutional layer of the convolutional neural network model, identifying gender information of the human face image; and according to final data output by the convolutional layer of the convolutional neural network model, carrying out pre-set content understanding classification on the human face image in a range defined by the gender information, so that the data used for deciding a classification result output by the convolutional neural network model comprises a difference attribute for distinguishing between different genders.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: August 16, 2022
    Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
    Inventors: Xuanping Li, Fan Yang, Yan Li
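    Illustrative sketch (not from the patent): a toy network where an intermediate convolutional output feeds a gender branch and the final convolutional output is classified by a head selected according to the predicted gender; layer sizes and the head-per-gender design are assumptions.
      import torch, torch.nn as nn

      class GenderAwareClassifier(nn.Module):
          def __init__(self, n_classes=5):
              super().__init__()
              self.conv1 = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4))
              self.conv2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
              self.gender_head = nn.Linear(8, 2)                # uses intermediate data
              self.content_heads = nn.ModuleList([nn.Linear(16, n_classes) for _ in range(2)])

          def forward(self, x):
              mid = self.conv1(x)                                        # intermediate convolutional data
              gender = self.gender_head(mid.mean(dim=(2, 3))).argmax(1)  # gender information
              final = self.conv2(mid).flatten(1)                         # final convolutional data
              logits = torch.stack([self.content_heads[int(g)](final[i]) # classify within the gender's range
                                    for i, g in enumerate(gender)])
              return gender, logits

      model = GenderAwareClassifier()
      gender, logits = model(torch.rand(2, 3, 64, 64))
      print(gender.shape, logits.shape)    # torch.Size([2]) torch.Size([2, 5])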
  • Patent number: 11416715
    Abstract: The present disclosure relates to a face recognition apparatus and method capable of increasing the face recognition rate by using artificial intelligence and dynamically changing the contrast value of a captured image or video. An operational method of a face recognition electronic apparatus using an artificial neural network may include: receiving from an ISP an image processed based on a set contrast parameter; detecting a facial image from the image; determining match probability values between the detected facial image and a plurality of facial images; determining whether or not a subject matching the detected facial image is present on the basis of the match probability values; and if not, changing the contrast parameter. Accordingly, face recognition performance can be improved by correcting each image to the contrast that provides the best face recognition capability when identifying a subject of a facial image by using artificial intelligence.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: August 16, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Jiwon Lee, Aram Kim, Jinkyoo Lee
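    Illustrative sketch (not LG's implementation): the retry loop that changes the contrast parameter when no enrolled subject matches; the contrast transform, embedding, and similarity measure are stand-ins.
      import numpy as np

      def apply_contrast(img, c):                  # stand-in for the ISP contrast processing
          return np.clip(0.5 + c * (img - 0.5), 0, 1)

      def face_embedding(img):                     # stand-in for face detection plus embedding
          return img.reshape(-1)[:32]

      def match_probabilities(emb, gallery):
          sims = gallery @ emb / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(emb) + 1e-8)
          return (sims + 1) / 2                    # map cosine similarity to [0, 1]

      def recognize(raw, gallery, threshold=0.9, contrasts=(1.0, 1.3, 0.7, 1.6)):
          # Try successive contrast parameters until some gallery subject matches.
          for c in contrasts:
              probs = match_probabilities(face_embedding(apply_contrast(raw, c)), gallery)
              if probs.max() >= threshold:
                  return int(probs.argmax()), c
          return None, None                        # no match: a real system would keep adapting

      raw = np.random.rand(16, 16)
      gallery = np.random.rand(10, 32)             # embeddings of the enrolled subjects
      print(recognize(raw, gallery))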
  • Patent number: 11410000
    Abstract: A computer-implemented method is provided. The computer-implemented method includes classifying an image using a classification model having a residual network. Classifying the image using the classification model includes inputting an input image into the residual network having N number of residual blocks sequentially connected, N≄2, (N−1) number of pooling layers respectively between two adjacent residual blocks of the N number of residual blocks, and (N−1) number of convolutional layers respectively connected to first to (N−1)-th residual blocks of the N number of residual blocks; processing outputs from the first to the (N−1)-th residual blocks of the N number of residual blocks respectively through the (N−1) number of convolutional layers; vectorizing outputs respectively from the (N−1) number of convolutional layers to generate (N−1) number of vectorized outputs; vectorizing an output from a last residual block of the N number of residual blocks to generate a last vectorized output.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: August 9, 2022
    Assignees: BEIJING BOE HEALTH TECHNOLOGY CO., LTD., BOE Technology Group Co., Ltd.
    Inventor: Xinyue Hu
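    Illustrative sketch (not BOE's model): the described topology for N residual blocks (N≄2) with (N−1) pooling layers between blocks, (N−1) side convolutional layers, and all vectorized outputs concatenated for classification; block contents and sizes are assumptions.
      import torch, torch.nn as nn

      class MultiScaleResNetClassifier(nn.Module):
          def __init__(self, N=3, ch=8, n_classes=4):
              super().__init__()
              self.blocks = nn.ModuleList(
                  [nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU()) for _ in range(N)])
              self.pools = nn.ModuleList([nn.MaxPool2d(2) for _ in range(N - 1)])
              self.side_convs = nn.ModuleList([nn.Conv2d(ch, ch, 1) for _ in range(N - 1)])
              self.fc = nn.Linear(N * ch, n_classes)

          def forward(self, x):
              vectors = []
              for i, block in enumerate(self.blocks):
                  x = x + block(x)                                           # residual block
                  if i < len(self.blocks) - 1:
                      vectors.append(self.side_convs[i](x).mean(dim=(2, 3))) # side conv, then vectorize
                      x = self.pools[i](x)                                   # pooling between adjacent blocks
              vectors.append(x.mean(dim=(2, 3)))                             # last vectorized output
              return self.fc(torch.cat(vectors, dim=1))

      model = MultiScaleResNetClassifier()
      print(model(torch.rand(2, 8, 32, 32)).shape)    # torch.Size([2, 4])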
  • Patent number: 11410031
    Abstract: Methods, systems and computer program products for updating a word embedding model are provided. Aspects include receiving a first data set comprising a relational database having a plurality of words. Aspects also include generating a word embedding model comprising a plurality of word vectors by training a neural network using unsupervised machine learning based on the first data set. Each word vector of the plurality of word vectors corresponds to a unique word of the plurality of words. Aspects also include storing the plurality of word vectors and a representation of a hidden layer of the neural network. Aspects also include receiving a second data set comprising data that has been added to the relational database. Aspects also include updating the word embedding model based on the second data set and the stored representation of the hidden layer of the neural network.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: August 9, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Thomas Conti, Stephen Warren, Rajesh Bordawekar, Jose Neves, Christopher Harding
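    Illustrative sketch (not IBM's implementation): a tiny skip-gram-style trainer where the word-vector matrix and the hidden (output) layer are both stored, so that rows later added to the relational database update the model incrementally instead of retraining from scratch; the vocabulary, pairs, and update rule are toy assumptions.
      import numpy as np

      def train_embeddings(pairs, vocab, W_in=None, W_out=None, dim=16, lr=0.05, epochs=200):
          # W_in holds the word vectors, W_out the hidden layer; passing stored
          # matrices continues training rather than restarting it.
          rng = np.random.default_rng(0)
          if W_in is None:
              W_in = rng.normal(0, 0.1, (len(vocab), dim))
              W_out = rng.normal(0, 0.1, (len(vocab), dim))
          for _ in range(epochs):
              for w, ctx in pairs:                              # (word, co-occurring word) pairs
                  score = 1 / (1 + np.exp(-W_in[w] @ W_out[ctx]))
                  grad = score - 1.0                            # push the pair's score toward 1
                  W_in[w], W_out[ctx] = (W_in[w] - lr * grad * W_out[ctx],
                                         W_out[ctx] - lr * grad * W_in[w])
          return W_in, W_out

      vocab = {"acct": 0, "balance": 1, "loan": 2, "rate": 3}
      first_pairs = [(0, 1), (2, 3)]                    # co-occurrences from the initial table rows
      W_in, W_out = train_embeddings(first_pairs, vocab)                 # first data set; store both
      new_pairs = [(1, 3)]                              # rows later added to the relational database
      W_in, W_out = train_embeddings(new_pairs, vocab, W_in, W_out)      # incremental update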
  • Patent number: 11403495
    Abstract: A machine learning system efficiently detects faults from three-dimensional ("3D") seismic images, in which the fault detection is considered as a binary segmentation problem. Because the distribution of fault and nonfault samples is heavily biased, embodiments of the present disclosure use a balanced loss function to optimize model parameters. Embodiments of the present disclosure train a machine learning system by using a selected number of pairs of 3D synthetic seismic and fault volumes, which may be automatically generated by randomly adding folding, faulting, and noise in the volumes. Although trained by using only synthetic data sets, the machine learning system can accurately detect faults from 3D field seismic volumes that are acquired at totally different surveys.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: August 2, 2022
    Assignee: Board of Regents, The University of Texas System
    Inventors: Xinming Wu, Yunzhi Shi, Sergey Fomel
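    Illustrative sketch (not the patented system): one common form of a balanced binary segmentation loss, weighting the scarce fault voxels and the abundant background by their inverse frequencies; the exact loss in the patent may differ.
      import torch

      def balanced_bce(pred, target, eps=1e-7):
          # Balanced cross-entropy for voxel-wise fault segmentation.
          pred = pred.clamp(eps, 1 - eps)
          beta = 1.0 - target.mean()              # fraction of non-fault voxels
          pos = -beta * (target * pred.log()).mean()
          neg = -(1 - beta) * ((1 - target) * (1 - pred).log()).mean()
          return pos + neg

      # Toy 3D volume: roughly 2% of voxels labelled as fault.
      target = (torch.rand(1, 1, 16, 16, 16) < 0.02).float()
      pred = torch.rand(1, 1, 16, 16, 16)
      print(balanced_bce(pred, target).item())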
  • Patent number: 11403486
    Abstract: Methods and systems for updating the weights of a set of convolution kernels of a convolutional layer of a neural network are described. A set of convolution kernels having attention-infused weights is generated by using an attention mechanism based on characteristics of the weights. For example, a set of location-based attention multipliers is applied to weights in the set of convolution kernels, a magnitude-based attention function is applied to the weights in the set of convolution kernels, or both. An output activation map is generated using the set of convolution kernels with attention-infused weights. A loss for the neural network is computed, and the gradient is back propagated to update the attention-infused weights of the convolution kernels.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: August 2, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Niamul Quader, Md Ibrahim Khalil, Juwei Lu, Peng Dai, Wei Li
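    Illustrative sketch (not Huawei's implementation): a convolution layer whose kernels are reweighted by learned location-based attention multipliers and by a magnitude-based attention function before use, with gradients flowing back into the attention-infused weights; the sigmoid-of-magnitude form is an assumption.
      import torch, torch.nn as nn

      class AttentionInfusedConv(nn.Module):
          def __init__(self, cin=3, cout=8, k=3):
              super().__init__()
              self.weight = nn.Parameter(torch.randn(cout, cin, k, k) * 0.1)
              self.loc_attn = nn.Parameter(torch.ones(1, 1, k, k))   # location-based attention multipliers

          def forward(self, x):
              mag_attn = torch.sigmoid(self.weight.abs())            # magnitude-based attention (assumed form)
              w = self.weight * self.loc_attn * mag_attn             # attention-infused weights
              return nn.functional.conv2d(x, w, padding=1)

      layer = AttentionInfusedConv()
      out = layer(torch.rand(2, 3, 16, 16))                          # output activation map
      out.mean().backward()                                          # back propagation updates the weights
      print(out.shape, layer.loc_attn.grad.shape)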
  • Patent number: 11392830
    Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten classifications and lens function at the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: July 19, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yair Rivenson, Xing Lin, Deniz Mengu, Yi Luo
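    Illustrative sketch (not the patented D2NN design): a toy forward pass through a stack of phase-only diffractive layers with angular-spectrum propagation between them, scored by the optical energy landing on per-class detector regions; the wavelength, pixel pitch, layer spacing, and (untrained, random) phase values are assumptions, and the deep-learning design of the layers is not shown.
      import numpy as np

      def propagate(field, dist=0.04, wavelength=7.5e-4, pitch=4e-4):
          # Angular-spectrum free-space propagation between diffractive layers.
          n = field.shape[0]
          fx = np.fft.fftfreq(n, d=pitch)
          fx, fy = np.meshgrid(fx, fx)
          arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
          H = np.exp(2j * np.pi * dist / wavelength * np.sqrt(np.maximum(arg, 0)))
          return np.fft.ifft2(np.fft.fft2(field) * H)

      def d2nn_forward(field, phase_layers, detector_masks):
          # Each passive layer applies a phase; the detector region collecting
          # the most optical energy gives the predicted class.
          for phase in phase_layers:
              field = propagate(field * np.exp(1j * phase))
          intensity = np.abs(propagate(field)) ** 2
          return np.array([intensity[m].sum() for m in detector_masks])

      n, n_layers, n_classes = 64, 5, 10
      phases = [np.random.uniform(0, 2 * np.pi, (n, n)) for _ in range(n_layers)]
      masks = [np.zeros((n, n), dtype=bool) for _ in range(n_classes)]
      for k, m in enumerate(masks):
          m[8 + 5 * k: 11 + 5 * k, 28:36] = True          # one detector region per class
      scores = d2nn_forward(np.ones((n, n), dtype=complex), phases, masks)
      print(int(scores.argmax()))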
  • Patent number: 11392799
    Abstract: Training a network for image processing with temporal consistency includes obtaining un-annotated frames from a video feed. A pretrained network is applied to the first frame of a first frame set comprising a plurality of frames to obtain a first prediction, wherein the pretrained network is pretrained for a first image processing task. A current version of the pretrained network is applied to each frame of the first frame set to obtain a current prediction. A content loss term is determined based on the first prediction and the current prediction for the first frame from the current network. A temporal consistency loss term is also determined based on a determined consistency of pixels within each frame of the first frame set. The pretrained network may be refined based on the content loss term and the temporal consistency loss term to obtain a refined network.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: July 19, 2022
    Assignee: Apple Inc.
    Inventors: Atila Orhon, Marco Zuliani, Vignesh Jagadeesh
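    Illustrative sketch (not Apple's implementation): refining a copy of a pretrained network with a content loss against the pretrained prediction on the first frame and a temporal consistency loss across the frame set; the direct frame-to-frame difference used here is a simplification of the pixel-consistency term.
      import copy
      import torch, torch.nn as nn

      pretrained = nn.Conv2d(3, 1, 3, padding=1)         # stand-in for the pretrained task network
      current = copy.deepcopy(pretrained)                 # current network being refined
      opt = torch.optim.Adam(current.parameters(), lr=1e-3)

      frames = torch.rand(8, 3, 32, 32)                   # un-annotated frames from a video feed
      for _ in range(20):
          with torch.no_grad():
              first_pred = pretrained(frames[:1])         # first prediction from the pretrained network
          preds = current(frames)                         # current predictions for every frame
          content_loss = (preds[:1] - first_pred).pow(2).mean()     # stay faithful to the original task
          temporal_loss = (preds[1:] - preds[:-1]).pow(2).mean()    # consecutive frames should agree
          loss = content_loss + 0.1 * temporal_loss
          opt.zero_grad(); loss.backward(); opt.step()
      print(loss.item())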
  • Patent number: 11386292
    Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: July 12, 2022
    Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
    Inventors: Bo Eun Kim, Hye Dong Jung
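    Illustrative sketch (not KETI's model): a loose PyTorch wiring of the described pipeline, in which a caption attention map is projected onto a latent space, a guide map is derived from the latent variable, and a recurrent decoder produces several captions by sampling different latents; every module, dimension, and the noise-based sampling are assumptions.
      import torch, torch.nn as nn

      feat_dim, latent_dim, vocab, max_len = 64, 16, 100, 12
      attn_net = nn.Linear(feat_dim, feat_dim)        # produces a caption attention map
      to_latent = nn.Linear(feat_dim, latent_dim)     # projects the map onto the latent space
      to_guide = nn.Linear(latent_dim, feat_dim)      # derives a guide map from the latent variable
      decoder = nn.GRU(feat_dim, feat_dim, batch_first=True)
      word_head = nn.Linear(feat_dim, vocab)

      img_feat = torch.rand(1, feat_dim)              # stand-in for encoded image features
      captions = []
      for _ in range(3):                              # different latents -> different captions
          attn_map = torch.softmax(attn_net(img_feat), dim=-1)
          latent = to_latent(attn_map) + 0.1 * torch.randn(1, latent_dim)
          guide = to_guide(latent)
          steps = (img_feat * guide).unsqueeze(1).repeat(1, max_len, 1)
          out, _ = decoder(steps)
          captions.append(word_head(out).argmax(-1))  # one caption (word ids) per latent sample
      print(torch.stack(captions).shape)              # torch.Size([3, 1, 12])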
  • Patent number: 11386287
    Abstract: The method may include processing, by using a neural network, input feature maps of an image to obtain output feature maps of the image. The neural network may include a convolution part and/or a pooling part, and an aggregation part. The convolution part may include at least one parallel unit, each of which contains two parallel paths, and each path of the two parallel paths contains two cascaded convolution layers. The kernel sizes are one-dimensional and differ between units. The pooling part includes at least one parallel unit, each of which contains two parallel paths, and each path of the two parallel paths contains two cascaded pooling layers. The pooling filter sizes are one-dimensional and differ between units. The aggregation part is configured to concatenate results of the convolution part and/or the pooling part to obtain the output feature maps of the image.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: July 12, 2022
    Assignee: Nokia Technologies Oy
    Inventor: Xuhang Lian
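    Illustrative sketch (not Nokia's network): one parallel unit of the convolution part, with two parallel paths of two cascaded one-dimensional convolutions each (k x 1 then 1 x k) and aggregation by concatenation; different units use different 1-D kernel sizes. Channel counts and kernel sizes are assumptions.
      import torch, torch.nn as nn

      class ParallelUnit(nn.Module):
          def __init__(self, ch=8, k=3):
              super().__init__()
              self.path_a = nn.Sequential(nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0)),
                                          nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2)))
              self.path_b = nn.Sequential(nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2)),
                                          nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0)))

          def forward(self, x):
              # Aggregation part: concatenate the results of the two parallel paths.
              return torch.cat([self.path_a(x), self.path_b(x)], dim=1)

      net = nn.Sequential(ParallelUnit(ch=8, k=3), ParallelUnit(ch=16, k=5))
      print(net(torch.rand(1, 8, 32, 32)).shape)    # torch.Size([1, 32, 32, 32])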
  • Patent number: 11366987
    Abstract: A computer-implemented method of determining an explainability mask for classification of an input image by a trained neural network. The trained neural network is configured to determine the classification and classification score of the input image by determining a latent representation of the input image at an internal layer of the trained neural network. The method includes accessing the trained neural network, obtaining the input image and the latent representation thereof and initializing a mask for indicating modifications to the latent representation. The mask is updated by iteratively adjusting values of the mask to optimize an objective function, comprising i) a modification component indicating a degree of modifications indicated by the mask, and ii) a classification score component, determined by applying the indicated modifications to the latent representation and determining the classification score thereof. The mask is scaled to a spatial resolution of the input image and output.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: June 21, 2022
    Assignee: Robert Bosch GmbH
    Inventor: Andres Mauricio Munoz Delgado
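    Illustrative sketch (not the Bosch method): optimizing a mask over the latent representation at an internal layer so that small indicated modifications lower the classification score, then scaling the mask to the input resolution; the network split, the sigmoid parameterization, and the loss weights are assumptions.
      import torch, torch.nn as nn

      backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4))  # up to the internal layer
      head = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8 * 8, 10))                         # rest of the classifier

      img = torch.rand(1, 3, 32, 32)
      with torch.no_grad():
          latent = backbone(img)                       # latent representation at the internal layer
          target = head(latent).argmax(1)              # classification of the input image

      mask = torch.zeros_like(latent, requires_grad=True)       # initialized mask over the latent
      opt = torch.optim.Adam([mask], lr=0.05)
      for _ in range(100):
          modified = latent * (1 - torch.sigmoid(mask))          # apply the indicated modifications
          score = head(modified).softmax(1)[0, target]           # classification score component
          loss = score.sum() + 0.01 * torch.sigmoid(mask).sum()  # plus the modification component
          opt.zero_grad(); loss.backward(); opt.step()

      explanation = nn.functional.interpolate(torch.sigmoid(mask).mean(1, keepdim=True),
                                              size=img.shape[-2:], mode="bilinear",
                                              align_corners=False)   # scaled to the input resolution
      print(explanation.shape)                          # torch.Size([1, 1, 32, 32])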
  • Patent number: 11367268
    Abstract: Object re-identification refers to a process by which images that contain an object of interest are retrieved from a set of images captured using disparate cameras or in disparate environments. Object re-identification has many useful applications, particularly as it is applied to people (e.g. person tracking). Current re-identification processes rely on convolutional neural networks (CNNs) that learn re-identification for a particular object class from labeled training data specific to a certain domain (e.g. environment), but that do not apply well in other domains. The present disclosure provides cross-domain disentanglement of id-related and id-unrelated factors. In particular, the disentanglement is performed using a labeled image set and an unlabeled image set, respectively captured from different domains but for a same object class.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: June 21, 2022
    Assignee: NVIDIA CORPORATION
    Inventors: Xiaodong Yang, Yang Zou, Zhiding Yu, Jan Kautz
  • Patent number: 11361223
    Abstract: A multi-beam transmission method is provided for transmitting using an N-beam transmitter to a receiver having K receive beams. In the transmitter, a non-linear encoder implemented by a machine learning block, and a linear encoder, are trained using gradient descent back propagation that relies on feedback from the receiver. For each input to be transmitted, the machine learning block is used to process the input to produce N/K sets of L outputs. The linear encoder is used to perform linear encoding on each set of L outputs to produce a respective set of K outputs, so as to produce N/K sets of K encoded outputs and N encoded outputs overall. One of the N/K sets of K encoded outputs is transmitted from each set of K beams.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: June 14, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yiqun Ge, Wuxian Shi, Wen Tong
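    Illustrative sketch (not Huawei's transmitter): the per-input encoding path only, with a random non-linear map standing in for the trained machine learning block and a single matrix standing in for the linear encoder; in the patent both are trained with back propagation using receiver feedback, which is not shown here.
      import numpy as np

      N, K, L = 8, 2, 4                 # N transmit beams, K receive beams, L outputs per set

      def ml_block(x, rng):
          # Stand-in for the non-linear encoder: maps the input to N/K sets of L outputs.
          W = rng.normal(size=(N // K * L, x.size))
          return np.tanh(W @ x).reshape(N // K, L)

      rng = np.random.default_rng(0)
      linear_encoder = rng.normal(size=(K, L))       # linear encoding applied to each set of L outputs

      x = rng.normal(size=3)                         # one input to be transmitted
      sets = ml_block(x, rng)                        # N/K sets of L outputs
      encoded = sets @ linear_encoder.T              # N/K sets of K encoded outputs
      print(encoded.shape, encoded.size)             # (4, 2) -> N = 8 encoded outputs overall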