Neural Networks Patents (Class 382/156)
  • Patent number: 11509795
    Abstract: An auto-rotation module having a single-layer neural network on a user device can convert a document image to a monochrome image having black and white pixels and segment the monochrome image into bounding boxes, each bounding box defining a connected segment of black pixels in the monochrome image. The auto-rotation module can determine textual snippets from the bounding boxes and prepare them into input images for the single-layer neural network. The single-layer neural network is trained to process each input image, recognize a correct orientation, and output a set of results for each input image. Each result indicates a probability associated with a particular orientation. The auto-rotation module can examine the results, determine what degree of rotation is needed to achieve a correct orientation of the document image, and automatically rotate the document image by the degree of rotation needed to achieve the correct orientation of the document image.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: November 22, 2022
    Assignee: OPEN TEXT SA ULC
    Inventor: Christopher Dale Lund
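    Sketch (illustrative, not from the patent): a minimal numpy rendering of the orientation-voting and rotation step described above, assuming the single-layer classifier has already produced, for each textual snippet, a probability over the orientations {0°, 90°, 180°, 270°}; snippet extraction and the classifier itself are stubbed out.
      import numpy as np

      ORIENTATIONS = (0, 90, 180, 270)  # candidate orientations, in degrees

      def vote_orientation(snippet_probs: np.ndarray) -> int:
          """snippet_probs: (num_snippets, 4) per-orientation probabilities."""
          votes = snippet_probs.argmax(axis=1)                # most likely orientation per snippet
          return ORIENTATIONS[np.bincount(votes, minlength=4).argmax()]

      def auto_rotate(page: np.ndarray, snippet_probs: np.ndarray) -> np.ndarray:
          detected_cw = vote_orientation(snippet_probs)       # page appears rotated this many degrees clockwise (assumed convention)
          return np.rot90(page, k=detected_cw // 90)          # counter-rotate (np.rot90 turns counter-clockwise per step)

      page = np.zeros((200, 100), dtype=np.uint8)             # stand-in monochrome page image
      probs = np.array([[0.10, 0.70, 0.10, 0.10],             # three snippets, all favoring 90 degrees
                        [0.20, 0.60, 0.10, 0.10],
                        [0.05, 0.80, 0.10, 0.05]])
      print(auto_rotate(page, probs).shape)                   # (100, 200)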
  • Patent number: 11498500
    Abstract: An apparatus includes a capture device and a processor. The capture device may be configured to generate a plurality of video frames. The processor may be configured to perform operations to detect objects in the video frames, detect users based on the objects detected in the video frames, determine a comfort profile for the users and select a reaction to adjust vehicle components according to the comfort profile of the detected users. The comfort profile may be determined in response to characteristics of the users. The characteristics of the users may be determined by performing the operations on each of the users. The operations detect the objects by performing feature extraction based on weight values for each of a plurality of visual features that are associated with the objects extracted from the video frames. The weight values are determined by the processor analyzing training data prior to the feature extraction.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: November 15, 2022
    Assignee: Ambarella International LP
    Inventors: Shimon Pertsel, Patrick Martin
  • Patent number: 11481862
    Abstract: System and method for simultaneous object detection and semantic segmentation. The system includes a computing device. The computing device has a processor and a non-volatile memory storing computer executable code. The computer executable code, when executed at the processor, is configured to: receive an image of a scene; process the image using a neural network backbone to obtain a feature map; process the feature map using an object detection module to obtain object detection result of the image; and process the feature map using a semantic segmentation module to obtain semantic segmentation result of the image. The object detection module and the semantic segmentation module are trained using a same loss function comprising an object detection component and a semantic segmentation component.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: October 25, 2022
    Assignees: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY CO., LTD., JD.COM AMERICAN TECHNOLOGIES CORPORATION
    Inventors: Hongda Mao, Wei Xiang, Chumeng Lyu, Weidong Zhang
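    Sketch (illustrative, not from the patent): a toy PyTorch model with a shared backbone feeding a detection head and a segmentation head, trained with one loss that sums a detection component and a segmentation component; the head shapes and the equal weighting are assumptions.
      import torch
      import torch.nn as nn

      class TwoHeadNet(nn.Module):
          """Shared backbone feeding a detection head and a segmentation head."""
          def __init__(self, num_classes=3, num_boxes=4):
              super().__init__()
              self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
              self.det_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                            nn.Linear(16, num_boxes * 4))   # box regression only, for brevity
              self.seg_head = nn.Conv2d(16, num_classes, 1)                 # per-pixel class logits

          def forward(self, x):
              f = self.backbone(x)                    # shared feature map
              return self.det_head(f), self.seg_head(f)

      def joint_loss(det_pred, seg_pred, det_target, seg_target):
          # Single loss with an object detection component and a semantic segmentation component.
          det_loss = nn.functional.smooth_l1_loss(det_pred, det_target)
          seg_loss = nn.functional.cross_entropy(seg_pred, seg_target)
          return det_loss + seg_loss

      x = torch.randn(2, 3, 32, 32)
      det_t = torch.randn(2, 16)                      # 4 boxes x 4 coordinates
      seg_t = torch.randint(0, 3, (2, 32, 32))
      model = TwoHeadNet()
      det_p, seg_p = model(x)
      joint_loss(det_p, seg_p, det_t, seg_t).backward()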
  • Patent number: 11477464
    Abstract: Systems and techniques are described herein for processing video data using a neural network system. For instance, a process can include generating, by a first convolutional layer of an encoder sub-network of the neural network system, output values associated with a luminance channel of a frame. The process can include generating, by a second convolutional layer of the encoder sub-network, output values associated with at least one chrominance channel of the frame. The process can include generating a combined representation of the frame by combining the output values associated with the luminance channel of the frame and the output values associated with the at least one chrominance channel of the frame. The process can include generating encoded video data based on the combined representation of the frame.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: October 18, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Muhammed Zeyd Coban, Ankitesh Kumar Singh, Hilmi Enes Egilmez, Marta Karczewicz
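    Sketch (illustrative, not from the patent): separate convolutional layers for the luminance channel and the chrominance channels of a 4:2:0 frame, with the two sets of output values combined into one representation; the layer sizes and the use of concatenation as the combining step are assumptions.
      import torch
      import torch.nn as nn

      class LumaChromaEncoder(nn.Module):
          """First conv for the Y channel, second conv for Cb/Cr, outputs combined."""
          def __init__(self):
              super().__init__()
              self.luma_conv = nn.Conv2d(1, 16, 3, stride=2, padding=1)    # Y at full resolution
              self.chroma_conv = nn.Conv2d(2, 16, 3, stride=1, padding=1)  # Cb/Cr at half resolution (4:2:0)

          def forward(self, y, cbcr):
              fy = self.luma_conv(y)                   # (N, 16, H/2, W/2)
              fc = self.chroma_conv(cbcr)              # (N, 16, H/2, W/2)
              return torch.cat([fy, fc], dim=1)        # combined representation of the frame

      enc = LumaChromaEncoder()
      y = torch.randn(1, 1, 64, 64)
      cbcr = torch.randn(1, 2, 32, 32)
      print(enc(y, cbcr).shape)                        # torch.Size([1, 32, 32, 32])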
  • Patent number: 11474199
    Abstract: A method to mitigate radio frequency interference (RFI) in weather radar data may include computing ℓp norms of radials of weather radar data to construct an ℓp norm profile of the weather radar data as a function of azimuth angle. The weather radar data may include Level 2 or higher weather radar data in polar format. The method may include determining that a given radial in the weather radar data is an RFI radial based on the ℓp norm profile of the weather radar data. The method may include displaying an image from the weather radar data in which at least one of: the RFI radial is identified in the image as including RFI; or the RFI radial is omitted from the image.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: October 18, 2022
    Assignee: Vaisala, Inc.
    Inventors: Evan Ruzanski, Andrew Hastings Black
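    Sketch (illustrative, not from the patent): build an ℓp norm profile over azimuth for polar radar data and flag radials whose norm is anomalously large; the robust median-plus-MAD threshold used here is an assumption, not the patented criterion.
      import numpy as np

      def rfi_radials(radar, p=2.0, k=6.0):
          """radar: (num_azimuths, num_range_gates) Level 2-style polar data.
          Returns indices of radials whose lp norm is anomalously large."""
          profile = np.linalg.norm(radar, ord=p, axis=1)      # lp norm per radial, as a function of azimuth
          med = np.median(profile)
          mad = np.median(np.abs(profile - med)) + 1e-9       # robust spread estimate
          return np.where(profile > med + k * mad)[0]

      rng = np.random.default_rng(0)
      data = rng.gamma(2.0, 1.0, size=(360, 400))             # synthetic reflectivity-like field
      data[57] += 25.0                                        # inject an RFI-like spike along one radial
      print(rfi_radials(data))                                # typically just the injected radial: [57]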
  • Patent number: 11475248
    Abstract: Acquiring labeled data can be a significant bottleneck in the development of machine learning models that are accurate and efficient enough to enable safety-critical applications, such as automated driving. The process of labeling of driving logs can be automated. Unlabeled real-world driving logs, which include data captured by one or more vehicle sensors, can be automatically labeled to generate one or more labeled real-world driving logs. The automatic labeling can include analysis-by-synthesis on the unlabeled real-world driving logs to generate simulated driving logs, which can include reconstructed driving scenes or portions thereof. The automatic labeling can further include simulation-to-real automatic labeling on the simulated driving logs and the unlabeled real-world driving logs to generate one or more labeled real-world driving logs. The automatically labeled real-world driving logs can be stored in one or more data stores for subsequent training, validation, evaluation, and/or model management.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: October 18, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Adrien David Gaidon, James J. Kuffner, Jr., Sudeep Pillai
  • Patent number: 11468266
    Abstract: A machine receives a large image having large image dimensions that exceed memory threshold dimensions. The large image includes metadata. The machine adjusts an orientation and a scaling of the large image based on the metadata. The machine divides the large image into a plurality of image tiles, each image tile having tile dimensions smaller than or equal to the memory threshold dimensions. The machine provides the plurality of image tiles to an artificial neural network. The machine identifies, using the artificial neural network, at least a portion of the target in at least one image tile. The machine identifies the target in the large image based on at least the portion of the target being identified in at least one image tile.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: October 11, 2022
    Assignee: Raytheon Company
    Inventors: Jonathan Goldstein, Stephen J. Raif, Philip A. Sallee, Jeffrey S. Klein, Steven A. Israel, Franklin Tanner, Shane A. Zabel, James Talamonti, Lisa A. Mccoy
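    Sketch (illustrative, not from the patent): divide an image whose dimensions exceed a memory threshold into tiles no larger than that threshold, keeping each tile's origin so detections can be mapped back into the large image; the metadata-based orientation/scaling step and the neural network itself are omitted.
      import numpy as np

      def tile_image(image: np.ndarray, max_h: int, max_w: int):
          """Split a large image into tiles no larger than (max_h, max_w)."""
          tiles = []
          for top in range(0, image.shape[0], max_h):
              for left in range(0, image.shape[1], max_w):
                  tile = image[top:top + max_h, left:left + max_w]
                  tiles.append(((top, left), tile))            # keep the origin for mapping results back
          return tiles

      large = np.zeros((2500, 1800), dtype=np.uint8)
      tiles = tile_image(large, 1024, 1024)
      print(len(tiles), tiles[0][1].shape, tiles[-1][1].shape)  # 6 tiles; edge tiles are smaller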
  • Patent number: 11467895
    Abstract: One or more computing devices, systems, and/or methods for classifier validation are provided. A set of in-sample examples are partitioned into a reduced in-sample set and a remaining in-sample set. The reduced in-sample set is processed using a set of classifiers. A subset of classifiers are identified as having error counts, over the reduced in-sample set, below a threshold number of errors. A training procedure is executed to select a classifier having a minimum error rate over the set of in-sample examples. If the classifier is within the subset of classifiers, then an out-of-sample error bound is determined for the classifier.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: October 11, 2022
    Assignee: YAHOO ASSETS LLC
    Inventors: Eric Theodore Bax, Natalie Bax
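    Sketch (illustrative, not from the patent): partition the in-sample examples into a reduced and a remaining set, screen classifiers by their error count on the reduced set, then check whether the classifier with minimum error over the full in-sample set survived the screen; the split, the error threshold, and the toy classifiers are assumptions, and the out-of-sample bound computation itself is omitted.
      import numpy as np

      def validate_selection(classifiers, X, y, reduced_frac=0.5, max_errors=5, seed=0):
          rng = np.random.default_rng(seed)
          idx = rng.permutation(len(y))
          cut = int(len(y) * reduced_frac)
          reduced, remaining = idx[:cut], idx[cut:]            # remaining set would feed the bound (omitted)

          errs_reduced = [int((clf(X[reduced]) != y[reduced]).sum()) for clf in classifiers]
          screened = {i for i, e in enumerate(errs_reduced) if e < max_errors}

          errs_full = [float((clf(X) != y).mean()) for clf in classifiers]
          selected = int(np.argmin(errs_full))                 # training procedure: min error over all in-sample examples
          return selected, selected in screened                # bound applies only if True

      X = np.linspace(-1, 1, 200)
      y = (X > 0.1).astype(int)
      classifiers = [lambda x, t=t: (x > t).astype(int) for t in (-0.5, 0.0, 0.1, 0.5)]
      print(validate_selection(classifiers, X, y))             # (2, True)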
  • Patent number: 11455753
    Abstract: Systems and methods are disclosed for adjusting attributes of whole slide images, including stains therein. A portion of a whole slide image comprised of a plurality of pixels in a first color space and including one or more stains may be received as input. Based on an identified stain type of the stain(s), a machine-learned transformation associated with the stain type may be retrieved and applied to convert an identified subset of the pixels from the first to a second color space specific to the identified stain type. One or more attributes of the stain(s) may be adjusted in the second color space to generate a stain-adjusted subset of pixels, which are then converted back to the first color space using an inverse of the machine-learned transformation. A stain-adjusted portion of the whole slide image including at least the stain-adjusted subset of pixels may be provided as output.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: September 27, 2022
    Assignee: PAIGE.AI, Inc.
    Inventors: Navid Alemi, Christopher Kanan, Leo Grady
  • Patent number: 11455783
    Abstract: The present disclosure provides an image recognition method and apparatus, a device and a non-volatile computer storage medium. In embodiments of the present disclosure, it is feasible to obtain the to-be-recognized image of a designated size, extract the different-area image from the to-be-recognized image, and obtain the image feature of the different-area image according to the different-area image, so as to obtain the recognition result of the to-be-recognized image, according to the image feature of the different-area image and a preset template feature. In this way, recognition processing can be performed for images in a limited number of classes without employing the deep learning method based on hundreds of thousands or even millions of training samples.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: September 27, 2022
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Guoyi Liu, Guang Li
  • Patent number: 11449709
    Abstract: A neural network is trained to focus on a domain of interest. For example, in a pre-training phase, the neural network is trained using synthetic training data, which is configured to omit or limit content less relevant to the domain of interest, by updating parameters of the neural network to improve the accuracy of predictions. In a subsequent training phase, the pre-trained neural network is trained using real-world training data by updating only a first subset of the parameters associated with feature extraction, while a second subset of the parameters more associated with policies remains fixed.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: September 20, 2022
    Assignee: NVIDIA Corporation
    Inventor: Bernhard Firner
  • Patent number: 11449989
    Abstract: Super-resolution images are generated from standard-resolution images acquired with a magnetic resonance imaging (“MRI”) system. More particularly, super-resolution (e.g., sub-millimeter isotropic resolution) images are generated from standard-resolution images (e.g., images with 1 mm or coarser isotropic resolution) using a deep learning algorithm, from which accurate cortical surface reconstructions can be generated.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: September 20, 2022
    Assignee: The General Hospital Corporation
    Inventors: Qiyuan Tian, Susie Yi Huang, Berkin Bilgic, Jonathan R. Polimeni
  • Patent number: 11443505
    Abstract: A method for analyzing images to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: September 13, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Mihir Narendra Mody, Manu Mathew, Chaitanya Satish Ghone
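    Sketch (illustrative, not from the patent): the Fourier-domain pipeline described above in numpy, i.e. a 2-D FFT per input feature, an on-the-fly FFT of each kernel's coefficients, element-wise products summed per output feature, and a 2-D inverse FFT per sum; kernels are assumed to be zero-padded to the input size, so this computes circular convolution.
      import numpy as np

      def fft_conv_layer(inputs, kernels):
          """inputs:  (C_in, H, W) input feature maps
             kernels: (C_out, C_in, H, W) trained coefficients, padded to H x W."""
          F_in = np.fft.fft2(inputs)                        # one 2-D FFT per input feature
          F_k = np.fft.fft2(kernels)                        # on-the-fly FFT of the kernel coefficients
          sums = (F_in[None, :, :, :] * F_k).sum(axis=1)    # products, then one sum per output feature
          return np.real(np.fft.ifft2(sums))                # 2-D inverse FFT per sum

      x = np.random.rand(3, 16, 16)                         # 3 input features
      w = np.random.rand(8, 3, 16, 16)                      # 8 output features
      print(fft_conv_layer(x, w).shape)                     # (8, 16, 16)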
  • Patent number: 11436432
    Abstract: An apparatus for an artificial neural network includes a format converter, a sampling unit, and a learning unit. The format converter generates a first format image and a second format image based on an input image. The sampling unit samples the first format image using a first sampling scheme to generate a first feature map, and samples the second format image using a second sampling scheme different from the first sampling scheme to generate a second feature map. The learning unit operates the artificial neural network using the first feature map and the second feature map.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: September 6, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sangmin Suh, Sangsoo Ko, Byeoungsu Kim, Sanghyuck Ha
  • Patent number: 11417235
    Abstract: Described herein are systems and methods for grounded natural language learning in an interactive setting. In embodiments, during a learning process, an agent learns natural language by interacting with a teacher and learning from feedback, thus learning and improving language skills while taking part in the conversation. In embodiments, a model is used to incorporate both imitation and reinforcement by leveraging jointly sentence and reward feedback from the teacher. Various experiments are conducted to validate the effectiveness of a model embodiment.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: August 16, 2022
    Assignee: Baidu USA LLC
    Inventors: Haichao Zhang, Haonan Yu, Wei Xu
  • Patent number: 11417232
    Abstract: Disclosed is a method of providing user-customized learning content. The method includes: a step a of configuring a question database including one or more multiple-choice questions having one or more choice items and collecting choice item selection data of a user for the questions; a step b of calculating a modeling vector for the user based on the choice item data and generating modeling vectors for the questions according to each choice item; and a step c of calculating choice item selection probabilities of the user based on the modeling vectors of the user and the modeling vectors of the questions.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: August 16, 2022
    Assignee: RIIID INC.
    Inventors: Yeong Min Cha, Jae We Heo, Young Jun Jang
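    Sketch (illustrative, not from the patent): one generic way to turn a user modeling vector and per-choice-item modeling vectors into choice-item selection probabilities, via a softmax over dot-product affinities; the vectors and the softmax form are assumptions.
      import numpy as np

      def choice_probabilities(user_vec, choice_vecs):
          """Softmax over user/choice-item affinities."""
          scores = choice_vecs @ user_vec                   # one affinity score per choice item
          scores -= scores.max()                            # numerical stability
          expd = np.exp(scores)
          return expd / expd.sum()

      user = np.array([0.4, -0.2, 0.9])                     # modeling vector for the user
      choices = np.array([[0.1, 0.0, 1.0],                  # modeling vectors for a question's 4 choice items
                          [0.9, 0.3, -0.5],
                          [-0.3, 0.8, 0.2],
                          [0.0, 0.0, 0.0]])
      print(choice_probabilities(user, choices).round(3))   # selection probabilities summing to 1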
  • Patent number: 11403495
    Abstract: A machine learning system efficiently detects faults from three-dimensional (“3D”) seismic images, in which the fault detection is considered as a binary segmentation problem. Because the distribution of fault and nonfault samples is heavily biased, embodiments of the present disclosure use a balanced loss function to optimize model parameters. Embodiments of the present disclosure train a machine learning system by using a selected number of pairs of 3D synthetic seismic and fault volumes, which may be automatically generated by randomly adding folding, faulting, and noise in the volumes. Although trained by using only synthetic data sets, the machine learning system can accurately detect faults from 3D field seismic volumes that are acquired at totally different surveys.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: August 2, 2022
    Assignee: Board of Regents, The University of Texas System
    Inventors: Xinming Wu, Yunzhi Shi, Sergey Fomel
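    Sketch (illustrative, not from the patent): a class-balanced binary cross-entropy of the kind commonly used when fault voxels are a tiny fraction of a 3D volume; the exact balancing used in the patent may differ.
      import numpy as np

      def balanced_bce(pred, target, eps=1e-7):
          """pred, target: arrays of fault probabilities and 0/1 fault labels."""
          pred = np.clip(pred, eps, 1 - eps)
          beta = 1.0 - target.mean()                        # fraction of non-fault samples
          loss = -(beta * target * np.log(pred)
                   + (1.0 - beta) * (1 - target) * np.log(1 - pred))
          return loss.mean()

      target = (np.random.rand(32, 32, 32) < 0.02).astype(float)   # ~2% fault voxels
      pred = np.full_like(target, 0.5)                             # uninformative prediction
      print(balanced_bce(pred, target))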
  • Patent number: 11392738
    Abstract: A method for generating a simulation scenario, the method may include receiving sensed information that was sensed during driving sessions of vehicles; wherein the sensed information comprises visual information regarding multiple objects; determining, by applying an unsupervised learning process on the sensed information, driving scenario building blocks and occurrence information regarding an occurrence of the driving scenario building blocks; and generating the simulation scenario based on a selected set of driving scenario building blocks and on physical guidelines, wherein the generating comprises selecting, out of the driving scenario building blocks, the selected set of driving scenario building blocks; wherein the generating is responsive to at least a part of the occurrence information and to at least one simulation scenario limitation.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: July 19, 2022
    Assignee: AUTOBRAINS TECHNOLOGIES LTD
    Inventors: Igal Raichelgauz, Karina Odinaev
  • Patent number: 11393160
    Abstract: Systems, apparatuses and methods may provide for technology that generates, by a first neural network, an initial set of model weights based on input data and iteratively generates, by a second neural network, an updated set of model weights based on residual data associated with the initial set of model weights and the input data. Additionally, the technology may output a geometric model of the input data based on the updated set of model weights. In one example, the first neural network and the second neural network reduce the dependence of the geometric model on the number of data points in the input data.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Rene Ranftl, Vladlen Koltun
  • Patent number: 11386292
    Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: July 12, 2022
    Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
    Inventors: Bo Eun Kim, Hye Dong Jung
  • Patent number: 11361563
    Abstract: Embodiments of the present invention provide a system of interconnected neural networks capable of audio-visual simulation generation by interpreting and processing a first image and, utilizing a given reference image or training set, modifying the first image such that the new image possesses parameters found within the reference image or training set. The images used and generated may include video. The system includes an autoposer, an automasker, an encoder, a generator, an improver, a discriminator, a styler, and at least one training set of images or video. The system can also generate training sets for use within.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: June 14, 2022
    Inventor: Jacob Daniel Brumleve
  • Patent number: 11361372
    Abstract: A method of establishing a paint color to be displayed by a procurement system based on information received from an external punchout site. A formula corresponding to a desired paint color selected by a customer that is not included in a color palette of predefined colors regularly available for purchase from a paint supplier is received. Based on the formula, a standardized color value within a color gamut of the electronic display device is generated for the desired paint color. The standardized color value is stored in a network-accessible database entry specific to the customer to be retrievable by the customer over a communication network for generating a preview of the desired paint color during a subsequent purchase of paint having the desired paint color by the customer over the communication network.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: June 14, 2022
    Inventors: Meghan L Vickers, Pamela A Gillikin, William E Weber, III, James D. Bandy, Matthew G. Stec
  • Patent number: 11361225
    Abstract: A neural network architecture for attention-based efficient model adaptation is disclosed. A method includes accessing an input vector, the input vector comprising a numeric representation of an input to a neural network. The method includes providing the input vector to the neural network comprising a plurality of ordered layers, wherein each layer in at least a subset of the plurality of ordered layers is coupled with an adaptation module, wherein the adaptation module receives a same input value as a coupled layer for the adaptation module, and wherein an output value of the adaptation module is pointwise multiplied with an output value of the coupled layer to generate a next layer input value. The method includes generating an output of the neural network based on an output of a last one of the plurality of ordered layers in the neural network.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: June 14, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mandar Dilip Dixit, Gang Hua
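    Sketch (illustrative, not from the patent): a frozen layer coupled with a small adaptation module that sees the same input, with the two outputs multiplied pointwise to form the next layer's input; the sigmoid-gated adapter form is an assumption.
      import torch
      import torch.nn as nn

      class AdaptedLayer(nn.Module):
          """A frozen base layer coupled with an adaptation module on the same input."""
          def __init__(self, dim):
              super().__init__()
              self.base = nn.Linear(dim, dim)
              self.adapter = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())  # attention-like gate (assumed form)
              for p in self.base.parameters():
                  p.requires_grad = False                   # only the adapter is tuned for the new task

          def forward(self, x):
              return self.base(x) * self.adapter(x)          # pointwise multiplication of the two outputs

      net = nn.Sequential(AdaptedLayer(16), AdaptedLayer(16), nn.Linear(16, 4))
      print(net(torch.randn(2, 16)).shape)                   # torch.Size([2, 4])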
  • Patent number: 11360443
    Abstract: A model calculation unit for calculating a gradient with respect to a certain input variable of input variables of a predefined input variable vector for an RBF model with the aid of a hard-wired processor core designed as hardware for calculating a fixedly predefined processing algorithm in coupled functional blocks, the processor core being designed to calculate the gradient with respect to the certain input variable for an RBF model as a function of one or multiple input variable(s) of the input variable vector of an input dimension, of a number of nodes, of length scales predefined for each node and each input dimension, and of parameters of the RBF function predefined for each node.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: June 14, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Andre Guntoro, Heiner Markert, Martin Schiegg
  • Patent number: 11354902
    Abstract: A method can include classifying, using a compressed and specialized convolutional neural network (CNN), an object of a video frame into classes, clustering the object based on a distance of a feature vector of the object to a feature vector of a centroid object of the cluster, storing top-k classes, a centroid identification, and a cluster identification, in response to receiving a query for objects of class X from a specific video stream, retrieving image data for each centroid of each cluster that includes the class X as one of the top-k classes, classifying, using a ground truth CNN (GT-CNN), the retrieved image data for each centroid, and for each centroid determined to be classified as a member of the class X providing image data for each object in each cluster associated with the centroid.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: June 7, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ganesh Ananthanarayanan, Paramvir Bahl, Peter Bodik, Tsuwang Hsieh, Matthai Philipose
  • Patent number: 11354846
    Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. Blending of the first embedding and the second embedding is done to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: June 7, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De La Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio, Jamie Daniel Joseph Shotton
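    Sketch (illustrative, not from the patent): a simple linear blend of the two latent embeddings before decoding; the patent does not necessarily use linear interpolation, and the trained image generator itself is omitted.
      import numpy as np

      def blend_embeddings(e_realistic, e_match, alpha=0.5):
          """Blend an embedding chosen for photorealism with one that matches the synthetic image."""
          return alpha * e_realistic + (1.0 - alpha) * e_match

      e1 = np.random.randn(512)     # embedding near the photorealistic manifold
      e2 = np.random.randn(512)     # embedding matching the synthetic region of interest
      z = blend_embeddings(e1, e2, alpha=0.7)
      print(z.shape)                # (512,) -> would be fed to the trained image generator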
  • Patent number: 11348008
    Abstract: In a method and a computer for determining a training function in order to generate annotated training images, a training image and training-image information are provided to a computer, together with an isolated item of image information that is independent of the training image. A first calculation is made in the computer by applying an image-information-processing first function to the isolated item of image information, and a second calculation is made by applying an image-information-processing second function to the training image. Adjustments to the first and second functions are made based on these calculation results, from which a determination of a training function is then made in the computer.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: May 31, 2022
    Assignee: Siemens Healthcare GmbH
    Inventors: Olivier Pauly, Philipp Seegerer
  • Patent number: 11341632
    Abstract: A method is provided for obtaining at least one feature of interest, especially a biomarker, from an input image acquired by a medical imaging device. The at least one feature of interest is the output of a respective node of a machine learning network, in particular a deep learning network. The machine learning network processes at least part of the input image as input data. The machine learning network used is trained by machine learning using at least one constraint for the output of at least one inner node of the machine learning network during the machine learning.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: May 24, 2022
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventors: Alexander Muehlberg, Rainer Kaergel, Alexander Katzmann, Michael Suehling
  • Patent number: 11341375
    Abstract: Apparatuses and methods for image processing are provided. The image processing apparatus performs area classification and object detection in an image, and includes a feature map generator configured to generate the feature map of the input image using the neural network, and an image processor configured to classify the areas and to detect the objects in the image using the generated feature map.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: May 24, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyun Jin Choi, Chang Hyun Kim, Eun Soo Shim, Dong Hwa Lee, Ki Hwan Choi
  • Patent number: 11335045
    Abstract: In some embodiments, a system includes an artificial intelligence (AI) chip and a processor coupled to the AI chip and configured to receive an input image, crop the input image into a plurality of cropped images, and execute the AI chip to produce a plurality of feature maps based on at least a subset of the plurality of cropped images. The system may further merge at least a subset of the plurality of feature maps to form a merged feature map, and produce an output image based on the merged feature map. The cropping and merging operations may be performed according to a same pattern. The system may also include a training network configured to train weights of the CNN model in the AI chip in a gradient descent network. Cropping and merging may be performed over the training sample images in the training network in a similar manner.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: May 17, 2022
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Bin Yang, Lin Yang, Xiaochun Li, Yequn Zhang, Yongxiong Ren, Yinbo Shi, Patrick Dong
  • Patent number: 11329952
    Abstract: A computer-implemented method for domain analysis comprises: obtaining, by a computing device, a domain; and inputting, by the computing device, the obtained domain to a trained detection model to determine if the obtained domain was generated by one or more domain generation algorithms. The detection model comprises a neural network model, a n-gram-based machine learning model, and an ensemble layer. Inputting the obtained domain to the detection model comprises inputting the obtained domain to each of the neural network model and the n-gram-based machine learning model. The neural network model and the n-gram-based machine learning model both output to the ensemble layer. The ensemble layer outputs a probability that the obtained domain was generated by the domain generation algorithms.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: May 10, 2022
    Assignee: Beijing DiDi Infinity Technology and Development Co., Ltd.
    Inventors: Tao Huang, Shuaiji Li, Yinhong Chang, Fangfang Zhang, Zhiwei Qin
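    Sketch (illustrative, not from the patent): the two-model-plus-ensemble-layer shape of the detector, with a character-bigram entropy standing in for the n-gram-based model and a trivial length heuristic standing in for the neural network model; both stand-ins and the fixed ensemble weights are assumptions.
      import math
      from collections import Counter

      def ngram_score(domain: str, n: int = 2) -> float:
          """Crude n-gram feature: character-bigram entropy of the domain label."""
          label = domain.split(".")[0]
          grams = Counter(label[i:i + n] for i in range(len(label) - n + 1))
          total = sum(grams.values())
          return -sum(c / total * math.log2(c / total) for c in grams.values())

      def neural_score(domain: str) -> float:
          """Placeholder for the neural network model's DGA probability."""
          return min(1.0, len(domain.split(".")[0]) / 20.0)   # assumption: longer labels look more DGA-like

      def ensemble(domain: str, w_nn: float = 0.5, w_ng: float = 0.5) -> float:
          """Ensemble layer: combine both model outputs into one probability."""
          return w_nn * neural_score(domain) + w_ng * min(1.0, ngram_score(domain) / 5.0)

      for d in ("google.com", "xj3k9q0pvlz7a2r.com"):
          print(d, round(ensemble(d), 3))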
  • Patent number: 11314985
    Abstract: The disclosure provides method of training a machine-learning model employing a procedurally synthesized training dataset, a machine that includes a trained machine-learning model, and a method of operating a machine. In one example, the method of training includes: (1) generating training image definitions in accordance with variations in content of training images to be included in a training dataset, (2) rendering the training images corresponding to the training image definitions, (3) generating, at least partially in parallel with the rendering, ground truth data corresponding to the training images, the training images and the ground truth comprising the training dataset, and (4) training a machine-learning model using the training dataset and the ground truth data.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: April 26, 2022
    Assignee: Nvidia Corporation
    Inventors: Jesse Clayton, Vladimir Glavtchev
  • Patent number: 11314988
    Abstract: This application provides an image aesthetic processing method and an electronic device. A method for generating an image aesthetic scoring model includes: constructing a first neural network based on a preset convolutional structure set; obtaining an image classification neural network, where the image classification neural network is used to classify image scenarios; obtaining a second neural network based on the first neural network and the image classification neural network, where the second neural network is a neural network containing scenario information; and determining an image aesthetic scoring model based on the second neural network, where output information of the image aesthetic scoring model includes image scenario classification information. In this method, scenario information is integrated into a backbone neural network, so that a resulting image aesthetic scoring model is interpretable.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: April 26, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Qing Zhang, Miao Xie, Shangling Jui
  • Patent number: 11315040
    Abstract: The disclosure relates to a system and method for detecting an instance of a lie using a Machine Learning (ML) model. In one example, the method may include extracting a set of features from an input data received from a plurality of data sources at predefined time intervals and combining the set of features from each of the plurality of data sources to obtain a multimodal data. The method may further include processing the multimodal data through an ML model to generate a label for the multimodal data. The label is generated based on a confidence score of the ML model. The label is one of a true value that corresponds to an instance of truth or a false value that corresponds to an instance of a lie.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: April 26, 2022
    Assignee: Wipro Limited
    Inventors: Vivek Kumar Varma Nadimpalli, Gopichand Agnihotram
  • Patent number: 11315219
    Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: April 26, 2022
    Assignee: Snap Inc.
    Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
  • Patent number: 11308365
    Abstract: An image classification system is provided for determining a likely classification of an image using multiple machine learning models that share a base machine learning model. The image classification system may be a browser-based system on a user computing device that obtains multiple machine learning models over a network from a remote system once, stores the models locally in the image classification system, and uses the models multiple times without needing to subsequently request the machine learning models again from the remote system. The image classification system may therefore determine a likely classification associated with an image by running the machine learning models on a user computing device.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: April 19, 2022
    Assignee: Expedia, Inc.
    Inventors: Li Wen, Zhanpeng Huo, Jingya Jiang
  • Patent number: 11299169
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to determine six degree of freedom (DoF) data for a first object in a first video image and generate a synthetic video image corresponding to the first video image including a synthetic object and a synthetic object label based on the six DoF data. The instructions can include further instructions to train a generative adversarial network (GAN) based on a paired first video image and a synthetic video image to generate a modified synthetic image and train a deep neural network to locate the synthetic object in the modified synthetic video image based on the synthetic object. The instructions can include further instructions to download the trained deep neural network to a computing device in a vehicle.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: April 12, 2022
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Ashley Elizabeth Micks
  • Patent number: 11301723
    Abstract: A data generation device includes one or more processors. The processors input input data into a neural network and obtain an inference result of the neural network. The processors calculate a first loss and a second loss. The first loss becomes smaller in value as a degree of matching between the inference result and a target label becomes larger. The target label indicates a correct answer of the inference. The second loss is a loss based on a contribution degree to the inference result of a plurality of elements included in the input data and the target label. The processors update the input data based on the first loss and the second loss.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: April 12, 2022
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Shuhei Nitta
  • Patent number: 11295084
    Abstract: In an approach for detecting key messages for a video, a processor builds a role model based on data from one or more data sources, with an identification feature of each role in a video. A processor samples a plurality of frames of the video. A processor identifies a key object presented in the plurality of frames. The key object is a role in the video. A processor recognizes a movement scenario associated with the role. A processor dynamically updates the role model based on the movement scenario. A processor identifies a role name based on the movement scenario. A processor generates a description script associated with the movement scenario for the role. A processor outputs the description script.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: April 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: Wei Qin, Jing Jing Zhang, Xi Juan Men, Xiaoli Duan, Yue Chen, Dong Jun Zong
  • Patent number: 11295240
    Abstract: The invention includes systems and methods, including computer programs encoded on computer storage media, for classifying inputs as belonging to a known or unknown class as well as for updating the system to improve its performance. In one system, there is a desired feature representation for unknown inputs, e.g., a zero vector, and the system includes transforming input data to produce a feature representation, using that to compute dissimilarity with the desired feature representation for unknown inputs and combining dissimilarity with other transformations of the feature representation to determine if the input is from a specific known class or if it is unknown. In one embodiment, the system transforms the magnitude of the feature representation into a confidence score.
    Type: Grant
    Filed: June 15, 2019
    Date of Patent: April 5, 2022
    Inventor: Terrance E Boult
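    Sketch (illustrative, not from the patent): with the zero vector as the desired feature representation for unknown inputs, dissimilarity reduces to the feature magnitude, which can be squashed into a confidence that the input belongs to some known class; the exponential squashing used here is an assumption.
      import numpy as np

      def known_confidence(feature: np.ndarray, scale: float = 1.0) -> float:
          """Transform the magnitude of a feature representation into a confidence score."""
          magnitude = np.linalg.norm(feature)              # dissimilarity from the zero vector
          return float(1.0 - np.exp(-scale * magnitude))   # near 0 -> "unknown", near 1 -> "known"

      print(known_confidence(np.zeros(8)))                 # 0.0: looks unknown
      print(known_confidence(np.array([3.0, 4.0])))        # close to 1: looks like a known class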
  • Patent number: 11295412
    Abstract: An image processing apparatus applies an image to a first learning network model to optimize the edges of the image, applies the image to a second learning network model to optimize the texture of the image, and applies a first weight to the first image and a second weight to the second image based on information on the edge areas and the texture areas of the image to acquire an output image.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: April 5, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Cheon Lee, Donghyun Kim, Yongsup Park, Jaeyeon Park, Iljun Ahn, Hyunseung Lee, Taegyoung Ahn, Youngsu Moon, Tammy Lee
  • Patent number: 11288822
    Abstract: The present subject matter refers to a method for training image-alignment procedures in a computing environment. The method comprises communicating one or more images of an object to a user and receiving a plurality of user-selected zones within said one or more images through a user-interface. An augmented data-set is generated based on said one or more images comprising the user-selected zones, wherein such augmented data set comprises a plurality of additional images defining variants of said one or more communicated images. Thereafter, a machine-learning based image alignment is trained based on at-least one of the augmented data set and the communicated images.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: March 29, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Xibeijia Guan, Chandra Suwandi Wijaya, Vasileios Vonikakis, Ariel Beck
  • Patent number: 11289175
    Abstract: A method is disclosed. The method models a plurality of visual cortex neurons, models one or more connections between at least two visual cortex neurons in the plurality of visual cortex neurons, assigns a synaptic weight value to at least one of the one or more connections, simulates application of one or more electrical signals to at least one visual cortex neuron in the plurality of visual cortex neurons, adjusts the synaptic weight value assigned to at least one of the one or more connections based on the one or more electrical signals, and generates an orientation map of the plurality of visual cortex neurons based on the adjusted synaptic weight values.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: March 29, 2022
    Assignee: HRL Laboratories, LLC
    Inventors: Narayan Srinivasa, Qin Jiang
  • Patent number: 11281928
    Abstract: Disclosed herein are system, method, and computer program product embodiments for querying document terms and identifying target data from documents. In an embodiment, a document processing system may receive a document and a query string. The document processing system may perform optical character recognition to obtain character information and positioning information for the characters of the document. The document processing system may generate a two-dimensional character grid for the document. The document processing system may apply a convolutional neural network to the character grid and the query string to identify target data from the document corresponding to the query string. The convolutional neural network may then produce a segmentation mask and/or bounding boxes to identify the targeted data.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: March 22, 2022
    Assignee: SAP SE
    Inventors: Johannes Hoehne, Christian Reisswig
  • Patent number: 11276000
    Abstract: An image analysis method for analyzing an image of a tissue collected from a subject using a deep learning algorithm of a neural network structure. The image analysis method includes generating analysis data from the analysis target image that includes the tissue to be analyzed, inputting the analysis data to a deep learning algorithm, and generating data indicating a layer structure configuring a tissue in the analysis target image by the deep learning algorithm.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: March 15, 2022
    Assignee: SYSMEX CORPORATION
    Inventors: Kohei Yamada, Kazumi Hakamada, Yuki Aihara, Kanako Masumoto, Yosuke Sekiguchi, Krupali Jain
  • Patent number: 11275381
    Abstract: A trained model is described in the form of a Bayesian neural network (BNN) which provides a quantification of its inference uncertainty during use and which is trained using marginal likelihood maximization. A Probably Approximately Correct (PAC) bound may be used in the training to incorporate prior knowledge and to improve training stability even when the network architecture is deep.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: March 15, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Melih Kandemir, Manuel Haussmann
  • Patent number: 11270105
    Abstract: A method and system for extracting information from a drawing. The method includes classifying nodes in the drawing, extracting attributes from the nodes, determining whether there are errors in the node attributes, and removing the nodes from the drawing. The method also includes identifying edges in the drawing, extracting attributes from the edges, and determining whether there are errors in the edge attributes. The system includes at least one processing component, at least one memory component, an identification component, an extraction component, and a correction component. The identification component is configured to classify nodes in the drawing, remove the nodes from the drawing, and identify edges in the drawing. The extraction component is configured to extract attributes from the nodes and edges. The correction component is configured to determine whether there are errors in the extracted attributes.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: March 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Mahmood Saajan Ashek, Raghu Kiran Ganti, Shreeranjani Srirangamsridharan, Mudhakar Srivatsa, Asif Sharif, Ramey Ghabros, Somesh Jha, Mojdeh Sayari Nejad, Mohammad Siddiqui, Yusuf Mai
  • Patent number: 11270203
    Abstract: There is provided a method and an apparatus for training a neural network capable of improving the performance of the neural network by performing intelligent normalization according to a target task of the neural network. The method according to some embodiments of the present disclosure includes transforming the output data into first normalized data using a first normalization technique, transforming the output data into second normalized data using a second normalization technique and generating target normalized data by aggregating the first normalized data and the second normalized data based on a learnable parameter. At this time, the rate at which the first normalized data is applied in the target normalized data is adjusted by the learnable parameter so that the intelligent normalization according to the target task can be performed, and the performance of the neural network can be improved.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: March 8, 2022
    Assignee: LUNIT INC.
    Inventors: Hyeon Seob Nam, Hyo Eun Kim
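    Sketch (illustrative, not from the patent): two normalization techniques aggregated with a single learnable parameter that controls the rate at which the first one is applied; batch and instance normalization are used here only as example techniques.
      import torch
      import torch.nn as nn

      class SwitchableNorm2d(nn.Module):
          """Aggregate two normalizations of the same output data with a learnable mix."""
          def __init__(self, channels):
              super().__init__()
              self.norm1 = nn.BatchNorm2d(channels, affine=False)      # first normalization technique (assumed)
              self.norm2 = nn.InstanceNorm2d(channels, affine=False)   # second normalization technique (assumed)
              self.logit = nn.Parameter(torch.zeros(1))                # learnable parameter controlling the mix

          def forward(self, x):
              w = torch.sigmoid(self.logit)             # rate at which the first normalized data is applied
              return w * self.norm1(x) + (1 - w) * self.norm2(x)

      x = torch.randn(4, 8, 16, 16)
      print(SwitchableNorm2d(8)(x).shape)               # torch.Size([4, 8, 16, 16])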
  • Patent number: 11263747
    Abstract: An example method includes generating, using a multi-scale block of a convolutional neural network (CNN), a first output image based on an optical coherence tomography (OCT) reflectance image of a retina and an OCT angiography (OCTA) image of the retina. The method further includes generating, using an encoder of the CNN, at least one second output image based on the first output image and generating, using a decoder of the CNN, a third output image based on the at least one second output image. An avascular map is generated based on the third output image. The avascular map indicates at least one avascular area of the retina depicted in the OCTA image.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: March 1, 2022
    Assignee: Oregon Health & Science University
    Inventors: Yali Jia, Yukun Guo
  • Patent number: 11265549
    Abstract: A method for image decoding performed by a decoding apparatus, according to the present disclosure, comprises the steps of: obtaining residual information for a current block from a bitstream; deriving a prediction sample for the current block; deriving a residual sample for the current block on the basis of the residual information; deriving a reconstructed picture on the basis of the prediction sample and the residual sample; and performing filtering on the reconstructed picture on the basis of a convolution neural network (CNN).
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: March 1, 2022
    Assignee: LG Electronics Inc.
    Inventors: Mehdi Salehifar, Seunghwan Kim