Neural Networks Patents (Class 382/156)
  • Patent number: 11676049
    Abstract: The present disclosure relates to systems and methods for updating static machine-learning models (e.g., a Doc2Vec model) without needing to retrain the models. More particularly, the present disclosure relates to systems and methods that can be used to add new data to a base model by training a client model using the new data, and transforming the vector space of the client model to align with the vector space of the base model. The base model can then be updated using the realigned client model. As such, the base model can be updated with the new data without needing to retrain the base model, which can be burdensome to processing resources, insecure, and time consuming.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: June 13, 2023
    Assignee: Oracle International Corporation
    Inventors: Guodong Chen, Shekhar Agrawal
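The abstract above does not name the alignment transform; a minimal sketch of one standard choice, orthogonal Procrustes alignment of a client embedding space to a base space (all array names illustrative, not from the patent), is:

```python
import numpy as np

def align_client_to_base(base_vecs, client_vecs):
    """Orthogonal Procrustes alignment of a client embedding space to a base space.

    base_vecs, client_vecs: (n, d) arrays of embeddings for the same n anchor
    words present in both models. Returns a (d, d) orthogonal matrix W such that
    client_vecs @ W approximates base_vecs.
    """
    # Minimize ||client @ W - base||_F subject to W being orthogonal.
    u, _, vt = np.linalg.svd(client_vecs.T @ base_vecs)
    return u @ vt

# Hypothetical usage: realign vectors that exist only in the client model so
# they can be added to the base model without retraining it.
rng = np.random.default_rng(0)
base_anchor = rng.normal(size=(100, 50))
Q = np.linalg.qr(rng.normal(size=(50, 50)))[0]        # an arbitrary orthogonal transform
client_anchor = base_anchor @ Q                       # client space = transformed base space
W = align_client_to_base(base_anchor, client_anchor)
new_word_vecs = rng.normal(size=(10, 50))             # vectors present only in the client model
realigned_new = new_word_vecs @ W                     # now expressed in the base vector space
```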
  • Patent number: 11663823
    Abstract: Dual-modality relation networks for audio-visual event localization can be provided. A video feed for audio-visual event localization can be received. Based on a combination of extracted audio features and video features of the video feed, informative features and regions in the video feed can be determined by running a first neural network. Based on the informative features and regions in the video feed determined by the first neural network, relation-aware video features can be determined by running a second neural network. Based on the informative features and regions in the video feed, relation-aware audio features can be determined by running a third neural network. A dual-modality representation can be obtained based on the relation-aware video features and the relation-aware audio features by running a fourth neural network. The dual-modality representation can be input to a classifier to identify an audio-visual event in the video feed.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: May 30, 2023
    Assignee: International Business Machines Corporation
    Inventors: Chuang Gan, Dakuo Wang, Yang Zhang, Bo Wu, Xiaoxiao Guo
  • Patent number: 11663526
    Abstract: A document processing system and method for performing document classification by machine learning include an input module, a processing module, and at least one storage module preconfigured with a classification folder matching a code. Upon completion of a first-instance model construction procedure, the input module receives a document image. The processing module compares the document image with machine learning model information to generate a computation result and stores the document image in the classification folder according to the computation result. Therefore, classification of document images is automated according to the code of the corresponding classification folder, thereby enhancing the accuracy and efficiency of document classification.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: May 30, 2023
    Assignee: AVISION INC.
    Inventor: Chun-Chieh Liao
  • Patent number: 11657288
    Abstract: Disclosed embodiments provide for deep convolutional neural network computing. The convolutional computing is accomplished using a multilayered analysis engine. The multilayered analysis engine includes a deep learning network using a convolutional neural network (CNN). The multilayered analysis engine is used to analyze multiple images in a supervised or unsupervised learning process. Multiple images are provided to the multilayered analysis engine, and the multilayered analysis engine is trained with those images. A subject image is then evaluated by the multilayered analysis engine. The evaluation is accomplished by analyzing pixels within the subject image to identify a facial portion and identifying a facial expression based on the facial portion. The results of the evaluation are output. The multilayered analysis engine is retrained using a second plurality of images.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: May 23, 2023
    Assignee: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Daniel McDuff
  • Patent number: 11651206
    Abstract: Embodiments of the present invention are directed to a computer-implemented method for multiscale representation of input data. A non-limiting example of the computer-implemented method includes a processor receiving an original input. The processor downsamples the original input into a downscaled input. The processor runs a first convolutional neural network (“CNN”) on the downscaled input. The processor runs a second CNN on the original input, where the second CNN has fewer layers than the first CNN. The processor merges the output of the first CNN with the output of the second CNN and provides a result following the merging of the outputs.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: May 16, 2023
    Assignee: International Business Machines Corporation
    Inventors: Quanfu Fan, Richard Chen
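A minimal sketch of the two-branch arrangement the abstract describes: a deeper CNN on a downscaled copy of the input, a shallower CNN on the original, and their outputs merged. Layer counts and widths are illustrative assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoScaleNet(nn.Module):
    """Deeper CNN on a downscaled copy of the input, shallower CNN on the
    original, outputs merged. Layer counts and widths are illustrative."""
    def __init__(self, channels=3, width=16):
        super().__init__()
        self.deep = nn.Sequential(                    # first CNN: more layers, low-resolution input
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.shallow = nn.Sequential(                 # second CNN: fewer layers, original input
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
        )
        self.merge = nn.Conv2d(2 * width, width, 1)

    def forward(self, x):
        down = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        deep_out = self.deep(down)
        deep_out = F.interpolate(deep_out, size=x.shape[-2:], mode="bilinear", align_corners=False)
        shallow_out = self.shallow(x)
        return self.merge(torch.cat([deep_out, shallow_out], dim=1))  # merge the two outputs

result = TwoScaleNet()(torch.randn(1, 3, 64, 64))
```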
  • Patent number: 11645839
    Abstract: An apparatus and method perform lane feature detection from an image according to predetermined path geometry. An image including at least one path is received. The image may be an aerial image. Map data corresponding to the at least one path and defining the predetermined path geometry is selected. The image is modified according to the selected map data, including the predetermined path geometry. A lane feature prediction model is generated or configured based on the modified image. A subsequent image is provided to the lane feature prediction model for a prediction of at least one lane feature.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: May 9, 2023
    Assignee: HERE Global B.V.
    Inventor: Abhilshit Soni
  • Patent number: 11645784
    Abstract: In various example embodiments, relevant changes between 3D models of a scene are detected and classified by transforming the 3D models into point clouds and applying a deep learning model to the point clouds. The model may employ a Siamese arrangement of sparse lattice networks, each including a number of modified BCLs. The sparse lattice networks may each take a point cloud as input and extract features in 3D space to provide a primary output with features in 3D space and an intermediate output with features in lattice space. The intermediate outputs from both sparse lattice networks may be compared using a lattice convolution layer. The results may be projected into the 3D space of the point clouds using a slice process and concatenated to the primary outputs of the sparse lattice networks. Each concatenated output may be subject to a convolutional network to detect and classify relevant changes.
    Type: Grant
    Filed: August 5, 2021
    Date of Patent: May 9, 2023
    Assignee: Bentley Systems, Incorporated
    Inventors: Christian Xu, Renaud Keriven
  • Patent number: 11645502
    Abstract: A model calculation unit for calculating an RBF model is described, including a hard-wired processor core designed as hardware for calculating a fixedly predefined processing algorithm in coupled functional blocks. The processor core is designed to calculate an output variable for an RBF model as a function of one or multiple input variables, of nodes V[j,k], of length scales L[j,k], and of weighting parameters p3[j,k] predefined for each node. The output variable is formed as a sum of a value calculated for each node V[j,k], each value resulting from the product of the weighting parameter p3[j,k] assigned to the particular node V[j,k] and the result of an exponential function of a value resulting from the square distance of the input variable vector from the particular node V[j,k], weighted by the length scales L[j,k], the length scales L[j,k] being provided separately for each of the nodes as local length scales.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: May 9, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andre Guntoro, Ernst Kloppenburg, Heiner Markert, Holger Ulmer
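Reading the abstract as a Gaussian RBF with per-node local length scales, one plausible form of the computation (the exact sign and scaling conventions are not given in the abstract) is:

```python
import numpy as np

def rbf_model(x, V, L, p3):
    """y = sum_j p3[j] * exp(-sum_k L[j,k] * (x[k] - V[j,k])**2)

    x  : (d,)   input variable vector
    V  : (n, d) nodes
    L  : (n, d) local length scales, provided separately for each node
    p3 : (n,)   weighting parameter predefined for each node
    """
    sq_dist = np.sum(L * (x[None, :] - V) ** 2, axis=1)   # length-scale-weighted square distances
    return np.sum(p3 * np.exp(-sq_dist))                  # weighted sum over all nodes

y = rbf_model(np.zeros(3), np.random.rand(5, 3), np.ones((5, 3)), np.random.rand(5))
```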
  • Patent number: 11625581
    Abstract: A method in a hardware implementation of a Convolutional Neural Network (CNN), includes receiving a first subset of data having at least a portion of weight data and at least a portion of input data for a CNN layer and performing, using at least one convolution engine, a convolution of the first subset of data to generate a first partial result; receiving a second subset of data comprising at least a portion of weight data and at least a portion of input data for the CNN layer and performing, using the at least one convolution engine, a convolution of the second subset of data to generate a second partial result; and combining the first partial result and the second partial result to generate at least a portion of convolved data for a layer of the CNN.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: April 11, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Clifford Gibson, James Imber
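Because convolution is linear in its input channels, partial convolutions over subsets of the weight and input data can be combined into the full result; a small numerical check of that property (shapes are illustrative, not from the patent):

```python
import torch
import torch.nn.functional as F

# Convolution is linear in its input channels, so partial results computed from
# subsets of the weight data and input data can be combined into the full result.
x = torch.randn(1, 8, 16, 16)                     # input data for one CNN layer
w = torch.randn(4, 8, 3, 3)                       # weight data: (out_ch, in_ch, kH, kW)

full = F.conv2d(x, w, padding=1)                  # reference: convolve everything at once

part1 = F.conv2d(x[:, :4], w[:, :4], padding=1)   # first subset of inputs and weights
part2 = F.conv2d(x[:, 4:], w[:, 4:], padding=1)   # second subset
combined = part1 + part2                          # combine the partial results

assert torch.allclose(full, combined, atol=1e-4)
```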
  • Patent number: 11625822
    Abstract: Systems and methods are described for determining quality attributes of raw textile material.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: April 11, 2023
    Inventor: Ashok Oswal
  • Patent number: 11625911
    Abstract: An image recognition neural network processing method includes: a compiler segments an image recognition neural network to obtain tiles of at least one network layer group; classifies the tiles of each network layer group; and, for each network layer group, generates an assembly code and tile information of the network layer group according to a tiling result and a classification result of the network layer group. Tiles of the same type correspond to the same assembly function, and each assembly code includes a code segment of the assembly function corresponding to each type of tile. The tile information includes block information of each tile in the network layer group and is used to instruct a neural network processor to, according to the block information therein, invoke the corresponding code segment to process image data of the corresponding tile when a target image is identified by the image recognition neural network.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: April 11, 2023
    Assignee: Shenzhen Intellifusion Technologies Co., Ltd.
    Inventor: Qingxin Cao
  • Patent number: 11600078
    Abstract: An information processing apparatus recognizes a target within an actual image by executing processing of a neural network. The information processing apparatus obtains intermediate outputs which correspond to the actual image and a computer graphics (CG) image and which are from a hidden layer when each of the actual image and the CG image has been separately input to the neural network, and causes the neural network to perform learning with use of an evaluation value based on a first evaluation function and a second evaluation function, the first evaluation function causing the evaluation value to decrease as a difference between a recognition result and training data decreases, the second evaluation function causing the evaluation value to decrease as a difference between the intermediate outputs corresponding to the actual image and the CG image decreases.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: March 7, 2023
    Assignee: HONDA MOTOR CO., LTD.
    Inventor: Yuji Yasui
  • Patent number: 11594011
    Abstract: In one embodiment, a method for extracting point cloud features for use in localizing an autonomous driving vehicle (ADV) includes selecting a first set of keypoints from an online point cloud, the online point cloud generated by a LiDAR device on the ADV for a predicted pose of the ADV; and extracting a first set of feature descriptors from the first set of keypoints using a feature learning neural network running on the ADV. The method further includes locating a second set of keypoints on a pre-built point cloud map, each keypoint of the second set of keypoints corresponding to a keypoint of the first set of keypoints; extracting a second set of feature descriptors from the pre-built point cloud map; and estimating a position and orientation of the ADV based on the first set of feature descriptors, the second set of feature descriptors, and the predicted pose of the ADV.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: February 28, 2023
    Assignees: BAIDU USA LLC, BAIDU.COM TIMES TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Weixin Lu, Yao Zhou, Guowei Wan, Shenhua Hou, Shiyu Song
  • Patent number: 11593662
    Abstract: A method that may include (a) feeding multiple tagged media units to a neural network to provide, from one or more intermediate layers of the neural network, multiple feature vectors of segments of the media units; wherein the neural network was trained to detect current objects within media units; wherein the new category differs from each one of the current categories; wherein at least one media unit comprises at least one segment that is tagged as including the new object; (b) calculating similarities between the multiple feature vectors; (c) clustering the multiple feature vectors to feature vector clusters, based on the similarities; and (d) finding, out of the feature vector clusters, a new feature vector cluster that identifies media unit segments that comprise the new object.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: February 28, 2023
    Assignee: AUTOBRAINS TECHNOLOGIES LTD
    Inventors: Igal Raichelgauz, Tomer Livne, Adrian Kaho Chan
  • Patent number: 11593614
    Abstract: A neural network system is configured to receive an input image and to generate a classification output for the input image. The neural network system includes: a separable convolution subnetwork comprising a plurality of separable convolutional neural network layers arranged in a stack one after the other, in which each separable convolutional neural network layer is configured to: separately apply both a depthwise convolution and a pointwise convolution during processing of an input to the separable convolutional neural network layer to generate a layer output.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: February 28, 2023
    Assignee: Google LLC
    Inventors: Francois Chollet, Andrew Gerald Howard
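A minimal sketch of a single separable convolutional layer as described in the abstract, i.e. a depthwise convolution followed by a pointwise (1x1) convolution; channel counts are illustrative:

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """One separable convolutional layer: a depthwise convolution (one spatial
    filter per channel) followed by a pointwise 1x1 convolution across channels."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))   # layer output

layer_output = SeparableConv2d(32, 64)(torch.randn(1, 32, 56, 56))
```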
  • Patent number: 11586886
    Abstract: A processor-implemented neural network processing method includes: obtaining a kernel bit-serial block corresponding to first data of a weight kernel of a layer in a neural network; generating a feature map bit-serial block based on second data of one or more input feature maps of the layer; and generating at least a portion of an output feature map by performing a convolution operation of the layer using a bitwise operation between the kernel bit-serial block and the feature map bit-serial block.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: February 21, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jinwoo Son, Changyong Son, Seohyung Lee, Sangil Jung, Chang Kyu Choi, Jaejoon Han
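One way to realize a convolution through bitwise operations on bit-serial data is to decompose both operands into bit planes and accumulate AND/popcount results weighted by powers of two; the sketch below shows the idea for a single dot product (the patent's exact block layout is not published in the abstract):

```python
import numpy as np

def bit_planes(x, bits):
    """Decompose a non-negative integer array into its bit planes (LSB first)."""
    return [((x >> b) & 1).astype(np.uint8) for b in range(bits)]

def bit_serial_dot(a, w, a_bits=4, w_bits=4):
    """Dot product of two unsigned-integer vectors computed only with bitwise
    AND and popcount over their bit planes, weighted by powers of two."""
    total = 0
    for i, ap in enumerate(bit_planes(a, a_bits)):
        for j, wp in enumerate(bit_planes(w, w_bits)):
            total += (1 << (i + j)) * int(np.count_nonzero(ap & wp))
    return total

a = np.array([3, 7, 1, 5], dtype=np.uint8)   # feature map values
w = np.array([2, 4, 6, 1], dtype=np.uint8)   # kernel values
assert bit_serial_dot(a, w) == int(a.astype(int) @ w.astype(int))
```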
  • Patent number: 11574170
    Abstract: In one embodiment, an image processing system includes a memory and processing circuitry. The memory is configured to store a predetermined program. The processing circuitry is configured, by executing the predetermined program, to perform processing on an input image by exploiting a neural network having an input layer, an output layer, and an intermediate layer provided between the input layer and the output layer, the input image being inputted to the input layer, and adjust an internal parameter based on data related to the input image, while performing the processing on the input image after training of the neural network, the internal parameter being at least one internal parameter of at least one node included in the intermediate layer, and the internal parameter having been calculated by the training of the neural network.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: February 7, 2023
    Assignees: KABUSHIKI KAISHA TOSHIBA, CANON MEDICAL SYSTEMS CORPORATION
    Inventors: Kenzo Isogawa, Tomoyuki Takeguchi
  • Patent number: 11569056
    Abstract: Methods and apparatuses are disclosed herein for parameter estimation for metrology. An example method at least includes optimizing, using a parameter estimation network, a parameter set to fit a feature in an image based on one or more models of the feature, the parameter set defining the one or more models, and providing metrology data of the feature in the image based on the optimized parameter set.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: January 31, 2023
    Assignee: FEI Company
    Inventors: Brad Larson, John Flanagan, Maurice Peemen
  • Patent number: 11564590
    Abstract: Techniques for generating magnetic resonance (MR) images of a subject from MR data obtained by a magnetic resonance imaging (MRI) system, the techniques include: obtaining input MR spatial frequency data obtained by imaging the subject using the MRI system; generating an MR image of the subject from the input MR spatial frequency data using a neural network model comprising: a pre-reconstruction neural network configured to process the input MR spatial frequency data; a reconstruction neural network configured to generate at least one initial image of the subject from output of the pre-reconstruction neural network; and a post-reconstruction neural network configured to generate the MR image of the subject from the at least one initial image of the subject.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: January 31, 2023
    Assignee: Hyperfine Operations, Inc.
    Inventors: Jo Schlemper, Seyed Sadegh Mohseni Salehi, Michal Sofka, Prantik Kundu, Carole Lazarus, Hadrien A. Dyvorne, Rafael O'Halloran, Laura Sacolick
  • Patent number: 11568303
    Abstract: An electronic apparatus is provided. The electronic apparatus includes a first memory configured to store a first artificial intelligence (AI) model including a plurality of first elements and a processor configured to include a second memory. The second memory is configured to store a second AI model including a plurality of second elements. The processor is configured to acquire output data from input data based on the second AI model. The first AI model is trained through an AI algorithm. Each of the plurality of second elements includes at least one higher bit of a plurality of bits included in a respective one of the plurality of first elements.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: January 31, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kyoung-hoon Kim, Young-hwan Park, Dong-soo Lee, Dae-hyun Kim, Han-su Cho, Hyun-jung Kim
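A minimal sketch of deriving second-model elements from the higher bits of quantized first-model elements; the bit widths (8 stored, 4 kept) are illustrative assumptions:

```python
import numpy as np

def higher_bits(first_elements, total_bits=8, kept_bits=4):
    """Derive second-model elements from the `kept_bits` most significant bits
    of quantized first-model elements (lower bits are zeroed out)."""
    shift = total_bits - kept_bits
    return (first_elements >> shift) << shift

first_model = np.array([0b01101101, 0b11110001, 0b00011111], dtype=np.uint8)
second_model = higher_bits(first_model)   # -> 0b01100000, 0b11110000, 0b00010000
```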
  • Patent number: 11562181
    Abstract: In one embodiment, an apparatus comprises a memory and a processor. The memory is to store visual data associated with a visual representation captured by one or more sensors. The processor is to: obtain the visual data associated with the visual representation captured by the one or more sensors, wherein the visual data comprises uncompressed visual data or compressed visual data; process the visual data using a convolutional neural network (CNN), wherein the CNN comprises a plurality of layers, wherein the plurality of layers comprises a plurality of filters, and wherein the plurality of filters comprises one or more pixel-domain filters to perform processing associated with uncompressed data and one or more compressed-domain filters to perform processing associated with compressed data; and classify the visual data based on an output of the CNN.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: January 24, 2023
    Assignee: Intel Corporation
    Inventors: Yen-Kuang Chen, Shao-Wen Yang, Ibrahima J. Ndiour, Yiting Liao, Vallabhajosyula S. Somayazulu, Omesh Tickoo, Srenivas Varadarajan
  • Patent number: 11556732
    Abstract: A method for extracting rivet points in a large-scale three-dimensional point cloud based on deep learning is provided. A geometric attribute scalar of a point cloud of aircraft skin is calculated point by point, and the scalar attribute domain is mapped to a two-dimensional image to obtain a two-dimensional attribute scalar map of the point cloud. The 2D attribute scalar map is processed using a convolutional neural network, and the probability that each point belongs to a rivet point is calculated. The rivet point cloud is divided by applying a threshold to the probability, and the points belonging to the same rivet are clustered from the divided rivet point cloud using Euclidean clustering.
    Type: Grant
    Filed: February 7, 2021
    Date of Patent: January 17, 2023
    Assignee: NANJING UNIVERSITY OF AERONAUTICS AND ASTRONAUTICS
    Inventors: Jun Wang, Kun Long, Qian Xie, Dening Lu
  • Patent number: 11551090
    Abstract: The present disclosure relates to a system and method for image processing. In some embodiments, an exemplary image processing method includes: receiving an image; compressing, with a compression neural network, the image into a compressed representation; and performing, with a processing neural network, a machine learning task on the compressed representation to generate a learning result. The compression neural network and the processing neural network are jointly trained.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: January 10, 2023
    Assignee: Alibaba Group Holding Limited
    Inventors: Sicheng Li, Zihao Liu, Yen-Kuang Chen
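A minimal sketch of the pipeline shape described above: a compression network producing a compact representation and a processing network that performs the learning task on it, trained end to end; the layer choices are illustrative, not the patent's:

```python
import torch
import torch.nn as nn

class CompressThenClassify(nn.Module):
    """Compression network producing a compact representation, followed by a
    processing network that performs the learning task on that representation.
    Training end to end on the task loss is what makes them jointly trained."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.compress = nn.Sequential(                       # image -> compressed representation
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1),
        )
        self.process = nn.Sequential(                        # task head on the compressed representation
            nn.Flatten(), nn.Linear(8 * 8 * 8, num_classes),
        )

    def forward(self, x):                                    # x: (N, 3, 32, 32)
        return self.process(self.compress(x))

learning_result = CompressThenClassify()(torch.randn(4, 3, 32, 32))
```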
  • Patent number: 11551116
    Abstract: A signal analysis method is described. The signal analysis method comprises the following steps: An input signal function associated with a time domain is obtained. A window function is determined based on the input signal function via an artificial intelligence module. The artificial intelligence module comprises at least one computing parameter, wherein the window function is determined based on the at least one computing parameter. The input signal function and the window function are convolved, thereby generating a convolved signal. Further, a signal analysis module is described.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: January 10, 2023
    Assignee: Rohde & Schwarz GmbH & Co. KG
    Inventors: Baris Guezelarslan, Dominik Hettich
  • Patent number: 11543830
    Abstract: An unsupervised real-to-virtual domain unification model for highway driving, or DU-drive, employs a conditional generative adversarial network to transform driving images in a real domain to their canonical representations in the virtual domain, from which vehicle control commands are predicted. In the case where there are multiple real datasets, a real-to-virtual generator may be independently trained for each real domain and a global predictor could be trained with data from multiple real domains. Qualitative experiment results show this model can effectively transform real images to the virtual domain while keeping only the minimal sufficient information, and quantitative results verify that such a canonical representation can eliminate domain shift and boost the performance of the control command prediction task.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: January 3, 2023
    Assignee: PETUUM, INC.
    Inventors: Xiaodan Liang, Eric P Xing
  • Patent number: 11531840
    Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
    Type: Grant
    Filed: July 5, 2022
    Date of Patent: December 20, 2022
    Assignee: Vizit Labs, Inc.
    Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
  • Patent number: 11521131
    Abstract: A machine learning model can be trained using a first set of degraded images for each of a plurality of combinations and corresponding reference images, where a number of degraded images in the first set corresponding to a particular combination of the plurality of combinations is selected in accordance with a probability value associated with the particular combination. A validation process can be used to determine a loss value for each of the plurality of combinations of degradations. Updates to the probability values associated with the plurality of combinations can be calculated based on the loss values. The machine learning model can be updated using a second set of degraded images for each of the plurality of combinations, and the corresponding reference images, where a number of degraded images in the second set corresponding to the particular combination is selected based on the updated probability value.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: December 6, 2022
    Assignee: Jumio Corporation
    Inventors: Sai Narsi Reddy Donthi Reddy, Qiqin Dai
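One plausible reading of the sampling loop above, with hypothetical degradation combinations and a softmax-style update that moves probability toward combinations with higher validation loss (the patent does not publish its exact update rule):

```python
import numpy as np

combos = ["blur", "noise", "blur+noise", "compression"]   # hypothetical degradation combinations
probs = np.full(len(combos), 1 / len(combos))              # start with uniform probability values

def images_per_combo(probs, budget=1000):
    """Number of degraded training images drawn per combination, proportional
    to its current probability value."""
    return np.round(probs * budget).astype(int)

def update_probs(val_losses, temperature=1.0):
    """Shift probability mass toward combinations with higher validation loss,
    so the next training round samples them more often."""
    weights = np.exp(np.asarray(val_losses) / temperature)
    return weights / weights.sum()

first_set_counts = images_per_combo(probs)                  # first set of degraded images
probs = update_probs(val_losses=[0.12, 0.30, 0.55, 0.20])   # loss per combination from validation
second_set_counts = images_per_combo(probs)                 # second set of degraded images
```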
  • Patent number: 11509795
    Abstract: An auto-rotation module having a single-layer neural network on a user device can convert a document image to a monochrome image having black and white pixels and segment the monochrome image into bounding boxes, each bounding box defining a connected segment of black pixels in the monochrome image. The auto-rotation module can determine textual snippets from the bounding boxes and prepare them into input images for the single-layer neural network. The single-layer neural network is trained to process each input image, recognize a correct orientation, and output a set of results for each input image. Each result indicates a probability associated with a particular orientation. The auto-rotation module can examine the results, determine what degree of rotation is needed to achieve a correct orientation of the document image, and automatically rotate the document image by the degree of rotation needed to achieve the correct orientation of the document image.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: November 22, 2022
    Assignee: OPEN TEXT SA ULC
    Inventor: Christopher Dale Lund
  • Patent number: 11498500
    Abstract: An apparatus includes a capture device and a processor. The capture device may be configured to generate a plurality of video frames. The processor may be configured to perform operations to detect objects in the video frames, detect users based on the objects detected in the video frames, determine a comfort profile for the users and select a reaction to adjust vehicle components according to the comfort profile of the detected users. The comfort profile may be determined in response to characteristics of the users. The characteristics of the users may be determined by performing the operations on each of the users. The operations detect the objects by performing feature extraction based on weight values for each of a plurality of visual features that are associated with the objects extracted from the video frames. The weight values are determined by the processor analyzing training data prior to the feature extraction.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: November 15, 2022
    Assignee: Ambarella International LP
    Inventors: Shimon Pertsel, Patrick Martin
  • Patent number: 11481862
    Abstract: System and method for simultaneous object detection and semantic segmentation. The system includes a computing device. The computing device has a processor and a non-volatile memory storing computer executable code. The computer executable code, when executed at the processor, is configured to: receive an image of a scene; process the image using a neural network backbone to obtain a feature map; process the feature map using an object detection module to obtain object detection result of the image; and process the feature map using a semantic segmentation module to obtain semantic segmentation result of the image. The object detection module and the semantic segmentation module are trained using a same loss function comprising an object detection component and a semantic segmentation component.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: October 25, 2022
    Assignees: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY CO., LTD., JD.COM AMERICAN TECHNOLOGIES CORPORATION
    Inventors: Hongda Mao, Wei Xiang, Chumeng Lyu, Weidong Zhang
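A minimal sketch of a single loss function with an object detection component and a semantic segmentation component; the smooth-L1 box term and per-pixel cross-entropy are stand-ins, not the patent's exact components:

```python
import torch
import torch.nn.functional as F

def joint_loss(box_pred, box_target, seg_logits, seg_target, seg_weight=1.0):
    """Single loss with an object-detection component (smooth-L1 box regression
    as a stand-in) and a semantic-segmentation component (per-pixel
    cross-entropy), so both heads train the shared backbone together."""
    det_term = F.smooth_l1_loss(box_pred, box_target)
    seg_term = F.cross_entropy(seg_logits, seg_target)
    return det_term + seg_weight * seg_term

loss = joint_loss(torch.randn(8, 4), torch.randn(8, 4),
                  torch.randn(2, 21, 64, 64), torch.randint(0, 21, (2, 64, 64)))
```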
  • Patent number: 11474199
    Abstract: A method to mitigate radio frequency interference (RFI) in weather radar data may include computing ℓp norms of radials of weather radar data to construct an ℓp-norm profile of the weather radar data as a function of azimuth angle. The weather radar data may include Level 2 or higher weather radar data in polar format. The method may include determining that a given radial in the weather radar data is an RFI radial based on the ℓp-norm profile of the weather radar data. The method may include displaying an image from the weather radar data in which at least one of: the RFI radial is identified in the image as including RFI; or the RFI radial is omitted from the image.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: October 18, 2022
    Assignee: Vaisala, Inc.
    Inventors: Evan Ruzanski, Andrew Hastings Black
  • Patent number: 11475248
    Abstract: Acquiring labeled data can be a significant bottleneck in the development of machine learning models that are accurate and efficient enough to enable safety-critical applications, such as automated driving. The process of labeling of driving logs can be automated. Unlabeled real-world driving logs, which include data captured by one or more vehicle sensors, can be automatically labeled to generate one or more labeled real-world driving logs. The automatic labeling can include analysis-by-synthesis on the unlabeled real-world driving logs to generate simulated driving logs, which can include reconstructed driving scenes or portions thereof. The automatic labeling can further include simulation-to-real automatic labeling on the simulated driving logs and the unlabeled real-world driving logs to generate one or more labeled real-world driving logs. The automatically labeled real-world driving logs can be stored in one or more data stores for subsequent training, validation, evaluation, and/or model management.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: October 18, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Adrien David Gaidon, James J. Kuffner, Jr., Sudeep Pillai
  • Patent number: 11477464
    Abstract: Systems and techniques are described herein for processing video data using a neural network system. For instance, a process can include generating, by a first convolutional layer of an encoder sub-network of the neural network system, output values associated with a luminance channel of a frame. The process can include generating, by a second convolutional layer of the encoder sub-network, output values associated with at least one chrominance channel of the frame. The process can include generating a combined representation of the frame by combining the output values associated with the luminance channel of the frame and the output values associated with the at least one chrominance channel of the frame. The process can include generating encoded video data based on the combined representation of the frame.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: October 18, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Muhammed Zeyd Coban, Ankitesh Kumar Singh, Hilmi Enes Egilmez, Marta Karczewicz
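A minimal sketch of the encoder arrangement described above: one convolutional layer for the luminance plane, another for the chrominance planes, and a combined representation formed from both. The strides assume 4:2:0 chroma subsampling and are illustrative:

```python
import torch
import torch.nn as nn

class LumaChromaEncoder(nn.Module):
    """One convolutional layer for the luminance (Y) plane, another for the
    chrominance (UV) planes, and a combined representation formed from both."""
    def __init__(self, width=32):
        super().__init__()
        self.luma_conv = nn.Conv2d(1, width, 3, stride=2, padding=1)    # luminance channel
        self.chroma_conv = nn.Conv2d(2, width, 3, stride=1, padding=1)  # chrominance channels (half resolution in 4:2:0)
        self.fuse = nn.Conv2d(2 * width, width, 3, padding=1)           # combined representation

    def forward(self, y, uv):
        return self.fuse(torch.cat([self.luma_conv(y), self.chroma_conv(uv)], dim=1))

combined = LumaChromaEncoder()(torch.randn(1, 1, 128, 128), torch.randn(1, 2, 64, 64))
```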
  • Patent number: 11468266
    Abstract: A machine receives a large image having large image dimensions that exceed memory threshold dimensions. The large image includes metadata. The machine adjusts an orientation and a scaling of the large image based on the metadata. The machine divides the large image into a plurality of image tiles, each image tile having tile dimensions smaller than or equal to the memory threshold dimensions. The machine provides the plurality of image tiles to an artificial neural network. The machine identifies, using the artificial neural network, at least a portion of the target in at least one image tile. The machine identifies the target in the large image based on at least the portion of the target being identified in at least one image tile.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: October 11, 2022
    Assignee: Raytheon Company
    Inventors: Jonathan Goldstein, Stephen J. Raif, Philip A. Sallee, Jeffrey S. Klein, Steven A. Israel, Franklin Tanner, Shane A. Zabel, James Talamonti, Lisa A. Mccoy
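A minimal sketch of dividing a large image into tiles no larger than memory threshold dimensions before feeding them to a network; the threshold values are hypothetical:

```python
import numpy as np

def tile_image(image, max_h, max_w):
    """Divide a large image into tiles whose dimensions do not exceed the
    memory threshold dimensions, keeping each tile's top-left offset."""
    tiles = []
    h, w = image.shape[:2]
    for top in range(0, h, max_h):
        for left in range(0, w, max_w):
            tiles.append(((top, left), image[top:top + max_h, left:left + max_w]))
    return tiles

large = np.zeros((5000, 7000, 3), dtype=np.uint8)   # exceeds a 2048 x 2048 threshold
tiles = tile_image(large, 2048, 2048)               # each tile is fed to the neural network
```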
  • Patent number: 11467895
    Abstract: One or more computing devices, systems, and/or methods for classifier validation are provided. A set of in-sample examples are partitioned into a reduced in-sample set and a remaining in-sample set. The reduced in-sample set is processed using a set of classifiers. A subset of classifiers are identified as having error counts, over the reduced in-sample set, below a threshold number of errors. A training procedure is executed to select a classifier having a minimum error rate over the set of in-sample examples. If the classifier is within the subset of classifiers, then an out-of-sample error bound is determined for the classifier.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: October 11, 2022
    Assignee: YAHOO ASSETS LLC
    Inventors: Eric Theodore Bax, Natalie Bax
  • Patent number: 11455753
    Abstract: Systems and methods are disclosed for adjusting attributes of whole slide images, including stains therein. A portion of a whole slide image comprised of a plurality of pixels in a first color space and including one or more stains may be received as input. Based on an identified stain type of the stain(s), a machine-learned transformation associated with the stain type may be retrieved and applied to convert an identified subset of the pixels from the first to a second color space specific to the identified stain type. One or more attributes of the stain(s) may be adjusted in the second color space to generate a stain-adjusted subset of pixels, which are then converted back to the first color space using an inverse of the machine-learned transformation. A stain-adjusted portion of the whole slide image including at least the stain-adjusted subset of pixels may be provided as output.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: September 27, 2022
    Assignee: PAIGE.AI, Inc.
    Inventors: Navid Alemi, Christopher Kanan, Leo Grady
  • Patent number: 11455783
    Abstract: The present disclosure provides an image recognition method and apparatus, a device and a non-volatile computer storage medium. In embodiments of the present disclosure, it is feasible to obtain the to-be-recognized image of a designated size, extract the different-area image from the to-be-recognized image, and obtain the image feature of the different-area image according to the different-area image, so as to obtain the recognition result of the to-be-recognized image according to the image feature of the different-area image and a preset template feature. In this way, recognition processing can be performed for images in a limited number of classes without employing a deep learning method based on hundreds of thousands or even millions of training samples.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: September 27, 2022
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Guoyi Liu, Guang Li
  • Patent number: 11449989
    Abstract: Super-resolution images are generated from standard-resolution images acquired with a magnetic resonance imaging (“MRI”) system. More particularly, super-resolution (e.g., sub-millimeter isotropic resolution) images are generated from standard-resolution images (e.g., images with 1 mm or coarser isotropic resolution) using a deep learning algorithm, from which accurate cortical surface reconstructions can be generated.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: September 20, 2022
    Assignee: The General Hospital Corporation
    Inventors: Qiyuan Tian, Susie Yi Huang, Berkin Bilgic, Jonathan R. Polimeni
  • Patent number: 11449709
    Abstract: A neural network is trained to focus on a domain of interest. For example, in a pre-training phase, the neural network is trained using synthetic training data, which is configured to omit or limit content less relevant to the domain of interest, by updating parameters of the neural network to improve the accuracy of predictions. In a subsequent training phase, the pre-trained neural network is trained using real-world training data by updating only a first subset of the parameters associated with feature extraction, while a second subset of the parameters more associated with policies remains fixed.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: September 20, 2022
    Assignee: NVIDIA Corporation
    Inventor: Bernhard Firner
  • Patent number: 11443505
    Abstract: A method for analyzing images to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: September 13, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Mihir Narendra Mody, Manu Mathew, Chaitanya Satish Ghone
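A compact NumPy illustration of the pipeline in the abstract: Fourier transforms of the input features, on-the-fly Fourier transforms of the (pre-padded) kernels, element-wise products, one sum per output feature, and a 2-D inverse transform of each sum. Note this sketch computes circular convolution; a real implementation would zero-pad to avoid wrap-around:

```python
import numpy as np

def fft_conv_layer(inputs, kernels):
    """inputs : (C_in, H, W) input features; kernels : (C_out, C_in, H, W)
    kernel coefficients already padded to the input size. Each input feature
    and kernel is transformed, the transforms are multiplied, the products are
    summed over C_in (one sum per output feature), and each sum is inverse
    transformed. This is circular convolution; real use would zero-pad."""
    X = np.fft.fft2(inputs)                 # Fourier transform of each input feature
    K = np.fft.fft2(kernels)                # on-the-fly Fourier transforms of the kernels
    products = X[None, :, :, :] * K         # (C_out, C_in, H, W) element-wise products
    sums = products.sum(axis=1)             # one sum per output feature
    return np.real(np.fft.ifft2(sums))      # 2-D inverse Fourier transform of each sum

outputs = fft_conv_layer(np.random.rand(3, 32, 32), np.random.rand(8, 3, 32, 32))
```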
  • Patent number: 11436432
    Abstract: An apparatus for an artificial neural network includes a format converter, a sampling unit, and a learning unit. The format converter generates a first format image and a second format image based on an input image. The sampling unit samples the first format image using a first sampling scheme to generate a first feature map, and samples the second format image using a second sampling scheme different from the first sampling scheme to generate a second feature map. The learning unit operates the artificial neural network using the first feature map and the second feature map.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: September 6, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sangmin Suh, Sangsoo Ko, Byeoungsu Kim, Sanghyuck Ha
  • Patent number: 11417235
    Abstract: Described herein are systems and methods for grounded natural language learning in an interactive setting. In embodiments, during a learning process, an agent learns natural language by interacting with a teacher and learning from feedback, thus learning and improving language skills while taking part in the conversation. In embodiments, a model is used to incorporate both imitation and reinforcement by leveraging jointly sentence and reward feedback from the teacher. Various experiments are conducted to validate the effectiveness of a model embodiment.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: August 16, 2022
    Assignee: Baidu USA LLC
    Inventors: Haichao Zhang, Haonan Yu, Wei Xu
  • Patent number: 11417232
    Abstract: Disclosed is a method of providing user-customized learning content. The method includes: a step a of configuring a question database including one or more multiple-choice questions having one or more choice items and collecting choice item selection data of a user for the questions; a step b of calculating a modeling vector for the user based on the choice item data and generating modeling vectors for the questions according to each choice item; and a step c of calculating choice item selection probabilities of the user based on the modeling vectors of the user and the modeling vectors of the questions.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: August 16, 2022
    Assignee: RIIID INC.
    Inventors: Yeong Min Cha, Jae We Heo, Young Jun Jang
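One plausible way to turn user and per-choice modeling vectors into choice-item selection probabilities is a softmax over dot products; the abstract does not fix the exact link function, so this is only an assumption:

```python
import numpy as np

def choice_probabilities(user_vec, choice_vecs):
    """Selection probability for each choice item of a question, from dot
    products of the user modeling vector with per-choice modeling vectors,
    passed through a softmax."""
    scores = choice_vecs @ user_vec
    exp_scores = np.exp(scores - scores.max())
    return exp_scores / exp_scores.sum()

p = choice_probabilities(np.random.rand(16), np.random.rand(4, 16))  # a 4-choice question
```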
  • Patent number: 11403495
    Abstract: A machine learning system efficiently detects faults from three-dimensional (“3D”) seismic images, in which the fault detection is considered as a binary segmentation problem. Because the distribution of fault and nonfault samples is heavily biased, embodiments of the present disclosure use a balanced loss function to optimize model parameters. Embodiments of the present disclosure train a machine learning system by using a selected number of pairs of 3D synthetic seismic and fault volumes, which may be automatically generated by randomly adding folding, faulting, and noise in the volumes. Although trained by using only synthetic data sets, the machine learning system can accurately detect faults from 3D field seismic volumes that are acquired at totally different surveys.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: August 2, 2022
    Assignee: Board of Regents, The University of Texas System
    Inventors: Xinming Wu, Yunzhi Shi, Sergey Fomel
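A common class-balanced binary cross-entropy, weighting the fault and non-fault terms by the opposite class frequency, illustrates the kind of balanced loss the abstract refers to (the patent's exact weighting is not given):

```python
import torch

def balanced_bce(pred, target, eps=1e-7):
    """Class-balanced binary cross-entropy: the fault (positive) and non-fault
    (negative) terms are weighted by the opposite class frequency so the
    heavily biased label distribution does not dominate the gradient."""
    pos = target.sum()
    neg = target.numel() - pos
    w_pos = neg / (pos + neg)
    w_neg = pos / (pos + neg)
    pred = pred.clamp(eps, 1 - eps)
    loss = -(w_pos * target * torch.log(pred) + w_neg * (1 - target) * torch.log(1 - pred))
    return loss.mean()

fault_prob = torch.rand(1, 64, 64, 64)                     # predicted fault probabilities
fault_label = (torch.rand(1, 64, 64, 64) > 0.97).float()   # sparse binary fault labels
loss = balanced_bce(fault_prob, fault_label)
```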
  • Patent number: 11392738
    Abstract: A method for generating a simulation scenario, the method may include receiving sensed information that was sensed during driving sessions of vehicles; wherein the sensed information comprises visual information regarding multiple objects; determining, by applying an unsupervised learning process on the sensed information, driving scenario building blocks and occurrence information regarding an occurrence of the driving scenario building blocks; and generating the simulation scenario based on a selected set of driving scenario building blocks and on physical guidelines, wherein the generating comprises selecting, out of the driving scenario building blocks, the selected set of driving scenario building blocks; wherein the generating is responsive to at least a part of the occurrence information and to at least one simulation scenario limitation.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: July 19, 2022
    Assignee: AUTOBRAINS TECHNOLOGIES LTD
    Inventors: Igal Raichelgauz, Karina Odinaev
  • Patent number: 11393160
    Abstract: Systems, apparatuses and methods may provide for technology that generates, by a first neural network, an initial set of model weights based on input data and iteratively generates, by a second neural network, an updated set of model weights based on residual data associated with the initial set of model weights and the input data. Additionally, the technology may output a geometric model of the input data based on the updated set of model weights. In one example, the first neural network and the second neural network reduce the dependence of the geometric model on the number of data points in the input data.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Rene Ranftl, Vladlen Koltun
  • Patent number: 11386292
    Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: July 12, 2022
    Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
    Inventors: Bo Eun Kim, Hye Dong Jung
  • Patent number: 11360443
    Abstract: A model calculation unit for calculating a gradient with respect to a certain input variable of input variables of a predefined input variable vector for an RBF model with the aid of a hard-wired processor core designed as hardware for calculating a fixedly predefined processing algorithm in coupled functional blocks, the processor core being designed to calculate the gradient with respect to the certain input variable for an RBF model as a function of one or multiple input variable(s) of the input variable vector of an input dimension, of a number of nodes, of length scales predefined for each node and each input dimension, and of parameters of the RBF function predefined for each node.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: June 14, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Andre Guntoro, Heiner Markert, Martin Schiegg
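Assuming the same Gaussian RBF form sketched for the related patent 11645502 above, the gradient with respect to a single input variable x[k] can be written as follows (a sketch of the mathematics only, not the hard-wired hardware algorithm):

```python
import numpy as np

def rbf_gradient(x, V, L, p3, k):
    """Gradient of y = sum_j p3[j] * exp(-sum_m L[j,m] * (x[m] - V[j,m])**2)
    with respect to the single input variable x[k]."""
    sq_dist = np.sum(L * (x[None, :] - V) ** 2, axis=1)
    return np.sum(p3 * np.exp(-sq_dist) * (-2.0 * L[:, k] * (x[k] - V[:, k])))

grad = rbf_gradient(np.zeros(3), np.random.rand(5, 3), np.ones((5, 3)), np.random.rand(5), k=0)
```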
  • Patent number: 11361225
    Abstract: A neural network architecture for attention-based efficient model adaptation is disclosed. A method includes accessing an input vector, the input vector comprising a numeric representation of an input to a neural network. The method includes providing the input vector to the neural network comprising a plurality of ordered layers, wherein each layer in at least a subset of the plurality of ordered layers is coupled with an adaptation module, wherein the adaptation module receives a same input value as a coupled layer for the adaptation module, and wherein an output value of the adaptation module is pointwise multiplied with an output value of the coupled layer to generate a next layer input value. The method includes generating an output of the neural network based on an output of a last one of the plurality of ordered layers in the neural network.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: June 14, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mandar Dilip Dixit, Gang Hua
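A minimal sketch of one adapted layer as described above: the adaptation module receives the same input as the coupled (frozen) layer, and its output is pointwise multiplied with the layer output to form the next layer's input. The bottleneck size and sigmoid gate are assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn

class AdaptedLayer(nn.Module):
    """A coupled base layer and its adaptation module receive the same input;
    their outputs are pointwise multiplied to form the next layer's input."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.base = nn.Linear(dim, dim)               # pretrained layer (kept fixed during adaptation)
        self.adapter = nn.Sequential(                 # lightweight adaptation module
            nn.Linear(dim, bottleneck), nn.ReLU(),
            nn.Linear(bottleneck, dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.base(x) * self.adapter(x)         # pointwise multiplication

next_layer_input = AdaptedLayer(128)(torch.randn(4, 128))
```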
  • Patent number: 11361372
    Abstract: A method of establishing a paint color to be displayed by a procurement system based on information received from an external punchout site. A formula corresponding to a desired paint color selected by a customer that is not included in a color palette of predefined colors regularly available for purchase from a paint supplier is received. Based on the formula, a standardized color value within a color gamut of the electronic display device is generated for the desired paint color. The standardized color value is stored in a network-accessible database entry specific to the customer to be retrievable by the customer over a communication network for generating a preview of the desired paint color during a subsequent purchase of paint having the desired paint color by the customer over the communication network.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: June 14, 2022
    Inventors: Meghan L Vickers, Pamela A Gillikin, William E Weber, III, James D. Bandy, Matthew G. Stec