Learning Method Patents (Class 706/25)
-
Patent number: 12149510
Abstract: A system and method are disclosed for providing a private multi-modal artificial intelligence platform. The method includes splitting a neural network into a first client-side network, a second client-side network and a server-side network and sending the first client-side network to a first client. The first client-side network processes first data from the first client, the first data having a first type. The method includes sending the second client-side network to a second client. The second client-side network processes second data from the second client, the second data having a second type. The first type and the second type have a common association. Forward and back propagation occur between the client-side networks, which hold disparate data types, and the server-side network to train the neural network.
Type: Grant
Filed: February 19, 2021
Date of Patent: November 19, 2024
Assignee: TRIPLEBLIND HOLDINGS, INC.
Inventors: Greg Storm, Gharib Gharibi, Riddhiman Das
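A minimal, hypothetical sketch of the general split-learning pattern the abstract describes, not the patented implementation: two client-side networks process different data types, a server-side network fuses their cut-layer activations, and gradients flow back to both clients. Layer sizes, the fusion by concatenation, the optimizer, and the loss are all assumptions.

```python
# Illustrative split-learning sketch (in-process; a real system would send
# activations and gradients over the network between clients and server).
import torch
import torch.nn as nn

client_a = nn.Sequential(nn.Linear(32, 16), nn.ReLU())   # processes the first data type
client_b = nn.Sequential(nn.Linear(64, 16), nn.ReLU())   # processes the second data type
server   = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 1))

opt = torch.optim.SGD(
    list(client_a.parameters()) + list(client_b.parameters()) + list(server.parameters()),
    lr=0.01,
)

x_a, x_b = torch.randn(4, 32), torch.randn(4, 64)   # two modalities with a common association
y = torch.randn(4, 1)

# Forward: each client-side network computes its activations, the server-side network fuses them.
h_a, h_b = client_a(x_a), client_b(x_b)
pred = server(torch.cat([h_a, h_b], dim=1))
loss = nn.functional.mse_loss(pred, y)

# Backward: gradients propagate from the server-side network into both client-side networks.
opt.zero_grad()
loss.backward()
opt.step()
```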
-
Patent number: 12148419
Abstract: Mechanisms are provided for performing machine learning training of a computer model. A perturbation generator generates modified training data comprising perturbations injected into original training data, where the perturbations cause a data corruption of the original training data. The modified training data is input into a prediction network of the computer model and processed through the prediction network to generate a prediction output. Machine learning training of the prediction network is executed based on the prediction output and the original training data to generate a trained prediction network of a trained computer model. The trained computer model is deployed to an artificial intelligence computing system for performance of an inference operation.
Type: Grant
Filed: December 13, 2021
Date of Patent: November 19, 2024
Assignee: International Business Machines Corporation
Inventors: Xiaodong Cui, Brian E. D. Kingsbury, George Andrei Saon, David Haws, Zoltan Tueske
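A short sketch of the training pattern the abstract outlines: the network sees perturbed (corrupted) inputs while the loss is computed against the original training data. The additive-noise corruption model, network shape, and loss are assumptions for illustration only.

```python
# Train a prediction network on perturbed inputs, targeting the original data.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

original = torch.randn(64, 16)
perturbed = original + 0.1 * torch.randn_like(original)   # injected corruption (assumed noise model)

pred = net(perturbed)                                      # prediction output from modified data
loss = nn.functional.mse_loss(pred, original)              # training uses the original training data
opt.zero_grad()
loss.backward()
opt.step()
```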
-
Patent number: 12147901
Abstract: The present disclosure provides a training and application method for a multi-layer neural network model, an apparatus, and a storage medium. In a forward propagation of the multi-layer neural network model, the number of input feature maps is expanded and a data computation is performed by using the expanded input feature maps.
Type: Grant
Filed: December 19, 2019
Date of Patent: November 19, 2024
Assignee: Canon Kabushiki Kaisha
Inventors: Hongxing Gao, Wei Tao, Tsewei Chen, Dongchao Wen, Junjie Liu
-
Patent number: 12141699
Abstract: The present disclosure relates to systems and methods for providing vector-wise sparsity in neural networks. In some embodiments, an exemplary method for providing vector-wise sparsity in a neural network comprises: dividing a matrix associated with the neural network into a plurality of vectors; selecting a first subset of non-zero elements from the plurality of vectors to form a pruned matrix; and outputting the pruned matrix for executing the neural network using the pruned matrix.
Type: Grant
Filed: July 23, 2020
Date of Patent: November 12, 2024
Assignee: Alibaba Group Holding Limited
Inventors: Maohua Zhu, Tao Zhang, Zhenyu Gu, Yuan Xie
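A minimal numpy sketch of the general vector-wise pruning idea: split each row of the weight matrix into fixed-length vectors and keep only the k largest-magnitude entries per vector. The vector length, k, and magnitude criterion are assumptions, not the claimed parameters.

```python
import numpy as np

def vector_wise_prune(matrix, vec_len=8, k=2):
    """Keep at most k non-zero elements in each length-vec_len segment of every row."""
    pruned = matrix.copy()
    rows, cols = matrix.shape
    for r in range(rows):
        for start in range(0, cols, vec_len):
            vec = pruned[r, start:start + vec_len]          # view into the pruned matrix
            if vec.size > k:
                drop = np.argsort(np.abs(vec))[:-k]         # indices of all but the k largest
                vec[drop] = 0.0
    return pruned

W = np.random.randn(4, 16)
W_sparse = vector_wise_prune(W)
print((W_sparse != 0).sum(axis=1))   # at most k non-zeros per vector segment in each row
```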
-
Patent number: 12131258
Abstract: A method for compressing a deep neural network includes determining a pruning ratio for a channel and a mixed-precision quantization bit-width based on an operational budget of a device implementing the deep neural network. The method further includes quantizing a weight parameter of the deep neural network and/or an activation parameter of the deep neural network based on the quantization bit-width. The method also includes pruning the channel of the deep neural network based on the pruning ratio.
Type: Grant
Filed: September 23, 2020
Date of Patent: October 29, 2024
Assignee: QUALCOMM Incorporated
Inventors: Yadong Lu, Ying Wang, Tijmen Pieter Frederik Blankevoort, Christos Louizos, Matthias Reisser, Jilei Hou
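A hedged sketch of the two ingredients the abstract combines: uniform b-bit weight quantization and channel pruning (here by an assumed L1-norm criterion). The budget-driven selection of the ratio and bit-width is not shown; all values are illustrative.

```python
import numpy as np

def quantize(weights, bits=4):
    """Symmetric uniform quantization to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    return np.round(weights / scale).clip(-qmax, qmax) * scale

def prune_channels(weights, ratio=0.5):
    """weights: (out_channels, in_channels, kH, kW); drop channels with the smallest L1 norm."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    keep = np.argsort(norms)[int(ratio * len(norms)):]
    return weights[np.sort(keep)]

W = np.random.randn(16, 8, 3, 3)
W_q = quantize(W, bits=4)            # mixed precision would pick bits per layer
W_p = prune_channels(W_q, ratio=0.5) # 8 of 16 channels remain
```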
-
Patent number: 12131182
Abstract: Systems and methods of data processing are provided. The method comprises receiving input data to be processed by a series of operations, identifying a first operation from the series of operations, selecting at least one second operation from the series of operations to be grouped with the first operation based at least in part on the amount of input data and output data of the grouped operations and the capacity of the memory unit, and processing a portion of the input data of the grouped operations. The efficiency of the series of data operations can be improved by ensuring that the input data and output data of any data operation are both stored in the memory unit.
Type: Grant
Filed: March 22, 2019
Date of Patent: October 29, 2024
Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
Inventors: Zhenjiang Wang, Jianjun Li, Liang Chen, Kun Ling, Delin Li, Chen Sun
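A toy sketch of one way such grouping could work, purely as an illustration of the constraint: fuse consecutive operations as long as the group's (assumed) input-plus-output footprint stays within the memory capacity. The footprint model and greedy policy are assumptions, not the claimed method.

```python
def group_ops(op_sizes, capacity):
    """op_sizes: list of (input_bytes, output_bytes) per operation, in execution order."""
    groups, current = [], []
    for inp, out in op_sizes:
        group_in = current[0][0] if current else inp     # input of the first op in the group
        if current and group_in + out > capacity:        # group footprint would exceed memory
            groups.append(current)
            current = []
        current.append((inp, out))
    if current:
        groups.append(current)
    return groups

print(group_ops([(100, 80), (80, 60), (60, 200), (200, 50)], capacity=256))
# [[(100, 80), (80, 60)], [(60, 200), (200, 50)]]
```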
-
Patent number: 12124955
Abstract: A hardware processor can receive a set of input data individually describing a particular asset associated with an entity. The hardware processor can receive sets of inputs individually responsive to a respective subset of queries. The hardware processor can generate a predictive model using the set of input data. The hardware processor can calculate predictive outcomes individually associated with a respective user by applying the predictive model to each respective set of inputs of the sets of inputs. The hardware processor can generate a list ranked according to the predictive outcomes for the particular asset.
Type: Grant
Filed: June 30, 2023
Date of Patent: October 22, 2024
Assignee: Cangrade, Inc.
Inventors: Steven Lehr, Gershon Goren, Liana Epstein
-
Patent number: 12124779
Abstract: A method of construction of a feedforward neural network includes a step of initialization of a neural network according to an initial topology, and at least one topological optimization phase, each of which includes: an additive phase including a modification of the network topology by adding at least one node and/or a connection link between the input of a node of a layer and the output of a node of any one of the preceding layers, and/or a subtractive phase including a modification of the network topology by removing at least one node and/or a connection link between two layers. Each topology modification includes the selection of a topology modification among several candidate modifications, based on an estimation of the variation in the network error between the previous topology and each topology modified according to a candidate modification.
Type: Grant
Filed: November 7, 2019
Date of Patent: October 22, 2024
Assignee: ADAGOS
Inventors: Manuel Bompard, Mathieu Causse, Florent Masmoudi, Mohamed Masmoudi, Houcine Turki
-
Patent number: 12124957
Abstract: Provided are an apparatus and method of compressing an artificial neural network. According to the method and the apparatus, an optimal compression rate and an optimal operation accuracy are determined by compressing an artificial neural network, determining a task accuracy of a compressed artificial neural network, and automatically calculating a compression rate and a compression ratio based on the determined task accuracy. The method includes obtaining an initial value of a task accuracy for a task processed by the artificial neural network, compressing the artificial neural network by adjusting weights of connections among layers of the artificial neural network included in information regarding the connections, determining a compression rate for the compressed artificial neural network based on the initial value of the task accuracy and a task accuracy of the compressed artificial neural network, and re-compressing the compressed artificial neural network according to the compression rate.
Type: Grant
Filed: July 29, 2019
Date of Patent: October 22, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventor: Youngmin Oh
-
Patent number: 12124958
Abstract: A computer-implemented method for enforcing an idempotent-constrained characteristic during training of a neural network may be provided. The method comprises training of a neural network by minimizing a loss function, wherein the loss function comprises an additional term imposing an idempotence-based regularization to the neural network during the training.
Type: Grant
Filed: January 22, 2020
Date of Patent: October 22, 2024
Assignee: International Business Machines Corporation
Inventors: Antonio Foncubierta Rodriguez, Matteo Manica, Joris Cadow
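A minimal PyTorch sketch of an idempotence-based regularizer of the kind the abstract describes: the loss gets an extra term penalizing the distance between f(f(x)) and f(x). The network shape, the MSE form of the penalty, and the weight `lam` are assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(10, 10), nn.Tanh(), nn.Linear(10, 10))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
lam = 0.1                                                # assumed regularization weight

x, target = torch.randn(32, 10), torch.randn(32, 10)

y = f(x)
task_loss = nn.functional.mse_loss(y, target)
idempotence_penalty = nn.functional.mse_loss(f(y), y)    # push f(f(x)) toward f(x)
loss = task_loss + lam * idempotence_penalty

opt.zero_grad()
loss.backward()
opt.step()
```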
-
Patent number: 12124855
Abstract: The present disclosure relates to a training method for a parameter configuration model, a parameter configuration method, and a parameter configuration device.
Type: Grant
Filed: September 15, 2022
Date of Patent: October 22, 2024
Assignee: SHENZHEN MICROBT ELECTRONICS TECHNOLOGY CO., LTD.
Inventors: Guo Ai, Haifeng Guo, Zuoxing Yang
-
Patent number: 12124956
Abstract: A hardware processor can receive a set of input data individually describing a particular asset associated with an entity. The hardware processor can receive a set of inputs individually responsive to a respective subset of a plurality of queries for a particular user. The hardware processor can generate a predictive model based on the set of input data. The hardware processor can calculate a predictive outcome for the particular user by applying the predictive model to the set of inputs. The hardware processor can identify a target score impacting the predictive outcome for the particular user. The hardware processor can assign a training program to the particular user corresponding to the target score.
Type: Grant
Filed: July 7, 2023
Date of Patent: October 22, 2024
Assignee: Cangrade, Inc.
Inventors: Steven Lehr, Gershon Goren, Liana Epstein
-
Patent number: 12124960
Abstract: An object of the present invention is to provide a learning apparatus and a learning method capable of appropriately learning pieces of data that belong to the same category and are acquired under different conditions. In a learning apparatus according to a first aspect of the present invention, first data and second data are respectively input to a first input layer and a second input layer that are independent of each other, and feature quantities are calculated. Thus, the feature quantity calculation in one of the first and second input layers is not affected by the feature quantity calculation in the other input layer. In addition to feature extraction performed in the input layers, each of a first intermediate feature quantity calculation process and a second intermediate feature quantity calculation process is performed at least once in an intermediate layer that is shared by the first and second input layers.
Type: Grant
Filed: January 13, 2021
Date of Patent: October 22, 2024
Assignee: FUJIFILM Corporation
Inventors: Masaaki Oosake, Makoto Ozeki
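A sketch of the two-branch idea in the abstract: independent input layers for data acquired under two conditions, followed by a shared intermediate layer applied to each branch. All layer sizes and activations are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TwoInputNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.input_a = nn.Linear(64, 32)   # first input layer (condition A)
        self.input_b = nn.Linear(64, 32)   # second, independent input layer (condition B)
        self.shared = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))

    def forward(self, x_a, x_b):
        # Feature calculation in one input layer does not affect the other.
        h_a = torch.relu(self.input_a(x_a))
        h_b = torch.relu(self.input_b(x_b))
        # Intermediate feature calculation in the layer shared by both input layers.
        return self.shared(h_a), self.shared(h_b)

net = TwoInputNet()
out_a, out_b = net(torch.randn(8, 64), torch.randn(8, 64))
```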
-
Patent number: 12124963
Abstract: Disclosed is a disentangled personalized federated learning method via consensus representation extraction and diversity propagation, provided by embodiments of the present application. The method includes: receiving, by a current node, local consensus representation extraction models and unique representation extraction models corresponding to other nodes, respectively; extracting, by the current node, the representations of the data of the current node by using the unique representation extraction models of the other nodes respectively, calculating first mutual information between different sets of representation distributions, determining similarity of the data distributions between the nodes based on the magnitude of the first mutual information, and determining aggregation weights corresponding to the other nodes based on the first mutual information; and obtaining, by the current node, the global consensus representation aggregation model corresponding to the current node.
Type: Grant
Filed: June 1, 2024
Date of Patent: October 22, 2024
Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Zhenan Sun, Yunlong Wang, Zhengquan Luo, Kunbo Zhang, Qi Li, Yong He
-
Patent number: 12117917
Abstract: A method of using a computing device to compare performance of multiple algorithms. The method includes receiving, by a computing device, multiple algorithms to assess. The computing device further receives a total amount of resources to allocate to the multiple algorithms. The computing device additionally assigns a fair share of the total amount of resources to each of the multiple algorithms. The computing device still further executes each of the multiple algorithms using the assigned fair share of the total amount of resources. The computing device additionally compares the performance of each of the multiple algorithms based on at least one of multiple hardware relative utility metrics describing a hardware relative utility of any given resource allocation for each of the multiple algorithms.
Type: Grant
Filed: April 29, 2021
Date of Patent: October 15, 2024
Assignee: International Business Machines Corporation
Inventors: Robert Engel, Aly Megahed, Eric Kevin Butler, Nitin Ramchandani, Yuya Jeremy Ong
-
Patent number: 12118056
Abstract: Methods and apparatus for performing matrix transforms within a memory fabric. Various embodiments of the present disclosure are directed to converting a memory array into a matrix fabric for matrix transformations and performing matrix operations therein. Exemplary embodiments described herein perform matrix transformations within a memory device that includes a matrix fabric and matrix multiplication unit (MMU). In one exemplary embodiment, the matrix fabric uses a "crossbar" construction of resistive elements. Each resistive element stores a level of impedance that represents the corresponding matrix coefficient value. The crossbar connectivity can be driven with an electrical signal representing the input vector as an analog voltage. The resulting signals can be converted from analog voltages to digital values by an MMU to yield a vector-matrix product. In some cases, the MMU may additionally perform various other logical operations within the digital domain.
Type: Grant
Filed: May 3, 2019
Date of Patent: October 15, 2024
Assignee: Micron Technology, Inc.
Inventor: Fa-Long Luo
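A digital-domain sketch of what an analog crossbar computes, for intuition only: each element's conductance encodes a matrix coefficient, the input vector is applied as row voltages, and each column's current sum is one entry of the vector-matrix product. The sizes and random values are assumptions.

```python
import numpy as np

G = np.random.rand(4, 3)   # conductances ~ matrix coefficients (4 rows x 3 columns)
v = np.random.rand(4)      # input vector applied as analog voltages on the rows

column_currents = v @ G    # Kirchhoff's current law: I_j = sum_i v_i * G_ij
print(column_currents)     # the MMU would digitize these to obtain the vector-matrix product
```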
-
Patent number: 12118662
Abstract: In an approach to improve the generation of a virtual object in a three-dimensional virtual environment, embodiments of the present invention identify a virtual object to be generated in a three-dimensional virtual environment based on a natural language utterance. Additionally, embodiments generate the virtual object based on a CLIP-guided Generative Latent Space (CLIP-GLS) analysis, and monitor usage of the generated virtual object in the three-dimensional virtual space. Moreover, embodiments infer human perception data from the monitoring, and generate a utility score for the virtual object based on the human perception data.
Type: Grant
Filed: September 19, 2022
Date of Patent: October 15, 2024
Assignee: International Business Machines Corporation
Inventors: Jeremy R. Fox, Martin G. Keen, Alexander Reznicek, Bahman Hekmatshoartabari
-
Patent number: 12112260
Abstract: Disclosed is a method of determining a characteristic of interest relating to a structure on a substrate formed by a lithographic process, the method comprising: obtaining an input image of the structure; and using a trained neural network to determine the characteristic of interest from said input image. Also disclosed is a reticle comprising a target forming feature comprising more than two sub-features each having different sensitivities to a characteristic of interest when imaged onto a substrate to form a corresponding target structure on said substrate. Related methods and apparatuses are also described.
Type: Grant
Filed: May 29, 2019
Date of Patent: October 8, 2024
Assignee: ASML Netherlands B.V.
Inventors: Lorenzo Tripodi, Patrick Warnaar, Grzegorz Grzela, Mohammadreza Hajiahmadi, Farzad Farhadzadeh, Patricius Aloysius Jacobus Tinnemans, Scott Anderson Middlebrooks, Adrianus Cornelis Matheus Koopman, Frank Staals, Brennan Peterson, Anton Bernhard Van Oosten
-
Patent number: 12106218
Abstract: Modifying digital content based on predicted future user behavior is provided. Trends in propagation values corresponding to a layer of nodes in an artificial neural network are identified based on measuring the propagation values at each run of the artificial neural network. The trends in the propagation values are forecasted to generate predicted propagation values at a specified future point in time. The predicted propagation values are applied to the layer of nodes in the artificial neural network. Predicted website analytics values corresponding to a set of website variables of interest for the specified future point in time are generated based on running the artificial neural network with the predicted propagation values. A website corresponding to the set of website variables of interest is modified based on the predicted website analytics values corresponding to the set of website variables of interest for the specified future point in time.
Type: Grant
Filed: February 19, 2018
Date of Patent: October 1, 2024
Assignee: International Business Machines Corporation
Inventors: Aaron K. Baughman, Gray F. Cannon, Ryan L. Whitman
-
Patent number: 12100017
Abstract: A unified model for a neural network can be used to predict a particular value, such as a customer value. In various instances, customer value may have particular sub-components. Taking advantage of this fact, a specific learning architecture can be used to predict not just customer value (e.g. a final objective) but also the sub-components of customer value. This allows improved accuracy and reduced error in various embodiments.
Type: Grant
Filed: November 30, 2021
Date of Patent: September 24, 2024
Assignee: PayPal, Inc.
Inventors: Shiwen Shen, Danielle Zhu, Feng Pan
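A hypothetical sketch of a multi-head architecture of the kind the abstract alludes to: a shared trunk with one head per sub-component plus a head for the final objective. The trunk size, number of components, and class name `UnifiedValueModel` are assumptions, not the patented design.

```python
import torch
import torch.nn as nn

class UnifiedValueModel(nn.Module):
    def __init__(self, in_dim=20, n_components=3):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.component_heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(n_components)])
        self.total_head = nn.Linear(32, 1)               # final objective (e.g. overall value)

    def forward(self, x):
        h = self.trunk(x)
        components = torch.cat([head(h) for head in self.component_heads], dim=1)
        return components, self.total_head(h)

model = UnifiedValueModel()
components, total = model(torch.randn(16, 20))
# Training would combine per-component losses with the final-objective loss.
```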
-
Patent number: 12100445
Abstract: An interface circuit includes an integrator circuit and a buffer circuit. The integrator circuit is configured to be electrically coupled to a column of memory cells, receive a signal corresponding to a sum of currents flowing through the memory cells of the column, and integrate the signal over time to generate an intermediate voltage. The buffer circuit is electrically coupled to an output of the integrator circuit to receive the intermediate voltage, and is configured to be electrically coupled to a row of further memory cells, generate an analog voltage corresponding to the intermediate voltage, and output the analog voltage to the further memory cells of the row.
Type: Grant
Filed: July 31, 2023
Date of Patent: September 24, 2024
Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.
Inventor: Mei-Chen Chuang
-
Patent number: 12093531
Abstract: A hardware accelerator is provided. The hardware accelerator includes a first memory; a source address generation unit coupled to the first memory; a data collection unit coupled to the first memory; a first data queue coupled to the data collection unit; a data dispersion unit coupled to the first data queue; a destination address generation unit coupled to the data dispersion unit; an address queue coupled to the destination address generation unit; a second data queue coupled to the data dispersion unit; and a second memory coupled to the second data queue. The hardware accelerator can perform any one or any combination of tensor stride, tensor reshape and tensor transpose to achieve tensorflow depth-to-space permutation or tensorflow space-to-depth permutation.
Type: Grant
Filed: October 21, 2021
Date of Patent: September 17, 2024
Assignee: Cvitek Co. Ltd.
Inventors: Wei-Chun Chang, Yuan-Hsiang Kuo, Chia-Lin Lu, Hsueh-Chien Lu
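For reference, the depth-to-space permutation the accelerator targets can be expressed as a reshape/transpose/reshape sequence. Below is a pure-numpy sketch under an NHWC layout assumption with block size 2; it illustrates the permutation itself, not the accelerator's pipeline.

```python
import numpy as np

def depth_to_space(x, block=2):
    """Rearrange channel blocks into spatial blocks (NHWC layout assumed)."""
    n, h, w, c = x.shape
    x = x.reshape(n, h, w, block, block, c // (block * block))
    x = x.transpose(0, 1, 3, 2, 4, 5)                 # interleave the block dims with H and W
    return x.reshape(n, h * block, w * block, c // (block * block))

x = np.arange(1 * 2 * 2 * 8).reshape(1, 2, 2, 8)
print(depth_to_space(x).shape)   # (1, 4, 4, 2)
```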
-
Patent number: 12093813
Abstract: Techniques related to compressing a pre-trained dense deep neural network to a sparsely connected deep neural network for efficient implementation are discussed. Such techniques may include iteratively pruning and splicing available connections between adjacent layers of the deep neural network and updating weights corresponding to both currently disconnected and currently connected connections between the adjacent layers.
Type: Grant
Filed: September 30, 2016
Date of Patent: September 17, 2024
Assignee: Intel Corporation
Inventors: Anbang Yao, Yiwen Guo, Yan Li, Yurong Chen
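A rough sketch of the iterative prune-and-splice idea: a binary mask disables weak connections, but all weights (connected or not) keep receiving updates, so a pruned connection can be spliced back if it grows again. The thresholds, learning rate, and random "gradient" stand-in are assumptions, not the claimed schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))          # dense weights for one layer, always updated
mask = np.ones_like(W)               # 1 = connected, 0 = pruned

for step in range(100):
    grad = rng.normal(size=W.shape)  # stand-in for a real backpropagated gradient
    W -= 0.01 * grad                 # update underlying weights, whether masked or not
    mask[np.abs(W) < 0.5] = 0.0      # prune connections that became weak
    mask[np.abs(W) > 0.7] = 1.0      # splice connections that recovered

effective_W = W * mask               # sparse weights actually used in the forward pass
```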
-
Patent number: 12093805
Abstract: This disclosure relates to a method and system for optimal policy learning and recommendation for a distribution task using a deep RL model, in applications where the action space has a probability simplex structure. The method includes training an RL agent by defining a policy network for learning the optimal policy using a policy gradient (PG) method, where the policy network comprises an artificial neural network (ANN) with a set of outputs. A continuous action space having a continuous probability simplex structure is defined. The learning of the optimal policy is updated based on one of stochastic and deterministic PG. For stochastic PG, a Dirichlet distribution based stochastic policy parameterized by the output of the ANN with an activation function at an output layer of the ANN is selected. For deterministic PG, a soft-max function is selected as the activation function at the output layer of the ANN to maintain the probability simplex structure.
Type: Grant
Filed: March 26, 2021
Date of Patent: September 17, 2024
Assignee: Tata Consultancy Services Limited
Inventors: Avinash Achar, Easwara Subramanian, Sanjay Purushottam Bhat, Vignesh Lakshmanan Kangadharan Palaniradja
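A sketch of the stochastic variant's core idea: the network outputs Dirichlet concentration parameters, so sampled actions automatically lie on the probability simplex (sum to 1). The architecture sizes and the softplus parameterization are assumptions; the deterministic soft-max variant would replace the distribution with a soft-max output.

```python
import torch
import torch.nn as nn

class SimplexPolicy(nn.Module):
    def __init__(self, obs_dim=10, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))

    def forward(self, obs):
        alpha = nn.functional.softplus(self.net(obs)) + 1e-3   # positive concentration parameters
        return torch.distributions.Dirichlet(alpha)

policy = SimplexPolicy()
dist = policy(torch.randn(1, 10))
action = dist.sample()               # lies on the probability simplex
log_prob = dist.log_prob(action)     # would feed a policy-gradient update
print(action.sum(dim=-1))            # ~1.0
```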
-
Patent number: 12093836
Abstract: Automatic multi-objective hardware optimization for processing a deep learning network is disclosed. An example of a storage medium includes instructions for obtaining client preferences for a plurality of performance indicators for processing of a deep learning workload; generating a workload representation for the deep learning workload; providing the workload representation to machine learning processing to generate a workload executable, the workload executable including hardware mapping based on the client preferences; and applying the workload executable in processing of the deep learning workload.
Type: Grant
Filed: December 21, 2020
Date of Patent: September 17, 2024
Assignee: INTEL CORPORATION
Inventors: Mattias Marder, Estelle Aflalo, Avrech Ben-David, Shauharda Khadka, Somdeb Majumdar, Santiago Miret, Hanlin Tang
-
Patent number: 12088823
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for encoding video comprising a sequence of video frames. In one aspect, a method comprises, for one or more of the video frames: obtaining a feature embedding for the video frame; processing the feature embedding using a rate control machine learning model to generate a respective score for each of multiple quantization parameter values; selecting a quantization parameter value using the scores; determining a cumulative amount of data required to represent: (i) an encoded representation of the video frame and (ii) encoded representations of each preceding video frame; determining, based on the cumulative amount of data, that a feedback control criterion for the video frame is satisfied; updating the selected quantization parameter value; and processing the video frame using an encoding model to generate the encoded representation of the video frame.
Type: Grant
Filed: November 3, 2021
Date of Patent: September 10, 2024
Assignee: DeepMind Technologies Limited
Inventors: Chenjie Gu, Hongzi Mao, Ching-Han Chiang, Cheng Chen, Jingning Han, Ching Yin Derek Pang, Rene Andre Claus, Marisabel Guevara Hechtman, Daniel James Visentin, Christopher Sigurd Fougner, Charles Booth Schaff, Nishant Patil, Alejandro Ramirez Bellido
-
Patent number: 12086713
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for evaluating candidate output sequences using language model neural networks. In particular, an auto-regressive language model neural network is used to generate a candidate output sequence. The same auto-regressive language model neural network is used to evaluate the candidate output sequence to determine rating scores for each of one or more criteria. The rating score(s) are then used to determine whether to provide the candidate output sequence.
Type: Grant
Filed: July 28, 2022
Date of Patent: September 10, 2024
Assignee: Google LLC
Inventors: Daniel De Freitas Adiwardana, Noam M. Shazeer
-
Patent number: 12086993
Abstract: A method for tracking and/or characterizing multiple objects in a sequence of images. The method includes: assigning a neural network to each object to be tracked; providing a memory shared by all neural networks, designed to map an address vector of address components, via differentiable operations, onto one or multiple memory locations, and to read data from these memory locations or write data into these memory locations; supplying images from the sequence, and/or details of these images, to each neural network; during the processing of each image and/or image detail by one of the neural networks, generating an address vector from at least one processing product of this neural network; and, based on this address vector, writing at least one further processing product of the neural network into the shared memory, and/or reading out data from this shared memory and further processing the data by the neural network.
Type: Grant
Filed: March 16, 2022
Date of Patent: September 10, 2024
Assignee: ROBERT BOSCH GMBH
Inventor: Cosmin Ionut Bercea
-
Patent number: 12086572
Abstract: Embodiments herein describe techniques for expressing the layers of a neural network in a software model. In one embodiment, the software model includes a class that describes the various functional blocks (e.g., convolution units, max-pooling units, rectified linear units (ReLU), and scaling functions) used to execute the neural network layers. In turn, other classes in the software model can describe the operation of each of the functional blocks. In addition, the software model can include conditional logic for expressing how the data flows between the functional blocks since different layers in the neural network can process the data differently. A compiler can convert the high-level code in the software model (e.g., C++) into a hardware description language (e.g., register transfer level (RTL)) which is used to configure a hardware system to implement a neural network accelerator.
Type: Grant
Filed: October 17, 2017
Date of Patent: September 10, 2024
Assignee: XILINX, INC.
Inventors: Yongjun Wu, Jindrich Zejda, Elliott Delaye, Ashish Sirasao
-
Patent number: 12086715
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing sequence modeling tasks using insertions. One of the methods includes receiving a system input that includes one or more source elements from a source sequence and zero or more target elements from a target sequence, wherein each source element is selected from a vocabulary of source elements and wherein each target element is selected from a vocabulary of target elements; generating a partial concatenated sequence that includes the one or more source elements from the source sequence and the zero or more target elements from the target sequence, wherein the source and target elements are arranged in the partial concatenated sequence according to a combined order; and generating a final concatenated sequence that includes a finalized source sequence and a finalized target sequence, wherein the finalized target sequence includes one or more target elements.
Type: Grant
Filed: May 22, 2023
Date of Patent: September 10, 2024
Assignee: Google LLC
Inventors: William Chan, Mitchell Thomas Stern, Nikita Kitaev, Kelvin Gu, Jakob D. Uszkoreit
-
Patent number: 12080289
Abstract: Disclosed is an electronic apparatus. The electronic apparatus includes: a communication interface, a memory, and a processor connected to the memory and the communication interface. The processor is configured to control the electronic apparatus to, based on receiving speech related to a function of the electronic apparatus, obtain text information corresponding to the received speech, control the communication interface to transmit the obtained text information to a server including a first neural network model corresponding to the function, and execute the function based on response information received from the server. Based on identifying that an update period of the first neural network model is greater than or equal to a first threshold period based on the information related to the function of the electronic apparatus, the electronic apparatus may receive the information about the first neural network model from the server and store the information in the memory.
Type: Grant
Filed: September 27, 2021
Date of Patent: September 3, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hyeonmok Ko, Dayoung Kwon, Jonggu Kim, Seoha Song, Kyenghun Lee, Hojung Lee, Saebom Jang, Pureum Jung, Changho Paeon, Jiyeon Hong
-
Patent number: 12079722
Abstract: The embodiments of this application provide a method and device for optimizing a neural network. The method includes: binarizing and bit-packing input data of a convolution layer along a channel direction to obtain compressed input data; binarizing and bit-packing each convolution kernel of the convolution layer along the channel direction to obtain each corresponding compressed convolution kernel; dividing the compressed input data sequentially, in convolutional computation order, into blocks of the compressed input data with the same size as each compressed convolution kernel, wherein the data input to one convolutional computation form a data block; and performing a convolutional computation on each block of the compressed input data and each compressed convolution kernel sequentially, obtaining each convolutional result data, and obtaining multiple output data of the convolution layer according to each convolutional result data.
Type: Grant
Filed: February 1, 2023
Date of Patent: September 3, 2024
Assignee: Beijing Tusen Zhitu Technology Co., Ltd.
Inventors: Yuwei Hu, Jiangming Jin, Lei Su, Dinghua Li
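A numpy sketch of the binarize-and-bit-pack step along the channel axis and of an XNOR/popcount dot product between packed vectors, which stands in for one position of the packed convolution. The sign-based binarization and the 64-channel example are assumptions for illustration.

```python
import numpy as np

def binarize_pack(x):
    """Sign-binarize (>= 0 -> bit 1, < 0 -> bit 0) and pack 8 channel bits per byte."""
    return np.packbits((x >= 0).astype(np.uint8), axis=-1)

def xnor_popcount_dot(a_packed, b_packed, n_bits):
    xnor = np.invert(a_packed ^ b_packed)                       # bit is 1 where signs agree
    matches = np.unpackbits(xnor, axis=-1)[..., :n_bits].sum()  # popcount over valid bits
    return 2 * matches - n_bits                                 # map agreement count to a {-1,+1} dot product

x = np.random.randn(64)      # one spatial position, 64 input channels
k = np.random.randn(64)      # matching slice of one convolution kernel
approx = xnor_popcount_dot(binarize_pack(x), binarize_pack(k), n_bits=64)
exact = np.sign(x) @ np.sign(k)
print(approx, exact)         # equal whenever no element is exactly zero
```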
-
Patent number: 12079713
Abstract: Methods and apparatus for discriminative semantic transfer and physics-inspired optimization in deep learning are disclosed. A computation training method for a convolutional neural network (CNN) includes receiving a sequence of training images in the CNN of a first stage to describe objects of a cluttered scene as a semantic segmentation mask. The semantic segmentation mask is received in a semantic segmentation network of a second stage to produce semantic features. Using weights from the first stage as feature extractors and weights from the second stage as classifiers, edges of the cluttered scene are identified using the semantic features.
Type: Grant
Filed: May 3, 2023
Date of Patent: September 3, 2024
Assignee: Intel Corporation
Inventors: Anbang Yao, Hao Zhao, Ming Lu, Yiwen Guo, Yurong Chen
-
Patent number: 12079695
Abstract: A computer-implemented method of generating scale-permuted models can generate models having improved accuracy and reduced evaluation computational requirements. The method can include defining, by a computing system including one or more computing devices, a search space including a plurality of candidate permutations of a plurality of candidate feature blocks, each of the plurality of candidate feature blocks having a respective scale. The method can include performing, by the computing system, a plurality of search iterations by a search algorithm to select a scale-permuted model from the search space, the scale-permuted model based at least in part on a candidate permutation of the plurality of candidate permutations.
Type: Grant
Filed: October 1, 2020
Date of Patent: September 3, 2024
Assignee: GOOGLE LLC
Inventors: Xianzhi Du, Yin Cui, Tsung-Yi Lin, Quoc V. Le, Pengchong Jin, Mingxing Tan, Golnaz Ghiasi, Xiaodan Song
-
Patent number: 12069082
Abstract: A method, computer system, and computer program product are provided for mitigating network risk. A plurality of risk reports corresponding to a plurality of network devices in a network are processed to determine a multidimensional risk score for the network. The plurality of risk reports are analyzed using a semantic analysis model to identify one or more factors that contribute to the multidimensional risk score. One or more actions are determined using a trained learning model to mitigate one or more dimensions of the multidimensional risk score. The outcomes of applying the one or more actions are presented to a user to indicate an effect of each of the one or more actions on the multidimensional risk score for the network.
Type: Grant
Filed: June 11, 2021
Date of Patent: August 20, 2024
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Qihong Shao, Xinjun Zhang, Yue Liu, Kevin Broich, Kenneth Charles Croley, Gurvinder P. Singh
-
Patent number: 12061673
Abstract: Described is a system for controlling multiple autonomous platforms. A training process is performed to produce a trained learning agent in a simulation environment. In each episode, each controlled platform is assigned to one target platform that produces an observation. A learning agent processes the observation using a deep learning network and produces an action corresponding to each controlled platform until an action has been produced for each controlled platform. A reward value is obtained corresponding to the episode. The trained learning agent is executed to control each autonomous platform, where the trained agent receives one or more observations from one or more platform sensors and produces an action based on the one or more observations. The action is then used to control one or more platform actuators.
Type: Grant
Filed: February 3, 2021
Date of Patent: August 13, 2024
Assignee: HRL LABORATORIES, LLC
Inventors: Sean Soleyman, Deepak Khosla
-
Patent number: 12061985
Abstract: A system for automated construction of an artificial neural network architecture is provided. The system includes a set of interfaces and data links configured to receive and send signals, wherein the signals include datasets of training data, validation data and testing data, wherein the signals include a set of random number factors in multi-dimensional signals X, wherein part of the random number factors are associated with task labels Y to identify, and nuisance variations S. The system further includes a set of memory banks to store a set of reconfigurable deep neural network (DNN) blocks, hyperparameters, trainable variables, intermediate neuron signals, and temporary computation values including forward-pass signals and backward-pass gradients.
Type: Grant
Filed: July 2, 2020
Date of Patent: August 13, 2024
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Toshiaki Koike-Akino, Ye Wang, Andac Demir, Deniz Erdogmus
-
Patent number: 12056615
Abstract: A method for generating a convolutional neural network to operate on a spherical manifold generates locally-defined gauges at multiple positions on the spherical manifold. A convolution is defined at each of the positions on the spherical manifold with respect to an arbitrarily selected locally-defined gauge. The results of the convolution defined at each position are translated based on gauge equivariance to obtain a manifold convolution.
Type: Grant
Filed: September 23, 2020
Date of Patent: August 6, 2024
Assignee: QUALCOMM Incorporated
Inventors: Berkay Kicanaoglu, Taco Sebastiaan Cohen, Pim De Haan
-
Patent number: 12050991
Abstract: The present disclosure provides systems and methods that generate new architectures for artificial neural networks based on connectomics data that describes connections between biological neurons of a biological organism. In particular, in some implementations, a computing system can identify one or more new artificial neural network architectures by performing a neural architecture search over a search space that is constrained based at least in part on the connectomics data.
Type: Grant
Filed: May 21, 2019
Date of Patent: July 30, 2024
Assignee: GOOGLE LLC
Inventors: Viren Jain, Jeffrey Adgate Dean
-
Patent number: 12051240
Abstract: The present invention relates to a method and apparatus that can predict the visible-infrared band images of a region of the Earth's surface that would be observed by an Earth Observation (EO) satellite or other high-altitude imaging platform, using data from radar reflectance/backscatter of the same region. The method and apparatus can be used to predict images of the Earth's surface in the visible-infrared bands when the view between an imaging instrument and the ground is obscured by cloud or some other medium that is opaque to electromagnetic (EM) radiation in the visible-infrared spectral range, approximately spanning 400-2300 nanometres (nm), but transparent to EM radiation in the radio-/microwave part of the spectrum. Regular, uninterrupted monitoring of the Earth's surface is important for a wide range of applications, from agriculture to defence.
Type: Grant
Filed: February 14, 2022
Date of Patent: July 30, 2024
Assignee: UNIVERSITY OF HERTFORDSHIRE HIGHER EDUCATION CORPORATION
Inventors: James Edward Geach, Michael James Smith
-
Patent number: 12050936
Abstract: The present disclosure generally relates to evaluating communication workflows comprised of tasks using machine-learning techniques. More particularly, the present disclosure relates to systems and methods for generating a prediction of a task outcome of a communication workflow, generating a recommendation of one or more tasks to add to a partial communication workflow to complete the communication workflow, and generating a vector representation of a communication workflow.
Type: Grant
Filed: February 25, 2020
Date of Patent: July 30, 2024
Assignee: Oracle International Corporation
Inventors: Sudhakar Kalluri, Venkata Chandrashekar Duvvuri
-
Patent number: 12045714
Abstract: A method of operation of a semiconductor device includes the steps of coupling each of a plurality of digital inputs to a corresponding row of non-volatile memory (NVM) cells that stores an individual weight, initiating a read operation based on a digital value of a first bit of the plurality of digital inputs, accumulating, along a first bit-line coupling a first array column, a weighted bit-line current, in which the weighted bit-line current corresponds to a product of the individual weight stored therein and the digital value of the first bit, and converting and scaling an accumulated weighted bit-line current of the first column into a scaled charge of the first bit in relation to a significance of the first bit.
Type: Grant
Filed: February 17, 2023
Date of Patent: July 23, 2024
Assignee: Infineon Technologies LLC
Inventors: Ramesh Chettuvetty, Vijay Raghavan, Hans Van Antwerpen
-
Patent number: 12045340
Abstract: The terminal apparatus comprises: a machine learning part that can execute a process of computing a first model update parameter of a first neural network using training data and a process of computing a second model update parameter of a second neural network using training data for a simulated attack; an encryption processing part that encrypts the first and second model update parameters using a predetermined homomorphic encryption; a data transmission part that transmits the encrypted first and second model update parameters to a predetermined computation apparatus; and an update part that receives from the computation apparatus model update parameters of the first and second neural networks, computed using the first and second model update parameters received from another terminal apparatus, and updates the first and second neural networks.
Type: Grant
Filed: November 26, 2019
Date of Patent: July 23, 2024
Assignee: NEC CORPORATION
Inventor: Isamu Teranishi
-
Patent number: 12045716
Abstract: A method of updating a first neural network is disclosed. The method includes providing a computer system with a computer-readable memory that stores specific computer-executable instructions for the first neural network and a second neural network separate from the first neural network. The method also includes providing one or more processors in communication with the computer-readable memory. The one or more processors are programmed by the computer-executable instructions to at least process a first data with the first neural network, process a second data with the second neural network, update a weight in a node of the second neural network by a delta amount as a function of the processing of the second data with the second neural network, and update a weight in a node of the first neural network as a function of the delta amount. A computer system for updating a first neural network is also disclosed. Other features of the preferred embodiments are also disclosed.
Type: Grant
Filed: September 14, 2020
Date of Patent: July 23, 2024
Assignee: Lucinity ehf
Inventors: Justin Bercich, Theresa Bercich, Gudmundur Runar Kristjansson, Anush Vasudevan
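A minimal sketch of the cross-network update described above: train the second network on its own data, record the per-parameter delta, and apply a scaled copy of that delta to the first network. The identical layer shapes, the 0.5 scaling factor, and the loss are assumptions for illustration.

```python
import torch
import torch.nn as nn

first = nn.Linear(8, 1)
second = nn.Linear(8, 1)
opt2 = torch.optim.SGD(second.parameters(), lr=0.1)

x2, y2 = torch.randn(16, 8), torch.randn(16, 1)

before = [p.detach().clone() for p in second.parameters()]
loss = nn.functional.mse_loss(second(x2), y2)
opt2.zero_grad()
loss.backward()
opt2.step()                                              # second network's weights change by some delta
deltas = [p.detach() - b for p, b in zip(second.parameters(), before)]

with torch.no_grad():
    for p1, d in zip(first.parameters(), deltas):
        p1 += 0.5 * d                                    # update the first network as a function of the delta
```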
-
Patent number: 12039439
Abstract: An overall gradient vector is computed at a server from a set of ISA vectors corresponding to a set of worker machines. An ISA vector of a worker machine includes ISA instructions corresponding to a set of gradients, each gradient corresponding to a weight of a node of a neural network being distributedly trained in the worker machine. A set of register values is optimized for use in an approximation computation with an opcode to produce an x-th approximate gradient of an x-th gradient. A server ISA vector is constructed in which a server ISA instruction in an x-th position corresponds to the x-th gradient in the overall gradient vector. A processor at the worker machine is caused to update a set of weights of the neural network, using the set of optimized register values and the server ISA vector, thereby completing one iteration of training.
Type: Grant
Filed: December 21, 2020
Date of Patent: July 16, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Minsik Cho, Ulrich A. Finkler
-
Patent number: 12035380
Abstract: An industrial 5G dynamic multi-priority multi-access method based on deep reinforcement learning includes the following steps: establishing an industrial 5G network model; establishing a dynamic multi-priority multi-channel access neural network model based on deep reinforcement learning; collecting state, action and reward information of multiple time slots of all industrial 5G terminals in the industrial 5G network as training data; training the neural network model by using the collected data until the packet loss ratio and end-to-end latency meet industrial communication requirements; collecting the state information of all the industrial 5G terminals in the industrial 5G network at the current time slot as the input of the neural network model; conducting multi-priority channel allocation; and conducting multi-access by the industrial 5G terminals according to a channel allocation result.
Type: Grant
Filed: December 25, 2020
Date of Patent: July 9, 2024
Assignee: SHENYANG INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
Inventors: Haibin Yu, Xiaoyu Liu, Chi Xu, Peng Zeng, Xi Jin, Changqing Xia
-
Patent number: 12032535
Abstract: Disclosed examples to estimate audience sizes of media include a coefficient generator to determine coefficient values for a polynomial based on normalized weighted sums of variances, a normalized weighted sum of covariances, and cardinalities corresponding to a first plurality of vectors of counts from a first database proprietor and a second plurality of vectors of counts from a second database proprietor; a real roots solver to determine a real root value of the polynomial, the real root value indicative of a number of audience members represented in the first plurality of vectors of counts that are also represented in the second plurality of vectors of counts; and an audience size generator to determine the audience size based on the real root value and the cardinalities of the first plurality of vectors of counts and the second plurality of vectors of counts.
Type: Grant
Filed: June 30, 2020
Date of Patent: July 9, 2024
Assignee: The Nielsen Company (US), LLC
Inventors: Michael R. Sheppard, Jake Ryan Dailey, Damien Forthomme, Jonathan Sullivan, Jessica Brinson, Christie Nicole Summers, Diane Morovati Lopez, Molly Poppie
-
Patent number: 12033064
Abstract: The present disclosure provides a neural network weight matrix adjusting method, a writing control method, and a related apparatus. The method comprises: judging whether a weight distribution of a neural network weight matrix is lower than a first preset threshold; if yes, multiplying all weight values in the neural network weight matrix by a first constant; if no, judging whether the weight distribution of the neural network weight matrix is higher than a second preset threshold, wherein the second preset threshold is greater than the first preset threshold; and dividing all weight values in the neural network weight matrix by a second constant if the weight distribution of the neural network weight matrix is higher than the second preset threshold; wherein the first constant and the second constant are both greater than 1, thereby improving the operation precision.
Type: Grant
Filed: July 6, 2020
Date of Patent: July 9, 2024
Assignee: HANGZHOU ZHICUN INTELLIGENT TECHNOLOGY CO., LTD.
Inventor: Shaodi Wang
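A direct sketch of the adjustment rule the abstract spells out, with the spread of the weights (here their standard deviation) standing in for the "weight distribution" and all thresholds and constants chosen arbitrarily for illustration.

```python
import numpy as np

def adjust_weights(W, low=0.1, high=1.0, c1=4.0, c2=4.0):
    spread = W.std()
    if spread < low:
        return W * c1          # distribution below the first threshold: multiply by the first constant
    if spread > high:
        return W / c2          # distribution above the second threshold: divide by the second constant
    return W

W = np.random.randn(128, 128) * 0.01
print(adjust_weights(W).std())  # pulled toward a range the storage cells can represent precisely
```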
-
Patent number: 12032711
Abstract: A method for evaluating an external machine learning program while limiting access to internal training data includes providing labeled training data from a first source, receiving, by the first source, a machine learning program from a second source different from the first source, blocking, by the first source, access by the second source to the labeled training data, and training, by the first source, the machine learning program according to a supervised machine learning process using the labeled training data. The method further includes generating a first set of metrics from the supervised machine learning process that provide feedback about training of the neural network model, analyzing the first set of metrics to identify subset data therein, and, in order to permit evaluation of the neural network model, transmitting, to the second source, those metrics from the first set of metrics that do not include the subset data.
Type: Grant
Filed: January 28, 2021
Date of Patent: July 9, 2024
Assignee: OLYMPUS CORPORATION
Inventor: Steven Paul Lansel
-
Patent number: 12026556
Abstract: A method for processing a neural network includes receiving a graph corresponding to an artificial neural network including multiple nodes connected by edges. The method determines a set of independent nodes of multiple nodes to be executed in a neural network. The method also determines a next node in the set of independent nodes to add to an ordered set of the multiple nodes corresponding to an order of execution via a hardware resource for processing the neural network. The next node is determined based on a common hardware resource with a first preceding node in the ordered set or a frequency of nodes in the set of independent nodes to be executed via a same hardware resource. The ordered set of the plurality of nodes is generated based on the next node. The method may be repeated until each of the nodes of the graph are included in the ordered set of the nodes.
Type: Grant
Filed: May 28, 2021
Date of Patent: July 2, 2024
Assignee: QUALCOMM Incorporated
Inventors: Zakir Hossain Syed, Durk Van Veen, Nathan Omer Kaslan
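A toy sketch of one heuristic in the spirit of the abstract: among the currently independent (ready) nodes, prefer one that runs on the same hardware resource as the node just scheduled, then fall back to any ready node. The graph, resource labels, and tie-breaking are made up for illustration and are not the claimed method.

```python
edges = {"a": ["c"], "b": ["c"], "c": ["d"], "d": []}
resource = {"a": "DSP", "b": "NPU", "c": "NPU", "d": "DSP"}

indegree = {n: 0 for n in edges}
for src in edges:
    for dst in edges[src]:
        indegree[dst] += 1

ready = [n for n, d in indegree.items() if d == 0]   # independent nodes with no pending inputs
ordered = []
while ready:
    prev = ordered[-1] if ordered else None
    # Prefer a ready node sharing the previous node's hardware resource, else take any ready node.
    pick = next((n for n in ready if prev and resource[n] == resource[prev]), ready[0])
    ready.remove(pick)
    ordered.append(pick)
    for dst in edges[pick]:
        indegree[dst] -= 1
        if indegree[dst] == 0:
            ready.append(dst)

print(ordered)   # e.g. ['a', 'b', 'c', 'd']
```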