Patents Examined by Moriam Mosunmola Godo
-
Patent number: 11972345
Abstract: A multi-label ranking method includes receiving, at a processor and from a first set of artificial neural networks (ANNs), multiple signals representing a first set of ANN output pairs for a first label. A signal representing a second set of ANN output pairs for a second label different from the first label is received at the processor from a second set of ANNs different from the first set of ANNs, substantially concurrently with the first set of ANN output pairs. A first activation function is solved based on the first set of ANN output pairs, and a second activation function is solved based on the second set of ANN output pairs. Loss values are calculated based on the solved activations, and a mask is generated based on at least one ground truth label. A signal, including a representation of the mask, is sent from the processor to each of the sets of ANNs.
Type: Grant
Filed: April 11, 2019
Date of Patent: April 30, 2024
Inventors: Vincent Poon, Nigel Paul Duffy, Ravi Kiran Reddy Palla
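The abstract's per-label loop (solve an activation on each label's output pair, compute a loss, derive a mask from the ground truth) can be sketched as follows. This is an illustrative reading only: the pair-to-activation mapping (sigmoid of the score difference), the cross-entropy loss, and the 0/1 mask rule are all assumptions, not the patent's disclosed method.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rank_step(output_pairs, ground_truth):
    """One masked multi-label update in the spirit of the abstract.

    output_pairs: {label: (score_a, score_b)} -- one pair per label,
                  produced by that label's own set of ANNs.
    ground_truth: {label: 1 or 0}.
    Returns per-label losses and the mask sent back to the ANN sets.
    """
    losses, mask = {}, {}
    for label, (a, b) in output_pairs.items():
        act = sigmoid(a - b)               # activation solved on the pair
        y = ground_truth[label]
        # binary cross-entropy against the ground-truth label
        losses[label] = -(y * math.log(act) + (1 - y) * math.log(1 - act))
        # mask derived from the ground truth (assumed 0/1 rule)
        mask[label] = 1 if y == 1 else 0
    return losses, mask
```

For example, `rank_step({"cat": (2.0, 0.5), "dog": (0.1, 1.2)}, {"cat": 1, "dog": 0})` yields a positive loss per label and the mask `{"cat": 1, "dog": 0}`.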
-
Patent number: 11941513
Abstract: Provided is a device for ensembling data received from prediction devices and a method of operating the same. The device includes a data manager, a learner, and a predictor. The data manager receives first and second device prediction results from first and second prediction devices, respectively. The learner adjusts a weight group of a prediction model that generates first and second item weights and first and second device weights, based on the first and second device prediction results. The first and second item weights depend on first and second item values, respectively, of the first and second device prediction results. The first device weight corresponds to the first prediction device, and the second device weight corresponds to the second prediction device. The predictor generates an ensemble result of the first and second device prediction results, based on the first and second item weights and the first and second device weights.
Type: Grant
Filed: November 28, 2019
Date of Patent: March 26, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Myung-Eun Lim, Jae Hun Choi, Youngwoong Han
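A minimal sketch of the two-level weighting the abstract describes, combining per-item weights with per-device weights. The normalized weighted average is an assumption; the patent does not specify how the two weight kinds are combined.

```python
def ensemble(pred_a, pred_b, item_w, device_w):
    """Weighted ensemble of two devices' per-item predictions.

    pred_a, pred_b: lists of item values from prediction devices A and B.
    item_w: (item_weights_a, item_weights_b) -- one weight per item.
    device_w: (w_device_a, w_device_b) -- one weight per device.
    Combination rule (product of weights, normalized sum) is illustrative.
    """
    out = []
    for i in range(len(pred_a)):
        wa = item_w[0][i] * device_w[0]
        wb = item_w[1][i] * device_w[1]
        out.append((wa * pred_a[i] + wb * pred_b[i]) / (wa + wb))
    return out
```

With equal weights this reduces to a plain average of the two devices' item values.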
-
Patent number: 11928576
Abstract: The present disclosure describes an artificial neural network circuit including: at least one crossbar circuit to transmit a signal between layered neurons of an artificial neural network, the crossbar circuit including multiple input bars, multiple output bars arranged intersecting the input bars, and multiple memristors that are disposed at respective intersections of the input bars and the output bars to give a weight to the signal to be transmitted; a processing circuit to calculate a sum of signals flowing into each of the output bars while a weight to a corresponding signal is given by each of the memristors; a temperature sensor to detect environmental temperature; and an update portion that updates a trained value used in the crossbar circuit and/or the processing circuit.
Type: Grant
Filed: October 16, 2019
Date of Patent: March 12, 2024
Assignee: DENSO CORPORATION
Inventors: Irina Kataeva, Shigeki Otsuka
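The weighted summation a memristor crossbar performs is, in the ideal case, a matrix-vector product: each output bar collects current from every input bar through the memristor at its crossing. The sketch below models only that idealized Ohm's-law behavior and ignores wire resistance and the temperature drift that the patent's update portion is there to compensate.

```python
def crossbar_mac(voltages, conductances):
    """Idealized currents summed on each output bar of a memristor crossbar.

    voltages: voltage applied to each input bar.
    conductances[i][j]: conductance (weight) of the memristor at the
    crossing of input bar i and output bar j.
    Returns the total current into each output bar (Kirchhoff sum).
    """
    n_out = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j]
                for i in range(len(voltages)))
            for j in range(n_out)]
```

With an identity conductance matrix the output currents simply reproduce the input voltages, which makes the model easy to sanity-check.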
-
Patent number: 11928600
Abstract: A method for sequence-to-sequence prediction using a neural network model includes generating an encoded representation based on an input sequence using an encoder of the neural network model and predicting an output sequence based on the encoded representation using a decoder of the neural network model. The neural network model includes a plurality of model parameters learned according to a machine learning process. At least one of the encoder or the decoder includes a branched attention layer. Each branch of the branched attention layer includes an interdependent scaling node configured to scale an intermediate representation of the branch by a learned scaling parameter. The learned scaling parameter depends on one or more other learned scaling parameters of one or more other interdependent scaling nodes of one or more other branches of the branched attention layer.
Type: Grant
Filed: January 30, 2018
Date of Patent: March 12, 2024
Assignee: Salesforce, Inc.
Inventors: Nitish Shirish Keskar, Karim Ahmed, Richard Socher
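One way to make each branch's scale depend on the other branches' learned parameters, as the abstract requires, is to normalize the parameters jointly. The softmax normalization below is an assumption chosen to illustrate the interdependence, not the patent's disclosed mechanism.

```python
import math

def scale_branches(branch_outputs, scaling_params):
    """Interdependent scaling of attention-branch outputs.

    Each branch's effective scale is its learned parameter passed through
    a softmax over all branches' parameters, so changing any one parameter
    changes every branch's scale (the interdependence in the abstract).
    """
    exps = [math.exp(p) for p in scaling_params]
    z = sum(exps)
    return [[e / z * v for v in branch]
            for e, branch in zip(exps, branch_outputs)]
```

With equal parameters every branch is scaled by 1/n, so two branches with equal parameters are each halved.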
-
Patent number: 11875260
Abstract: The architectural complexity of a neural network is reduced by selectively pruning channels. A cost metric for a convolution layer is determined. The cost metric indicates a resource cost per channel for the channels of the layer. Training the neural network includes, for channels of the layer, updating a channel-scaling coefficient based on the cost metric. The channel-scaling coefficient linearly scales the output of the channel. A constant channel is identified based on the channel-scaling coefficients. The neural network is updated by pruning the constant channel. Model weights are updated via a stochastic gradient descent of a training loss function evaluated on training data. The channel-scaling coefficients are updated via an iterative-thresholding algorithm that penalizes a batch normalization loss function based on the cost metric for the layer and a norm of the channel-scaling coefficients.
Type: Grant
Filed: February 13, 2018
Date of Patent: January 16, 2024
Assignee: Adobe Inc.
Inventors: Xin Lu, Zhe Lin, Jianbo Ye
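The iterative-thresholding idea in the abstract, a gradient step on the loss followed by a soft-threshold whose strength scales with the channel's resource cost, can be sketched like this. The exact threshold form and the zero-tolerance for "constant" channels are illustrative assumptions.

```python
def threshold_step(scales, grads, lr, penalty, costs):
    """One iterative-thresholding update of channel-scaling coefficients.

    scales: current per-channel scaling coefficients.
    grads:  gradient of the (batch-norm) loss w.r.t. each coefficient.
    costs:  per-channel resource cost from the layer's cost metric.
    Gradient step, then soft-threshold with cost-weighted strength, so
    expensive channels are pushed toward zero more aggressively.
    """
    out = []
    for s, g, c in zip(scales, grads, costs):
        s = s - lr * g                       # gradient step on the loss
        t = lr * penalty * c                 # cost-weighted threshold
        out.append(max(abs(s) - t, 0.0) * (1.0 if s >= 0 else -1.0))
    return out

def prunable(scales, tol=1e-8):
    """Channels whose scaling coefficient has collapsed to (near) zero."""
    return [i for i, s in enumerate(scales) if abs(s) <= tol]
```

A coefficient already smaller than its threshold is driven exactly to zero, and the corresponding channel becomes a pruning candidate.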
-
Patent number: 11822616
Abstract: Disclosed are a method and an apparatus for performing an operation of a convolutional layer in a convolutional neural network.
Type: Grant
Filed: November 28, 2018
Date of Patent: November 21, 2023
Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
Inventors: Delin Li, Kun Ling, Liang Chen, Jianjun Li
-
Patent number: 11755879
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for processing and storing inputs for use in a neural network. One of the methods includes receiving input data for storage in a memory system comprising a first set of memory blocks, the memory blocks having an associated order; passing the input data to a highest ordered memory block; for each memory block for which there is a lower ordered memory block: applying a filter function to data currently stored by the memory block to generate filtered data and passing the filtered data to a lower ordered memory block; and for each memory block: combining the data currently stored in the memory block with the data passed to the memory block to generate updated data, and storing the updated data in the memory block.
Type: Grant
Filed: February 11, 2019
Date of Patent: September 12, 2023
Assignee: DeepMind Technologies Limited
Inventors: Razvan Pascanu, William Clinton Dabney, Thomas Stepleton
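The ordered write the abstract describes, where the new input enters the highest block and filtered copies of each block's contents cascade downward before every block combines old and incoming data, can be sketched directly. `filt` and `combine` stand in for the patent's unspecified filter and combination functions.

```python
def store(blocks, x, filt, combine):
    """One write into an ordered set of memory blocks.

    blocks[0] is the highest-ordered block. The new input x goes to it;
    every block with a lower-ordered neighbor also passes a filtered copy
    of its current contents down one slot. Then each block combines what
    it held with what it received. Returns the updated blocks.
    """
    passed = [x]                         # input data -> highest block
    for b in blocks[:-1]:                # blocks that have a lower neighbor
        passed.append(filt(b))           # filter current data downward
    return [combine(b, p) for b, p in zip(blocks, passed)]
```

For instance, with `filt = lambda b: 0.5 * b` and `combine = lambda b, p: b + p`, repeated writes build a multi-timescale, exponentially decayed history of the inputs across the blocks.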
-
Patent number: 11727252
Abstract: The present disclosure relates to a neuromorphic neuron apparatus comprising an output generation block and at least one adaptation block. The apparatus has a current adaptation state variable corresponding to one or more previously generated signals. The output generation block is configured to use an activation function for generating a current output value based on the current adaptation state variable. The adaptation block is configured to repeatedly: compute an adaptation value of its current adaptation state variable using the current output value and a correction function; use the adaptation value to update the current adaptation state variable to obtain an updated adaptation state variable, the updated adaptation state variable becoming the current adaptation state variable; receive a current signal; and cause the output generation block to generate a current output value based on the current adaptation state variable and an input value obtained from the received signal.
Type: Grant
Filed: August 30, 2019
Date of Patent: August 15, 2023
Assignee: International Business Machines Corporation
Inventors: Stanislaw Andrzej Wozniak, Angeliki Pantazi
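The output/adaptation loop in the abstract can be caricatured in a few lines. Here the correction function is taken to be exponential decay plus a term driven by the previous output; both the tanh activation and this correction rule are assumptions for illustration, not the patent's formulas.

```python
import math

def neuron_step(state, signal, decay=0.9, beta=0.5):
    """One cycle of an adaptive neuromorphic neuron (illustrative model).

    state:  current adaptation state variable.
    signal: input value obtained from the received signal.
    The output depends on the input minus the adaptation state, and the
    state then tracks recent outputs, so a constant input habituates.
    Returns (updated_state, output).
    """
    out = math.tanh(signal - state)          # activation on adapted input
    new_state = decay * state + beta * out   # correction-function update
    return new_state, out
```

Feeding the same signal repeatedly produces shrinking outputs: the adaptation state rises toward the input, which is the qualitative behavior such adaptive neurons are built for.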
-
Patent number: 11599787
Abstract: A hardware-implemented multi-layer perceptron model calculation unit includes: a processor core to calculate output quantities of a neuron layer based on input quantities of an input vector; a memory that has, for each neuron layer, a respective configuration segment for storing configuration parameters and a respective data storage segment for storing the input quantities of the input vector and the one or more output quantities; and a DMA unit to successively instruct the processor core to: calculate respective neuron layers based on the configuration parameters of each configuration segment, calculate input quantities of the input vector defined thereby, and store respectively resulting output quantities in a data storage segment defined by the corresponding configuration parameters, the configuration parameters of configuration segments successively taken into account indicating a data storage region for the resulting output quantities corresponding to the data storage region for the input quantities for …
Type: Grant
Filed: September 4, 2017
Date of Patent: March 7, 2023
Assignee: ROBERT BOSCH GMBH
Inventors: Andre Guntoro, Heiner Markert
-
Patent number: 11568197
Abstract: Embodiments of the present disclosure provide methods, systems, apparatuses, and computer program products for generating, training, and utilizing a digital signal processor (DSP) to evaluate graph data that may include irregular grid graph data. An example DSP that may be generated, trained, and used may include a set of hidden layers, wherein each hidden layer of the set of hidden layers comprises a set of heterogeneous kernels (HKs), and wherein each HK of the set of HKs includes a corresponding set of filters selected from a constructed set of filters and associated with one or more initial Laplacian operators and corresponding initial filter parameters.
Type: Grant
Filed: August 2, 2018
Date of Patent: January 31, 2023
Assignee: OPTUM SERVICES (IRELAND) LIMITED
Inventors: Dong Fang, Peter Cogan
-
Patent number: 11531880
Abstract: A memory-based CNN includes an input module, a convolution layer circuit module, a pooling layer circuit module, an activation function module, a fully connected layer circuit module, a softmax function module, and an output module. Convolution kernel values or synapse weights are stored in the NOR FLASH units. The input module converts an input signal into the voltage signal required by the convolutional neural network; the convolutional layer circuit module convolves the voltage signal corresponding to the input signal with the convolution kernel values and transmits the result to the activation function module; the activation function module activates the signal; the pooling layer circuit module performs a pooling operation on the activated signal; the fully connected layer circuit module multiplies the pooled signal by the synapse weights to achieve classification; and the softmax function module normalizes the classification result into probability values as the output of the entire network.
Type: Grant
Filed: June 7, 2018
Date of Patent: December 20, 2022
Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Yi Li, Wenqian Pan, Xiangshui Miao
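The module chain in the abstract (input, convolution, activation, pooling, fully connected, softmax) is the standard CNN forward pass; the sketch below walks a 1-D toy signal through that exact order in software. The kernel, weights, and 1-D shapes are arbitrary examples, standing in for values the patent stores in NOR FLASH.

```python
import math

def conv1d(x, k):
    """Valid 1-D convolution (software analogue of the conv circuit)."""
    return [sum(x[i + j] * k[j] for j in range(len(k)))
            for i in range(len(x) - len(k) + 1)]

def relu(v):
    return [max(0.0, a) for a in v]          # activation function module

def pool2(v):
    return [max(v[i], v[i + 1])              # max pooling, window 2
            for i in range(0, len(v) - 1, 2)]

def dense(v, w):
    return [sum(a * b for a, b in zip(v, row)) for row in w]

def softmax(v):
    m = max(v)
    e = [math.exp(a - m) for a in v]
    return [a / sum(e) for a in e]           # normalize to probabilities

# Module order from the abstract: conv -> activation -> pool -> FC -> softmax.
x = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0]
probs = softmax(dense(pool2(relu(conv1d(x, [1.0, -1.0]))),
                      [[1.0, 0.0], [0.0, 1.0]]))
```

The final `probs` sum to 1, matching the abstract's description of the softmax module's role as the network's probability output.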
-
Patent number: 11526722
Abstract: An explanation of an object to be analyzed is facilitated with high accuracy and efficiency. A data analysis apparatus is disclosed which uses a first neural network configured with an input layer, an output layer, and two or more intermediate layers provided between the input layer and the output layer. Each intermediate layer performs a calculation by applying data from the layer of the previous stage and a first learning parameter to a first activation function, and outputs the calculation result to the layer of the subsequent stage. The data analysis apparatus includes a conversion section, a reallocation section, and an importance calculation section.
Type: Grant
Filed: August 30, 2018
Date of Patent: December 13, 2022
Assignee: HITACHI, LTD.
Inventors: Takuma Shibahara, Mayumi Suzuki, Ken Naono
-
Patent number: 11507797
Abstract: An information processing apparatus has an input device for receiving data, an operation unit constituting a convolutional neural network for processing the data, a storage area for storing data used by the operation unit, and an output device for outputting a result of the processing. The convolutional neural network is provided with a first intermediate layer for performing a first processing including a first inner product operation and a second intermediate layer for performing a second processing including a second inner product operation, and is configured so that the bit width of first filter data for the first inner product operation and the bit width of second filter data for the second inner product operation are different from each other.
Type: Grant
Filed: January 26, 2018
Date of Patent: November 22, 2022
Assignee: Hitachi, Ltd.
Inventors: Toru Motoya, Goichi Ono, Hidehiro Toyoda