Learning Method Patents (Class 706/25)
  • Patent number: 11734566
    Abstract: A hardware processor can receive sets of input data describing assets associated with an entity. The hardware processor can receive inputs responsive to queries of a user. The hardware processor can individually generate predictive models, each based on a respective set of input data. The hardware processor can calculate predicted outcomes for the user by applying each of the models to the inputs. The hardware processor can generate a user interface comprising the predicted outcomes for the user for each of the predictive models.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: August 22, 2023
    Assignee: Cangrade, Inc.
    Inventors: Steven Lehr, Gershon Goren, Liana Epstein
  • Patent number: 11733976
    Abstract: The objective of the present invention is to provide a software creating device and the like with which labor can be saved when creating software. A software creating device according to the present invention creates software for controlling equipment such as a certification photograph machine. The software creating device includes, for example: a storage part for storing a plurality of basic modules, each for executing one of a plurality of processes; and a software creating part that employs the basic modules to perform deep reinforcement learning and thereby create, by a combination of the basic modules, software for consecutively performing the plurality of processes in equipment such as a certification photograph machine.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: August 22, 2023
    Assignee: DAI NIPPON PRINTING CO., LTD.
    Inventor: Toshihiko Ochiai
  • Patent number: 11727252
    Abstract: The present disclosure relates to a neuromorphic neuron apparatus comprising an output generation block and at least one adaptation block. The apparatus has a current adaptation state variable corresponding to one or more previously generated signals. The output generation block is configured to use an activation function for generating a current output value based on the current adaptation state variable. The adaptation block is configured to repeatedly: compute an adaptation value of its current adaptation state variable using the current output value and a correction function; use the adaptation value to update the current adaptation state variable to obtain an updated adaptation state variable, the updated adaptation state variable becoming the current adaptation state variable; receive a current signal; and cause the output generation block to generate a current output value based on the current adaptation state variable and an input value obtained from the received signal.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: August 15, 2023
    Assignee: International Business Machines Corporation
    Inventors: Stanislaw Andrzej Wozniak, Angeliki Pantazi
  • Patent number: 11727264
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining data identifying (i) a first observation characterizing a first state of the environment, (ii) an action performed by the agent in response to the first observation, and (iii) an actual reward received as a result of the agent performing the action in response to the first observation; determining a pseudo-count for the first observation; determining, from the pseudo-count for the first observation, an exploration reward bonus that incentivizes the agent to explore the environment; generating a combined reward from the actual reward and the exploration reward bonus; and adjusting current values of the parameters of the neural network using the combined reward.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: August 15, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Marc Gendron-Bellemare, Remi Munos, Srinivasan Sriram
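The combined-reward step described above can be sketched in a few lines of Python. The inverse-square-root bonus form and the `beta` coefficient are illustrative assumptions, not taken from the claims:

```python
import math

def combined_reward(actual_reward, pseudo_count, beta=0.05):
    """Add an exploration bonus that shrinks as the pseudo-count grows,
    so rarely visited observations earn a larger combined reward.
    The 1/sqrt(count) form and beta are assumptions, not the patent's."""
    exploration_bonus = beta / math.sqrt(pseudo_count + 1e-8)
    return actual_reward + exploration_bonus
```

The combined reward would then drive an ordinary policy-gradient or Q-learning update of the network's parameters.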
  • Patent number: 11720400
    Abstract: A multi-layer serverless sizing stack may determine a compute sizing correction for a serverless function. The serverless sizing stack may analyze historical data to determine a base compute allocation and compute buffer range. The serverless sizing stack may traverse the compute buffer range in an iterative analysis to determine a compute size for the serverless function to support efficient computational operation when the serverless function is instantiated.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: August 8, 2023
    Assignee: Accenture Global Solutions Limited
    Inventors: Madhan Kumar Srinivasan, Samba Sivachari Rage, Kishore Kumar Gajula
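A rough sketch of the iterative traversal of the compute buffer range, assuming a caller-supplied cost metric and a linear scan (the patent specifies neither):

```python
def size_function(base_mb, buffer_range_mb, cost):
    """Traverse the compute buffer range above the base allocation and
    return the compute size with the lowest cost. `cost` stands in for
    whatever efficiency metric the sizing stack derives from history."""
    best = base_mb
    for extra in buffer_range_mb:
        candidate = base_mb + extra
        if cost(candidate) < cost(best):
            best = candidate
    return best
```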
  • Patent number: 11715000
    Abstract: Systems and methods are disclosed for inquiry-based deep learning. In one implementation, a first content segment is selected from a body of content. The content segment includes a first content element. The first content segment is compared to a second content segment to identify a content element present in the first content segment that is not present in the second content segment. Based on an identification of the content element present in the first content segment that is not present in the second content segment, the content element is stored in a session memory. A first question is generated based on the first content segment. The session memory is processed to compute an answer to the first question. An action is initiated based on the answer. Using deep learning, content segments can be encoded into memory. Incremental questioning can serve to focus various deep learning operations on certain content segments.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: August 1, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Fethiye Asli Celikyilmaz, Li Deng, Lihong Li, Chong Wang
  • Patent number: 11715003
    Abstract: An optimization apparatus calculates a first portion, among energy change caused by change in value of a neuron of a neuron group, caused by influence of another neuron of the neuron group, determines whether to allow updating the value, based on a sum of the first and second portions of the energy change, and repeats a process of updating or maintaining the value according to the determination. An arithmetic processing apparatus calculates the second portion caused by influence of a neuron not belonging to the neuron group and an initial value of the sum. A control apparatus transmits data for calculating the second portion and the initial value to the arithmetic processing apparatus, and the initial value and data for calculating the first portion to the optimization apparatus, and receives the initial value from the arithmetic processing apparatus, and a value of the neuron group from the optimization apparatus.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: August 1, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Sanroku Tsukamoto, Satoshi Matsubara, Hirotaka Tamura
  • Patent number: 11706499
    Abstract: A method and system for providing synchronized input feedback, comprising receiving an input event, encoding the input event in an output stream wherein the encoding of the input event is synchronized to a specific event and reproducing the output stream through an output device whereby the encoded input event in the reproduced output stream is imperceptible to the user.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: July 18, 2023
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Matthew Bennett
  • Patent number: 11704571
    Abstract: A method for pruning weights of an artificial neural network based on a learned threshold includes determining a pruning threshold for pruning a first set of pre-trained weights of multiple pre-trained weights based on a function of a classification loss and a regularization loss. Weights are pruned from the first set of pre-trained weights when a first value of the weight is less than the pruning threshold. A second set of pre-trained weights of the multiple pre-trained weights is fine-tuned or adjusted in response to a second value of each pre-trained weight in the second set of pre-trained weights being greater than the pruning threshold.
    Type: Grant
    Filed: October 9, 2020
    Date of Patent: July 18, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Kambiz Azarian Yazdi, Tijmen Pieter Frederik Blankevoort, Jin Won Lee, Yash Sanjay Bhalgat
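A minimal sketch of the pruning step. In the patent the threshold is learned from a classification and a regularization loss; here it is simply passed in, and the fine-tuning of surviving weights is omitted:

```python
import numpy as np

def prune_by_threshold(weights, threshold):
    """Zero out weights whose magnitude falls below the (learned)
    pruning threshold; return the pruned weights and the keep-mask.
    Surviving weights would then be fine-tuned in a later step."""
    mask = np.abs(weights) >= threshold
    return weights * mask, mask
```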
  • Patent number: 11704538
    Abstract: A data processing method and device are provided. The method includes: performing a forward calculation of a neural network on global data to obtain intermediate data for a reverse calculation of the neural network; storing the intermediate data in a buffer unit; reading the intermediate data from the buffer unit; and performing the reverse calculation of the neural network on the intermediate data to obtain a result of the reverse calculation. According to embodiments, in the reverse calculation of the neural network, the number of accesses to the global memory is reduced, thereby reducing the computational time cost and increasing the data processing speed.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: July 18, 2023
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Huanxin Zheng, Guibin Wang
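The buffering idea can be illustrated with a toy layer that caches its forward-pass intermediate so the backward pass rereads the buffer instead of global memory. The `y = x**2` computation is a stand-in for a real kernel:

```python
class BufferedLayer:
    """Cache the forward-pass intermediate locally so the reverse
    calculation reuses it rather than re-reading global memory."""

    def __init__(self):
        self._buffer = None

    def forward(self, x):
        self._buffer = x          # store intermediate data in the buffer unit
        return x * x

    def backward(self, grad_out):
        x = self._buffer          # read from the buffer, not global memory
        return grad_out * 2 * x   # d(x^2)/dx = 2x
```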
  • Patent number: 11704561
    Abstract: A method for realizing an artificial neural network via an electronic integrated circuit (FPGA), wherein artificial neurons grouped into different interlinked layers for the artificial neural network, where a functional description is created for each neuron of the artificial neural network, taking into account a specifiable starting weighting, a synthesis is performed for each neuron based on the associated functional description with the associated specified starting weighting, a network list is determined as the synthesis result, in which at least a base element and a starting configuration belonging to the base element are stored for each neuron, a base element is formed as a lookup table (LUT) unit and an associated dynamic configuration cell, in which a current configuration for the LUT unit or the base element is stored, and where the network list is implemented as a starting configuration of the artificial neural network in the electronic integrated circuit.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: July 18, 2023
    Assignee: SIEMENS AKTIENGESELLSCHAFT
    Inventors: Thomas Hinterstoisser, Martin Matschnig, Herbert Taucher
  • Patent number: 11705147
    Abstract: Systems, methods and computer-readable media are provided for speech enhancement using a hybrid neural network. An example process can include receiving, by a first neural network portion of the hybrid neural network, audio data and reference data, the audio data including speech data, noise data, and echo data; filtering, by the first neural network portion, a portion of the audio data based on adapted coefficients of the first neural network portion, the portion of the audio data including the noise data and/or echo data; based on the filtering, generating, by the first neural network portion, filtered audio data including the speech data and an unfiltered portion of the noise data and/or echo data; and based on the filtered audio data and the reference data, extracting, by a second neural network portion of the hybrid neural network, the speech data from the filtered audio data.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: July 18, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Erik Visser, Vahid Montazeri, Shuhua Zhang, Lae-Hoon Kim
  • Patent number: 11704570
    Abstract: A learning device includes a structure search unit that searches for a first learned model structure obtained by selecting search space information in accordance with a target constraint condition of target hardware for each of a plurality of convolution processing blocks included in a base model structure in a neural network model; a parameter search unit that searches for a learning parameter of the neural network model in accordance with the target constraint condition; and a pruning unit that deletes a unit of at least one of the plurality of convolution processing blocks in the first learned model structure based on the target constraint condition and generates a second learned model structure.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: July 18, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Akiyuki Tanizawa, Wataru Asano, Atsushi Yaguchi, Shuhei Nitta, Yukinobu Sakata
  • Patent number: 11699064
    Abstract: A neural network system executable on a processor. The neural network system, when executed on the processor, comprises a merged layer shareable between a first neural network and a second neural network. The merged layer is configured to receive input data from a prior layer of at least one of the first and second neural networks. The merged layer is configured to apply a superset of weights to the input data to generate intermediate feature data representative of at least one feature of the input data, the superset of weights being combined from a first set of weights associated with the first neural network and a second set of weights associated with the second neural network. The merged layer is also configured to output the intermediate feature data to at least one subsequent layer, the at least one subsequent layer serving the first and second neural networks.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: July 11, 2023
    Assignee: Arm Limited
    Inventors: Daren Croxford, Roberto Lopez Mendez
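A minimal sketch of the merged layer: the superset of weights is formed here by concatenating the two networks' output channels, and the intermediate feature data is routed back to each network. Real systems might also deduplicate weights the two networks share:

```python
import numpy as np

def merge_layers(w1, w2):
    """Combine two networks' weight sets into one superset so a single
    merged layer can serve both (concatenation along output channels)."""
    return np.concatenate([w1, w2], axis=0)   # shape (out1 + out2, in)

def merged_forward(x, w_super, n_out_first):
    """Apply the superset of weights once, then split the intermediate
    feature data between the first and second network's subsequent layers."""
    feat = w_super @ x
    return feat[:n_out_first], feat[n_out_first:]
```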
  • Patent number: 11698386
    Abstract: An encoder device for determining a kinematic value of the movement of a first object relative to a second object is provided, wherein the encoder device comprises a standard associated with the first object and at least one scanning unit associated with the second object for producing at least one scanning signal by detection of the standard and a control and evaluation unit that is configured to determine the kinematic value from the scanning signal. The control and evaluation unit is here further configured to determine the kinematic value by an evaluation of the scanning signal using a method of machine learning, with the evaluation being trained with a plurality of scanning signals and associated kinematic values.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: July 11, 2023
    Assignee: SICK AG
    Inventors: Simon Brugger, Christian Sellmer, David Hopp, Dominic Thomae
  • Patent number: 11694067
    Abstract: An operating method of a neuromorphic processor which processes data based on a neural network including a first layer including axons and a second layer including neurons includes receiving synaptic weights between the first layer and the second layer, decomposing the synaptic weights into presynaptic weights, a number of which is identical to a number of the axons, and postsynaptic weights, a number of which is identical to a number of the synaptic weights, and storing the presynaptic weights and the postsynaptic weights. A precision of each of the synaptic weights is a first number of bits, a precision of each of the presynaptic weights is a second number of bits, and a precision of each of the postsynaptic weights is a third number of bits. The third number of the bits is smaller than the first number of the bits.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: July 4, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-Joon Kim, Jinseok Kim, Taesu Kim
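One way to picture the decomposition: a full-precision per-axon presynaptic scale times a low-precision postsynaptic matrix, W ≈ diag(pre) · post. The max-abs scaling and rounding scheme here are illustrative, not the patent's exact rule:

```python
import numpy as np

def decompose_weights(W, post_bits=4):
    """Split synaptic weights into per-axon presynaptic scales (one per
    row/axon) and a low-precision postsynaptic matrix (one entry per
    synapse), so that pre[:, None] * post approximates W."""
    pre = np.max(np.abs(W), axis=1)        # one scale per axon
    pre[pre == 0] = 1.0                    # avoid division by zero
    levels = 2 ** (post_bits - 1) - 1      # e.g. 7 levels for 4 bits
    post = np.round(W / pre[:, None] * levels) / levels
    return pre, post
```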
  • Patent number: 11694073
    Abstract: A method and apparatus for generating a fixed point neural network are provided. The method includes selecting at least one layer of a neural network as an object layer, wherein the neural network includes a plurality of layers, each of the plurality of layers corresponding to a respective one of plurality of quantization parameters; forming a candidate parameter set including candidate parameter values with respect to a quantization parameter of the plurality of quantization parameters corresponding to the object layer; determining an update parameter value from among the candidate parameter values based on levels of network performance of the neural network, wherein each of the levels of network performance correspond to a respective one of the candidate parameter values; and updating the quantization parameter with respect to the object layer based on the update parameter value.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: July 4, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Han-young Yim, Do-yun Kim, Byeoung-su Kim, Nak-Woo Sung, Jong-han Lim, Sang-hyuck Ha
  • Patent number: 11694066
    Abstract: Embodiments herein describe techniques for interfacing a neural network application with a neural network accelerator using a library. The neural network application may execute on a host computing system while the neural network accelerator executes on a massively parallel hardware system, e.g., an FPGA. The library operates a pipeline for submitting the tasks received from the neural network application to the neural network accelerator. In one embodiment, the pipeline includes a pre-processing stage, an FPGA execution stage, and a post-processing stage, which each correspond to different threads. When receiving a task from the neural network application, the library generates a packet that includes the information required for the different stages in the pipeline to perform the tasks. Because the stages correspond to different threads, the library can process multiple packets in parallel, which can increase the utilization of the neural network accelerator on the hardware system.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: July 4, 2023
    Assignee: XILINX, INC.
    Inventors: Aaron Ng, Jindrich Zejda, Elliott Delaye, Xiao Teng, Sonal Santan, Soren T. Soe, Ashish Sirasao, Ehsan Ghasemi, Sean Settle
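The three-stage, thread-per-stage pipeline can be sketched with queues connecting the stages so packets overlap in flight. The stage bodies here are toy arithmetic standing in for the accelerator calls:

```python
import queue
import threading

def run_pipeline(tasks):
    """Pre-process, execute, and post-process stages, each on its own
    thread and connected by queues; None is the end-of-stream sentinel."""
    q1, q2, out = queue.Queue(), queue.Queue(), []

    def pre():
        for t in tasks:
            q1.put({"task": t, "prepared": t * 2})   # packet carries stage info
        q1.put(None)

    def execute():
        while (pkt := q1.get()) is not None:
            pkt["result"] = pkt["prepared"] + 1      # stand-in for FPGA stage
            q2.put(pkt)
        q2.put(None)

    def post():
        while (pkt := q2.get()) is not None:
            out.append(pkt["result"])

    threads = [threading.Thread(target=f) for f in (pre, execute, post)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out
```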
  • Patent number: 11687623
    Abstract: Systems, methods, apparatuses, and computer program products for providing an anti-piracy framework for Deep Neural Networks (DNN). A method may include receiving authorized raw input at a protective transform module. The method may also include receiving unauthorized raw input at a restrictive deep neural network. The method may further include processing the authorized raw input at the protective transform module to generate a processed input. In addition, the method may include feeding the processed input into the restrictive deep neural network. The method may also include generating a result based on the processed input and the unauthorized raw input. Further, the result may include a different learning performance between the authorized raw input and the unauthorized raw input.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: June 27, 2023
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Min Wu, Mingliang Chen
  • Patent number: 11687577
    Abstract: A machine learning system may be used to suggest clinical questions to ask during or after a patient appointment. A first encoder may encode first information and a second encoder may encode second information related to the current patient appointment. An aggregate encoding may be generated using the encoded first information and the encoded second information. The current patient appointment may be clustered with similar appointments based on the aggregate encoding. Outlier analysis may be performed to determine whether the appointment is an outlier and, if so, which features contribute the most to outlier status. The system may generate one or more questions to ask about the features that contribute the most to outlier status.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: June 27, 2023
    Assignee: DRCHRONO INC.
    Inventors: Daniel Kivatinos, Michael Nusimow, Martin Borgt, Soham Waychal
  • Patent number: 11681911
    Abstract: Methods for training a neural sequence-to-sequence (seq2seq) model. A processor receives the model and training data comprising a plurality of training source sequences and corresponding training target sequences, and generates corresponding predicted target sequences. Model parameters are updated based on a comparison of predicted target sequences to training target sequences to reduce or minimize both a local loss in the predicted target sequences and an expected loss of one or more global or semantic features or constraints between the predicted target sequences and the training target sequences given the training source sequences. Expected loss is based on global or semantic features or constraints of general target sequences given general source sequences.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: June 20, 2023
    Assignee: NAVER CORPORATION
    Inventors: Vu Cong Duy Hoang, Ioan Calapodescu, Marc Dymetman
  • Patent number: 11681778
    Abstract: An analysis data processing method for processing analysis data collected with an analyzing device for each of a plurality of samples, by applying an analytical technique using statistical machine learning to multidimensional analysis data formed by output values obtained from a plurality of channels of a multichannel detector provided in the analyzing device, the method including: acquiring a non-linear regression or non-linear discrimination function expressing analysis data obtained for known samples; calculating a contribution value of each of the output values obtained from the plurality of channels forming the analysis data of the known samples, to the acquired non-linear regression or non-linear discrimination function, based on a differential value of the non-linear regression function or non-linear discrimination function; and identifying one or more of the plurality of channels of the detector, which are to be used for processing analysis data obtained for an unknown sample, based on the contribution values.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: June 20, 2023
    Assignee: SHIMADZU CORPORATION
    Inventor: Akira Noda
  • Patent number: 11681915
    Abstract: A processor-implemented method of performing a convolution operation is provided. The method includes obtaining input feature map data and kernel data, determining the kernel data based on a number of input channels of the input feature map, a number of output channels of an output feature map, and a number of groups of the input feature map data and a number of groups of the kernel data related to the convolution operation, and performing the convolution operation based on the input feature map data and the determined kernel data.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: June 20, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Songyi Han, Seungwon Lee, Minkyoung Cho
  • Patent number: 11676024
    Abstract: Artificial neural network systems involve the receipt by a computing device of input data that defines a pattern to be recognized (such as faces, handwriting, and voices). The computing device may then decompose the input data into a first subband and a second subband, wherein the first and second subbands include different characterizing features of the pattern in the input data. The first and second subbands may then be fed into first and second neural networks being trained to recognize the pattern. Reductions in power expenditure, memory usage, and time taken, for example, allow resource-limited computing devices to perform functions they otherwise could not.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: June 13, 2023
    Assignee: SRI International
    Inventors: Sek Meng Chai, David Zhang, Mohamed Amer, Timothy J. Shields, Aswin Nadamuni Raghavan
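The subband decomposition can be illustrated with a one-level Haar transform on a 1-D signal, producing a low-frequency (coarse) and a high-frequency (detail) subband that would each feed their own network. Haar is an assumed choice of filter bank; the patent does not name one:

```python
import numpy as np

def split_subbands(signal):
    """Decompose a 1-D signal into low/high-frequency subbands with a
    one-level Haar transform; each subband carries different
    characterizing features of the input pattern."""
    even, odd = signal[0::2], signal[1::2]
    low = (even + odd) / np.sqrt(2)    # averages: coarse features
    high = (even - odd) / np.sqrt(2)   # differences: detail features
    return low, high
```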
  • Patent number: 11669742
    Abstract: Methods, systems, and computer-readable media for multi-model processing on resource-constrained devices. A resource-constrained device can determine, based on the battery life of the device's battery, whether to process input through a first model or a second model. The first model can be a gating model that is more energy efficient to execute, and the second model can be a main model that is more accurate than the gating model. Depending on the current battery life and/or other criteria, the system can process, through the gating model, sensor input that can record activity performed by a user of the resource-constrained device. If the gating model predicts an activity performed by the user that is recorded by the sensor data, the device can process the same or additional input through the main model. Overall power consumption can be reduced while maintaining a minimum accuracy relative to processing input only through the main model.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: June 6, 2023
    Assignee: Google LLC
    Inventors: Chun-Te Chu, Claire Jaja, Kara Vaillancourt, Oleg Veryovka
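The gating logic above reduces to a short routing function. The battery threshold and the plain-function "models" are illustrative assumptions:

```python
def classify(sensor_input, battery_level, gating_model, main_model,
             low_battery=0.2):
    """Route input through the cheap gating model first when battery is
    low; only invoke the expensive main model if the gate predicts
    activity. Otherwise, skip the costly inference entirely."""
    if battery_level >= low_battery:
        return main_model(sensor_input)      # enough power: use main model
    if gating_model(sensor_input):           # cheap screen on low battery
        return main_model(sensor_input)      # confirm with accurate model
    return None                              # no activity predicted: skip
```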
  • Patent number: 11657265
    Abstract: Described herein are systems and methods for training first and second neural network models. A system comprises a memory comprising instruction data representing a set of instructions and a processor configured to communicate with the memory and to execute the set of instructions. The set of instructions, when executed by the processor, cause the processor to set a weight in the second model based on a corresponding weight in the first model, train the second model on a first dataset, wherein the training comprises updating the weight in the second model, and adjust the corresponding weight in the first model based on the updated weight in the second model.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: May 23, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Binyam Gebre, Erik Bresch, Dimitrios Mavroeidis, Teun van den Heuvel, Ulf Grossekathöfer
  • Patent number: 11657285
    Abstract: Methods, systems and media for random semi-structured row-wise pruning of filters of a convolutional neural network are described. Rows of weights are pruned from kernels of filters of a convolutional layer of a convolutional neural network according to a pseudo-randomly-generated row pruning mask. The convolutional neural network is trained to perform a particular task using the pruned filters that include the rows of weights that have not been pruned from the kernels of filters. The process may be repeated multiple times, with the best-performing row pruning mask being selected for use in pruning row weights from kernel filters when the trained convolutional neural network is deployed to a processing system and used for an inference. Computation time may be decreased further with the use of multiple parallel hardware computation units of a processing system performing pipelined row-wise convolution.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: May 23, 2023
    Assignee: XFUSION DIGITAL TECHNOLOGIES CO., LTD.
    Inventors: Vanessa Courville, Mehdi Ahmadi, Mahdi Zolnouri
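A minimal sketch of the pseudo-random row mask. Applying one mask of length `kh` across all kernels is an illustrative simplification; the keep probability and seeding are also assumptions:

```python
import numpy as np

def row_prune(filters, keep_prob=0.5, seed=0):
    """Prune whole kernel rows with a pseudo-randomly generated mask.
    `filters` has shape (out_ch, in_ch, kh, kw); the same row mask of
    length kh is applied across every kernel."""
    rng = np.random.default_rng(seed)
    kh = filters.shape[2]
    mask = rng.random(kh) < keep_prob              # True = keep the row
    pruned = filters * mask[None, None, :, None]   # zero the pruned rows
    return pruned, mask
```

In practice several seeds would be tried and the best-performing mask retained, per the abstract.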
  • Patent number: 11657284
    Abstract: An electronic apparatus for compressing a neural network model may acquire training data pairs based on an original, trained neural network model and train a compressed neural network model compressed from the original, trained neural network model using the acquired training data pairs.
    Type: Grant
    Filed: May 6, 2020
    Date of Patent: May 23, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jaedeok Kim, Chiyoun Park, Youngchul Sohn, Inkwon Choi
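The training-pair idea is essentially distillation from the original model. A toy sketch, with plain functions standing in for the models and a one-parameter least-squares fit standing in for training the compressed model:

```python
def make_training_pairs(original_model, inputs):
    """Build (input, output) pairs by querying the original trained
    model; these pairs supervise the compressed model's training."""
    return [(x, original_model(x)) for x in inputs]

def fit_compressed(pairs):
    """Fit a one-parameter compressed model y = w * x to the pairs by
    least squares: a stand-in for full compressed-model training."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, y in pairs)
    return num / den
```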
  • Patent number: 11651228
    Abstract: Systems and methods related to dual-momentum gradient optimization with reduced memory requirements are described. An example method in a system comprising a gradient optimizer and a memory configured to store momentum values associated with a neural network model comprising L layers is described. The method includes retrieving from the memory a first set of momentum values and a second set of momentum values, corresponding to a layer of the neural network model, having a selected storage format. The method further includes converting the first set of momentum values to a third set of momentum values having a training format associated with the gradient optimizer and converting the second set of momentum values to a fourth set of momentum values having a training format associated with the gradient optimizer. The method further includes performing gradient optimization using the third set of momentum values and the fourth set of momentum values.
    Type: Grant
    Filed: April 17, 2020
    Date of Patent: May 16, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jinwen Xi, Bharadwaj Pudipeddi, Marc Tremblay
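The storage-format/training-format conversion can be illustrated with Adam's two moments standing in for the dual momenta, kept in float16 between steps and converted to float32 for the update. The concrete formats are assumptions; the patent speaks only of "selected" and "training" formats:

```python
import numpy as np

def adam_step(w, grad, m_stored, v_stored, lr=0.01, b1=0.9, b2=0.999):
    """One optimizer step: convert both momentum buffers from the compact
    storage format (float16) to the training format (float32), update,
    then convert back to the storage format to save memory."""
    m = m_stored.astype(np.float32)                 # storage -> training format
    v = v_stored.astype(np.float32)
    m = b1 * m + (1 - b1) * grad                    # first momentum update
    v = b2 * v + (1 - b2) * grad * grad             # second momentum update
    w = w - lr * m / (np.sqrt(v) + 1e-8)            # gradient optimization
    return w, m.astype(np.float16), v.astype(np.float16)  # back to storage
```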
  • Patent number: 11651226
    Abstract: A data processing system for training a neural network, the data processing system comprising: a first set of one or more processing units running one model of the neural network, a second set of one or more processing units running another model of the neural network, a data storage, and an interconnect between the first set of one or more processing units, the second set of processing units and the data storage, wherein the data storage is configured to provide over the interconnect, training data to the first set of one or more processing units and the second set of one more processing units, wherein each of the first and second set of processing units is configured to, when performing the training, evaluate loss for the respective training iteration including a measure of the dissimilarity between the output values calculated based on the different modes running on the first and second set of processing units, wherein the dissimilarity measure is weighted in the evaluation of the loss in accordance with a
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: May 16, 2023
    Assignee: GRAPHCORE LIMITED
    Inventors: Helen Byrne, Luke Benjamin Hudlass-Galley, Carlo Luschi
  • Patent number: 11645512
    Abstract: Memory layout and conversion are disclosed to improve neural network (NN) inference performance. For one example, a neural network (NN) selects a memory layout among a plurality of different memory layouts based on thresholds derived from performance simulations of the NN. The NN stores multi-dimensional NN kernel computation data using the selected memory layout during NN inference. The memory layouts to be selected can be a channel, height, width, and batches (CHWN) layout, a batches, height, width and channel (NHWC) layout, and a batches, channel, height and width (NCHW) layout. If the multi-dimensional NN kernel computation data is not in the selected memory layout, the NN transforms the multi-dimensional NN kernel computation data for the selected memory layout.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: May 9, 2023
    Assignee: BAIDU USA LLC
    Inventor: Min Guo
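The layout transform is an axis permutation. A minimal sketch starting from NCHW data, with the axis orders following the abbreviations in the abstract (N=batch, C=channel, H=height, W=width):

```python
import numpy as np

def to_layout(data_nchw, layout):
    """Transform NCHW kernel computation data into the selected memory
    layout by permuting axes; NCHW is the identity permutation."""
    perms = {"NCHW": (0, 1, 2, 3),
             "NHWC": (0, 2, 3, 1),
             "CHWN": (1, 2, 3, 0)}
    return np.transpose(data_nchw, perms[layout])
```

Which layout wins depends on the kernel's access pattern; the thresholds in the abstract would pick among these per-layer.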
  • Patent number: 11645509
    Abstract: Embodiments for training a neural network using sequential tasks are provided. A plurality of sequential tasks are received. For each task in the plurality of tasks, a copy of the neural network that includes a plurality of layers is generated. From the copy of the neural network, a task-specific neural network is generated by performing an architectural search on the plurality of layers in the copy of the neural network. The architectural search identifies a plurality of candidate choices in the layers of the task-specific neural network. Parameters in the task-specific neural network that correspond to the plurality of candidate choices and that maximize architectural weights at each layer are identified. The parameters are retrained and merged with the neural network. The neural network trained on the plurality of sequential tasks is a trained neural network.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: May 9, 2023
    Assignee: Salesforce.com, Inc.
    Inventors: Yingbo Zhou, Xilai Li, Caiming Xiong
  • Patent number: 11645537
    Abstract: Disclosed are a neural network training method, a neural network training device and an electronic device. The neural network training method includes: training a first neural network to be trained by using sample data; determining an indicator parameter of the first neural network in a current training process; determining an update manner corresponding to a preset condition if the indicator parameter meets the preset condition; and updating a parameter of a batch normalization layer in the first neural network based on the update manner. In this way, sparsing of a feature map output by a neural network is implemented, thereby reducing an amount of data to be transmitted and improving computation speed of a chip.
    Type: Grant
    Filed: January 19, 2020
    Date of Patent: May 9, 2023
    Assignee: Beijing Horizon Robotics Technology Research and Development Co., Ltd.
    Inventors: Zhichao Li, Yushu Gao, Yifeng Geng, Heng Luo
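The abstract above does not spell out the update manner, but a widely known way to sparsify feature maps through the batch-normalization layer is to zero out scale factors whose magnitude falls below a threshold (as in "network slimming"). A hypothetical NumPy sketch in that spirit, not necessarily the claimed update:

```python
import numpy as np

def sparsify_bn_scales(gamma: np.ndarray, threshold: float) -> np.ndarray:
    """Zero out batch-norm scale factors below a magnitude threshold,
    so the corresponding feature-map channels produce (near-)zero
    activations and less data must be moved on-chip."""
    pruned = gamma.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

gamma = np.array([0.9, 0.01, -0.002, 0.5])
print(sparsify_bn_scales(gamma, 0.05))  # [0.9 0.  0.  0.5]
```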
  • Patent number: 11645590
Abstract: Described is a system for learning and predicting key phrases. The system learns from a dataset of historical forecasting questions, their associated time-series data for a quantity of interest, and associated keyword sets. The system learns the optimal policy of actions to take given the associated keyword sets, and the optimal set of keywords that are predictive of the quantity of interest. Given a new forecasting question, the system extracts an initial keyword set from it, which is perturbed to generate an optimal predictive key-phrase set. Key-phrase time-series data are extracted for the optimal predictive key-phrase set and used to generate a forecast of future values of the quantity of interest. The forecast can be used for a variety of purposes, such as advertising online.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: May 9, 2023
    Assignee: HRL LABORATORIES, LLC
    Inventors: Victor Ardulov, Aruna Jammalamadaka, Tsai-Ching Lu
  • Patent number: 11645574
Abstract: A non-transitory, computer-readable recording medium stores therein a reinforcement learning program that uses a value function and causes a computer to execute a process comprising: estimating first coefficients of the value function, represented in a quadratic form of inputs at times earlier than a present time and outputs at the present time and the earlier times, the first coefficients being estimated based on the inputs at the earlier times, the outputs at the present time and the earlier times, and costs or rewards that correspond to the inputs at the earlier times; and determining second coefficients that define a control law, based on the value function that uses the estimated first coefficients, and determining input values at times after estimation of the first coefficients.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: May 9, 2023
    Assignees: FUJITSU LIMITED (Kawasaki, Japan), OKINAWA INSTITUTE OF SCIENCE AND TECHNOLOGY SCHOOL CORPORATION
    Inventors: Tomotake Sasaki, Eiji Uchibe, Kenji Doya, Hirokazu Anai, Hitoshi Yanami, Hidenao Iwane
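A value function in quadratic form, as in the abstract above, can have its coefficients estimated by least squares against observed costs or rewards. The sketch below is a generic illustration of that idea only; the feature construction and data are made up, not taken from the patent:

```python
import numpy as np

def fit_quadratic_value(features: np.ndarray, returns: np.ndarray) -> np.ndarray:
    """Least-squares estimate of the coefficients Theta of a value function
    V(z) ~= z^T Theta z, where z stacks past inputs and outputs and
    `returns` are the observed costs or rewards."""
    # Quadratic features: all pairwise products z_i * z_j for each sample.
    Z = np.stack([np.outer(z, z).ravel() for z in features])
    theta, *_ = np.linalg.lstsq(Z, returns, rcond=None)
    d = features.shape[1]
    return theta.reshape(d, d)

rng = np.random.default_rng(1)
true_theta = np.array([[2.0, 0.5], [0.5, 1.0]])
zs = rng.normal(size=(50, 2))
vals = np.array([z @ true_theta @ z for z in zs])
theta = fit_quadratic_value(zs, vals)
print(np.round(theta, 2))
```

Because the z_i z_j and z_j z_i feature columns coincide, `lstsq` returns the minimum-norm solution, which splits the off-diagonal mass symmetrically.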
  • Patent number: 11645524
    Abstract: A computer system and method for machine inductive learning on a graph is provided. In the inductive learning computational approach, an iterative approach is used for sampling a set of seed nodes and then considering their k-degree (hop) neighbors for aggregation and propagation. The approach is adapted to enhance privacy of edge weights by adding noise during a forward pass and a backward pass step of an inductive learning computational approach. Accordingly, it becomes more technically difficult for a malicious user to attempt to reverse engineer the edge weight information. Applicants were able to experimentally validate that acceptable privacy costs could be achieved in various embodiments described herein.
    Type: Grant
    Filed: May 9, 2020
    Date of Patent: May 9, 2023
    Assignee: ROYAL BANK OF CANADA
    Inventors: Nidhi Hegde, Gaurav Sharma, Facundo Sapienza
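The core idea above, adding noise to edge weights during aggregation so the true weights are harder to reverse-engineer, can be illustrated with a toy Gaussian-noise sketch. The noise mechanism and scale here are placeholders, not the patented calibration:

```python
import numpy as np

def noisy_aggregate(edge_weights: np.ndarray, neighbor_feats: np.ndarray,
                    sigma: float, rng: np.random.Generator) -> np.ndarray:
    """Weighted mean of neighbor features with Gaussian noise added to the
    edge weights, obscuring the exact weight values from an observer of
    the aggregated output."""
    w = edge_weights + rng.normal(0.0, sigma, size=edge_weights.shape)
    return w @ neighbor_feats / max(len(edge_weights), 1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))          # 3 neighbors, 4-dim features
w = np.array([0.5, 1.0, 0.2])            # true edge weights
print(noisy_aggregate(w, feats, sigma=0.1, rng=rng).shape)  # (4,)
```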
  • Patent number: 11636317
Abstract: Long-short term memory (LSTM) cells on spiking neuromorphic hardware are provided. In various embodiments, such systems comprise a spiking neurosynaptic core. The neurosynaptic core comprises a memory cell, an input gate operatively coupled to the memory cell and adapted to selectively admit an input to the memory cell, and an output gate operatively coupled to the memory cell and adapted to selectively release an output from the memory cell. The memory cell is adapted to maintain a value in the absence of input.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: April 25, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Rathinakumar Appuswamy, Michael Beyeler, Pallab Datta, Myron Flickner, Dharmendra S. Modha
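For reference, the gating structure the abstract maps onto neuromorphic hardware is the classical LSTM cell: input, forget, and output gates around a memory value. A plain NumPy step using the standard equations (not the spiking implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell. The gates selectively admit
    (i), retain (f), and release (o) the memory cell's value."""
    n = h.shape[0]
    z = W @ x + U @ h + b           # stacked pre-activations, shape (4n,)
    i = sigmoid(z[:n])              # input gate
    f = sigmoid(z[n:2 * n])         # forget gate
    o = sigmoid(z[2 * n:3 * n])     # output gate
    g = np.tanh(z[3 * n:])          # candidate value
    c_new = f * c + i * g           # memory maintained when f ~ 1, i ~ 0
    h_new = o * np.tanh(c_new)      # output gate releases the value
    return h_new, c_new

rng = np.random.default_rng(0)
n, m = 4, 3                         # hidden size, input size
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=m), h, c,
                 rng.normal(size=(4 * n, m)), rng.normal(size=(4 * n, n)),
                 np.zeros(4 * n))
print(h.shape, c.shape)  # (4,) (4,)
```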
  • Patent number: 11636285
Abstract: Examples of systems and methods described herein provide for the processing of image codes (e.g., a binary embedding) at a memory die. Such image codes may be generated by various endpoint computing devices, such as Internet of Things (IoT) computing devices. Such devices can generate a Hamming processing command, having an image code of the image, to compare that representation of the image to other images (e.g., in an image dataset) to identify a match or a set of neural network results. Advantageously, examples described herein may be used in neural networks to facilitate the processing of datasets, so as to increase the rate and amount of processing of such datasets. For example, comparisons of image codes can be performed on a memory die itself, such as a memory die of a NAND memory device.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: April 25, 2023
    Assignee: Micron Technology, Inc.
    Inventors: David Hulton, Jeremy Chritz, Tamara Schmitz
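The Hamming comparison of binary image codes that the abstract moves onto the memory die is, in software terms, an XOR followed by a popcount. A minimal sketch:

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bits that differ between two binary image codes
    (XOR, then popcount)."""
    return bin(a ^ b).count("1")

def best_match(query: int, dataset: list) -> int:
    """Index of the stored code with the smallest Hamming distance."""
    return min(range(len(dataset)),
               key=lambda i: hamming_distance(query, dataset[i]))

codes = [0b1010_1100, 0b1111_0000, 0b1010_1111]
print(hamming_distance(codes[0], codes[2]))  # 2
print(best_match(0b1010_1111, codes))        # 2
```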
  • Patent number: 11628562
    Abstract: A method for producing a strategy for a robot. The method includes the following steps: initializing the strategy and an episode length; repeated execution of the loop including the following steps: producing a plurality of further strategies as a function of the strategy; applying the plurality of the further strategies for the length of the episode length; ascertaining respectively a cumulative reward, which is obtained in the application of the respective further strategy; updating the strategy as a function of a second plurality of the further strategies that obtained the greatest cumulative rewards. After each execution of the loop, the episode length is increased. A computer program, a device for carrying out the method, and a machine-readable memory element on which the computer program is stored, are also described.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: April 18, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Frank Hutter, Lior Fuks, Marius Lindauer, Noor Awad
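The loop in the abstract above (perturb the current strategy into a population of further strategies, roll each out for the current episode length, update from the candidates with the greatest cumulative rewards, then increase the episode length) can be sketched with a toy reward function. Everything below is illustrative; the real method runs robot episodes:

```python
import random

def reward(policy, episode_len):
    """Toy stand-in for running a robot episode: the closer each action
    parameter is to 1.0, the higher the cumulative reward."""
    return -sum(abs(p - 1.0) for p in policy) * episode_len

def train(n_loops=30, pop=16, elite=4, episode_len=5, seed=0):
    rng = random.Random(seed)
    policy = [0.0, 0.0]                 # initialized strategy
    for _ in range(n_loops):
        # produce a plurality of further strategies from the current one
        candidates = [[p + rng.gauss(0, 0.1) for p in policy]
                      for _ in range(pop)]
        # keep the candidates that obtained the greatest cumulative rewards
        best = sorted(candidates, key=lambda c: reward(c, episode_len),
                      reverse=True)[:elite]
        policy = [sum(col) / elite for col in zip(*best)]
        episode_len += 1                # episode length grows after each loop
    return policy

print(train())
```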
  • Patent number: 11630990
Abstract: The present disclosure provides systems, methods, and computer-readable media for optimizing the neural architecture search for the automated machine learning process. In one aspect, a neural architecture search method includes: selecting a neural architecture for training as part of an automated machine learning process; collecting statistical parameters on individual nodes of the neural architecture during the training; determining, based on the statistical parameters, active nodes of the neural architecture to form a candidate neural architecture; and validating the candidate neural architecture to produce a trained neural architecture to be used in implementing an application or a service.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: April 18, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Abhishek Singh, Debojyoti Dutta
  • Patent number: 11625606
    Abstract: A neural processing system includes a first frontend module, a second frontend module, a first backend module, and a second backend module. The first frontend module executes a feature extraction operation using a first feature map and a first weight, and outputs a first operation result and a second operation result. The second frontend module executes the feature extraction operation using a second feature map and a second weight, and outputs a third operation result and a fourth operation result. The first backend module receives an input of the first operation result provided from the first frontend module and the fourth operation result provided from the second frontend module via a second bridge to sum up the first operation result and the fourth operation result.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: April 11, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jin Ook Song, Jun Seok Park, Yun Kyo Cho
  • Patent number: 11625607
Abstract: A method of pruning a convolutional neural network, comprising at least one of: determining a number of channels (N) between a network input and a network output; constructing N lookup tables, each lookup table matched to a respective channel; and pruning filters in the convolutional neural network to create a shortcut between the network input and the network output based on the N lookup tables.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: April 11, 2023
    Assignee: BLACK SESAME TECHNOLOGIES INC.
    Inventors: Zuoguan Wang, Yilin Song, Qun Gu
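The abstract above does not describe how the lookup tables rank filters, but a common criterion for pruning convolutional filters is the L1 norm of their weights. A hypothetical sketch using that criterion, offered only as an illustration of filter pruning in general:

```python
import numpy as np

def prune_filters(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the filters with the largest L1 norms.
    weights: (out_channels, in_channels, kH, kW) conv kernel."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    k = max(1, int(weights.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(norms)[::-1][:k])  # indices of retained filters
    return weights[keep]

w = np.random.default_rng(0).normal(size=(8, 3, 3, 3))
print(prune_filters(w, 0.5).shape)  # (4, 3, 3, 3)
```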
  • Patent number: 11620554
    Abstract: An electronic clinical decision support (CDS) device (10) employs a trained CDS algorithm (30) that operates on values of a set of covariates to output a prediction of a medical condition. The CDS algorithm was trained on a training data set (22). The CDS device includes a computer (12) that is programmed to provide a user interface (62) for completing clinical survey questions using the display and the one or more user input devices. Marginal probability distributions (42) for the covariates of the set of covariates are generated from the completed clinical survey questions. The trained CDS algorithm is adjusted for covariate shift using the marginal probability distributions. A prediction of the medical condition is generated for a medical subject using the trained CDS algorithm adjusted for covariate shift (50) operating on values for the medical subject of the covariates of the set of covariates.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: April 4, 2023
    Assignee: Koninklijke Philips N.V.
    Inventors: Bryan Conroy, Cristhian Mauricio Potes Blandon, Minnan Xu
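Adjusting a trained model for covariate shift from marginal distributions typically means importance weighting by the density ratio p_target(x) / p_train(x). A discrete-covariate sketch of that general idea, not the CDS device's exact adjustment:

```python
import numpy as np

def covariate_shift_weights(train_probs: np.ndarray, target_probs: np.ndarray,
                            values: np.ndarray) -> np.ndarray:
    """Importance weights p_target(x) / p_train(x) for a discrete covariate,
    from its marginal distribution in the training data vs. the surveyed
    target population."""
    return target_probs[values] / train_probs[values]

# Covariate with 3 levels; marginals from the training set vs. the survey.
p_train = np.array([0.5, 0.3, 0.2])
p_target = np.array([0.2, 0.3, 0.5])
x = np.array([0, 1, 2, 2])
print(covariate_shift_weights(p_train, p_target, x))  # [0.4 1.  2.5 2.5]
```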
  • Patent number: 11620505
Abstract: A neuromorphic package device includes a systolic array package and a controller. The systolic array package includes neuromorphic chips arranged in a systolic array along a first direction and a second direction. The controller communicates with a host and controls the neuromorphic chips. Each of the neuromorphic chips sequentially transfers weights of a plurality of layers of a neural network system in the first direction to store the weights. A first neuromorphic chip performs a calculation based on weights stored therein and input data received in the second direction, and provides a result of the calculation to at least one of a second neuromorphic chip and a third neuromorphic chip which are adjacent to the first neuromorphic chip. The at least one of the second and third neuromorphic chips performs a calculation based on the provided result of the calculation and weights stored therein.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: April 4, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jaehun Jang, Hongrak Son, Changkyu Seol, Pilsang Yoon, Junghyun Hong
  • Patent number: 11620504
    Abstract: A neuromorphic device includes a memory cell array that includes first memory cells corresponding to a first address and storing first weights and second memory cells corresponding to a second address and storing second weights, and a neuron circuit that includes an integrator summing first read signals from the first memory cells and an activation circuit outputting a first activation signal based on a first sum signal of the first read signals output from the integrator.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: April 4, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hak-Soo Yu, Nam Sung Kim, Kyomin Sohn, Jaeyoun Youn
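The neuron circuit described above, an integrator summing read signals from one row of memory cells followed by an activation circuit, reduces in software to a thresholded sum. A minimal sketch:

```python
import numpy as np

def neuron_output(read_signals: np.ndarray, threshold: float) -> int:
    """Integrator sums the read signals from one address's memory cells;
    the activation circuit fires (1) if the sum crosses the threshold."""
    return int(read_signals.sum() >= threshold)

print(neuron_output(np.array([0.2, 0.5, 0.4]), threshold=1.0))  # 1
print(neuron_output(np.array([0.2, 0.1]), threshold=1.0))       # 0
```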
  • Patent number: 11620830
    Abstract: Autonomous vehicles may utilize neural networks for image classification in order to navigate infrastructures and foreign environments, using context dependent transfer learning adaptation. Techniques include receiving a transferable output layer from the infrastructure, which is a model suitable for the infrastructure and the local environment. Sensor data from the autonomous vehicle may then be passed through the neural network and classified. The classified data can map to an output of the transferable output layer, allowing the autonomous vehicle to obtain particular outputs for particular context dependent inputs, without requiring further parameters within the neural network.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: April 4, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Omar Makke, Oleg Yurievitch Gusikhin
  • Patent number: 11615317
    Abstract: A system and method for operating a neural network. In some embodiments, the neural network includes a variational autoencoder, and the training of the neural network includes training the variational autoencoder with a plurality of samples of a first random variable; and a plurality of samples of a second random variable, the plurality of samples of the first random variable and the plurality of samples of the second random variable being unpaired, the training of the neural network including updating weights in the neural network based on a first loss function, the first loss function being based on a measure of deviation from consistency between: a conditional generation path from the first random variable to the second random variable, and a conditional generation path from the second random variable to the first random variable.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: March 28, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yoo Jin Choi, Jongha Ryu, Mostafa El-Khamy, Jungwon Lee, Young-Han Kim
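The loss described above, a measure of deviation from consistency between the two conditional generation paths, can be illustrated as a round-trip (cycle-consistency-style) penalty. The two functions below are hypothetical stand-ins for the learned paths:

```python
import numpy as np

def consistency_loss(x, y, f, g):
    """Deviation-from-consistency between the two conditional generation
    paths: x -> f(x) should invert through g, and y -> g(y) through f.
    Here measured as the mean squared error of each round trip."""
    return float(np.mean((g(f(x)) - x) ** 2) + np.mean((f(g(y)) - y) ** 2))

f = lambda x: 2 * x        # hypothetical path: first variable -> second
g = lambda y: y / 2        # hypothetical path: second variable -> first
x = np.array([1.0, 2.0])
y = np.array([2.0, 4.0])
print(consistency_loss(x, y, f, g))  # 0.0 -- the paths are consistent
```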
  • Patent number: 11609792
    Abstract: The present disclosure relates to a method for allocating resources of an accelerator to two or more neural networks for execution. The two or more neural networks may include a first neural network and a second neural network. The method comprises analyzing workloads of the first neural network and the second neural network, wherein the first neural network and second neural network each includes multiple computational layers, evaluating computational resources of the accelerator for executing each computational layer of the first and second neural networks, and scheduling computational resources of the accelerator to execute one computational layer of the multiple computation layers of the first neural network and to execute one or more computational layers of the multiple computational layers of the second neural network.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: March 21, 2023
    Assignee: Alibaba Group Holding Limited
    Inventors: Lingjie Xu, Wei Wei
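The abstract above leaves the scheduling policy open. One naive way to co-schedule one layer of the first network with one or more layers of the second under a shared resource budget is a greedy fill; the costs, capacity, and policy below are all assumptions for illustration:

```python
def schedule(layers_a, layers_b, capacity):
    """Greedy co-scheduling sketch: pair each layer of network A with as
    many consecutive layers of network B as fit the remaining capacity.
    layers_* are per-layer resource costs; returns (a_idx, b_idxs) pairs."""
    plan, j = [], 0
    for i, cost_a in enumerate(layers_a):
        budget = capacity - cost_a
        b_idxs = []
        while j < len(layers_b) and layers_b[j] <= budget:
            budget -= layers_b[j]
            b_idxs.append(j)
            j += 1
        plan.append((i, b_idxs))
    return plan

print(schedule([4, 6, 2], [1, 2, 3, 2], capacity=8))
# [(0, [0, 1]), (1, []), (2, [2, 3])]
```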
  • Patent number: 11604941
Abstract: A method of training an action selection neural network to perform a demonstrated task using a supervised learning technique. The action selection neural network is configured to receive demonstration data comprising actions to perform the task and rewards received for performing the actions. The action selection neural network has auxiliary prediction task neural networks on one or more of its intermediate outputs. The action selection neural network is trained using multiple combined losses, concurrently with the auxiliary prediction task neural networks.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: March 14, 2023
    Assignee: DeepMind Technologies Limited
    Inventor: Todd Andrew Hester
  • Patent number: 11604976
    Abstract: In a hardware-implemented approach for operating a neural network system, a neural network system is provided comprising a controller, a memory, and an interface connecting the controller to the memory, where the controller comprises a processing unit configured to execute a neural network and the memory comprises a neuromorphic memory device with a crossbar array structure that includes input lines and output lines interconnected at junctions via electronic devices. The electronic devices of the neuromorphic memory device are programmed to incrementally change states by coupling write signals into the input lines based on: write instructions received from the controller and write vectors generated by the interface. Data is retrieved from the neuromorphic memory device, according to a multiply-accumulate operation, by coupling read signals into one or more of the input lines of the neuromorphic memory device based on: read instructions from the controller and read vectors generated by the interface.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: March 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Thomas Bohnstingl, Angeliki Pantazi, Stanislaw Andrzej Wozniak, Evangelos Stavros Eleftheriou