Search Patents
  • Publication number: 20220383623
    Abstract: Disclosed are a method and apparatus for training a neural network model to increase its performance, the method including receiving input data and target data; performing pooling, by a neural network model, on a feature map extracted from the input data based on a probability for each class of the feature map; generating output data by inputting the input data to the neural network model; determining a loss based on a comparison of the output data and the target data and an auxiliary loss of the pooling; and training the neural network model based on the loss.
    Type: Application
    Filed: March 16, 2022
    Publication date: December 1, 2022
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Jaewook YOO, Dokwan OH, Dasol HAN
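The abstract above describes pooling a feature map by per-class probabilities and adding an auxiliary pooling loss to the training objective. A minimal NumPy sketch of that idea (not the patented method itself; the entropy-based auxiliary loss and all names here are illustrative assumptions):

```python
import numpy as np

def prob_pool(feature_map, class_logits):
    """Pool a (H*W, D) feature map using per-class probabilities at each
    spatial location (illustrative sketch of probability-based pooling)."""
    # Softmax over classes gives a probability for each class at each location.
    e = np.exp(class_logits - class_logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)                     # (H*W, C)
    # Weight each location's features by its class probability, then normalize.
    pooled = probs.T @ feature_map / probs.sum(axis=0)[:, None]  # (C, D)
    # Auxiliary loss (assumed form): penalize high-entropy pooling weights.
    aux_loss = -np.mean(np.sum(probs * np.log(probs + 1e-9), axis=1))
    return pooled, aux_loss
```

In training, `aux_loss` would be added to the main task loss before backpropagation, matching the abstract's "loss ... and an auxiliary loss of the pooling".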
  • Publication number: 20230292063
    Abstract: A hearing device includes a deep/recurrent neural network trained to jointly perform sound enhancement and feedback cancellation. During training, the neural network is connected between a simulated input and a simulated output of the hearing device and is operable to change a response affecting the simulated output. The network is trained by applying the simulated input to it while a feedback path response is applied between the simulated input and the simulated output. The network is trained to reduce an error between the simulated output and a reference audio signal, and is then used for sound enhancement in the device.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 14, 2023
    Inventors: Majid Mirbagheri, Henning Schepker
  • Patent number: 10699195
    Abstract: Systems and methods are disclosed herein for ensuring a safe mutation of a neural network. A processor determines a threshold value representing a limit on the amount of divergence in response allowed for the neural network. The processor identifies a set of weights for the neural network, beginning with an initial set of weights. The processor trains the neural network by repeating steps including determining a safe mutation, representing a perturbation that results in a response of the neural network that is within the threshold divergence, and modifying the set of weights of the neural network in accordance with the safe mutation.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: June 30, 2020
    Assignee: Uber Technologies, Inc.
    Inventors: Joel Anthony Lehman, Kenneth Owen Stanley, Jeffrey Michael Clune
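One simple way to realize the safe-mutation idea above is to rescale a random weight perturbation until the network's response on reference inputs stays within the divergence threshold. A hedged sketch (the halving schedule, MSE divergence measure, and all names are assumptions, not the patent's specific procedure):

```python
import numpy as np

def safe_mutate(weights, forward, inputs, threshold, rng, sigma=0.1):
    """Rescale a random perturbation until the network's response on
    reference inputs diverges less than `threshold` (illustrative sketch)."""
    base = forward(weights, inputs)
    delta = rng.normal(scale=sigma, size=weights.shape)
    scale = 1.0
    for _ in range(20):                          # halve the mutation until safe
        candidate = weights + scale * delta
        divergence = np.mean((forward(candidate, inputs) - base) ** 2)
        if divergence <= threshold:
            return candidate
        scale *= 0.5
    return weights                               # give up: keep original weights
```

For example, with a linear network `forward = lambda w, x: x @ w`, the returned weights are guaranteed to keep the mean-squared response change within the threshold.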
  • Patent number: 11704575
    Abstract: Neural networks can be implemented with DNA strand displacement (DSD) circuits. The neural networks are designed and trained in silico, taking into account the behavior of DSD circuits. Oligonucleotides comprising DSD circuits are synthesized and combined to form a neural network. In an implementation, the neural network may be a binary neural network in which the output from each neuron is a binary value and the weight of each neuron either maintains the incoming binary value or flips the binary value. Inputs to the neural network are one or more oligonucleotides, such as synthetic oligonucleotides containing digital data or natural oligonucleotides such as mRNA. Outputs from the neural networks may be oligonucleotides that are read by direct sequencing or oligonucleotides that generate signals, such as by release of fluorescent reporters.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: July 18, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Karin Strauss, Luis Ceze, Johannes Staffan Anders Linder
  • Publication number: 20180068219
    Abstract: Certain embodiments involve generating or optimizing a neural network for risk assessment. The neural network can be generated using a relationship between various predictor variables and an outcome (e.g., a condition's presence or absence). The neural network can be used to determine a relationship between each of the predictor variables and a risk indicator. The neural network can be optimized by iteratively adjusting the neural network such that a monotonic relationship exists between each of the predictor variables and the risk indicator. The optimized neural network can be used both for accurately determining risk indicators using predictor variables and determining adverse action codes for the predictor variables, which indicate an effect or an amount of impact that a given predictor variable has on the risk indicator. The neural network can be used to generate adverse action codes upon which consumer behavior can be modified to improve the risk indicator score.
    Type: Application
    Filed: March 25, 2016
    Publication date: March 8, 2018
    Inventors: Matthew Turner, Michael McBurnett
  • Publication number: 20230368866
    Abstract: This disclosure describes methods, non-transitory computer readable media, and systems that can configure a field programmable gate array (FPGA) or other configurable processor to implement a neural network and train the neural network using the configurable processor by modifying certain network parameters of a subset of the neural network’s layers. For instance, the disclosed systems can configure a configurable processor on a computing device to implement a base-calling-neural network (or other neural network) that includes different sets of layers. Based on a set of images of oligonucleotide clusters or other datasets, the neural network generates predicted classes, such as by generating nucleobase calls for oligonucleotide clusters. Based on the predicted classes, the disclosed systems subsequently modify certain network parameters for a subset of the neural network’s layers, such as by modifying parameters for a set of top layers.
    Type: Application
    Filed: May 10, 2023
    Publication date: November 16, 2023
    Inventor: Gavin Derek Parnaby
  • Publication number: 20180225564
    Abstract: A neural network that may include multiple layers of neural cells; wherein a certain neural cell of a certain layer of neural cells may include a first plurality of one-bit inputs; an adder and leaky integrator unit; and an activation function circuit that has a one-bit output; wherein the first plurality of one-bit inputs are coupled to a first plurality of one-bit outputs of neural cells of a layer that precedes the certain layer; wherein the adder and leaky integrator unit is configured to calculate a leaky integral of a weighted sum of a number of one-bit pulses that were received, during a time window, by the first plurality of one-bit inputs; and wherein the activation function circuit is configured to apply an activation function on the leaky integral to provide a one-bit output of the certain neural cell.
    Type: Application
    Filed: January 23, 2018
    Publication date: August 9, 2018
    Inventor: Moshe Haiut
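The one-bit cell described above can be modeled in a few lines: leaky-integrate the weighted sum of one-bit pulses over a time window, then threshold to a one-bit output. A hedged software sketch of the hardware behavior (the leak factor, threshold, and names are illustrative assumptions):

```python
def leaky_cell(pulse_window, weights, leak=0.9, threshold=1.0):
    """One-bit neural cell: leaky-integrate a weighted sum of one-bit
    pulses received over a time window, then threshold (sketch)."""
    v = 0.0
    for pulses in pulse_window:                  # pulses: tuple of 0/1 inputs
        weighted = sum(w * p for w, p in zip(weights, pulses))
        v = leak * v + weighted                  # leaky integration step
    return 1 if v >= threshold else 0            # one-bit activation output
```

For example, two active pulses on both inputs with weights 0.6 accumulate past the threshold and fire, while an all-zero window does not.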
  • Patent number: 10204118
    Abstract: Embodiments of the invention relate to mapping neural dynamics of a neural model on to a lookup table. One embodiment comprises defining a phase plane for a neural model. The phase plane represents neural dynamics of the neural model. The phase plane is coarsely sampled to obtain state transition information for multiple neuronal states. The state transition information is mapped on to a lookup table.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: February 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Paul A. Merolla, Dharmendra S. Modha
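The lookup-table idea above — coarsely sample the phase plane and store the state transition for each sampled state — can be sketched directly. The toy dynamics below stand in for a real neuron model; the grid resolution and names are assumptions:

```python
import numpy as np

def build_lut(step, v_grid, u_grid):
    """Coarsely sample a neuron model's phase plane (v, u) and map each
    sampled state to its successor state in a lookup table (sketch)."""
    lut = {}
    for i, v in enumerate(v_grid):
        for j, u in enumerate(u_grid):
            lut[(i, j)] = step(v, u)             # grid cell -> next state
    return lut

# Toy leaky dynamics standing in for the real neuron model's phase plane.
step = lambda v, u: (0.9 * v + 0.1 * u, 0.95 * u)
lut = build_lut(step, np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
```

At run time, a simulator would quantize the current state to the nearest grid cell and read the next state from `lut` instead of evaluating the model equations.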
  • Patent number: 11010664
    Abstract: Systems, methods, devices, and other techniques are disclosed for using an augmented neural network system to generate a sequence of outputs from a sequence of inputs. An augmented neural network system can include a controller neural network, a hierarchical external memory, and a memory access subsystem. The controller neural network receives a neural network input at each of a series of time steps and processes the neural network input to generate a memory key for the time step. The external memory includes a set of memory nodes arranged as a binary tree. To provide an interface between the controller neural network and the external memory, the system includes a memory access subsystem that is configured to, for each of the series of time steps, perform one or more operations to generate a respective output for the time step. The capacity of the neural network system to account for long-range dependencies in input sequences may be extended.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: May 18, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Karol Piotr Kurach, Marcin Andrychowicz
  • Patent number: 9400922
    Abstract: The present invention overcomes the limitations of the prior art by performing facial landmark localization in a coarse-to-fine manner with a cascade of neural network levels, and enforcing geometric constraints for each of the neural network levels. In one approach, the neural network levels may be implemented with deep convolutional neural networks. One aspect concerns a system for localizing landmarks on face images. The system includes an input for receiving a face image, and an output for presenting landmarks identified by the system. Neural network levels are coupled in a cascade from the input to the output for the system. Each neural network level produces an estimate of landmarks that is more refined than the estimate of the previous neural network level.
    Type: Grant
    Filed: May 29, 2014
    Date of Patent: July 26, 2016
    Assignee: Beijing Kuangshi Technology Co., Ltd.
    Inventors: Erjin Zhou, Haoqiang Fan, Zhimin Cao, Yuning Jiang, Qi Yin
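The cascade structure above — each level refining the previous level's landmark estimate — reduces to a simple loop. A hedged sketch where the `levels` callables stand in for trained per-level networks (all names are illustrative assumptions):

```python
import numpy as np

def cascade_localize(image_feat, levels, init):
    """Coarse-to-fine landmark refinement: each level predicts an update
    to the previous level's landmark estimate (sketch)."""
    landmarks = np.asarray(init, dtype=float)
    for level in levels:
        # Each level refines the current estimate given image features.
        landmarks = landmarks + level(image_feat, landmarks)
    return landmarks
```

With stand-in levels that each close half the remaining gap to a target, three cascade stages converge most of the way from the initial estimate.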
  • Patent number: 11135268
    Abstract: The present disclosure provides compositions, kits, and methods of promoting neural growth and/or neural survival using IL-17c. The compositions, kits, and methods can be used to promote neural growth and/or neural survival in a variety of conditions where such growth and survival is beneficial.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: October 5, 2021
    Assignees: Fred Hutchinson Cancer Research Center, University of Washington
    Inventors: Lawrence Corey, Jia Zhu, Tao Peng
  • Patent number: 10922607
    Abstract: In one embodiment, a processor is to store a membrane potential of a neural unit of a neural network; and calculate, at a particular time-step of the neural network, a change to the membrane potential of the neural unit occurring over multiple time-steps that have elapsed since the last time-step at which the membrane potential was updated, wherein each of the multiple time-steps that have elapsed since the last time-step is associated with at least one input to the neural unit that affects the membrane potential of the neural unit.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: February 16, 2021
    Assignee: Intel Corporation
    Inventors: Abhronil Sengupta, Gregory K. Chen, Raghavan Kumar, Huseyin Ekin Sumbul, Phil Knag
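The event-driven update above — applying the decay accumulated over all elapsed time-steps in one shot rather than every step — can be sketched as follows (the exponential decay model and names are assumptions, not the patent's circuit):

```python
def lazy_update(v, last_t, t, inputs, decay=0.9):
    """Update a membrane potential only at time-steps with input, applying
    the decay accumulated over the skipped steps at once (sketch).
    `inputs` maps time-step -> input current."""
    for ts in sorted(s for s in inputs if last_t < s <= t):
        v *= decay ** (ts - last_t)              # decay over skipped steps
        v += inputs[ts]                          # integrate the input
        last_t = ts
    v *= decay ** (t - last_t)                   # decay up to current step
    return v
```

For a potential of 1.0 with a single input of 0.5 at step 2, evaluated at step 3, this yields (1.0 · 0.9² + 0.5) · 0.9 = 1.179 — identical to stepping every time-step, but touching the state only when inputs arrive.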
  • Publication number: 20220036158
    Abstract: Embodiments relate to a neural processor circuit including one or more planar engine circuits that perform non-convolution operations in parallel with convolution operations performed by one or more neural engine circuits. The neural engine circuits perform the convolution operations on neural input data corresponding to one or more neural engine tasks to generate neural output data. The planar engine circuits perform non-convolution operations on planar input data corresponding to one or more planar engine tasks to generate planar output data. A data processor circuit that includes multiple buffer circuits performs task skew management between the one or more neural engine tasks and the one or more planar engine tasks. The data processor circuit stops addition of an incoming task to queues in response to one or more of the queues stored in the buffer circuits reaching a threshold.
    Type: Application
    Filed: July 29, 2020
    Publication date: February 3, 2022
    Inventor: PONAN KUO
  • Patent number: 11669713
    Abstract: The present disclosure is directed to a novel system for performing online reconfiguration of a neural network. Once a neural network has been implemented into a production environment, the system may use underlying construction logic to perform an in-situ reconfiguration of neural network elements while the neural network is live. The system may accomplish the reconfiguration by modifying the architecture of the neural network and/or performing adversarial training and/or retraining. In this way, the system may provide a way to increase the performance of the neural network over time along one or more performance parameters or metrics.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: June 6, 2023
    Assignee: BANK OF AMERICA CORPORATION
    Inventor: Eren Kursun
  • Patent number: 12079725
    Abstract: In some embodiments, an application receives a request to execute a convolutional neural network model. The application determines the computational complexity requirement for the neural network based on the computing resources available on the device. The application further determines the architecture of the convolutional neural network model by determining the locations of down-sampling layers within the convolutional neural network model based on the computational complexity requirement. The application reconfigures the architecture of the convolutional neural network model by moving the down-sampling layers to the determined locations and executes the convolutional neural network model to generate output results.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: September 3, 2024
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Yilin Wang, Siyuan Qiao, Jianming Zhang
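The key lever above is that moving down-sampling layers earlier shrinks the spatial resolution seen by later layers, cutting compute. A hedged toy heuristic in that spirit (the area-based cost model, the shift-earlier strategy, and all names are assumptions, not Adobe's method):

```python
def place_downsamples(n_layers, n_down, budget, base=64):
    """Choose positions of down-sampling layers so estimated cost fits a
    budget: earlier down-sampling -> cheaper network (sketch heuristic)."""
    def cost(positions):
        size, total = base, 0
        for layer in range(n_layers):
            if layer in positions:
                size //= 2                       # down-sample halves resolution
            total += size * size                 # cost ~ spatial area per layer
        return total
    # Start with down-sampling as late as possible, move earlier if needed.
    positions = list(range(n_layers - n_down, n_layers))
    while cost(positions) > budget and positions[0] > 0:
        positions = [p - 1 for p in positions]   # shift all one layer earlier
    return positions, cost(positions)
```

For instance, a 4-layer net at base resolution 4 with 2 down-sampling layers costs 37 units with the down-samples last; shifting them one layer earlier meets a budget of 30 at a cost of 22.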
  • Publication number: 20160174902
    Abstract: A method and system for anatomical object detection using marginal space deep neural networks is disclosed. The pose parameter space for an anatomical object is divided into a series of marginal search spaces with increasing dimensionality. A respective sparse deep neural network is trained for each of the marginal search spaces, resulting in a series of trained sparse deep neural networks. Each of the trained sparse deep neural networks is trained by injecting sparsity into a deep neural network by removing filter weights of the deep neural network.
    Type: Application
    Filed: February 26, 2016
    Publication date: June 23, 2016
    Inventors: Bogdan Georgescu, Yefeng Zheng, Hien Nguyen, Vivek Kumar Singh, Dorin Comaniciu, David Liu
  • Patent number: 5717833
    Abstract: A system and method for designing a fixed weight analog neural network to perform analog signal processing allows the neural network to be designed with off-line training and implemented with low precision components. A global system error is iteratively computed in accordance with initialized neural functions and weights corresponding to a desired analog neural network configuration for analog signal processing. The neural weights are selectively modified during training and then expected values of weight implementation errors are added thereto. The error adjusted neural weights are used to recompute the global system error and the result thereof is compared to a desired global system error. These steps are repeated as long as the recomputed global system error is greater than the desired global system error. Following that, MOSFET parameters representing MOSFET channel widths and lengths are computed which correspond to the neural functions and weights.
    Type: Grant
    Filed: July 5, 1996
    Date of Patent: February 10, 1998
    Assignee: National Semiconductor Corporation
    Inventor: William Shields Neely
  • Patent number: 11704499
    Abstract: Technology is described herein for generating questions using a neural network. The technology generates the questions in a three-step process. In the first step, the technology selects, using a first neural network, a subset of textual passages from an identified electronic document. In the second step, the technology generates, using a second neural network, one or more candidate answers for each textual passage selected by the first neural network, to produce a plurality of candidate passage-answer pairs. In the third step, the technology selects, using a third neural network, a subset of the plurality of candidate passage-answer pairs. The technology then generates an output result that includes one or more output questions chosen from the candidate passage-answer pairs selected by the third neural network. The use of the first neural network reduces the processing burden placed on the second and third neural networks. It also reduces latency.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: July 18, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shubham Agrawal, Owais Khan Mohammed, Weiming Wen
  • Publication number: 20220309680
    Abstract: A method for tracking and/or characterizing multiple objects in a sequence of images. The method includes: assigning a neural network to each object to be tracked; providing a memory shared by all neural networks, and designed to map an address vector of address components, via differentiable operations, onto one or multiple memory locations, and to read data from these memory locations or write data into these memory locations; supplying images from the sequence, and/or details of these images, to each neural network; during the processing of each image and/or image detail by one of the neural networks, generating an address vector from at least one processing product of this neural network; based on this address vector, writing at least one further processing product of the neural network into the shared memory, and/or reading out data from this shared memory and further processing the data by the neural network.
    Type: Application
    Filed: March 16, 2022
    Publication date: September 29, 2022
    Inventor: Cosmin Ionut Bercea
  • Publication number: 20240028722
    Abstract: Systems, devices, and methods for protecting user computing devices/networks from malicious code embedded in a neural network are described. A security platform may selectively modify a downloaded neural network model and/or architecture to remove neural network parameters that may be used to reconstruct the malicious code at an end user of the neural network model. For example, the security platform may remove specific branches of the neural network and/or set specific parameters of the neural network model to zero, such that the malicious code may not be reconstructed at an end-user device.
    Type: Application
    Filed: July 19, 2022
    Publication date: January 25, 2024
    Inventors: Kenneth Longshaw, Matthew Murray, Garrett Botkin
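The defense above amounts to overwriting the parameters an attacker would need to reassemble a payload: zeroing whole suspect tensors and perturbing a small random fraction of the rest. A hedged sketch (which tensors count as "suspect" and the prune fraction are assumptions for illustration):

```python
import numpy as np

def sanitize(params, suspect_keys, prune_frac=0.01, rng=None):
    """Zero whole suspect tensors and a small random fraction of the
    remaining parameters so byte payloads hidden in the weights cannot
    be reconstructed downstream (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    clean = {}
    for name, w in params.items():
        w = w.copy()                             # leave the originals intact
        if name in suspect_keys:
            w[:] = 0.0                           # drop the whole suspect branch
        else:
            mask = rng.random(w.shape) < prune_frac
            w[mask] = 0.0                        # lightly perturb other tensors
        clean[name] = w
    return clean
```

Because neural networks tolerate small weight perturbations, the light pruning of non-suspect tensors degrades a hidden byte-exact payload far more than it degrades model accuracy.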