Search Patents
  • Publication number: 20230056617
    Abstract: A hearing device comprises an input transducer comprising a microphone for providing an electric input signal representative of sound in the environment of the hearing device; a pre-processor for processing the electric input signal and providing a multitude of feature vectors, each representative of a time segment thereof; and a neural network processor adapted to implement a neural network implementing a detector configured to provide an output indicative of a characteristic property of the electric input signal. The neural network is configured to receive said multitude of feature vectors as input vectors and to provide corresponding output vectors representative of said output of said detector in dependence on said input vectors.
    Type: Application
    Filed: November 7, 2022
    Publication date: February 23, 2023
    Applicant: Oticon A/S
    Inventors: Michael Syskind PEDERSEN, Asger Heidemann ANDERSEN, Jesper JENSEN, Nels Hede ROHDE, Anders Brødløs OLSEN, Michael Smed KRISTENSEN, Thomas BENTSEN, Svend Oscar PETERSEN
  • Patent number: 11443187
    Abstract: This disclosure relates to a method and system for improving classifications performed by an artificial neural network (ANN) model. The method may include identifying, for a classification performed by the ANN model for an input, activated neurons in each neural layer of the ANN model; and analyzing the activated neurons in each neural layer with respect to Characteristic Feature Directives (CFDs) for the corresponding neural layer and for a correct class of the input. The CFDs for each neural layer may be generated after a training phase of the ANN model, based on neurons in the corresponding neural layer that may be activated for a training input of the correct class. The method may further include determining differentiating neurons in each neural layer that are not activated as per the CFDs for the correct class of the input based on the analysis; and providing missing features based on the differentiating neurons.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: September 13, 2022
    Assignee: Wipro Limited
    Inventors: Prashanth Krishnapura Subbaraya, Raghavendra Hosabettu
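    The CFD idea in the abstract above can be sketched as follows: record which neurons consistently activate per class during training, then flag "differentiating" neurons that the CFD for the correct class expects but that did not fire for a given input. This is an illustrative reading, not the patented procedure; the function names (`build_cfds`, `find_differentiating`) and the use of set intersection are assumptions.

    ```python
    def build_cfds(training_activations):
        """training_activations: {class_label: [set of activated neuron ids per run]}.
        Returns, per class, the neurons activated in every training example."""
        cfds = {}
        for label, runs in training_activations.items():
            cfd = set(runs[0])
            for run in runs[1:]:
                cfd &= set(run)  # keep only consistently activated neurons
            cfds[label] = cfd
        return cfds

    def find_differentiating(cfds, correct_label, activated):
        """Neurons the CFD for the correct class expects but that did not fire."""
        return cfds[correct_label] - set(activated)

    # Example: class "cat" consistently activates neurons {1, 2, 5}.
    cfds = build_cfds({"cat": [{1, 2, 5, 7}, {1, 2, 5}, {1, 2, 3, 5}]})
    missing = find_differentiating(cfds, "cat", {1, 5})
    ```

    Here `missing` contains neuron 2, the feature the input failed to express.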
  • Patent number: 11782183
    Abstract: Disclosed is a magnetotelluric inversion method based on a fully convolutional neural network.
    Type: Grant
    Filed: June 14, 2022
    Date of Patent: October 10, 2023
    Assignee: Institute of Geology and Geophysics, Chinese Academy of Sciences
    Inventors: Zhongxing Wang, Lili Kang, Zhiguo An, Ruo Wang, Xiong Yin
  • Publication number: 20190325862
    Abstract: Continuous automatic speech segmentation and recognition systems and methods are described that include a detector coupled to a neural network. The neural network performs speech recognition processing on feature vectors sequentially extracted from an audio data stream to attempt to recognize a word from a set of words of a predetermined vocabulary. The neural network has word neural paths to each output a respective word output signal to the detector for each of the set of words. The neural network also has a trigger neural path to output a trigger signal to the detector to control when the detector reviews the word output signals to recognize the word.
    Type: Application
    Filed: April 23, 2019
    Publication date: October 24, 2019
    Inventors: Hari Shankar, Narayan Srinivasa, Gopal Raghavan, Chao Xu
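    The trigger-gated readout described in the abstract above can be sketched in a few lines: word output signals are produced every frame, but the detector only reviews them when the trigger path fires. The per-frame scores, trigger values, and threshold here are illustrative assumptions.

    ```python
    def detect_words(frames, vocab, trigger_threshold=0.5):
        """frames: list of (word_scores_dict, trigger_value) per time step.
        Returns recognized words, read out only when the trigger signal fires."""
        recognized = []
        for word_scores, trigger in frames:
            if trigger >= trigger_threshold:  # detector reviews outputs now
                best = max(vocab, key=lambda w: word_scores.get(w, 0.0))
                recognized.append(best)
        return recognized

    frames = [({"yes": 0.2, "no": 0.1}, 0.0),   # trigger low: no readout
              ({"yes": 0.9, "no": 0.1}, 1.0),   # trigger fires: "yes" wins
              ({"yes": 0.1, "no": 0.8}, 1.0)]   # trigger fires: "no" wins
    words = detect_words(frames, ["yes", "no"])
    ```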
  • Publication number: 20230273826
    Abstract: A neural network scheduling method is provided that includes loading at least one pre-trained neural network model to a model storage area in a memory, and acquiring a base address of the at least one neural network model, the memory further including a common data storage area; acquiring base addresses of corresponding neural network models according to a task type, and reading data in the common data storage area; and invoking, on the basis of the base addresses of the corresponding neural network models, the corresponding neural network models to compute the data read from the common data storage area to obtain a computation result and outputting the computation result. The cost of additional neural network computing devices can be reduced and the utilization rate of hardware resources improved.
    Type: Application
    Filed: October 12, 2019
    Publication date: August 31, 2023
    Applicant: SHENZHEN CORERAIN TECHNOLOGIES CO., LTD.
    Inventors: Jiongkai Huang, Kuenhung TSOI, Xinyu NIU
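    The scheduling flow above reduces to: pre-load models into a storage area, record each model's base address, then dispatch by task type against shared data. The dict-and-list registry below is an illustrative stand-in for the memory layout the abstract describes; class and method names are assumptions.

    ```python
    class ModelStore:
        def __init__(self):
            self.memory = []     # model storage area
            self.base_addr = {}  # task type -> base address of its model

        def load(self, task_type, model_fn):
            """Load a pre-trained model and record its base address."""
            self.base_addr[task_type] = len(self.memory)
            self.memory.append(model_fn)

        def run(self, task_type, common_data):
            """Acquire the base address by task type and invoke the model."""
            addr = self.base_addr[task_type]
            return self.memory[addr](common_data)

    store = ModelStore()
    store.load("classify", lambda xs: max(xs))  # toy stand-ins for models
    store.load("sum", lambda xs: sum(xs))
    result = store.run("sum", [1, 2, 3])
    ```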
  • Patent number: 11568237
    Abstract: An electronic apparatus for compressing a recurrent neural network and a method thereof are provided. The electronic apparatus and the method thereof include a sparsification technique for the recurrent neural network, obtaining first to third multiplicative variables to learn the recurrent neural network, and performing sparsification for the recurrent neural network to compress the recurrent neural network.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: January 31, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ekaterina Maksimovna Lobacheva, Nadezhda Aleksandrovna Chirkova, Dmitry Petrovich Vetrov
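    The multiplicative-variable sparsification above can be illustrated with a minimal sketch: learned gate variables multiply weight groups, and groups whose gate shrinks toward zero are pruned. The threshold rule and values here are assumptions, not the patented procedure.

    ```python
    def sparsify(weights, gates, threshold=0.05):
        """Zero out weights whose multiplicative gate fell below the threshold;
        otherwise fold the gate into the weight."""
        return [w * g if abs(g) >= threshold else 0.0
                for w, g in zip(weights, gates)]

    # The middle gate (0.01) is below threshold, so its weight is pruned.
    pruned = sparsify([0.9, -0.4, 0.7], [1.0, 0.01, 0.5])
    ```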
  • Patent number: 11657284
    Abstract: An electronic apparatus for compressing a neural network model may acquire training data pairs based on an original, trained neural network model and train a compressed neural network model compressed from the original, trained neural network model using the acquired training data pairs.
    Type: Grant
    Filed: May 6, 2020
    Date of Patent: May 23, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jaedeok Kim, Chiyoun Park, Youngchul Sohn, Inkwon Choi
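    The compression scheme above resembles knowledge distillation: the original (teacher) model labels inputs, producing training pairs for the compressed (student) model. The linear "model" below is a placeholder for a real network, and the helper name is an assumption.

    ```python
    def teacher(x):
        """Stand-in for the original, trained neural network model."""
        return 2.0 * x + 1.0

    def make_training_pairs(inputs, model):
        """Generate (input, teacher_output) pairs to train the compressed model."""
        return [(x, model(x)) for x in inputs]

    pairs = make_training_pairs([0.0, 1.0, 2.0], teacher)
    ```

    The compressed model would then be fit to `pairs` by ordinary supervised training.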
  • Publication number: 20210374530
    Abstract: A method and system for implementing a neural node of a neural network in a key value store (KVS) system. The method and system monitor a first KVS key of the neural node for an update of an input value. In response to detecting a change in the input value, the method and system execute a microfunction for the neural node on the input value to generate an output value, and write the output value to a second KVS key for an output neural node.
    Type: Application
    Filed: October 23, 2018
    Publication date: December 2, 2021
    Applicant: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Heikki MAHKONEN, Wassim HADDAD
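    The KVS neural node above can be sketched directly: watch an input key, run the node's microfunction when the value changes, and write the result to the next node's input key. The in-memory dict stands in for the key value store, and the polling interface is an assumption.

    ```python
    class KVSNeuralNode:
        def __init__(self, kvs, in_key, out_key, microfunction):
            self.kvs, self.in_key, self.out_key = kvs, in_key, out_key
            self.fn = microfunction
            self.last_seen = kvs.get(in_key)

        def poll(self):
            """Fire the microfunction if the watched input key changed."""
            value = self.kvs.get(self.in_key)
            if value != self.last_seen:
                self.last_seen = value
                self.kvs[self.out_key] = self.fn(value)

    kvs = {"n1/in": 0.0}
    node = KVSNeuralNode(kvs, "n1/in", "n2/in", lambda x: max(0.0, x))  # ReLU node
    kvs["n1/in"] = 2.5   # upstream write triggers this node on the next poll
    node.poll()
    ```

    After `poll()`, the key `n2/in` holds the node's activation, ready for the downstream node to consume.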
  • Patent number: 9433703
    Abstract: A neural graft includes a biological substrate, a carbon nanotube structure and a neural network. The carbon nanotube structure is located on the biological substrate. The carbon nanotube structure includes a number of carbon nanotube wires crossed with each other to define a number of pores. The neural network includes a number of neural cell bodies and a number of neurites branched from the neural cell bodies. An effective diameter of each pore is larger than or equal to a diameter of the neural cell body, the neurites substantially extend along the carbon nanotube wires such that the neurites are patterned.
    Type: Grant
    Filed: August 1, 2012
    Date of Patent: September 6, 2016
    Assignees: Tsinghua University, HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: Li Fan, Chen Feng, Wen-Mei Zhao
  • Publication number: 20220207361
    Abstract: A neural network model quantization method and apparatus is provided. The neural network model quantization method includes receiving a neural network model, calculating a quantization parameter corresponding to an operator of the neural network model to be quantized based on bisection approximation, and quantizing the operator to be quantized based on the quantization parameter and obtaining a neural network model having the quantized operator.
    Type: Application
    Filed: December 16, 2021
    Publication date: June 30, 2022
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jiali PANG, Gang SUN, Lin CHEN, Zhen Zhang
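    The "bisection approximation" above can be illustrated by bisecting on a clipping bound for the quantization scale. The objective used here (clip at most a given fraction of values) is an assumption for the sketch, not the patented criterion.

    ```python
    def bisect_clip(values, frac=0.01, iters=50):
        """Bisection search for the smallest clip bound leaving <= frac of
        values outside the representable range."""
        lo, hi = 0.0, max(abs(v) for v in values)
        for _ in range(iters):
            mid = (lo + hi) / 2
            clipped = sum(1 for v in values if abs(v) > mid) / len(values)
            if clipped > frac:
                lo = mid   # too much clipping: raise the bound
            else:
                hi = mid   # acceptable: try a tighter bound
        return hi

    def quantize(values, clip, bits=8):
        """Uniform quantization with scale derived from the clip bound."""
        scale = clip / (2 ** (bits - 1) - 1)
        return [max(-clip, min(clip, round(v / scale) * scale)) for v in values]

    clip = bisect_clip([0.1, -0.5, 0.9, 2.0] * 25)  # 100 sample activations
    q = quantize([0.5], clip)
    ```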
  • Patent number: 11003909
    Abstract: A machine trains a first neural network using a first set of images. Training the first neural network comprises computing a first set of weights for a first set of neurons. The machine, for each of one or more alpha values in order from smallest to largest, trains an additional neural network using an additional set of images. The additional set of images comprises a homographic transformation of the first set of images. The homographic transformation is computed based on the alpha value. Training the additional neural network comprises computing an additional set of weights for an additional set of neurons. The additional set of weights is initialized based on a previously computed set of weights. The machine generates a trained ensemble neural network comprising the first neural network and one or more additional neural networks corresponding to the one or more alpha values.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: May 11, 2021
    Assignee: Raytheon Company
    Inventors: Peter Kim, Michael J. Sand
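    The curriculum in the abstract above can be sketched as: for each alpha from smallest to largest, compute a homography from the alpha value, train an ensemble member warm-started from the previous member's weights, and collect all members. Interpolating `(1 - a)*I + a*H` is an assumption about how the homography depends on alpha, and the training step is a placeholder.

    ```python
    def homography_for_alpha(H, alpha):
        """Blend the identity with a target homography H by alpha (assumed form)."""
        I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        return [[(1 - alpha) * I[r][c] + alpha * H[r][c] for c in range(3)]
                for r in range(3)]

    def train_member(images, init_weights):
        """Placeholder for training a network initialized from prior weights."""
        return [w + 0.1 for w in init_weights]

    H_target = [[1.0, 0.2, 5.0], [0.0, 1.0, 3.0], [0.0, 0.0, 1.0]]
    weights = [0.0, 0.0]
    ensemble = [weights]
    for alpha in [0.25, 0.5, 1.0]:           # smallest to largest
        H_a = homography_for_alpha(H_target, alpha)
        # (the training images would be warped by H_a here)
        weights = train_member(None, weights)  # warm-start from previous member
        ensemble.append(weights)
    ```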
  • Patent number: 6470261
    Abstract: Design of a neural network for automatic detection of incidents on a freeway is described. A neural network is trained using a combination of both back-propagation and genetic algorithm-based methods for optimizing the design of the neural network. The back-propagation and genetic algorithm work together in a collaborative manner in the neural network design. The training starts with incremental learning based on the instantaneous error, and the global total error is accumulated for batch updating once all the training data have been presented to the neural network. The genetic algorithm directly evaluates the performance of multiple sets of neural networks in parallel and then uses the analyzed results to breed new neural networks that tend to be better suited to the problem at hand.
    Type: Grant
    Filed: January 16, 2001
    Date of Patent: October 22, 2002
    Assignee: CET Technologies PTE LTD
    Inventors: Yew Liam Ng, Kim Chwee Ng
  • Patent number: 11429856
    Abstract: An approach for generating a trained neural network is provided. In an embodiment, a neural network, which can have an input layer, an output layer, and a hidden layer, is created. An initial training of the neural network is performed using a set of labeled data. The boosted neural network resulting from the initial training is applied to unlabeled data to determine whether any of the unlabeled data qualifies as additional labeled data. If it is determined that any of the unlabeled data qualifies as additional labeled data, the boosted neural network is retrained using the additional labeled data. Otherwise, if it is determined that none of the unlabeled data qualifies as additional labeled data, the neural network is updated to change a number of predictor nodes in the neural network.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: August 30, 2022
    Assignee: International Business Machines Corporation
    Inventors: Jamal Hammoud, Marc Joel Herve Legroux
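    The loop above is essentially self-training: apply the model to unlabeled data and promote confidently predicted items to the labeled set for retraining. The toy one-dimensional classifier and the confidence threshold below are assumptions for illustration.

    ```python
    def predict_with_confidence(x, boundary=0.5):
        """Toy 1-D classifier: a label plus a confidence derived from the
        distance to the decision boundary."""
        label = 1 if x > boundary else 0
        return label, min(1.0, abs(x - boundary) * 2)

    def harvest_pseudo_labels(unlabeled, min_conf=0.8):
        """Promote unlabeled points that qualify as additional labeled data."""
        promoted = []
        for x in unlabeled:
            label, conf = predict_with_confidence(x)
            if conf >= min_conf:
                promoted.append((x, label))
        return promoted

    # 0.55 sits too close to the boundary to qualify; the others are promoted.
    new_labeled = harvest_pseudo_labels([0.95, 0.55, 0.02])
    ```

    If `new_labeled` comes back empty, the abstract's fallback applies: change the number of predictor nodes instead of retraining.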
  • Patent number: 10769493
    Abstract: The embodiments of the present invention provide training and construction methods and apparatus for a neural network for object detection, an object detection method and apparatus based on a neural network, and a neural network. The training method of the neural network for object detection comprises: inputting a training image including a training object to the neural network to obtain a predicted bounding box of the training object; acquiring a first loss function according to the ratio of the intersection area to the union area of the predicted bounding box and a true bounding box, the true bounding box being a bounding box of the training object marked in advance in the training image; and adjusting parameters of the neural network by utilizing at least the first loss function to train the neural network.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: September 8, 2020
    Assignees: BEIJING KUANGSHI TECHNOLOGY CO., LTD., MEGVII (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Jiahui Yu, Qi Yin
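    The first loss function above is the intersection-over-union (IoU) ratio of predicted and ground-truth boxes; a common formulation (assumed here) is `loss = 1 - IoU`. Boxes are `(x1, y1, x2, y2)` corner coordinates.

    ```python
    def iou(box_a, box_b):
        """Intersection area over union area of two axis-aligned boxes."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
        union = area(box_a) + area(box_b) - inter
        return inter / union if union > 0 else 0.0

    def iou_loss(pred, true):
        return 1.0 - iou(pred, true)

    # Two 2x2 boxes overlapping in a 1x2 strip: IoU = 2/6, loss = 2/3.
    loss = iou_loss((0, 0, 2, 2), (1, 0, 3, 2))
    ```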
  • Publication number: 20210103803
    Abstract: Embodiments relate to a neural processor that include a plurality of neural engine circuits and one or more planar engine circuits. The plurality of neural engine circuits can perform convolution operations of input data of the neural engine circuits with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits. The planar engine circuit generates an output from input data that corresponds to output of the neural engine circuits or a version of input data of the neural processor. The planar engine circuit can be configured to multiple modes. In a pooling mode, the planar engine circuit reduces a spatial size of a version of the input data. In an elementwise mode, the planar engine circuit performs an elementwise operation on the input data. In a reduction mode, the planar engine circuit reduces the rank of a tensor.
    Type: Application
    Filed: October 8, 2019
    Publication date: April 8, 2021
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
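    The three planar-engine modes named in the abstract can be illustrated with a simple software dispatch; the concrete operations chosen (2x average pooling, elementwise addition, sum-reduction) are assumptions standing in for the hardware's configurable behavior.

    ```python
    def planar_engine(mode, data, other=None):
        if mode == "pooling":      # reduce spatial size: average adjacent pairs
            return [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]
        if mode == "elementwise":  # elementwise operation on two inputs
            return [a + b for a, b in zip(data, other)]
        if mode == "reduction":    # reduce rank: collapse the vector to a scalar
            return sum(data)
        raise ValueError(mode)

    pooled = planar_engine("pooling", [1.0, 3.0, 5.0, 7.0])
    ```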
  • Publication number: 20050181503
    Abstract: The present invention relates to a method of inhibiting differentiation of a population of neural stem cells by contacting a purinergic receptor agonist and a population of neural stem cells under conditions effective to inhibit differentiation of the population of neural stem cells. Another aspect of the present invention relates to a method of producing neurons and/or glial cells from a population of neural stem cells by culturing a population of neural stem cells with a purinergic receptor antagonist under conditions effective to cause the neural stem cells to differentiate into neurons and/or glial cells. The purinergic receptor agonist can also be used in a method of inducing proliferation and self-renewal of neural stem cells in a subject and a method of treating a neurological disease or neurodegenerative condition in a subject. The purinergic receptor antagonist can also be used in treating a neoplastic disease of the brain or spinal cord in a subject.
    Type: Application
    Filed: February 10, 2005
    Publication date: August 18, 2005
    Inventors: Steven Goldman, Maiken Nedergaard, Jane Lin
  • Publication number: 20160307074
    Abstract: Different candidate windows in an image are identified, such as by sliding a rectangular or other geometric shape of different sizes over an image to identify portions of the image (groups of pixels in the image). The candidate windows are analyzed by a set of convolutional neural networks, which are cascaded so that the input of one convolutional neural network layer is based on the output of another convolutional neural network layer. Each convolutional neural network layer drops or rejects one or more candidate windows that the convolutional neural network layer determines do not include an object (e.g., a face). The candidate windows that are identified as including an object (e.g., a face) are analyzed by another one of the convolutional neural network layers. The candidate windows identified by the last of the convolutional neural network layers are the indications of the objects (e.g., faces) in the image.
    Type: Application
    Filed: June 29, 2016
    Publication date: October 20, 2016
    Applicant: Adobe Systems Incorporated
    Inventors: Xiaohui Shen, Haoxiang Li, Zhe Lin, Jonathan W. Brandt
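    The cascade above can be sketched generically: each stage scores the surviving candidate windows and drops those below its threshold, so later (costlier) stages see fewer windows. The scoring functions below are toy stand-ins for the convolutional network layers.

    ```python
    def cascade(windows, stages):
        """stages: list of (score_fn, threshold); returns surviving windows."""
        for score, threshold in stages:
            windows = [w for w in windows if score(w) >= threshold]
        return windows

    # Each window carries a precomputed "faceness" feature for illustration.
    candidates = [{"id": 1, "f": 0.9}, {"id": 2, "f": 0.4}, {"id": 3, "f": 0.7}]
    stages = [(lambda w: w["f"], 0.5),   # cheap first stage rejects window 2
              (lambda w: w["f"], 0.8)]   # stricter second stage rejects window 3
    faces = cascade(candidates, stages)
    ```

    Only window 1 survives both stages, matching the abstract's description of the last layer's output as the detected objects.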
  • Publication number: 20210173905
    Abstract: A system for attention-based layered neural network classification is provided. The system comprises: a sequence of layered neural networks; and a controller configured for controlling data routed through the sequence of layered neural networks, the controller configured to: receive interaction data comprising data features, wherein the data features are distinct characteristics of the interaction data; input data features into the sequence of layered neural networks, wherein each sequential layer of the sequence of layered neural networks comprises a heightened rigor level for at least one of the data features; calculate a relevance score output for at least one of the data features at each layer of the sequence of layered neural networks; and integrate the relevance score output from each layer of the sequence of layered neural networks to generate a total relevance score output.
    Type: Application
    Filed: December 6, 2019
    Publication date: June 10, 2021
    Applicant: BANK OF AMERICA CORPORATION
    Inventor: Eren Kursun
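    The final step described above, integrating per-layer relevance scores into a total, can be sketched with a simple accumulator; plain summation is an assumption about what "integrate" means here.

    ```python
    def total_relevance(per_layer_scores):
        """per_layer_scores: list of {feature: score} dicts, one per layer.
        Returns each feature's relevance summed across all layers."""
        totals = {}
        for layer in per_layer_scores:
            for feature, score in layer.items():
                totals[feature] = totals.get(feature, 0.0) + score
        return totals

    # Two layers scoring the same two data features with increasing rigor.
    totals = total_relevance([{"amount": 0.2, "time": 0.1},
                              {"amount": 0.5, "time": 0.05}])
    ```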
  • Publication number: 20230021835
    Abstract: A method of wireless communication by a user equipment (UE) includes receiving, from a base station, a configuration to train a neural network for multiple different signal to noise ratios (SNRs) of a channel estimate for a wireless communication channel. The method also includes determining a current SNR of the channel estimate is above a first threshold value. The method further includes training the neural network based on the channel estimate, to obtain a first trained neural network. The method still further includes perturbing the channel estimate to obtain a perturbed channel estimate, and training the neural network based on the perturbed channel estimate, to obtain a second trained neural network. The method includes reporting, to the base station, parameters of the first trained neural network along with the channel estimate, and parameters of the second trained neural network.
    Type: Application
    Filed: July 26, 2021
    Publication date: January 26, 2023
    Inventors: Pavan Kumar VITTHALADEVUNI, Taesang YOO, Naga BHUSHAN
  • Patent number: 5422983
    Abstract: The neural engine (20) is a hardware implementation of a neural network for use in real-time systems. The neural engine (20) includes a control circuit (26) and one or more multiply/accumulate circuits (28). Each multiply/accumulate circuit (28) includes a parallel/serial arrangement of multiple multiplier/accumulators (84) interconnected with weight storage elements (80) to yield multiple neural weightings and sums in a single clock cycle. A neural processing language is used to program the neural engine (20) through a conventional host personal computer (22). The parallel processing permits very high processing speeds to permit real-time pattern classification capability.
    Type: Grant
    Filed: July 19, 1993
    Date of Patent: June 6, 1995
    Assignee: Hughes Aircraft Company
    Inventors: Patrick F. Castelaz, Dwight E. Mills, Steven C. Woo, Jack I. Jmaev, Tammy L. Henrikson