Search Patents
  • Publication number: 20210303997
    Abstract: Provided are a method and apparatuses for training a classification neural network, a text classification method and apparatus and an electronic device. The method includes: acquiring a regression result of sample text data, which is determined based on a pre-constructed first target neural network and represents a classification trend of the sample text data; inputting the sample text data and the regression result to a second target neural network; obtaining a predicted classification result of each piece of sample text data based on the second target neural network; adjusting a parameter of the second target neural network according to a difference between the predicted classification result and a true value of a corresponding category; and obtaining a trained second target neural network after a change of network loss related to the second target neural network meets a convergence condition.
    Type: Application
    Filed: August 25, 2020
    Publication date: September 30, 2021
    Applicant: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Zeyu XU, Erli MENG, Lei SUN
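The two-stage scheme in this abstract (a regression result from a first network fed, alongside the sample itself, into a second classification network) can be sketched in a few lines. The toy feature vectors, the averaging "first network", and the logistic "second network" below are illustrative assumptions, not details from the patent:

```python
import math

def first_network(x):
    # Stand-in for the pre-constructed first target network: maps a sample's
    # feature vector to a scalar regression result (its "classification trend").
    return sum(x) / len(x)

def second_network(x, trend, w, b):
    # Second target network: a logistic classifier over the sample's features
    # concatenated with the regression result from the first network.
    z = sum(wi * xi for wi, xi in zip(w, x + [trend])) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy "sample text data": 3-d feature vectors with true category labels.
samples = [([1.0, 0.8, 0.9], 1), ([0.1, 0.0, 0.2], 0),
           ([0.9, 1.0, 0.7], 1), ([0.2, 0.1, 0.0], 0)]

w, b, lr = [0.0] * 4, 0.0, 0.5
for _ in range(200):                        # run until the loss change is small
    for x, y in samples:
        trend = first_network(x)            # acquire the regression result
        p = second_network(x, trend, w, b)  # predicted classification result
        err = p - y                         # difference vs. the true category
        for i, xi in enumerate(x + [trend]):
            w[i] -= lr * err * xi           # adjust the network's parameters
        b -= lr * err

preds = [int(second_network(x, first_network(x), w, b) > 0.5) for x, _ in samples]
print(preds)  # → [1, 0, 1, 0]
```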
  • Publication number: 20230237660
    Abstract: Systems and methods are provided for medical image classification of images from varying sources. A set of microscopic medical images are acquired, and a first neural network module configured to reduce each of the set of microscopic medical images to a feature representation is generated. The first neural network module, a second neural network module, and a third neural network module are trained on at least a subset of the set of microscopic medical images. The second neural network module is trained to receive a feature representation associated with an image of the microscopic images and classify the image into one of a first plurality of output classes. The third neural network module is trained to receive the feature representation, classify the image into one of a second plurality of output classes based on the feature representation, and provide feedback to the first neural network module.
    Type: Application
    Filed: June 29, 2021
    Publication date: July 27, 2023
    Inventors: Hadi Shafiee, Prudhvi Thirumalaraju, Manoj Kumar Kanakasabapathy, Sai Hemanth Kumar
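The three-module layout here (a shared feature extractor plus two classifier heads, with the heads feeding gradients back to the extractor) can be sketched with scalar toy modules. The 4-pixel "images", single-dimensional feature, and simple gradient rule are all assumptions for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# First module: reduces each "image" (a 4-pixel toy) to a 1-d feature.
w_feat = [0.0, 0.0, 0.0, 0.0]
# Second and third modules: each classifies the feature into its own classes.
w_head2, w_head3 = 0.5, 0.5

# (pixels, class for the second module, class for the third module)
images = [([1, 1, 0, 0], 1, 1), ([0, 0, 1, 1], 0, 0)]
lr = 0.5
for _ in range(300):
    for x, y2, y3 in images:
        f = dot(w_feat, x)                       # feature representation
        p2, p3 = sigmoid(w_head2 * f), sigmoid(w_head3 * f)
        e2, e3 = p2 - y2, p3 - y3
        w_head2 -= lr * e2 * f                   # train the second module
        w_head3 -= lr * e3 * f                   # train the third module
        # The heads provide feedback to the first module, refining the
        # shared feature representation.
        grad_f = e2 * w_head2 + e3 * w_head3
        for i in range(len(w_feat)):
            w_feat[i] -= lr * grad_f * x[i]

preds = [int(sigmoid(w_head2 * dot(w_feat, x)) > 0.5) for x, _, _ in images]
print(preds)  # → [1, 0]
```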
  • Publication number: 20220261652
    Abstract: A computer implemented method of training a neural network configured to combine a set of coefficients with respective input data values. To train a test implementation of the neural network, sparsity is applied to one or more of the coefficients according to a sparsity parameter, which indicates the level of sparsity to be applied to the set of coefficients; the test implementation of the neural network is operated on training input data using the coefficients so as to form training output data; the accuracy of the neural network is assessed in dependence on the training output data; the sparsity parameter is updated in dependence on the accuracy of the neural network; and a runtime implementation of the neural network is configured in dependence on the updated sparsity parameter.
    Type: Application
    Filed: December 22, 2021
    Publication date: August 18, 2022
    Inventors: Muhammad Asad, Elia Condorelli, Cagatay Dikici
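The sparsity-tuning loop described above can be sketched as follows. The magnitude-based pruning rule, the sign-classifier "test implementation", and the no-accuracy-loss update policy are assumptions, not the patent's method:

```python
def apply_sparsity(coeffs, sparsity):
    # Apply the sparsity parameter: zero out the smallest-magnitude
    # fraction of the set of coefficients.
    k = int(len(coeffs) * sparsity)
    cutoff = sorted(abs(c) for c in coeffs)[k - 1] if k else -1.0
    return [0.0 if abs(c) <= cutoff else c for c in coeffs]

def accuracy(coeffs, data):
    # Operate the test implementation on training input data and assess
    # accuracy from the training output (a sign classifier here).
    hits = sum((sum(c * x for c, x in zip(coeffs, xs)) > 0) == y
               for xs, y in data)
    return hits / len(data)

coeffs = [1.5, -2.0, 0.01, 0.003, 0.9, -0.002]
data = [([1, 0, 0, 0, 0, 0], True), ([0, 1, 0, 0, 0, 0], False),
        ([0, 0, 0, 0, 1, 0], True), ([1, 1, 0, 0, 0, 0], False)]

baseline = accuracy(coeffs, data)
sparsity = 0.0
for candidate in (0.1, 0.3, 0.5, 0.7, 0.9):
    # Update the sparsity parameter in dependence on accuracy: keep the
    # most aggressive level that costs no accuracy (policy assumed here).
    if accuracy(apply_sparsity(coeffs, candidate), data) >= baseline:
        sparsity = candidate

print(sparsity)  # → 0.5; the runtime implementation is configured with this
```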
  • Patent number: 11645358
    Abstract: In an example, a neural network program corresponding to a neural network model is received. The neural network program includes matrices, vectors, and matrix-vector multiplication (MVM) operations. A computation graph corresponding to the neural network model is generated. The computation graph includes a plurality of nodes, each node representing a MVM operation, a matrix, or a vector. Further, a class model corresponding to the neural network model is populated with a data structure pointing to the computation graph. The computation graph is traversed based on the class model. Based on the traversal, the plurality of MVM operations are assigned to MVM units of a neural network accelerator. Each MVM unit can perform a MVM operation. Based on assignment of the plurality of MVM operations, an executable file is generated for execution by the neural network accelerator.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: May 9, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
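The traversal-and-assignment step above can be sketched with a toy graph. The node names, round-robin assignment policy, and two-unit accelerator are assumptions for illustration:

```python
# Nodes of a toy computation graph: each is an MVM op, a matrix, or a vector.
graph = {
    "mvm1": {"kind": "mvm", "inputs": ["W1", "x"]},
    "mvm2": {"kind": "mvm", "inputs": ["W2", "mvm1"]},
    "W1": {"kind": "matrix"}, "W2": {"kind": "matrix"},
    "x": {"kind": "vector"},
}

class ClassModel:
    # Class model holding a data structure pointing to the computation graph.
    def __init__(self, graph):
        self.graph = graph

    def traverse(self):
        # Yield MVM nodes in dependency order (inputs before consumers).
        done, order = set(), []
        def visit(name):
            if name in done:
                return
            done.add(name)
            for dep in self.graph[name].get("inputs", []):
                visit(dep)
            if self.graph[name]["kind"] == "mvm":
                order.append(name)
        for name in self.graph:
            visit(name)
        return order

# Assign each MVM operation to an MVM unit of the accelerator, round-robin;
# an executable file for the accelerator would be generated from this map.
NUM_MVM_UNITS = 2
assignment = {op: i % NUM_MVM_UNITS
              for i, op in enumerate(ClassModel(graph).traverse())}
print(assignment)  # → {'mvm1': 0, 'mvm2': 1}
```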
  • Publication number: 20220253680
    Abstract: A system including a main neural network for performing one or more machine learning tasks on a network input to generate one or more network outputs. The main neural network includes a Mixture of Experts (MoE) subnetwork that includes a plurality of expert neural networks and a gating subsystem. The gating subsystem is configured to: apply a softmax function to a set of gating parameters having learned values to generate a respective softmax score for each of one or more of the plurality of expert neural networks; determine a respective weight for each of the one or more of the plurality of expert neural networks; select a proper subset of the plurality of expert neural networks; and combine the respective expert outputs generated by the one or more expert neural networks in the proper subset to generate one or more MoE outputs.
    Type: Application
    Filed: February 4, 2022
    Publication date: August 11, 2022
    Inventors: Zhe Zhao, Maheswaran Sathiamoorthy, Lichan Hong, Yihua Chen, Ed Huai-hsin Chi, Aakanksha Chowdhery, Hussein Hazimeh
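The gating subsystem's softmax-score, top-k selection, and weighted-combination steps can be sketched directly. The scalar toy experts and the gating parameter values are assumptions:

```python
import math

def moe_output(x, gating_params, experts, k=2):
    # Apply a softmax to the learned gating parameters: one score per expert.
    mx = max(gating_params)
    exps = [math.exp(g - mx) for g in gating_params]
    total = sum(exps)
    scores = [e / total for e in exps]
    # Select a proper subset: the k experts with the highest softmax scores.
    top = sorted(range(len(experts)), key=lambda i: -scores[i])[:k]
    # Determine a respective weight for each selected expert (renormalised).
    z = sum(scores[i] for i in top)
    weights = {i: scores[i] / z for i in top}
    # Combine the selected experts' outputs into the MoE output.
    return sum(weights[i] * experts[i](x) for i in top)

# Toy experts: scalar functions standing in for expert neural networks.
experts = [lambda x: 2 * x, lambda x: -x, lambda x: x + 1]
gating_params = [2.0, -1.0, 1.0]   # learned values (assumed here)
out = moe_output(3.0, gating_params, experts)
print(out)  # ≈ 5.4621: experts 0 and 2 selected, expert 1 skipped
```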
  • Publication number: 20170326382
    Abstract: Waveguide neural interface devices and methods for fabricating such devices are provided herein. An exemplary interface device includes a neural device comprising an exterior neural device sidewall extending to a distal end portion of the neural device, an array of electrode sites supported by a first face of the neural device sidewall. The array includes a recording electrode site. The exemplary interface device further includes a waveguide extending along the neural device, the waveguide having a distal end to emit light to illuminate targeted tissue adjacent to the recording electrode site, and a light redirecting element disposed at the distal end of the waveguide. The light redirecting element redirects light traveling through the waveguide in a manner that avoids direct illumination of the recording electrode site on the first face of the neural device sidewall.
    Type: Application
    Filed: May 8, 2017
    Publication date: November 16, 2017
    Applicant: NeuroNexus Technologies, Inc.
    Inventors: John P. Seymour, Mayurachat Ning Gulari, Daryl R. Kipke, KC Kong
  • Publication number: 20230351185
    Abstract: Embodiments of the disclosure provide an optimizing method, a computer system, and a computer-readable storage medium for a neural network. In the method, the neural network is pruned sequentially using two different pruning algorithms, and the pruned neural network is retrained after each pruning pass. This reduces both the computational cost and the parameter count of the neural network.
    Type: Application
    Filed: August 3, 2022
    Publication date: November 2, 2023
    Applicant: Wistron Corporation
    Inventors: Jiun-In Guo, En-Chih Chang
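The prune-retrain-prune-retrain sequence can be sketched as below. The two specific algorithms (magnitude-fraction and fixed-threshold pruning) and the toy "retrain" step are assumptions; the patent does not name them:

```python
def magnitude_prune(weights, ratio):
    # First pruning algorithm (assumed): drop the smallest-magnitude
    # fraction of the weights.
    k = int(len(weights) * ratio)
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [0.0 if abs(w) <= cutoff else w for w in weights]

def threshold_prune(weights, threshold):
    # Second pruning algorithm (assumed): drop weights under a fixed
    # magnitude threshold.
    return [0.0 if abs(w) < threshold else w for w in weights]

def retrain(weights):
    # Stand-in for retraining: nudge the surviving weights; a real retrain
    # would run further gradient steps on the pruned network.
    return [w * 1.05 for w in weights]

weights = [0.9, -0.05, 0.4, 0.02, -1.2, 0.3]
# Prune sequentially with the two different algorithms, retraining the
# pruned network in response to each pruning pass.
for prune in (lambda w: magnitude_prune(w, 0.3),
              lambda w: threshold_prune(w, 0.35)):
    weights = retrain(prune(weights))

nonzero = sum(w != 0.0 for w in weights)
print(nonzero, "of", len(weights), "parameters remain")  # 3 of 6
```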
  • Patent number: 11604957
    Abstract: Systems and methods for designing a hybrid neural network comprising at least one physical neural network component and at least one digital neural network component. A loss function is defined within a design space composed of a plurality of voxels, the design space encompassing one or more physical structures of the at least one physical neural network component and one or more architectural features of the digital neural network. Values are determined for at least one functional parameter of the one or more physical structures and at least one architectural parameter of the one or more architectural features, using a domain solver to solve Maxwell's equations so that a loss determined according to the loss function is within a threshold loss. Final structures are defined for the at least one physical neural network component and the digital neural network component based on the values.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: March 14, 2023
    Assignee: X Development LLC
    Inventors: Martin Friedrich Schubert, Brian John Adolf, Jesse Lu
  • Patent number: 11392833
    Abstract: An audio processing system is described. The audio processing system uses a convolutional neural network architecture to process audio data, a recurrent neural network architecture to process at least data derived from an output of the convolutional neural network architecture, and a feed-forward neural network architecture to process at least data derived from an output of the recurrent neural network architecture. The feed-forward neural network architecture is configured to output classification scores for a plurality of sound units associated with speech. The classification scores indicate a presence of one or more sound units in the audio data. The convolutional neural network architecture has a plurality of convolutional groups arranged in series, where a convolutional group includes a combination of two data mappings arranged in parallel.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: July 19, 2022
    Assignee: SoundHound, Inc.
    Inventors: Maisy Wieman, Andrew Carl Spencer, Zìlì Lǐ, Cristina Vasconcelos
  • Publication number: 20150242741
    Abstract: A method of executing co-processing in a neural network comprises swapping a portion of the neural network to a first processing node for a period of time. The method also includes executing the portion of the neural network with the first processing node. Additionally, the method includes returning the portion of the neural network to a second processing node after the period of time. Further, the method includes executing the portion of the neural network with the second processing node.
    Type: Application
    Filed: May 8, 2014
    Publication date: August 27, 2015
    Applicant: QUALCOMM Incorporated
    Inventors: Michael CAMPOS, Anthony LEWIS, Naveen Gandham RAO
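The swap-execute-return cycle described above can be sketched as a scheduling decision per time step. The dot-product "execution" and the fixed swap period are illustrative assumptions:

```python
def execute(portion, x):
    # Toy execution of a network portion: a dot product with its weights.
    return sum(w * xi for w, xi in zip(portion["weights"], x))

def co_process(portion, first_node, second_node, inputs, period):
    # Swap the portion to the first processing node for a period of time,
    # then return it to the second node; execution follows the holder.
    outputs = []
    for step, x in enumerate(inputs):
        node = first_node if step < period else second_node
        outputs.append((node, execute(portion, x)))
    return outputs

portion = {"weights": [1.0, 2.0]}            # a portion of the neural network
results = co_process(portion, "accelerator", "host",
                     [[1, 1], [2, 0], [0, 3]], period=2)
print(results)  # first two steps on the accelerator, then back to the host
```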
  • Publication number: 20230214635
    Abstract: A neural processing device and transaction tracking method thereof are provided. The neural processing device comprises a first set of a plurality of neural cores, a shared memory shared by the first set of the plurality of neural cores, and a programmable hardware transactional memory (PHTM) configured to receive a memory access request directed to the shared memory from the first set of the plurality of neural cores and configured to commit or buffer the memory access request.
    Type: Application
    Filed: October 4, 2022
    Publication date: July 6, 2023
    Inventors: Wongyu Shin, Kyeongryeol Bong
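The commit-or-buffer behaviour of the PHTM can be sketched with a minimal transaction model. The ownership scheme, the replay-on-commit policy, and the API names are assumptions, not the device's actual protocol:

```python
class PHTM:
    # Sketch of a programmable hardware transactional memory fronting the
    # shared memory: conflicting writes are buffered, others committed.
    def __init__(self):
        self.shared = {}   # the shared memory behind the PHTM
        self.owner = {}    # address -> core with an open transaction on it
        self.buffer = []   # buffered memory access requests

    def write(self, core, addr, value):
        if self.owner.get(addr, core) != core:
            self.buffer.append((core, addr, value))   # conflict: buffer it
            return "buffered"
        self.owner[addr] = core                       # no conflict: commit
        self.shared[addr] = value
        return "committed"

    def commit(self, core):
        # Close this core's transactions and replay buffered requests.
        for addr in [a for a, c in self.owner.items() if c == core]:
            del self.owner[addr]
        pending, self.buffer = self.buffer, []
        for c, a, v in pending:
            self.write(c, a, v)

phtm = PHTM()
r0 = phtm.write("core0", 0x10, 1)   # "committed"
r1 = phtm.write("core1", 0x10, 2)   # "buffered": core0 owns the address
phtm.commit("core0")                # replayed: core1's write now lands
print(r0, r1, phtm.shared[0x10])    # committed buffered 2
```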
  • Publication number: 20200133977
    Abstract: An apparatus includes a processor to: train a first neural network of a chain to generate first configuration data including first trained parameters, wherein the chain performs an analytical function generating a set of output values from a set of input values, each neural network has inputs to receive the set of input values and outputs to output a portion of the set of output values, and the neural networks are ordered from the first at the head to a last neural network at the tail, and are interconnected so that each neural network additionally receives the outputs of a preceding neural network; train, using the first configuration data, a next neural network in the chain ordering to generate next configuration data including next trained parameters; and use at least the first and next configuration data and data indicating the interconnections to instantiate the chain to perform the analytical function.
    Type: Application
    Filed: December 26, 2019
    Publication date: April 30, 2020
    Inventors: Henry Gabriel Victor Bequet, Jacques Rioux, John Alejandro Izquierdo, Huina Chen, Juan Du
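The chain's interconnection (each network receives the shared inputs plus the preceding network's outputs, and contributes a portion of the output set) can be instantiated as below. The linear stand-in networks and their weights are assumptions; the sequential training step is omitted:

```python
def run_network(net, x):
    # A stand-in network: one linear layer (a list of weight rows).
    return [sum(w * xi for w, xi in zip(row, x)) for row in net]

def run_chain(nets, inputs):
    # Head to tail: every network receives the set of input values plus
    # the outputs of the preceding network, and each contributes a
    # portion of the chain's set of output values.
    outputs, prev = [], []
    for net in nets:
        out = run_network(net, inputs + prev)
        outputs.extend(out)
        prev = out
    return outputs

n_in = 3
chain = [
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],                     # head: 3 -> 2
    [[0.5] * (n_in + 2), [0.1] * (n_in + 2)],               # 3 + 2 -> 2
    [[1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0]]  # tail: 3 + 2 -> 2
]
result = run_chain(chain, [0.5, -0.2, 1.0])
print(len(result))  # → 6: two output values contributed per network
```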
  • Publication number: 20220036150
    Abstract: According to various embodiments, a method for generating a compact and accurate neural network for a dataset is disclosed. The method includes providing an initial neural network architecture; performing a dataset modification on the dataset, the dataset modification including reducing dimensionality of the dataset; performing a first compression step on the initial neural network architecture that results in a compressed neural network architecture, the first compression step including reducing a number of neurons in one or more layers of the initial neural network architecture based on a feature compression ratio determined by the reduced dimensionality of the dataset; and performing a second compression step on the compressed neural network architecture, the second compression step including one or more of iteratively growing connections, growing neurons, and pruning connections until a desired neural network architecture has been generated.
    Type: Application
    Filed: July 12, 2019
    Publication date: February 3, 2022
    Applicant: The Trustees of Princeton University
    Inventors: Shayan HASSANTABAR, Zeyu WANG, Niraj K. JHA
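The two compression steps (layer shrinkage by the feature compression ratio, then grow-and-prune refinement) can be sketched on layer sizes alone. The concrete dimensions and the net grow/prune counts are illustrative assumptions:

```python
def compress(layer_sizes, feature_ratio):
    # First compression step: shrink each layer's neuron count by the
    # feature compression ratio derived from the reduced dataset dims.
    return [max(1, int(n * feature_ratio)) for n in layer_sizes]

def grow_and_prune(arch, grow=1, prune=2):
    # Second step (sketch): iteratively grow neurons/connections and prune
    # connections; modelled here as a net change per hidden layer.
    return ([arch[0]]
            + [max(1, n + grow - prune) for n in arch[1:-1]]
            + [arch[-1]])

original_dims, reduced_dims = 100, 40   # dataset dimensionality reduction
ratio = reduced_dims / original_dims    # feature compression ratio
arch = [original_dims, 64, 32, 10]      # initial neural network architecture
arch = [reduced_dims] + compress(arch[1:-1], ratio) + [arch[-1]]
arch = grow_and_prune(arch)
print(arch)  # → [40, 24, 11, 10]: a compact architecture for the dataset
```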
  • Patent number: 11344729
    Abstract: A method of controlling a neural stimulus by use of feedback. The neural stimulus is applied to a neural pathway in order to give rise to an evoked action potential on the neural pathway. The stimulus is defined by at least one stimulus parameter. A neural compound action potential response evoked by the stimulus is measured. From the measured evoked response a feedback variable is derived. A feedback loop is completed by using the feedback variable to control the at least one stimulus parameter value. The feedback loop adaptively compensates for changes in a gain of the feedback loop caused by electrode movement relative to the neural pathway.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: May 31, 2022
    Assignee: Saluda Medical Pty Ltd
    Inventors: Peter Scott Vallack Single, Dean Michael Karantonis
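The feedback loop above (measure the evoked response, derive a feedback variable, adjust the stimulus parameter) can be sketched as a simple controller. The linear response model, the gain values, and the integral-style update are assumptions, not the device's control law:

```python
def measure_ecap(amplitude, gain):
    # Stand-in for measuring the evoked compound action potential: the
    # response scales with the stimulus and with a gain that depends on
    # electrode position relative to the neural pathway.
    return gain * amplitude

def feedback_step(amplitude, target, gain, lr=0.2):
    measured = measure_ecap(amplitude, gain)   # measure the evoked response
    error = target - measured                  # derive the feedback variable
    return amplitude + lr * error              # control the stimulus parameter

target, amplitude = 10.0, 2.0
# The electrode moves mid-run: the loop gain halves, and the feedback loop
# adaptively compensates by raising the stimulus amplitude.
for gain in [4.0] * 5 + [2.0] * 5:
    amplitude = feedback_step(amplitude, target, gain)

print(round(measure_ecap(amplitude, 2.0), 2))  # response pulled back to ~10
```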
  • Publication number: 20220076121
    Abstract: A processor-implemented neural architecture search method includes: acquiring performance of neural network blocks included in a pre-trained neural network; selecting at least one target block for performance improvement from the neural network blocks; training weights and architecture parameters of candidate blocks corresponding to the target block based on arbitrary input data and output data of the target block generated based on the input data; and updating the pre-trained neural network by replacing the target block in the pre-trained neural network with one of the candidate blocks based on the trained architecture parameters.
    Type: Application
    Filed: January 20, 2021
    Publication date: March 10, 2022
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Saerom CHOI
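The select-train-replace cycle of this architecture search can be sketched with scalar blocks. The single "scale" parameter per block, the random-search "training" of candidates, and the performance measure are all illustrative assumptions:

```python
import random

random.seed(3)

def block_performance(block, data):
    # Acquire per-block performance: negated mean error on (input, target).
    return -sum(abs(block["scale"] * x - t) for x, t in data) / len(data)

# Pre-trained network as a list of blocks (toy: each scales its input).
network = [{"scale": 1.1}, {"scale": 0.4}, {"scale": 0.95}]
data = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]   # identity is ideal here

# Select the worst-performing block as the target for improvement.
perf = [block_performance(b, data) for b in network]
target_idx = perf.index(min(perf))

# "Train" candidate blocks on the target block's input/output behaviour;
# here training is a random search over the scale parameter.
candidates = [{"scale": random.uniform(0.8, 1.2)} for _ in range(8)]
best = max(candidates, key=lambda b: block_performance(b, data))

# Update the network by replacing the target block with the best candidate.
if block_performance(best, data) > perf[target_idx]:
    network[target_idx] = best

print(target_idx, round(network[target_idx]["scale"], 2))
```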
  • Publication number: 20180349788
    Abstract: An introspection network is a machine-learned neural network that accelerates training of other neural networks. The introspection network receives a weight history for each of a plurality of weights from a current training step for a target neural network. A weight history includes at least four values for the weight that are obtained during training of the target neural network up to the current step. The introspection network then provides, for each of the plurality of weights, a respective predicted value, based on the weight history. The predicted value for a weight represents a value for the weight in a future training step for the target neural network. Thus, the predicted value represents a jump in the training steps of the target neural network, which reduces the training time of the target neural network. The introspection network then sets each of the plurality of weights to its respective predicted value.
    Type: Application
    Filed: May 30, 2017
    Publication date: December 6, 2018
    Inventors: Mausoom Sarkar, Balaji Krishnamurthy, Abhishek Sinha, Aahitagni Mukherjee
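The core idea (predict a future value for each weight from at least four past values, then jump the training ahead) can be sketched with linear extrapolation standing in for the learned introspection network. The histories, the jump size, and the least-squares fit are assumptions:

```python
def predict_weight(history):
    # Stand-in for the introspection network: fit a line to the weight's
    # history (at least four past values) and jump several steps ahead.
    n = len(history)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(history) / n
    slope = (sum(x * y for x, y in zip(xs, history)) - n * mean_x * mean_y) \
            / (sum(x * x for x in xs) - n * mean_x ** 2)
    jump = 4                 # how far into future training steps to jump
    return history[-1] + slope * jump

# Weight histories gathered while training the target neural network.
histories = {"w0": [0.10, 0.14, 0.17, 0.19], "w1": [0.50, 0.45, 0.41, 0.38]}
# Set each weight to its predicted value, skipping ahead in training.
weights = {name: predict_weight(h) for name, h in histories.items()}
print(round(weights["w0"], 3), round(weights["w1"], 3))  # 0.31 0.22
```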
  • Publication number: 20230289583
    Abstract: Systems and methods for adapting a neural network model on a hardware platform. An example method includes obtaining neural network model information comprising decision points associated with a neural network, with one or more first decision points being associated with a layout of the neural network. Platform information associated with a hardware platform for which the neural network model information is to be adapted is accessed. Constraints associated with adapting the neural network model information to the hardware platform are determined based on the platform information, with a first constraint being associated with a processing resource of the hardware platform and with a second constraint being associated with a performance metric. A candidate configuration for the neural network is generated via execution of a satisfiability solver based on the constraints, with the candidate configuration assigning values to the decision points.
    Type: Application
    Filed: March 16, 2023
    Publication date: September 14, 2023
    Inventor: Michael Driscoll
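The constraint-solving step can be sketched with a brute-force stand-in for the satisfiability solver. The decision points, their candidate values, and both constraints below are illustrative assumptions about what platform information might yield:

```python
from itertools import product

# Decision points for the model, each with its candidate values; "layout"
# plays the role of the first decision points in the abstract.
decision_points = {
    "layout": ["NCHW", "NHWC"],
    "tile_size": [16, 32, 64],
    "precision": ["fp16", "fp32"],
}

def satisfies(config):
    # Constraints derived from the platform information (illustrative):
    # a processing-resource constraint and a performance-metric constraint.
    memory = config["tile_size"] * (2 if config["precision"] == "fp16" else 4)
    resource_ok = memory <= 128                 # fits the platform's memory
    perf_ok = not (config["layout"] == "NCHW" and
                   config["precision"] == "fp16")
    return resource_ok and perf_ok

# Brute-force stand-in for the satisfiability solver: the first config
# meeting every constraint assigns a value to each decision point.
names = list(decision_points)
candidate = next(dict(zip(names, vals))
                 for vals in product(*decision_points.values())
                 if satisfies(dict(zip(names, vals))))
print(candidate)
```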
  • Publication number: 20040230270
    Abstract: An interface for selective excitation or sensing of neural cells in a biological neural network is provided. The interface includes a membrane with a number of channels passing through the membrane. Each channel has at least one electrode within it. Neural cells in the biological neural network grow or migrate into the channels, thereby coming into close proximity to the electrodes.
    Type: Application
    Filed: December 19, 2003
    Publication date: November 18, 2004
    Inventors: Philip Huie, Daniel V. Palanker, Harvey A. Fishman, Alexander Vankov
  • Patent number: 11620495
    Abstract: Some embodiments provide a method for executing a neural network that includes multiple nodes. The method receives an input for a particular execution of the neural network. The method receives state data that includes data generated from at least two previous executions of the neural network. The method executes the neural network to generate a set of output data for the received input. A set of the nodes performs computations using (i) data output from other nodes of the particular execution of the neural network and (ii) the received state data generated from at least two previous executions of the neural network.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: April 4, 2023
    Assignee: PERCEIVE CORPORATION
    Inventors: Andrew C. Mihal, Steven L. Teig, Eric A. Sather
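The distinctive claim here, state carried from at least two previous executions into the current one, can be sketched with a two-slot state buffer. The toy combination rule (weighting the two prior outputs) is an assumption:

```python
from collections import deque

class StatefulNetwork:
    # Keeps state data generated from the two most recent executions and
    # feeds it into the next execution alongside the fresh input.
    def __init__(self):
        self.state = deque([0.0, 0.0], maxlen=2)   # two previous executions

    def execute(self, x):
        # Toy node computation: combine the input with data output from
        # both of the two previous executions.
        out = x + 0.5 * self.state[-1] + 0.25 * self.state[-2]
        self.state.append(out)   # becomes state data for future executions
        return out

net = StatefulNetwork()
outputs = [net.execute(x) for x in [1.0, 1.0, 1.0]]
print(outputs)  # → [1.0, 1.5, 2.0]: each run sees the two prior results
```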
  • Publication number: 20210303726
    Abstract: A method for privacy preserving synthetic string generation using recurrent neural networks includes receiving input data that includes a plurality of strings with private information. A neural network model is trained using the plurality of strings. The neural network model includes a recurrent neural network (RNN). An anonymous string is generated with the neural network model after training the neural network model with the plurality of strings from the input data. The anonymous string is validated to preclude the private information from the anonymous string. Anonymous data is transmitted that includes the anonymous string and precludes the private information in response to a request for the anonymous data.
    Type: Application
    Filed: March 31, 2020
    Publication date: September 30, 2021
    Applicant: Intuit Inc.
    Inventors: Liron Hayman, Shlomi Medalion
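The generate-then-validate pipeline can be sketched with a character-level Markov model standing in for the RNN. The toy private strings and the exact-match validation rule are assumptions; the patent's validation is presumably stronger:

```python
import random

random.seed(5)

private_strings = ["alice@corp.com", "bob@corp.com", "carol@corp.com"]

def train(strings):
    # Stand-in for training the RNN: a character-level Markov model over
    # the plurality of strings ("^" and "$" mark start and end).
    model = {}
    for s in strings:
        for a, b in zip("^" + s, s + "$"):
            model.setdefault(a, []).append(b)
    return model

def generate(model):
    # Sample one synthetic string from the trained model.
    out, ch = "", "^"
    while True:
        ch = random.choice(model[ch])
        if ch == "$" or len(out) > 30:
            return out
        out += ch

def is_anonymous(candidate, private):
    # Validation step: the synthetic string must not reproduce any of the
    # private strings from the input data.
    return candidate not in private

model = train(private_strings)
# Generate until validation passes, then the string is safe to transmit.
anonymous = next(s for s in iter(lambda: generate(model), None)
                 if is_anonymous(s, private_strings))
print(is_anonymous(anonymous, private_strings))  # → True
```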