Search Patents
  • Publication number: 20070094172
    Abstract: A neural network-based rating system includes a data set, said data set further comprising at least two records and at least one field associated with said records, and a data rating application, which includes: means for user input of ratings for at least a first of said records of said data set; at least one artificial neural network; means for automatically dimensioning said artificial neural network as a function of said fields within said data set; means for initiating training of said artificial neural network, said trained artificial neural network operative to generate ratings for at least a second of said records of said data set; means for initiating rating of at least said second record of said data set by said trained artificial neural network; and means for sorting said data set based on said user ratings and said artificial neural network-generated ratings.
    Type: Application
    Filed: July 21, 2006
    Publication date: April 26, 2007
    Inventor: Stephen Thaler
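    Illustrative sketch: a minimal Python/NumPy sketch of the rating flow described in the abstract above. The dimensioning rule (hidden width = 2 × number of fields), the toy data, and the gradient-descent loop are assumptions for illustration, not the patent's implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      records = rng.random((20, 6))                    # data set: 20 records, 6 fields
      user_ratings = records[:10] @ rng.random(6)      # user rated the first 10 records (toy)

      n_fields = records.shape[1]
      hidden = 2 * n_fields                            # "automatic dimensioning" from the fields (assumed rule)
      W1 = rng.normal(size=(n_fields, hidden)) * 0.1
      W2 = rng.normal(size=(hidden, 1)) * 0.1

      for _ in range(2000):                            # train on the user-rated records
          h = np.tanh(records[:10] @ W1)
          err = h @ W2 - user_ratings[:, None]
          W2 -= 0.05 * h.T @ err / 10
          W1 -= 0.05 * records[:10].T @ ((err @ W2.T) * (1 - h ** 2)) / 10

      generated = (np.tanh(records[10:] @ W1) @ W2).ravel()   # network-generated ratings for unrated records
      all_ratings = np.concatenate([user_ratings, generated])
      print("records sorted by rating:", np.argsort(-all_ratings)[:5])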
  • Patent number: 10449382
    Abstract: An implantable subsystem includes multiple implantable neural probes disposed on a flexible substrate. Each neural probe is configured to magnetically stimulate brain neurons. Each probe includes an array of magnetic neural stimulators that magnetically stimulate neurons. Each probe also includes neural probe activation circuitry comprising thin film switches disposed on the flexible substrate. The thin film switches are electrically coupled to row and column activation lines and selectively activate the magnetic neural stimulators in response to neural stimulation activation signals carried on the activation lines.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: October 22, 2019
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Bernard D. Casse, George Daniel, Jonathan Rivnay, Christopher Paulson, Robert Street
  • Publication number: 20210133552
    Abstract: A neural network learning device 20 is equipped with: a determination module 22 that determines, for each layer and on the basis of the structure of a neural network 21 containing multiple layers, the size of a local region in learning information 200 that is to be learned by the neural network 21; and a control module 25 that, on the basis of the size of the local region determined by the determination module 22, extracts the local region from the learning information 200 and performs control such that learning of the learning information represented by the extracted local region by the neural network 21 is carried out repeatedly while the size of the extracted local region is changed. A reduction in the generalization performance of the neural network can thus be avoided even when there is little learning data.
    Type: Application
    Filed: January 17, 2018
    Publication date: May 6, 2021
    Applicant: NEC Corporation
    Inventor: Masato ISHII
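    Illustrative sketch: a NumPy sketch of training on local regions whose size is chosen per layer from the network structure (here, from per-layer receptive fields) and varied across repeats. The receptive-field rule and the toy image are assumptions, not the patent's method.

      import numpy as np

      def receptive_fields(kernel_sizes, strides):
          """Per-layer receptive field for a simple conv stack (square kernels, no dilation assumed)."""
          rf, jump, fields = 1, 1, []
          for k, s in zip(kernel_sizes, strides):
              rf += (k - 1) * jump
              jump *= s
              fields.append(rf)
          return fields

      rng = np.random.default_rng(0)
      image = rng.random((64, 64, 3))                         # stand-in "learning information"
      sizes = receptive_fields([3, 3, 3, 3], [1, 2, 1, 2])    # one local-region size per layer

      for epoch in range(3):
          for size in sizes:                                  # repeat learning while changing the region size
              top = rng.integers(0, 64 - size + 1)
              left = rng.integers(0, 64 - size + 1)
              region = image[top:top + size, left:left + size]
              # train_step(model, region)                     # placeholder for the actual update
              print(epoch, size, region.shape)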
  • Publication number: 20190138606
    Abstract: Disclosed embodiments include a neural network-based translation method, including: splitting an unknown word in an initial translation into one or more characters, and inputting, into a first multi-layer neural network, the character sequence constituted by the one or more characters; obtaining a character vector for each character in the character sequence by using the first multi-layer neural network, and inputting all character vectors in the character sequence into a second multi-layer neural network; encoding all the character vectors by using the second multi-layer neural network and a preset common word database to obtain a semantic vector; and inputting the semantic vector into a third multi-layer neural network, decoding the semantic vector by using the third multi-layer neural network, and determining a final translation of the to-be-translated sentence based on the initial translation of the to-be-translated sentence.
    Type: Application
    Filed: January 7, 2019
    Publication date: May 9, 2019
    Inventors: Zhaopeng TU, Hang LI, Wenbin JIANG
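    Illustrative sketch: a NumPy sketch of the three-stage character pipeline for an unknown word (characters → character vectors → semantic vector → decoder scores). The plain matrices standing in for the three multi-layer networks, and all dimensions, are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      CHARS = "abcdefghijklmnopqrstuvwxyz"
      char_to_id = {c: i for i, c in enumerate(CHARS)}

      emb = rng.normal(size=(len(CHARS), 16))   # stand-in for the first network: char -> char vector
      enc = rng.normal(size=(16, 32))           # stand-in for the second network: char vectors -> semantic vector
      dec = rng.normal(size=(32, 8))            # stand-in for the third network: semantic vector -> scores

      def translate_unknown(word):
          ids = [char_to_id[c] for c in word.lower() if c in char_to_id]   # split into characters
          char_vectors = emb[ids]                                          # one vector per character
          semantic = np.tanh(char_vectors @ enc).mean(axis=0)              # encode into a single semantic vector
          return semantic @ dec                                            # toy decoder scores

      print(translate_unknown("transformer").round(2))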
  • Patent number: 11106974
    Abstract: A technique for training a neural network including an input layer, one or more hidden layers and an output layer, in which the trained neural network can be used to perform a task such as speech recognition. In the technique, a base of the neural network having at least a pre-trained hidden layer is prepared. A parameter set associated with one pre-trained hidden layer in the neural network is decomposed into a plurality of new parameter sets. The number of hidden layers in the neural network is increased by using the plurality of the new parameter sets. Pre-training for the neural network is performed.
    Type: Grant
    Filed: July 5, 2017
    Date of Patent: August 31, 2021
    Assignee: International Business Machines Corporation
    Inventors: Takashi Fukuda, Osamu Ichikawa
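    Illustrative sketch: one way a pre-trained hidden layer's parameter set could be decomposed into multiple new parameter sets that deepen the network, here via truncated SVD. The rank and the use of SVD are assumptions; the abstract does not specify the decomposition.

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.normal(size=(512, 512))                 # weights of one pre-trained hidden layer (toy)

      U, s, Vt = np.linalg.svd(W, full_matrices=False)
      r = 256                                         # rank of the decomposition (assumed)
      W1 = U[:, :r] * np.sqrt(s[:r])                  # new parameter set 1: 512 -> 256
      W2 = np.sqrt(s[:r])[:, None] * Vt[:r]           # new parameter set 2: 256 -> 512

      x = rng.normal(size=(4, 512))
      rel_err = np.linalg.norm(x @ W - x @ W1 @ W2) / np.linalg.norm(x @ W)
      print(f"rank-{r} two-layer replacement, relative error {rel_err:.3f}")
      # The deepened stack (..., W1, W2, ...) would then be pre-trained further, per the abstract.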
  • Publication number: 20210105565
    Abstract: A hearing device comprises: an input transducer comprising a microphone for providing an electric input signal representative of sound in the environment of the hearing device; a pre-processor for processing the electric input signal and providing a multitude of feature vectors, each representative of a time segment thereof; and a neural network processor adapted to implement a neural network implementing a detector configured to provide an output indicative of a characteristic property of the electric input signal, the neural network being configured to receive said multitude of feature vectors as input vectors and to provide corresponding output vectors representative of said output of said detector in dependence on said input vectors.
    Type: Application
    Filed: October 1, 2020
    Publication date: April 8, 2021
    Applicant: Oticon A/S
    Inventors: Michael Syskind PEDERSEN, Asger Heidemann ANDERSEN, Jesper JENSEN, Nels Hede ROHDE, Anders Brødløs OLSEN, Michael Smed KRISTENSEN, Thomas BENTSEN, Svend Oscar PETERSEN
  • Patent number: 11610117
    Abstract: Systems and methods for adapting a neural network model on a hardware platform. An example method includes obtaining neural network model information comprising a plurality of decision points associated with a neural network, with one or more first decision points being associated with a layout of the neural network. Platform information associated with a hardware platform for which the neural network model information is to be adapted is accessed. Constraints associated with adapting the neural network model information to the hardware platform are determined based on the platform information, with a first constraint being associated with a processing resource of the hardware platform and a second constraint being associated with a performance metric. A candidate configuration for the neural network is generated via execution of a satisfiability solver based on the constraints, with the candidate configuration assigning values to the plurality of decision points.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: March 21, 2023
    Assignee: Tesla, Inc.
    Inventor: Michael Driscoll
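    Illustrative sketch: a brute-force stand-in for the satisfiability solver, assigning values to two illustrative decision points under made-up platform constraints (an SRAM budget as the processing-resource constraint, a latency budget as the performance metric). Names and numbers are assumptions.

      from itertools import product

      decision_points = {
          "layout": ["NCHW", "NHWC"],    # first decision point: layout of the network
          "tile_size": [32, 64, 128],    # a second, illustrative decision point
      }

      SRAM_LIMIT = 64 * 1024             # processing-resource constraint (assumed)
      LATENCY_BUDGET_US = 500            # performance-metric constraint (assumed)

      def estimate_sram(layout, tile):
          return tile * tile * (4 if layout == "NCHW" else 3)

      def estimate_latency_us(layout, tile):
          return 20000 // tile + (50 if layout == "NHWC" else 80)

      candidates = [
          dict(zip(decision_points, values))
          for values in product(*decision_points.values())
          if estimate_sram(*values) <= SRAM_LIMIT
          and estimate_latency_us(*values) <= LATENCY_BUDGET_US
      ]
      print(candidates[0] if candidates else "no feasible configuration")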
  • Publication number: 20230186100
    Abstract: A computer processing system is configured to train a model for use in semantic image segmentation. The model comprises a refinement neural network and a discriminator neural network. The refinement neural network is configured to receive a predicted label distribution for an image, obtain one or more random values from a random or pseudo-random noise source, use the one or more random values to generate a plurality of predicted segmentation maps from the received predicted label distribution, and output the plurality of predicted segmentation maps to the discriminator neural network. The computer processing system is configured to train the refinement neural network using an objective function that is a function of an output of the discriminator neural network and that further includes a term representative of a difference between the predicted label distribution and an average of the plurality of predicted segmentation maps output by the refinement neural network for the predicted label distribution.
    Type: Application
    Filed: May 27, 2021
    Publication date: June 15, 2023
    Inventors: Cedric Nugteren, Elias Kassapis, Georgi Dikov
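    Illustrative sketch: a NumPy sketch of the objective only, combining an adversarial term from the discriminator's output with a term comparing the input label distribution to the average of the sampled segmentation maps. The stand-in refinement and discriminator outputs, and the exact loss forms, are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      label_dist = rng.random((3, 8, 8))                     # predicted label distribution (3 classes, toy)
      label_dist /= label_dist.sum(axis=0, keepdims=True)

      samples = []                                           # refinement-net outputs, one per noise draw (stand-in)
      for _ in range(5):
          z = rng.normal()                                   # random value from the noise source
          s = np.clip(label_dist + 0.05 * z, 1e-6, None)
          samples.append(s / s.sum(axis=0, keepdims=True))
      samples = np.stack(samples)

      disc_scores = rng.random(5)                            # discriminator outputs for the samples (stand-in)
      adversarial_term = -np.log(disc_scores + 1e-8).mean()
      consistency_term = np.abs(samples.mean(axis=0) - label_dist).mean()
      loss = adversarial_term + consistency_term
      print(f"objective = {loss:.4f}")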
  • Publication number: 20200342324
    Abstract: A system and method for providing object recognition using artificial neural networks. The method includes capturing a plurality of reference images with a camera associated with an edge node on a communication network. The reference images are received by a centralized server on the communication network. The reference images are analyzed with a parent neural network of the centralized server to determine a subset of objects identified by the parent neural network in the reference images. One or more filters that are responsive to the subset of objects are selected from the parent neural network. A pruned neural network is created from only the one or more filters. The pruned neural network is deployed to the edge node. Real-time images are captured with the camera of the edge node and objects in the real-time images are identified with the pruned neural network.
    Type: Application
    Filed: January 9, 2019
    Publication date: October 29, 2020
    Inventors: KALPATHY SITARAMAN SIVARAMAN, SIRISHA RANGAVAJHALA, ABHISHEK MURTHY, TALMAI BRANDÃO DE OLIVEIRA, XIAOKE SHEN
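    Illustrative sketch: keep only the parent network's convolution filters that respond strongly to the reference images, and assemble the kept filters into a smaller set for the edge node. The responsiveness score (dot products against toy patches) and the keep rule are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      filters = rng.normal(size=(64, 3, 3, 3))          # parent-network conv filters (toy)
      reference = rng.random((10, 3, 32, 32))           # reference images from the edge camera (toy)

      patches = reference[:, :, :3, :3]                 # one 3x3 patch per reference image (toy probe)
      activations = np.einsum('ncij,fcij->nf', patches, filters)   # per-image, per-filter response

      score = np.abs(activations).mean(axis=0)          # responsiveness of each filter
      keep = score > np.median(score)                   # keep the more responsive half (assumed rule)
      pruned_filters = filters[keep]
      print(filters.shape[0], "->", pruned_filters.shape[0], "filters deployed to the edge node")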
  • Publication number: 20220398377
    Abstract: Systems, apparatuses, and methods are described for inverting neural embeddings. One or more forward neural embeddings associated with meanings, features, and/or characteristics of data samples may be generated for one or more data samples. One or more inverse neural embeddings associated with the one or more forward neural embeddings may be determined. One or more inverse feature sets for the one or more inverse neural embeddings may be generated.
    Type: Application
    Filed: June 14, 2021
    Publication date: December 15, 2022
    Inventors: Tarek Aziz Lahlou, Nathan Wolfe, Christopher Larson
  • Publication number: 20150112360
    Abstract: A neural probe system having a single guide tube that is inserted into neural tissue and from which a number of neural probes can be deployed is described. Each probe is deployable into tissue along a desired trajectory. This is done by supporting the electrode array on a spring tape-type carrier that maintains axial stiffness once the neural probe has deployed out of a channel in the guide tube. That way, a target neural tissue is bounded by an increased number of neural probes while minimizing trauma to surrounding body tissue.
    Type: Application
    Filed: October 21, 2014
    Publication date: April 23, 2015
    Inventors: David S. Pellinen, Bencharong Suwarato, Rio J. Vetter, Jamille Farraye Hetke, Daryl R. Kipke
  • Patent number: 11514296
    Abstract: The present disclosure provides an output method for multiple neural networks. The method includes dividing the operator operation process for each of the neural networks, or the operator operation processes for some of the neural networks, into multiple executions according to a preset ratio of output frame rates among the multiple neural networks; and executing the operator operation processes for the multiple neural networks sequentially by switching among the networks, such that the multiple neural networks produce output uniformly and satisfy the preset ratio of output frame rates.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: November 29, 2022
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventor: Yang Zhang
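    Illustrative sketch: split each network's operator pass into the same number of chunks and give network A two time slices for every one of B's, so completed passes (outputs) arrive uniformly at roughly the preset 2:1 ratio. Chunk counts and the round-robin rule are assumptions.

      from collections import deque

      OPS = 6                                              # chunks per full forward pass (assumed)
      def fresh(name):
          return deque(f"{name}_op{i}" for i in range(OPS))

      nets = {"A": fresh("A"), "B": fresh("B")}
      schedule = ["A", "A", "B"]                           # preset output frame-rate ratio A:B = 2:1
      outputs = []

      for step in range(36):
          name = schedule[step % len(schedule)]
          nets[name].popleft()                             # execute one chunk of that network's operators
          if not nets[name]:                               # full pass finished -> the network outputs a frame
              outputs.append(name)
              nets[name] = fresh(name)

      print(outputs)                                       # ['A', 'A', 'B', 'A', 'A', 'B'] - uniform 2:1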
  • Publication number: 20220076121
    Abstract: A processor-implemented neural architecture search method includes: acquiring performance of neural network blocks included in a pre-trained neural network; selecting at least one target block for performance improvement from the neural network blocks; training weights and architecture parameters of candidate blocks corresponding to the target block based on arbitrary input data and output data of the target block generated based on the input data; and updating the pre-trained neural network by replacing the target block in the pre-trained neural network with one of the candidate blocks based on the trained architecture parameters.
    Type: Application
    Filed: January 20, 2021
    Publication date: March 10, 2022
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Saerom CHOI
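    Illustrative sketch: fit two candidate blocks to reproduce a target block's input-to-output behaviour on arbitrary inputs, then keep the better one as the replacement. The linear target block, the least-squares fitting, and the candidate widths are assumptions; the patent trains weights and architecture parameters jointly.

      import numpy as np

      rng = np.random.default_rng(0)
      target_W = rng.normal(size=(32, 32))                 # "target block" selected for improvement (toy)
      x = rng.normal(size=(256, 32))                       # arbitrary input data
      y = x @ target_W                                     # output data generated by the target block

      def fit_candidate(width):
          """Fit a two-matrix candidate block (32 -> width -> 32) to mimic the target block."""
          A = rng.normal(size=(32, width))
          B, *_ = np.linalg.lstsq(x @ A, y, rcond=None)    # closed-form fit of the second matrix
          return np.linalg.norm(x @ A @ B - y)

      errors = {width: fit_candidate(width) for width in (8, 32)}   # two candidate blocks
      best = min(errors, key=errors.get)
      print("replace the target block with the candidate of width", best)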
  • Publication number: 20230237660
    Abstract: Systems and methods are provided for medical image classification of images from varying sources. A set of microscopic medical images is acquired, and a first neural network module configured to reduce each of the set of microscopic medical images to a feature representation is generated. The first neural network module, a second neural network module, and a third neural network module are trained on at least a subset of the set of microscopic medical images. The second neural network module is trained to receive the feature representation associated with an image from the set of microscopic medical images and classify the image into one of a first plurality of output classes. The third neural network module is trained to receive the feature representation, classify the image into one of a second plurality of output classes based on the feature representation, and provide feedback to the first neural network module.
    Type: Application
    Filed: June 29, 2021
    Publication date: July 27, 2023
    Inventors: Hadi Shafiee, Prudhvi Thirumalaraju, Manoj Kumar Kanakasabapathy, Sai Hemanth Kumar
  • Patent number: 8527042
    Abstract: Various system embodiments comprise a neural stimulation delivery system adapted to deliver a neural stimulation signal for use in delivering a neural stimulation therapy, a side effect detector, and a controller. The controller is adapted to control the neural stimulation delivery system, receive a signal indicative of a detected side effect, determine whether the detected side effect is attributable to the delivered neural stimulation therapy, and automatically titrate the neural stimulation therapy to abate the side effect. In various embodiments, the side effect detector includes a cough detector. In various embodiments, the controller is adapted to independently adjust at least one stimulation parameter for at least one phase of the biphasic waveform as part of a process to titrate the neural stimulation therapy. Other aspects and embodiments are provided herein.
    Type: Grant
    Filed: January 23, 2012
    Date of Patent: September 3, 2013
    Assignee: Cardiac Pacemakers, Inc.
    Inventors: Imad Libbus, Julio C. Spinelli
  • Publication number: 20230334806
    Abstract: Neural representations may be used for multi-view reconstruction of scenes. A plurality of color images representing a scene from a plurality of camera poses may be received. For each point of a plurality of points along a ray, a signed distance and a color value may be determined as a function of a feature volume, a first neural network, and a second neural network. A predicted output color may be determined as a function of the density. At least one of the first neural network, the second neural network, the feature volume, or the transformation parameter may be adjusted based on the predicted output color and a corresponding target color obtained based on one of the color images. A three-dimensional representation of the scene may be displayed based on at least one of the first neural network, the second neural network, the feature volume, or the transformation parameter.
    Type: Application
    Filed: March 17, 2023
    Publication date: October 19, 2023
    Applicant: Meta Platforms Technologies, LLC
    Inventors: Lei XIAO, Derek NOWROUZEZAHRAI, Joey LITALIEN, Feng LIU
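    Illustrative sketch: the rendering step along a single ray, with a toy sphere standing in for the signed-distance network and a constant colour standing in for the colour network. Signed distances are mapped to densities (a Laplace-CDF-style mapping, assumed here) and composited into the predicted output colour, which would then be compared against the target colour from an input image.

      import numpy as np

      ts = np.linspace(0.0, 4.0, 64)                          # sample points along one camera ray
      origin, direction = np.array([0., 0., -2.]), np.array([0., 0., 1.])
      points = origin + ts[:, None] * direction

      sdf = np.linalg.norm(points, axis=1) - 0.5              # stand-in for the signed-distance network (sphere)
      colors = np.tile([0.2, 0.6, 0.9], (len(ts), 1))         # stand-in for the colour network

      beta = 0.05                                             # sharpness of the SDF-to-density mapping (assumed)
      density = np.where(sdf > 0, 0.5 * np.exp(-sdf / beta),
                         1.0 - 0.5 * np.exp(sdf / beta)) / beta
      delta = np.diff(ts, append=ts[-1] + (ts[1] - ts[0]))
      alpha = 1.0 - np.exp(-density * delta)
      transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
      weights = alpha * transmittance
      predicted_color = (weights[:, None] * colors).sum(axis=0)

      target_color = np.array([0.2, 0.6, 0.9])                # colour from one input image (toy)
      print("colour error:", float(np.linalg.norm(predicted_color - target_color)))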
  • Patent number: 11620495
    Abstract: Some embodiments provide a method for executing a neural network that includes multiple nodes. The method receives an input for a particular execution of the neural network. The method receives state data that includes data generated from at least two previous executions of the neural network. The method executes the neural network to generate a set of output data for the received input. A set of the nodes performs computations using (i) data output from other nodes of the particular execution of the neural network and (ii) the received state data generated from at least two previous executions of the neural network.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: April 4, 2023
    Assignee: PERCEIVE CORPORATION
    Inventors: Andrew C. Mihal, Steven L. Teig, Eric A. Sather
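    Illustrative sketch: a toy network step whose nodes use, alongside the new input, state data saved from the two previous executions. Sizes, the tanh update, and the two-deep buffer are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      W_in, W_s1, W_s2 = (rng.normal(size=(8, 8)) for _ in range(3))
      state = [np.zeros(8), np.zeros(8)]                 # state from the two previous executions

      def execute(x):
          prev1, prev2 = state
          h = np.tanh(x @ W_in + prev1 @ W_s1 + prev2 @ W_s2)   # nodes consume both past states
          state[1], state[0] = state[0], h               # roll the two-deep state buffer
          return h

      for _ in range(3):
          out = execute(rng.normal(size=8))
      print(out[:4].round(3))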
  • Patent number: 5422983
    Abstract: The neural engine (20) is a hardware implementation of a neural network for use in real-time systems. The neural engine (20) includes a control circuit (26) and one or more multiply/accumulate circuits (28). Each multiply/accumulate circuit (28) includes a parallel/serial arrangement of multiple multiplier/accumulators (84) interconnected with weight storage elements (80) to yield multiple neural weightings and sums in a single clock cycle. A neural processing language is used to program the neural engine (20) through a conventional host personal computer (22). The parallel processing permits very high processing speeds to permit real-time pattern classification capability.
    Type: Grant
    Filed: July 19, 1993
    Date of Patent: June 6, 1995
    Assignee: Hughes Aircraft Company
    Inventors: Patrick F. Castelaz, Dwight E. Mills, Steven C. Woo, Jack I. Jmaev, Tammy L. Henrikson
  • Publication number: 20210042612
    Abstract: A modular neural network system comprising: a plurality of neural network modules; and a controller configured to select a combination of at least one of the neural network modules to construct a neural network dedicated to a specific task.
    Type: Application
    Filed: October 22, 2020
    Publication date: February 11, 2021
    Inventors: Pavel NOSKO, Alex ROSEN, Ilya BLAYVAS, Gal PERETS, Ron FRIDENTAL
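    Illustrative sketch: a controller that composes a task-specific pipeline from a pool of modules. The module pool, the selection table, and the simple list-processing "modules" are assumptions standing in for neural network modules.

      modules = {
          "edges": lambda x: [abs(a - b) for a, b in zip(x, x[1:])],   # stand-in neural network modules
          "pool":  lambda x: [max(x)],
          "scale": lambda x: [2 * v for v in x],
      }
      selection = {                          # controller's choice of modules per task (assumed)
          "detect":  ["edges", "pool"],
          "enhance": ["scale"],
      }

      def build_network(task):
          chosen = [modules[name] for name in selection[task]]
          def network(x):
              for module in chosen:          # run the selected modules in sequence
                  x = module(x)
              return x
          return network

      print(build_network("detect")([1, 4, 2, 8]))   # -> [6]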
  • Publication number: 20180349788
    Abstract: An introspection network is a machine-learned neural network that accelerates training of other neural networks. The introspection network receives a weight history for each of a plurality of weights from a current training step for a target neural network. A weight history includes at least four values for the weight that are obtained during training of the target neural network up to the current step. The introspection network then provides, for each of the plurality of weights, a respective predicted value, based on the weight history. The predicted value for a weight represents a value for the weight in a future training step for the target neural network. Thus, the predicted value represents a jump in the training steps of the target neural network, which reduces the training time of the target neural network. The introspection network then sets each of the plurality of weights to its respective predicted value.
    Type: Application
    Filed: May 30, 2017
    Publication date: December 6, 2018
    Inventors: Mausoom Sarkar, Balaji Krishnamurthy, Abhishek Sinha, Aahitagni Mukherjee
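    Illustrative sketch: a stand-in predictor that extrapolates each weight a few steps ahead from its last four recorded values and "jumps" the weights there. The linear extrapolation replaces the learned introspection network, and the 5-step horizon is an assumption.

      import numpy as np

      rng = np.random.default_rng(0)
      n_weights = 5
      history = np.cumsum(rng.normal(0, 0.01, size=(4, n_weights)), axis=0)   # 4 past values per weight

      def predict_future(hist, horizon=5):
          """Per-weight linear extrapolation (the learned introspection network's role here)."""
          steps = np.arange(hist.shape[0])
          slope, intercept = np.polyfit(steps, hist, 1)        # one fit per weight column
          return intercept + slope * (hist.shape[0] - 1 + horizon)

      jumped = predict_future(history)
      print("current:", history[-1].round(3))
      print("jumped :", jumped.round(3))                        # weights set to the predicted future values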