Multilayer Feedforward Patents (Class 706/31)
-
Patent number: 11593664
Abstract: A method can be performed prior to implementation of a neural network by a processing unit. The neural network comprises a succession of layers and at least one operator applied between at least one pair of successive layers. A computational tool generates an executable code intended to be executed by the processing unit in order to implement the neural network. The computational tool generates at least one transfer function between the at least one pair of layers taking the form of a set of pre-computed values.
Type: Grant
Filed: June 30, 2020
Date of Patent: February 28, 2023
Assignees: STMicroelectronics (Rousset) SAS, STMicroelectronics S.r.l.
Inventors: Laurent Folliot, Pierre Demaj, Emanuele Plebani
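A minimal sketch (not the patented implementation) of the idea described in 11593664: an operator applied between two layers is replaced by a table of values pre-computed offline, so the generated runtime code performs a lookup instead of evaluating the function. The quantization scheme, table size, and function names here are illustrative assumptions.

```python
import numpy as np

def build_transfer_table(op, in_scale, num_entries=256):
    """Pre-compute op(x) for every representable quantized input code (illustrative)."""
    codes = np.arange(num_entries)
    # Assume 8-bit codes mapped linearly onto [-in_scale, +in_scale].
    real_inputs = (codes - num_entries // 2) / (num_entries // 2) * in_scale
    return op(real_inputs).astype(np.float32)   # table indexed by input code

def apply_transfer(codes, table):
    """At inference time the inter-layer operator is a single table lookup per value."""
    return table[codes]

# Example: fold a sigmoid between two layers into a 256-entry lookup table.
table = build_transfer_table(lambda x: 1.0 / (1.0 + np.exp(-x)), in_scale=6.0)
activations = np.random.randint(0, 256, size=(4, 16))   # quantized output of a layer
print(apply_transfer(activations, table).shape)          # (4, 16)
```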
-
Patent number: 11551093
Abstract: In implementations of resource-aware training for neural networks, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
Type: Grant
Filed: January 22, 2019
Date of Patent: January 10, 2023
Assignee: Adobe Inc.
Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
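A rough sketch of the dead/survived split described in 11551093, comparing a per-neuron activation scale (here a batch-norm-style scale parameter) against a threshold; the reborn-neuron reattachment step is only indicated in a comment, not implemented. The threshold value and names are illustrative.

```python
import numpy as np

def split_neurons(activation_scales, threshold=1e-2):
    """Classify a layer's neurons as 'dead' or 'survived' from their activation scales."""
    scales = np.abs(np.asarray(activation_scales))
    dead = np.where(scales < threshold)[0]        # candidates to be re-initialized ("reborn")
    survived = np.where(scales >= threshold)[0]
    return dead, survived

# Example: per-neuron scales for one layer (e.g., batch-norm gammas).
gammas = np.array([0.90, 0.003, 0.45, 0.0001, 0.32])
dead, survived = split_neurons(gammas)
print("dead:", dead, "survived:", survived)       # dead: [1 3] survived: [0 2 4]
```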
-
Patent number: 11544525
Abstract: An artificial intelligence (AI) system is disclosed. The AI system provides an AI system lane processing chain, at least one AI processing block, a local memory, a hardware sequencer, and a lane composer. The at least one AI processing block, the local memory, the hardware sequencer, and the lane composer are each coupled to the AI system lane processing chain. The AI system lane processing chain is dynamically created by the lane composer.
Type: Grant
Filed: July 31, 2019
Date of Patent: January 3, 2023
Inventor: Sateesh Kumar Addepalli
-
Patent number: 11544535
Abstract: Various embodiments describe techniques for making inferences from graph-structured data using graph convolutional networks (GCNs). The GCNs use various pre-defined motifs to filter and select adjacent nodes for graph convolution at individual nodes, rather than merely using edge-defined immediate-neighbor adjacency for information integration at each node. In certain embodiments, the graph convolutional networks use attention mechanisms to select a motif from multiple motifs and select a step size for each respective node in a graph, in order to capture information from the most relevant neighborhood of the respective node.
Type: Grant
Filed: March 8, 2019
Date of Patent: January 3, 2023
Assignee: ADOBE INC.
Inventors: John Boaz Tsang Lee, Ryan Rossi, Sungchul Kim, Eunyee Koh, Anup Rao
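An illustrative computation of a motif-induced adjacency (here the triangle motif), which is one way to build the motif-filtered neighborhoods that 11544535 convolves over; the attention-based motif and step-size selection is not shown, and this is a sketch rather than the patented method.

```python
import numpy as np

def triangle_motif_adjacency(A):
    """Weight each edge by the number of triangles it participates in.

    (A @ A)[i, j] counts length-2 paths i -> k -> j; multiplying elementwise
    by A keeps only pairs (i, j) that are also directly connected, i.e. triangles.
    """
    A = np.asarray(A, dtype=float)
    return (A @ A) * A

# Toy undirected graph: nodes 0-1-2 form a triangle, node 3 hangs off node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(triangle_motif_adjacency(A))   # triangle edges get weight 1, the pendant edge gets 0
```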
-
Patent number: 11537869
Abstract: Systems and methods provide a learned difference metric that operates in a wide artifact space. An example method includes initializing a committee of deep neural networks with labeled distortion pairs, iteratively actively learning a difference metric using the committee and psychophysics tasks for informative distortion pairs, and using the difference metric as an objective function in a machine-learned digital file processing task. Iteratively actively learning the difference metric can include providing an unlabeled distortion pair as input to each of the deep neural networks in the committee, a distortion pair being a base image and a distorted image resulting from application of an artifact applied to the base image, obtaining a plurality of difference metric scores for the unlabeled distortion pair from the deep neural networks, and identifying the unlabeled distortion pair as an informative distortion pair when the difference metric scores satisfy a diversity metric.
Type: Grant
Filed: December 27, 2017
Date of Patent: December 27, 2022
Assignee: Twitter, Inc.
Inventors: Ferenc Huszar, Lucas Theis, Pietro Berkes
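A small sketch of the committee-disagreement step in 11537869: each network in the committee scores an unlabeled distortion pair, and the pair is flagged as informative when the scores are sufficiently diverse. The diversity metric used here (standard deviation above a threshold) is an assumption, not the patent's definition.

```python
import numpy as np

def is_informative(committee_scores, diversity_threshold=0.15):
    """Flag an unlabeled distortion pair whose committee scores disagree enough."""
    scores = np.asarray(committee_scores, dtype=float)
    return scores.std() >= diversity_threshold

# Example: difference-metric scores from a 4-member committee for two pairs.
print(is_informative([0.22, 0.81, 0.10, 0.55]))   # True  -> send to a psychophysics task
print(is_informative([0.40, 0.42, 0.39, 0.41]))   # False -> committee already agrees
```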
-
Patent number: 11526680
Abstract: Systems and methods are provided to pre-train projection networks for use as transferable natural language representation generators. In particular, example pre-training schemes described herein enable learning of transferable deep neural projection representations over randomized locality sensitive hashing (LSH) projections, thereby eliminating the need to store any embedding matrices because the projections can be dynamically computed at inference time.
Type: Grant
Filed: February 14, 2020
Date of Patent: December 13, 2022
Assignee: GOOGLE LLC
Inventors: Sujith Ravi, Zornitsa Kozareva, Chinnadhurai Sankar
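A minimal sketch of the random-projection representation behind 11526680: instead of looking up rows of a stored embedding matrix, token features are hashed on the fly into a fixed-size bit vector by signed random projections. The trigram featurizer, projection sizes, and seed are illustrative assumptions.

```python
import numpy as np

def lsh_projection(text, num_bits=64, feature_dim=1024, seed=0):
    """Compute a binary LSH signature of a string with no stored embeddings."""
    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((feature_dim, num_bits))   # fixed random hyperplanes

    # Hash character trigrams into a sparse feature vector (illustrative featurizer).
    features = np.zeros(feature_dim)
    for i in range(len(text) - 2):
        features[hash(text[i:i + 3]) % feature_dim] += 1.0

    return (features @ projection > 0).astype(np.uint8)         # num_bits-long signature

print(lsh_projection("projection networks need no embedding table"))
```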
-
Patent number: 11526989
Abstract: In brain analysis, anatomical standardization is performed when analyzing a region of interest (ROI). There are individual differences in the shape and size of the brain, and by converting each brain into a standard brain, these differences can be compared with each other and subjected to statistical analysis. When generating a standard brain, a large number of pieces of image data are classified into a plurality of groups based on their anatomical features. An intermediate template, which is an intermediate conversion image, and a conversion map are calculated for each group, and the calculation of the intermediate template and the generation of the intermediate conversion image are repeated while gradually reducing the number of classifications, so that a final standard image is generated. Using the standard image and the intermediate template calculated during the generation of the standard image, spatial standardization of the measured image is performed.
Type: Grant
Filed: June 2, 2020
Date of Patent: December 13, 2022
Assignee: FUJIFILM HEALTHCARE CORPORATION
Inventors: Toru Shirai, Ryota Satoh, Yasuo Kawata, Tomoki Amemiya, Yoshitaka Bito, Hisaaki Ochi
-
Patent number: 11518637
Abstract: Medicine packaging apparatuses and methods for accurately determining a remaining sheet amount of a medicine packaging sheet are described. The apparatus includes: a roll support section to which a core tube of a medicine packaging sheet roll is attached; a sensor disposed in the roll support section for outputting a count value according to a rotation amount; a wireless reader-writer unit for writing information to a core tube IC tag and reading said information; an information generation section for generating information to be written to the core tube IC tag; a remaining sheet amount estimation section for estimating a current amount of remaining sheet based on the information and dimensional information of the core tube; and a controller which selectively performs an operation if a reference time-point count value is not yet written to the core tube IC tag and another operation if the count value is already written thereto.
Type: Grant
Filed: October 21, 2020
Date of Patent: December 6, 2022
Assignee: YUYAMA MFG. CO., LTD.
Inventors: Katsunori Yoshina, Tomohiro Sugimoto, Noriyoshi Fujii
-
Patent number: 11500767
Abstract: In accordance with an embodiment, a method for determining an overall memory size of a global memory area configured to store input data and output data of each layer of a neural network includes: for each current layer of the neural network after a first layer, determining a pair of elementary memory areas based on each preceding elementary memory area associated with a preceding layer, wherein: the two elementary memory areas of the pair of elementary memory areas respectively have two elementary memory sizes, each of the two elementary memory areas is configured to store input data and output data of the current layer of the neural network, the output data is respectively stored in two different locations, and the overall memory size of the global memory area corresponds to a smallest elementary memory size at an output of the last layer of the neural network.
Type: Grant
Filed: March 5, 2020
Date of Patent: November 15, 2022
Assignee: STMicroelectronics (Rousset) SAS
Inventors: Laurent Folliot, Pierre Demaj
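A simplified sketch of the sizing problem 11500767 addresses: each layer needs its input and its output resident in the shared area at the same time, so a single global buffer must at least cover the worst adjacent pair of layer sizes. The patent's two-placement refinement per layer is not modeled here; this is only a lower-bound illustration with made-up layer sizes.

```python
def global_memory_lower_bound(layer_sizes):
    """Lower-bound the shared buffer needed to run the network layer by layer.

    layer_sizes[i] is the number of bytes produced by layer i
    (layer_sizes[0] is the network input).
    """
    return max(layer_sizes[i] + layer_sizes[i + 1]
               for i in range(len(layer_sizes) - 1))

# Example: input of 784 floats, then layers producing 1024, 256 and 10 floats.
sizes_bytes = [784 * 4, 1024 * 4, 256 * 4, 10 * 4]
print(global_memory_lower_bound(sizes_bytes), "bytes")   # 7232 (the 784+1024 pair)
```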
-
Patent number: 11475274
Abstract: A computer-implemented method optimizes a neural network. One or more processors define layers in a neural network based on neuron locations relative to incoming initial inputs and original outgoing final outputs of the neural network, where a first defined layer is closer to the incoming initial inputs than a second defined layer, and where the second defined layer is closer to the original outgoing final outputs than the first defined layer. The processor(s) define parameter criticalities for parameter weights stored in a memory used by the neural network, and associate defined layers in the neural network with different memory banks based on the parameter criticalities for the parameter weights. The processor(s) store parameter weights used by neurons in the first defined layer in the first memory bank and parameter weights used by neurons in the second defined layer in the second memory bank.
Type: Grant
Filed: April 21, 2017
Date of Patent: October 18, 2022
Assignee: International Business Machines Corporation
Inventors: Pradip Bose, Alper Buyuktosunoglu, Augusto J. Vega
-
Patent number: 11475273
Abstract: Systems and methods are provided for automatically scoring a constructed response. The constructed response is processed to generate a plurality of numerical vectors that is representative of the constructed response. A model is applied to the plurality of numerical vectors. The model includes an input layer configured to receive the plurality of numerical vectors, the input layer being connected to a following layer of the model via a first plurality of connections. Each of the connections has a first weight. An intermediate layer of nodes is configured to receive inputs from an immediately-preceding layer of the model via a second plurality of connections, each of the connections having a second weight. An output layer is connected to the intermediate layer via a third plurality of connections, each of the connections having a third weight. The output layer is configured to generate a score for the constructed response.
Type: Grant
Filed: March 24, 2020
Date of Patent: October 18, 2022
Assignee: Educational Testing Service
Inventors: Derrick Higgins, Lei Chen, Michael Heilman, Klaus Zechner, Nitin Madnani
-
Patent number: 11477046
Abstract: A method and device for aggregating connected objects of a communications network. The connected objects have at least one basic feature. The method includes the following steps implemented on an aggregation device, in order to obtain a group avatar suitable for representing the connected objects: obtaining at least one basic feature; obtaining at least one feature of the group object, linked to a basic feature; and creating the group avatar including: a structure having a basic feature; a structure having a group feature; a structure for linking the group feature to at least one basic feature; and a group proxy structure having an association between an address of the group avatar and an address of the connected objects.
Type: Grant
Filed: May 17, 2019
Date of Patent: October 18, 2022
Assignee: ORANGE
Inventors: Stéphane Petit, Olivier Berteche
-
Patent number: 11461414
Abstract: A searchable database of software features for software projects can be automatically built in some examples. One such example can involve analyzing descriptive information about a software project to determine software features of the software project. Then a feature vector for the software project can be generated based on the software features of the software project. The feature vector can be stored in a database having multiple feature vectors for multiple software projects. The multiple feature vectors can be easily and quickly searched in response to search queries.
Type: Grant
Filed: August 20, 2019
Date of Patent: October 4, 2022
Assignee: RED HAT, INC.
Inventors: Fridolin Pokorny, Sanjay Arora, Christoph Goern
-
Patent number: 11454183
Abstract: A method of generating an ROI profile for a fuel injector using machine learning and a constrained/limited training data set is disclosed. The method includes receiving a first plurality of measurement sets for a fuel injector when operating at a first target set point. Preferably, at least two measurement sets of the first plurality of measurement sets are selected to generate a first averaged ROI profile for the first target condition. The at least two selected measurement sets are then used to train a machine learning model that can output a predicted ROI profile for a fuel injector based on a desired pressure value and/or desired mass flow rate value. Training of the machine learning model preferably includes a predetermined number of iterations that induces overfitting within the model/neural network.
Type: Grant
Filed: December 8, 2021
Date of Patent: September 27, 2022
Assignee: SOUTHWEST RESEARCH INSTITUTE
Inventors: Khanh D. Cung, Zachary L. Williams, Ahmed A. Moiz, Daniel C. Bitsis, Jr.
-
Patent number: 11442779
Abstract: Embodiments of the present disclosure relate to a method, device and computer program product for determining a resource amount of dedicated processing resources. The method comprises obtaining a structural representation of a neural network for deep learning processing, the structural representation indicating a layer attribute of the neural network that is associated with the dedicated processing resources; and determining the resource amount of the dedicated processing resources required for the deep learning processing based on the structural representation. In this manner, the resource amount of the dedicated processing resources required by the deep learning processing may be better estimated to improve the performance and resource utilization rate of the dedicated processing resource scheduling.
Type: Grant
Filed: January 4, 2019
Date of Patent: September 13, 2022
Assignee: Dell Products L.P.
Inventors: Junping Zhao, Sanping Li
-
Patent number: 11443238
Abstract: A computer system has access to a database storing learning data used to generate a prediction model, the learning data including input data and teacher data. The computer system: performs first learning to set an extraction criterion for extracting learning data whose input data is similar to prediction target data when the prediction target data is input; extracts the learning data from the database based on the extraction criterion and generates a dataset; performs second learning to generate a prediction model using the dataset; generates a decision logic showing the prediction logic of the prediction model; and outputs information to present the decision logic.
Type: Grant
Filed: December 10, 2019
Date of Patent: September 13, 2022
Assignee: HITACHI, LTD.
Inventor: Wataru Takeuchi
-
Patent number: 11423233
Abstract: The present disclosure provides projection neural networks and example applications thereof. In particular, the present disclosure provides a number of different architectures for projection neural networks, including two example architectures which can be referred to as: Self-Governing Neural Networks (SGNNs) and Projection Sequence Networks (ProSeqoNets). Each projection neural network can include one or more projection layers that project an input into a different space. For example, each projection layer can use a set of projection functions to project the input into a bit-space, thereby greatly reducing the dimensionality of the input and enabling computation with lower resource usage. As such, the projection neural networks provided herein are highly useful for on-device inference in resource-constrained devices. For example, the provided SGNN and ProSeqoNet architectures are particularly beneficial for on-device inference such as, for example, solving natural language understanding tasks on-device.
Type: Grant
Filed: January 5, 2021
Date of Patent: August 23, 2022
Assignee: GOOGLE LLC
Inventors: Sujith Ravi, Zornitsa Kozareva
-
Patent number: 11301776
Abstract: A method for machine learning model training in a mixed CPU/GPU environment is provided. The amount of general processing unit memory is larger than the amount of special processing unit memory. The method includes loading a complete training data set into the memory of the general processing unit, determining importance values for the training data vectors in the provided training data set, dynamically transferring training data vectors of the training data set from the general processing unit memory to a special processing unit memory using the importance value of the training data vector as the decision criterion, wherein the importance value used is taken from an earlier training round of the machine learning model, and executing a training algorithm on the special processing unit with the training data vectors having the highest available importance values from one of the earlier training rounds.
Type: Grant
Filed: April 14, 2018
Date of Patent: April 12, 2022
Assignee: International Business Machines Corporation
Inventors: Celestine Duenner, Thomas P. Parnell, Charalampos Pozidis
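A schematic version of the selection step in 11301776: the full training set stays in host (CPU) memory, and only the vectors with the highest importance values from an earlier training round are copied into the smaller device (GPU) memory. The importance values and capacity below are placeholders.

```python
import numpy as np

def select_for_device(importance, device_capacity):
    """Indices of the training vectors to transfer, highest importance first."""
    importance = np.asarray(importance)
    order = np.argsort(importance)[::-1]          # descending importance
    return order[:device_capacity]

# Host memory holds all vectors; device memory fits only 4 of them.
importance_from_last_round = np.array([0.1, 0.9, 0.3, 0.8, 0.05, 0.7, 0.2, 0.6])
print(select_for_device(importance_from_last_round, device_capacity=4))  # [1 3 5 7]
```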
-
Patent number: 11295174Abstract: A computer system and method for extending parallelized asynchronous reinforcement learning to include agent modeling for training a neural network is described. Coordinated operation of plurality of hardware processors or threads is utilized such that each functions as a worker process that is configured to simultaneously interact with a target computing environment for local gradient computation based on a loss determination mechanism and to update global network parameters. The loss determination mechanism includes at least a policy loss term (actor), a value loss term (critic), and a supervised cross entropy loss. Variations are described further where the neural network is adapted to include a latent space to track agent policy features.Type: GrantFiled: November 5, 2019Date of Patent: April 5, 2022Assignee: ROYAL BANK OF CANADAInventors: Pablo Francisco Hernandez Leal, Bilal Kartal, Matthew Edmund Taylor
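A toy version of the loss decomposition named in 11295174 (a policy/actor term, a value/critic term, and a supervised cross-entropy term for agent modeling); the per-worker gradient exchange of the asynchronous setup is not shown, and the weighting coefficients are assumptions.

```python
import numpy as np

def combined_loss(log_pi_taken, advantage, value_pred, value_target,
                  opponent_probs, opponent_action, beta_v=0.5, beta_am=0.1):
    """Policy loss + value loss + supervised cross-entropy for agent modeling."""
    policy_loss = -log_pi_taken * advantage                      # actor term
    value_loss = (value_target - value_pred) ** 2                # critic term
    agent_model_loss = -np.log(opponent_probs[opponent_action])  # cross-entropy on the
                                                                 # predicted opponent action
    return policy_loss + beta_v * value_loss + beta_am * agent_model_loss

print(combined_loss(log_pi_taken=np.log(0.4), advantage=1.2,
                    value_pred=0.8, value_target=1.0,
                    opponent_probs=np.array([0.2, 0.7, 0.1]), opponent_action=1))
```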
-
Patent number: 11238337
Abstract: A method is described for designing systems that provide efficient implementations of feed-forward, recurrent, and deep networks that process dynamic signals using temporal filters and static or time-varying nonlinearities. A system design methodology is described that provides an engineered architecture. This architecture defines a core set of network components and operations for efficient computation of dynamic signals using temporal filters and static or time-varying nonlinearities. These methods apply to a wide variety of connected nonlinearities that include temporal filters in the connections. Here we apply the methods to synaptic models coupled with spiking and/or non-spiking neurons whose connection parameters are determined using a variety of methods of optimization.
Type: Grant
Filed: August 22, 2016
Date of Patent: February 1, 2022
Assignee: Applied Brain Research Inc.
Inventors: Aaron Russell Voelker, Christopher David Eliasmith
-
Patent number: 11138505
Abstract: A method of generating a neural network may be provided. The method may include applying non-linear quantization to a plurality of synaptic weights of a neural network model. The method may further include training the neural network model. Further, the method may include generating a neural network output from the trained neural network model based on one or more inputs received by the trained neural network model.
Type: Grant
Filed: December 21, 2017
Date of Patent: October 5, 2021
Assignee: FUJITSU LIMITED
Inventors: Masaya Kibune, Xuan Tan
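An illustrative non-linear (logarithmic) quantizer of the kind 11138505 applies to synaptic weights: small weights keep fine resolution, large weights get coarse resolution, and each weight is snapped to a signed power of two. The patent does not specify this exact scheme; it is one common non-linear choice, with illustrative exponent bounds.

```python
import numpy as np

def log2_quantize(weights, min_exp=-8, max_exp=0):
    """Snap each weight to the nearest signed power of two (zeros stay zero)."""
    w = np.asarray(weights, dtype=float)
    sign = np.sign(w)
    exp = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), min_exp, max_exp)
    quantized = sign * np.exp2(exp)
    return np.where(w == 0.0, 0.0, quantized)

weights = np.array([0.73, -0.012, 0.26, 0.0, -0.9])
print(log2_quantize(weights))   # [ 1.  -0.015625  0.25  0.  -1. ]
```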
-
Patent number: 11016649
Abstract: Various systems, methods, and media allow for graphical display of multivariate data in parallel coordinate plots and similar plots for visualizing data for a plurality of variables simultaneously. These systems, methods, and media can aggregate individual data points into curves between axes, significantly improving functioning of computer systems by decreasing the rendering time for such plots. Certain implementations can allow a user to examine the relationship between two or more variables, by displaying the data on non-parallel or other transformed axes.
Type: Grant
Filed: August 29, 2019
Date of Patent: May 25, 2021
Assignee: Palantir Technologies Inc.
Inventors: Albert Slawinski, Andreas Sjoberg
-
Patent number: 10997497
Abstract: A device includes a first divider circuit connected to a first data lane and configured to receive a first data lane value having a first index, to receive a second index corresponding to a second data lane value from a second data lane, and to selectively output a first adding value or the first data lane value based on whether the first index is equal to the second index. The device further includes a first adder circuit connected to the second data lane and the first divider circuit and configured to receive the first adding value from the first divider circuit, to receive the second data lane value, and to add the first adding value to the second data lane value to generate an addition result.
Type: Grant
Filed: May 16, 2018
Date of Patent: May 4, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jin-ook Song
-
Patent number: 10943582
Abstract: A method and apparatus of training an acoustic feature extracting model, a device and a computer storage medium. The method comprises: taking, as training data, first acoustic features respectively extracted from speech data corresponding to user identifiers; training an initial deep-neural-network-based model based on a minimum classification error criterion until a preset first stop condition is reached; using a triplet loss layer to replace a Softmax layer in the initial model to constitute an acoustic feature extracting model, and continuing to train the acoustic feature extracting model until a preset second stop condition is reached, the acoustic feature extracting model being used to output a second acoustic feature of the speech data; wherein the triplet loss layer is used to maximize similarity between the second acoustic features of the same user, and minimize similarity between the second acoustic features of different users.
Type: Grant
Filed: May 14, 2018
Date of Patent: March 9, 2021
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Bing Jiang, Xiaokong Ma, Chao Li, Xiangang Li
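A compact sketch of the triplet objective used in 10943582 once the Softmax layer is swapped out: embeddings of the same speaker (anchor/positive) should be more similar than embeddings of different speakers (anchor/negative) by at least a margin. The choice of cosine similarity and the margin value are assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: same-speaker similarity should beat different-speaker by `margin`."""
    return max(0.0, cosine(anchor, negative) - cosine(anchor, positive) + margin)

# Second-level acoustic features (embeddings) for three utterances.
anchor   = np.array([0.9, 0.1, 0.3])   # speaker A
positive = np.array([0.8, 0.2, 0.25])  # speaker A, different utterance
negative = np.array([0.1, 0.9, 0.4])   # speaker B
print(triplet_loss(anchor, positive, negative))   # 0.0 here: the triplet is already satisfied
```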
-
Patent number: 10929749
Abstract: An apparatus to facilitate optimization of a neural network (NN) is disclosed. The apparatus includes optimization logic to define a NN topology having one or more macro layers, adjust the one or more macro layers to adapt to input and output components of the NN and train the NN based on the one or more macro layers.
Type: Grant
Filed: April 24, 2017
Date of Patent: February 23, 2021
Assignee: INTEL CORPORATION
Inventors: Narayan Srinivasa, Joydeep Ray, Nicolas C. Galoppo Von Borries, Ben Ashbaugh, Prasoonkumar Surti, Feng Chen, Barath Lakshmanan, Elmoustapha Ould-Ahmed-Vall, Liwei Ma, Linda L. Hurd, Abhishek R. Appu, John C. Weast, Sara S. Baghsorkhi, Justin E. Gottschlich, Chandrasekaran Sakthivel, Farshad Akhbari, Dukhwan Kim, Altug Koker, Nadathur Rajagopalan Satish
-
Patent number: 10929057
Abstract: Provided are techniques for selecting a disconnect from different types of channel disconnects using a machine learning module. An Input/Output (I/O) operation is received from a host via a channel. Inputs are provided to a machine learning module. An output is received from the machine learning module. Based on the output, one of no disconnect from the channel, a logical disconnect from the channel, or a physical disconnect from the channel is selected.
Type: Grant
Filed: February 7, 2019
Date of Patent: February 23, 2021
Assignee: International Business Machines Corporation
Inventors: Beth A. Peterson, Lokesh M. Gupta, Matthew R. Craig, Kevin J. Ash
-
Patent number: 10922610
Abstract: Systems, apparatuses and methods may provide for technology that conducts a first timing measurement of a blockage timing of a first window of the training of the neural network. The blockage timing measures a time that processing is impeded at layers of the neural network during the first window of the training due to synchronization of one or more synchronizing parameters of the layers. Based upon the first timing measurement, the technology is to determine whether to modify a synchronization barrier policy to include a synchronization barrier to impede synchronization of one or more synchronizing parameters of one of the layers during a second window of the training. The technology is further to impede the synchronization of the one or more synchronizing parameters of the one of the layers during the second window if the synchronization barrier policy is modified to include the synchronization barrier.
Type: Grant
Filed: September 14, 2017
Date of Patent: February 16, 2021
Assignee: Intel Corporation
Inventors: Adam Procter, Vikram Saletore, Deepthi Karkada, Meenakshi Arunachalam
-
Patent number: 10885424
Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each of which is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
Type: Grant
Filed: November 13, 2017
Date of Patent: January 5, 2021
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 10817783
Abstract: The disclosed computer-implemented method for efficiently updating neural networks may include (i) identifying a neural network that comprises sets of interconnected nodes represented at least in part by a plurality of matrices and that is trained on a training computing device and executes on at least one endpoint device, (ii) constraining a training session for the neural network to reduce the size in memory of the difference between the previous values of the matrices prior to the training session and the new values of the matrices after the training session, (iii) creating a delta update for the neural network that describes the difference between the previous values and the new values, and (iv) updating the neural network on the endpoint device to the new state by sending the delta update from the training computing device to the endpoint computing device. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: May 7, 2020
Date of Patent: October 27, 2020
Assignee: Facebook, Inc.
Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, Christopher Dewan
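A bare-bones sketch of the delta-update idea in 10817783 (and the identically-abstracted 10699190 further down): after a constrained training session, ship only the indices and values of the weights that actually changed, and apply them on the endpoint's copy. The tolerance and encoding are illustrative.

```python
import numpy as np

def make_delta(old_weights, new_weights, tol=1e-6):
    """Encode the weight difference sparsely as (flat indices, new values)."""
    diff = new_weights - old_weights
    idx = np.flatnonzero(np.abs(diff) > tol)
    return idx, new_weights.ravel()[idx]

def apply_delta(weights, delta):
    """Update an endpoint's copy of the weights in place from a delta."""
    idx, values = delta
    weights.ravel()[idx] = values
    return weights

old = np.zeros((4, 4), dtype=np.float32)
new = old.copy()
new[1, 2] = 0.5                        # training changed only one weight
delta = make_delta(old, new)           # small payload to send to the endpoint
endpoint = apply_delta(old.copy(), delta)
print(np.array_equal(endpoint, new))   # True
```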
-
Patent number: 10719613
Abstract: The disclosed computer-implemented method may include (i) identifying a neural network that comprises an interconnected set of nodes organized in a set of layers represented by a plurality of matrices that each comprise a plurality of weights, where each weight represents a connection between a node in the interconnected set of nodes that resides in one layer in the set of layers and an additional node in the set of interconnected nodes that resides in a different layer in the set of layers, (ii) encrypting, using an encryption cipher, the plurality of weights, (iii) detecting that execution of the neural network has been initiated, and (iv) decrypting, using the encryption cipher, the plurality of weights in response to detecting that the execution of the neural network has been initiated. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: February 23, 2018
Date of Patent: July 21, 2020
Assignee: Facebook, Inc.
Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, Roman Levenstein
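A toy illustration of the encrypt-at-rest / decrypt-at-start flow in 10719613. The cipher below is a throwaway SHA-256-based XOR keystream chosen only so the example is self-contained and runnable; the patent does not specify this cipher, and it is not production cryptography.

```python
import hashlib
import numpy as np

def keystream(key: bytes, nbytes: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (illustration only, not real crypto)."""
    out, counter = bytearray(), 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

def xor_weights(weights: np.ndarray, key: bytes) -> np.ndarray:
    """Encrypt or decrypt a weight matrix by XOR with the keystream (symmetric)."""
    raw = weights.tobytes()
    ks = np.frombuffer(keystream(key, len(raw)), dtype=np.uint8)
    mixed = np.frombuffer(raw, dtype=np.uint8) ^ ks
    return np.frombuffer(mixed.tobytes(), dtype=weights.dtype).reshape(weights.shape)

key = b"model-at-rest-key"
plain = np.random.randn(3, 3).astype(np.float32)
encrypted = xor_weights(plain, key)        # what gets stored with the model
decrypted = xor_weights(encrypted, key)    # done when execution is initiated
print(np.array_equal(plain, decrypted))    # True
```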
-
Patent number: 10699190
Abstract: The disclosed computer-implemented method for efficiently updating neural networks may include (i) identifying a neural network that comprises sets of interconnected nodes represented at least in part by a plurality of matrices and that is trained on a training computing device and executes on at least one endpoint device, (ii) constraining a training session for the neural network to reduce the size in memory of the difference between the previous values of the matrices prior to the training session and the new values of the matrices after the training session, (iii) creating a delta update for the neural network that describes the difference between the previous values and the new values, and (iv) updating the neural network on the endpoint device to the new state by sending the delta update from the training computing device to the endpoint computing device. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: March 4, 2018
Date of Patent: June 30, 2020
Assignee: Facebook, Inc.
Inventors: Nadav Rotem, Abdulkadir Utku Diril, Mikhail Smelyanskiy, Jong Soo Park, Christopher Dewan
-
Patent number: 10460237
Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished using many simple processing units (called neurons) and the data embodied by the connections between neurons (called synapses) and the strength of these connections (called synaptic weights). An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to code the synaptic weight. In this application, the non-idealities in the response of the NVM (such as nonlinearity, saturation, stochasticity and asymmetry in response to programming pulses) lead to reduced network performance compared to an ideal network implementation. Disclosed is a method that improves performance by implementing a learning rate parameter that is local to each synaptic connection, a method for tuning this local learning rate, and an implementation that does not compromise the ability to train many synaptic weights in parallel during learning.
Type: Grant
Filed: November 30, 2015
Date of Patent: October 29, 2019
Assignee: International Business Machines Corporation
Inventors: Irem Boybat Kara, Geoffrey Burr, Carmelo di Nolfo, Robert Shelby
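A toy sketch of the local-learning-rate idea in 10460237: each synapse keeps its own learning-rate value, the weight update is scaled by it, and synapses whose gradients keep flipping sign have their local rate tuned down. The sign-agreement tuning rule and constants are assumptions; the NVM-specific programming-pulse mechanism is not reproduced.

```python
import numpy as np

def local_rate_update(weights, grads, local_lr, prev_grads, shrink=0.9, grow=1.05):
    """Apply per-synapse learning rates, then adapt them from gradient sign agreement."""
    weights -= local_lr * grads                      # each synapse uses its own rate
    same_sign = np.sign(grads) == np.sign(prev_grads)
    local_lr *= np.where(same_sign, grow, shrink)    # oscillating synapses slow down
    return weights, local_lr

rng = np.random.default_rng(1)
W = rng.standard_normal((2, 3))
lr = np.full_like(W, 0.01)                           # one learning rate per synapse
g_prev = rng.standard_normal(W.shape)
g_now = rng.standard_normal(W.shape)
W, lr = local_rate_update(W, g_now, lr, g_prev)
print(lr)                                            # rates drift apart per synapse
```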
-
Patent number: 10459959
Abstract: Methods and apparatus for performing top-k query processing include pruning a list of documents to identify a subset of the list of documents, where pruning includes, for other query terms in the set of query terms, skipping a document in the list of documents based, at least in part, on the contribution of the query term to the score of the corresponding document and the term upper bound for each other query term, in the set of query terms, that matches the document.
Type: Grant
Filed: November 7, 2016
Date of Patent: October 29, 2019
Assignee: Oath Inc.
Inventors: David Carmel, Guy Gueta, Edward Bortnikov
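A stripped-down sketch of upper-bound pruning for top-k retrieval in the spirit of 10459959: a document is skipped without full scoring when the sum of the term upper bounds for the terms it matches cannot reach the current k-th best score. The plain-dict data layout and traversal order are illustrative simplifications of real posting-list processing.

```python
import heapq

def top_k(query_terms, postings, upper_bound, k=2):
    """postings[t] maps doc_id -> contribution of term t; upper_bound[t] bounds it."""
    heap = []                                            # min-heap of the best k (score, doc)
    candidates = set().union(*(postings[t] for t in query_terms))
    for doc in sorted(candidates):
        threshold = heap[0][0] if len(heap) == k else float("-inf")
        # Cheap bound first: sum of upper bounds for the terms this doc matches.
        bound = sum(upper_bound[t] for t in query_terms if doc in postings[t])
        if bound <= threshold:
            continue                                     # prune: can never enter the top k
        score = sum(postings[t].get(doc, 0.0) for t in query_terms)
        heapq.heappush(heap, (score, doc))
        if len(heap) > k:
            heapq.heappop(heap)
    return sorted(heap, reverse=True)

postings = {"neural":  {1: 2.0, 2: 0.4, 3: 1.5, 5: 0.3},
            "network": {1: 1.0, 3: 0.2, 4: 2.5}}
upper_bound = {"neural": 2.0, "network": 2.5}
print(top_k(["neural", "network"], postings, upper_bound))   # [(3.0, 1), (2.5, 4)]
```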
-
Patent number: 10452540
Abstract: Memory-mapped interfaces for message passing computing systems are provided. According to various embodiments, a write request is received. The write request comprises write data and a write address. The write address is a memory address within a memory map. The write address is translated into a neural network address. The neural network address identifies at least one input location of a destination neural network. The write data is sent via a network according to the neural network address to the at least one input location of the destination neural network. A message is received via the network from a source neural network. The message comprises data and at least one address. A location in a buffer is determined based on the at least one address. The data is stored at the location in the buffer. The buffer is accessible via the memory map.
Type: Grant
Filed: October 20, 2017
Date of Patent: October 22, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Filipp A. Akopyan, John V. Arthur, Andrew S. Cassidy, Michael V. DeBole, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
-
Patent number: 10417563
Abstract: An intelligent control system based on an explicit model of cognitive development (Table 1) performs high-level functions. It comprises up to O hierarchically stacked neural networks, Nm, …, Nm+(O−1), where m denotes the stage/order of tasks performed in the first neural network, Nm, and O denotes the highest stage/order of tasks performed in the highest-level neural network. The type of processing actions performed in a network, Nm, corresponds to the complexity for stage/order m. Thus N1 performs tasks at the level corresponding to stage/order 1, and N5 processes information at the level corresponding to stage/order 5. Stacked neural networks may begin and end at any stage/order, but information must be processed by each stage in ascending order; stages/orders cannot be skipped. Each neural network in a stack may use different architectures, interconnections, algorithms, and training methods, depending on the stage/order of the neural network and the type of intelligent control system implemented.
Type: Grant
Filed: April 7, 2017
Date of Patent: September 17, 2019
Inventors: Michael Lamport Commons, Mitzi Sturgeon White
-
Patent number: 10410119
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for augmenting neural networks with an external memory. One of the methods includes providing an output derived from the neural network output for the time step as a system output for the time step; maintaining a current state of the external memory; determining, from the neural network output for the time step, memory state parameters for the time step; updating the current state of the external memory using the memory state parameters for the time step; reading data from the external memory in accordance with the updated state of the external memory; and combining the data read from the external memory with a system input for the next time step to generate the neural network input for the next time step.
Type: Grant
Filed: June 2, 2016
Date of Patent: September 10, 2019
Assignee: DeepMind Technologies Limited
Inventors: Edward Thomas Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, Philip Blunsom
-
Patent number: 9658260
Abstract: A power system grid is decomposed into several parts and decomposed state estimation steps are executed separately, on each part, using coordinated feedback regarding a boundary state. The achieved solution is the same that would be achieved with a simultaneous state estimation approach. With the disclosed approach, the state estimation problem can be distributed among decomposed estimation operations for each subsystem and a coordinating operation that yields the complete state estimate. The approach is particularly suited for estimating the state of power systems that are naturally decomposed into separate subsystems, such as separate AC and HVDC systems, and/or into separate transmission and distribution systems.
Type: Grant
Filed: September 4, 2013
Date of Patent: May 23, 2017
Assignee: ABB SCHWEIZ AG
Inventors: Xiaoming Feng, Vaibhav Donde, Ernst Scholtz
-
Patent number: 9563842
Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each of which is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
Type: Grant
Filed: August 11, 2015
Date of Patent: February 7, 2017
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 9489623
Abstract: Apparatus and methods for developing robotic controllers comprising parallel networks. In some implementations, a parallel network may comprise at least first and second neuron layers. The second layer may be configured to determine a measure of discrepancy (error) between a target network output and actual network output. The network output may comprise a control signal configured to cause a task execution by the robot. The error may be communicated back to the first neuron layer in order to adjust efficacy of input connections into the first layer. The error may be encoded into spike latency using linear or nonlinear encoding. Error communication and control signal provision may be time multiplexed so as to enable target action execution. Efficacy associated with forward and backward/reverse connections may be stored in individual arrays. A synchronization mechanism may be employed to match forward/reverse efficacy in order to implement plasticity.
Type: Grant
Filed: October 15, 2013
Date of Patent: November 8, 2016
Assignee: BRAIN CORPORATION
Inventors: Oleg Sinyavskiy, Vadim Polonichko
-
Patent number: 9189731
Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each of which is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
Type: Grant
Filed: March 24, 2014
Date of Patent: November 17, 2015
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 9183495
Abstract: A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each of which is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron.
Type: Grant
Filed: August 8, 2012
Date of Patent: November 10, 2015
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
-
Patent number: 8892485
Abstract: Certain embodiments of the present disclosure support implementation of a neural processor with synaptic weights, wherein training of the synapse weights is based on encouraging a specific output neuron to generate a spike. The implemented neural processor can be applied for classification of images and other patterns.
Type: Grant
Filed: July 8, 2010
Date of Patent: November 18, 2014
Assignee: QUALCOMM Incorporated
Inventors: Vladimir Aparin, Jeffrey A. Levin
-
Publication number: 20140180989
Abstract: A parallel convolutional neural network is provided. The CNN is implemented by a plurality of convolutional neural networks each on a respective processing node. Each CNN has a plurality of layers. A subset of the layers is interconnected between processing nodes such that activations are fed forward across nodes. The remaining subset is not so interconnected.
Type: Application
Filed: September 18, 2013
Publication date: June 26, 2014
Applicant: Google Inc.
Inventors: Alexander Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
-
Patent number: 8712942
Abstract: An active element machine is a new kind of computing machine. When implemented in hardware, the active element machine can execute multiple instructions simultaneously, because every one of its computing elements is active. This greatly enhances the computing speed. By executing a meta program whose instructions change the connections in a dynamic active element machine, the active element machine can perform tasks that a digital computer is unable to compute. In an embodiment, instructions in a computer language are translated into instructions in a register machine language. The instructions in the register machine language are translated into active element machine instructions. In an embodiment, an active element machine may be programmed using instructions for a register machine. The active element machine is not limited to these embodiments.
Type: Grant
Filed: April 24, 2007
Date of Patent: April 29, 2014
Assignee: AEMEA Inc.
Inventor: Michael Stephen Fiske
-
Patent number: 8527542
Abstract: User-generated input may be received to initiate a generation of a message associated with an incident of a computing system having a multi-layer architecture that requires support. Thereafter, context data associated with one or more operational parameters may be collected from each of at least two of the layers of the computing system. A message may then be generated on at least a portion of the user-generated input and at least a portion of the collected context data. Related apparatuses, methods, computer program products, and computer systems are also described.
Type: Grant
Filed: December 30, 2005
Date of Patent: September 3, 2013
Assignee: SAP AG
Inventors: Tilmann Haeberle, Lilia Kotchanovskaia, Zoltan Nagy, Berthold Wocher, Juergen Subat
-
Publication number: 20130212053
Abstract: A feature extraction device according to the present invention includes a neural network whose neurons each include at least one expressed gene, which is an attribute value for determining whether transmission of a signal from one of the first neurons to one of the second neurons is possible. Each first neuron, which receives input data derived from target data to be subjected to feature extraction, outputs a first signal value to the corresponding second neuron(s) having the same expressed gene as the first neuron, the first signal value increasing as the value of the input data increases; and each second neuron calculates, as a feature quantity of the target data, a second signal value corresponding to the total sum of the first signal values input thereto.
Type: Application
Filed: October 18, 2011
Publication date: August 15, 2013
Inventors: Takeshi Yagi, Takashi Kitsukawa
-
Patent number: 8468109
Abstract: Systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the artificial neural network by providing scalability to neurons and layers. In a particular case, the systems and methods may include a back-propagation subsystem that is configured to scalably adjust weights in the artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
Type: Grant
Filed: December 28, 2011
Date of Patent: June 18, 2013
Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
-
Publication number: 20120166374
Abstract: Systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the artificial neural network by providing scalability to neurons and layers. In a particular case, the systems and methods may include a back-propagation subsystem that is configured to scalably adjust weights in the artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
Type: Application
Filed: December 28, 2011
Publication date: June 28, 2012
Inventors: Medhat Moussa, Antony Savich, Shawki Areibi
-
Patent number: 8190542
Abstract: A neural network includes neurons and wires adapted for connecting the neurons. Some of the wires comprise a plurality of input connections and exactly one output connection, and/or some of the wires comprise exactly one input connection and a plurality of output connections. Neurons are hierarchically arranged in groups. A lower group of neurons recognizes a pattern of information input to the neurons of this lower group. A higher group of neurons recognizes higher level patterns. A strength value is associated with a connection between different neurons. The strength value of a particular connection is indicative of a likelihood that information which is input to the neurons propagates via the particular connection. The strength value of each connection is modifiable based on an amount of traffic of information which is input to the neurons and which propagates via the particular connection and/or is modifiable based on a strength modification impulse.
Type: Grant
Filed: September 27, 2006
Date of Patent: May 29, 2012
Assignee: ComDys Holding B.V.
Inventor: Eugen Oetringer
-
Patent number: 8121817
Abstract: Process control system for detecting abnormal events in a process having one or more independent variables and one or more dependent variables. The system includes a device for measuring values of the one or more independent and dependent variables, a process controller having a predictive model for calculating predicted values of the one or more dependent variables from the measured values of the one or more independent variables, a calculator for calculating residual values for the one or more dependent variables from the difference between the predicted and measured values of the one or more dependent variables, and an analyzer for performing a principal component analysis on the residual values. The process controller is a multivariable predictive control means, and the principal component analysis results in the output of one or more score values, T2 values and Q values.
Type: Grant
Filed: October 16, 2007
Date of Patent: February 21, 2012
Assignee: BP Oil International Limited
Inventors: Keith Landells, Zaid Rawi
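A small numeric sketch of the monitoring statistics named in 8121817: fit a principal component model to the residuals (measured minus model-predicted dependent variables), then compute the Hotelling T2 score in the retained subspace and the Q statistic (squared prediction error) outside it. The data, number of components, and any thresholds here are synthetic placeholders, not the patented control system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Residuals: measured minus model-predicted dependent variables (synthetic, 200 x 4).
residuals = rng.standard_normal((200, 4)) @ np.diag([3.0, 1.5, 0.5, 0.1])

# Principal component model of the residuals (mean-centred, top-2 components kept).
mean = residuals.mean(axis=0)
X = residuals - mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:2].T                                   # loadings, shape (4, 2)
lam = (s[:2] ** 2) / (len(X) - 1)              # variances of the retained scores

def t2_and_q(x):
    """Hotelling T2 in the PC subspace and Q (residual SPE) for one new sample."""
    xc = x - mean
    t = xc @ P                                 # score values
    t2 = float(np.sum(t ** 2 / lam))
    q = float(np.sum((xc - t @ P.T) ** 2))     # what the retained PCs fail to reconstruct
    return t2, q

print(t2_and_q(np.array([2.5, -1.0, 0.2, 0.05])))   # (T2, Q) for a new observation
```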