Patents Examined by George Giroux
-
Patent number: 11741342
Abstract: Neural Architecture Search (NAS) is a laborious process. Prior work on automated NAS targets mainly improving accuracy but lacks consideration of computational resource use. Presented herein are embodiments of a Resource-Efficient Neural Architect (RENA), an efficient resource-constrained NAS using reinforcement learning with network embedding. RENA embodiments use a policy network to process the network embeddings to generate new configurations. Example demonstrations of RENA embodiments on image recognition and keyword spotting (KWS) problems are also presented herein. RENA embodiments can find novel architectures that achieve high performance even with tight resource constraints. For the CIFAR10 dataset, the tested embodiment achieved 2.95% test error when compute intensity was greater than 100 FLOPs/byte, and 3.87% test error when model size was less than 3M parameters.
Type: Grant
Filed: March 8, 2019
Date of Patent: August 29, 2023
Assignee: Baidu USA LLC
Inventors: Yanqi Zhou, Siavash Ebrahimi, Sercan Arik, Haonan Yu, Hairong Liu, Gregory Diamos
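As an illustrative aside rather than the patented method, the sketch below shows the kind of reward shaping a resource-constrained search loop can use; sample_architecture, estimate_resources, and evaluate_accuracy are hypothetical stand-ins for the policy network and its evaluators.

```python
import random

def sample_architecture(policy_state):
    """Stand-in for the policy network: proposes a candidate configuration."""
    return {"layers": random.randint(2, 20), "width": random.choice([32, 64, 128, 256])}

def estimate_resources(arch):
    params_m = arch["layers"] * arch["width"] ** 2 / 1e6   # rough parameter count (millions)
    flops_per_byte = arch["width"] / 2.0                   # rough compute intensity
    return params_m, flops_per_byte

def evaluate_accuracy(arch):
    return min(0.99, 0.80 + 0.01 * arch["layers"])         # placeholder proxy metric

def reward(arch, max_params_m=3.0, min_intensity=100.0):
    params_m, intensity = estimate_resources(arch)
    acc = evaluate_accuracy(arch)
    # Penalize configurations that violate the resource constraints.
    penalty = max(0.0, params_m - max_params_m) + max(0.0, min_intensity - intensity) / min_intensity
    return acc - penalty

best = max((sample_architecture(None) for _ in range(200)), key=reward)
print("best configuration under constraints:", best)
```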
-
Patent number: 11736363
Abstract: In various embodiments, a prediction subsystem automatically predicts a level of network availability of a device network. The prediction subsystem computes a set of predicted attribute values for a set of device attributes associated with the device network based on a trained recurrent neural network (RNN) and set(s) of past attribute values for the set of device attributes. The prediction subsystem then performs classification operation(s) based on the set of predicted attribute values and one or more machine-learned classification criteria. The result of the classification operation(s) is a network availability data point that predicts a level of network availability of the device network. Preemptive action(s) are subsequently performed on the device network based on the network availability data point. By performing the preemptive action(s), the amount of time during which network availability is below a given level can be substantially reduced compared to prior-art reactive approaches.
Type: Grant
Filed: November 30, 2018
Date of Patent: August 22, 2023
Assignee: Disney Enterprises, Inc.
Inventors: Benjamin Quachtran, Ian Conrad McLein, Daniel Ryan Hare, Nina Zalah Sanchez, Sona Kokonyan
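A minimal sketch of the forecast-then-classify pipeline described above, not the patented system; forecast_next stands in for the trained RNN and classify_availability for the machine-learned classification criteria, with invented attribute names and thresholds.

```python
# Toy pipeline: forecast device attributes, classify availability, act preemptively.
def forecast_next(history, decay=0.6):
    """Stand-in for the trained RNN: exponentially weighted forecast per attribute."""
    pred = dict(history[0])
    for snapshot in history[1:]:
        for attr, value in snapshot.items():
            pred[attr] = decay * pred[attr] + (1 - decay) * value
    return pred

def classify_availability(predicted, rules):
    """Stand-in for the machine-learned classification criteria."""
    return "degraded" if any(predicted[a] > limit for a, limit in rules.items()) else "healthy"

history = [{"latency_ms": 20, "error_rate": 0.01},
           {"latency_ms": 35, "error_rate": 0.02},
           {"latency_ms": 80, "error_rate": 0.06}]
predicted = forecast_next(history)
label = classify_availability(predicted, rules={"latency_ms": 40, "error_rate": 0.05})
if label == "degraded":
    print("preemptive action: reroute traffic / restart unhealthy devices")
```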
-
Patent number: 11727020
Abstract: Techniques regarding providing artificial intelligence problem descriptions are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can include, at least: a query component that generates key performance indicators from a query, determines a subset of key performance indicators that individually have a performance below a threshold, and maps the subset of key performance indicators to operational metrics; a learning component that generates, using artificial intelligence, problem descriptions from one or more of the subset of key performance indicators or the operational metrics and transmits the problem descriptions to a database.
Type: Grant
Filed: October 11, 2018
Date of Patent: August 15, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Muhammed Fatih Bulut, Hongtan Sun, Pritpal Arora, Klaus Koenig, Naga A. Ayachitula, Jonathan Richard Young, Maja Vukovic
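A toy illustration (not IBM's implementation) of the query component's flow: flag KPIs below a threshold and map them to operational metrics. KPI_TO_METRICS and the KPI names are invented for the example.

```python
# Hypothetical mapping from key performance indicators to operational metrics.
KPI_TO_METRICS = {
    "ticket_resolution_rate": ["mean_time_to_resolve", "backlog_size"],
    "sla_compliance": ["outage_minutes", "escalation_count"],
}

def underperforming_kpis(kpi_values, threshold=0.9):
    # Keep only the KPIs whose individual performance falls below the threshold.
    return [k for k, v in kpi_values.items() if v < threshold]

def map_to_operational_metrics(kpis):
    return {k: KPI_TO_METRICS.get(k, []) for k in kpis}

kpis = {"ticket_resolution_rate": 0.82, "sla_compliance": 0.95}
flagged = underperforming_kpis(kpis)
print(map_to_operational_metrics(flagged))   # a learning component would turn this into problem text
```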
-
Patent number: 11727267
Abstract: Systems, apparatuses and methods may provide for technology that adjusts a plurality of weights in a neural network model and adjusts a plurality of activation functions in the neural network model. The technology may also output the neural network model in response to one or more conditions being satisfied by the plurality of weights and the plurality of activation functions. In one example, two or more of the activation functions are different from one another and the activation functions are adjusted on a per neuron basis.
Type: Grant
Filed: August 30, 2019
Date of Patent: August 15, 2023
Assignee: Intel Corporation
Inventors: Julio Cesar Zamora Esquivel, Jose Rodrigo Camacho Perez, Paulo Lopez Meyer, Hector Cordourier Maruri, Jesus Cruz Vargas
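A rough sketch, assuming a simple fully connected layer, of what per-neuron activation functions with adjustable parameters might look like; the activation kinds and the parameter p are illustrative, not the patented formulation.

```python
import math, random

# Each neuron carries its own parameterized activation; both weights and the
# activation parameters would be adjusted during training.
def activation(x, kind, p):
    if kind == "relu":
        return max(0.0, x)
    if kind == "tanh":
        return math.tanh(p * x)                 # p controls the slope and is adjustable
    return 1.0 / (1.0 + math.exp(-p * x))       # parameterized sigmoid

neurons = [{"w": [random.uniform(-1, 1) for _ in range(3)],
            "kind": random.choice(["relu", "tanh", "sigmoid"]),
            "p": 1.0}
           for _ in range(4)]

def layer_forward(inputs):
    return [activation(sum(w * x for w, x in zip(n["w"], inputs)), n["kind"], n["p"])
            for n in neurons]

print(layer_forward([0.5, -0.2, 0.9]))
```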
-
Patent number: 11715003
Abstract: An optimization apparatus calculates a first portion of the energy change caused by a change in the value of a neuron of a neuron group, the first portion being due to the influence of another neuron of the same group; determines whether to allow updating the value based on a sum of the first and second portions of the energy change; and repeats a process of updating or maintaining the value according to the determination. An arithmetic processing apparatus calculates the second portion, which is due to the influence of a neuron not belonging to the neuron group, and an initial value of the sum. A control apparatus transmits data for calculating the second portion and the initial value to the arithmetic processing apparatus, and the initial value and data for calculating the first portion to the optimization apparatus; it receives the initial value from the arithmetic processing apparatus and a value of the neuron group from the optimization apparatus.
Type: Grant
Filed: February 4, 2019
Date of Patent: August 1, 2023
Assignee: FUJITSU LIMITED
Inventors: Sanroku Tsukamoto, Satoshi Matsubara, Hirotaka Tamura
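The acceptance test can be pictured as the sum of a locally computed portion and a precomputed external portion of the energy change. The sketch below is a generic annealing-style illustration of that split, not Fujitsu's circuit; outside_delta plays the role of the second portion supplied by the arithmetic processing apparatus.

```python
import math, random

def local_delta(state, weights, i):
    """Energy change due to neurons inside the group (the 'first portion')."""
    flip = 1 - 2 * state[i]   # flipping bit i changes its value by +1 or -1
    return -flip * sum(weights[i][j] * state[j] for j in range(len(state)) if j != i)

def try_update(state, weights, outside_delta, i, temperature=1.0):
    # Sum of the first (local) and second (external) portions of the energy change.
    d_e = local_delta(state, weights, i) + outside_delta[i]
    if d_e <= 0 or random.random() < math.exp(-d_e / temperature):
        state[i] ^= 1          # allow the update
        return True
    return False               # maintain the value

n = 4
state = [random.randint(0, 1) for _ in range(n)]
weights = [[0 if i == j else random.uniform(-1, 1) for j in range(n)] for i in range(n)]
outside_delta = [random.uniform(-0.5, 0.5) for _ in range(n)]  # precomputed elsewhere
for step in range(10):
    try_update(state, weights, outside_delta, random.randrange(n))
print(state)
```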
-
Patent number: 11704543
Abstract: A digital circuit for accelerating computations of an artificial neural network model includes a pairs selection unit that selects different subsets of pairs of input vector values and corresponding weight vector values to be processed simultaneously at each time step; a sorting unit that simultaneously processes a vector of input-weight pairs wherein pair values whose estimated product is small are routed with a high probability to small multipliers, and pair values whose estimated product is greater are routed with a high probability to large multipliers that support larger input and output values; and a core unit that includes a plurality of multiplier units and a plurality of adder units that accumulate output results of the plurality of multiplier units into one or more output values that are stored back into the memory, where the plurality of multiplier units include the small multipliers and the large multipliers.
Type: Grant
Filed: June 12, 2018
Date of Patent: July 18, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Shai Litvak
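A software caricature of the routing idea, not the patented digital circuit: pairs whose products are estimated to be small (here via operand bit-lengths, an assumption of this sketch) go to a pool of "small" multipliers, the rest to "large" ones, and the partial sums are accumulated.

```python
def estimated_magnitude(x, w):
    # Cheap proxy for the product size: sum of the operands' bit-lengths.
    return abs(int(x)).bit_length() + abs(int(w)).bit_length()

def route_pairs(pairs, threshold_bits=8):
    small, large = [], []
    for x, w in pairs:
        (small if estimated_magnitude(x, w) <= threshold_bits else large).append((x, w))
    return small, large

def accumulate(pairs):
    small, large = route_pairs(pairs)
    # Narrow multipliers handle the small products, wide multipliers the rest; results are summed.
    return sum(x * w for x, w in small) + sum(x * w for x, w in large)

pairs = [(3, 2), (120, 90), (5, 7), (200, 150)]
print(accumulate(pairs))   # same dot product, but with the work split by estimated product size
```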
-
Patent number: 11694062
Abstract: A computer-implemented method includes instantiating a neural network including a recurrent cell. The recurrent cell includes a probabilistic state component. The method further includes training the neural network with a sequence of data. In an embodiment, the method includes extracting a deterministic finite automaton from the trained recurrent neural network and classifying a sequence with the extracted automaton.
Type: Grant
Filed: July 29, 2019
Date of Patent: July 4, 2023
Assignee: NEC CORPORATION
Inventors: Cheng Wang, Mathias Niepert
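A toy sketch of automaton extraction by discretizing a recurrent cell's state: run sequences through the cell, bucket the continuous state, and record the transitions. The scalar recurrent_cell and rounding-based buckets are assumptions of this example, not the probabilistic state component or extraction procedure claimed.

```python
def recurrent_cell(state, symbol):
    """Stand-in for a trained recurrent cell with a (here deterministic) scalar state."""
    return round((state + (1 if symbol == "a" else -1)) * 0.5, 2)

def extract_automaton(sequences):
    transitions = {}
    for seq in sequences:
        state = 0.0
        for symbol in seq:
            bucket = round(state)                 # discretize the continuous state
            next_state = recurrent_cell(state, symbol)
            transitions[(bucket, symbol)] = round(next_state)
            state = next_state
    return transitions

dfa = extract_automaton(["aab", "abab", "bba"])
print(dfa)                                        # {(state, symbol): next_state, ...}
```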
-
Patent number: 11676043
Abstract: A mechanism is provided in a data processing system having a processor and a memory. The memory comprises instructions which are executed by the processor to cause the processor to implement a training system for finding an optimal surface for a hierarchical classification task on an ontology. The training system receives a training data set and a hierarchical classification ontology data structure. The training system generates a neural network architecture based on the training data set and the hierarchical classification ontology data structure. The neural network architecture comprises an indicative layer, a parent tier (PT) output and a lower leaf tier (LLT) output. The training system trains the neural network architecture to classify the training data set to leaf nodes at the LLT output and parent nodes at the PT output. The indicative layer in the neural network architecture determines a surface that passes through each path from a root to a leaf node in the hierarchical ontology data structure.
Type: Grant
Filed: March 4, 2019
Date of Patent: June 13, 2023
Assignee: International Business Machines Corporation
Inventors: Pathirage Dinindu Sujan Udayanga Perera, Orna Raz, Ramani Routray, Vivek Krishnamurthy, Sheng Hua Bao, Eitan D. Farchi
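A much-simplified picture of scoring root-to-leaf paths with separate parent-tier and leaf-tier outputs; the ontology, the score values, and the additive path score are invented for illustration and do not reproduce the indicative-layer surface described in the patent.

```python
# Two-headed classifier: one score per parent node (PT) and one per leaf node (LLT),
# with the ontology used to keep the two outputs consistent along root-to-leaf paths.
ONTOLOGY = {"animal": ["cat", "dog"], "vehicle": ["car", "bike"]}   # parent -> leaves (toy)

def classify(parent_scores, leaf_scores):
    best_path, best_score = None, float("-inf")
    for parent, leaves in ONTOLOGY.items():
        for leaf in leaves:
            score = parent_scores[parent] + leaf_scores[leaf]       # score of the full path
            if score > best_score:
                best_path, best_score = (parent, leaf), score
    return best_path

parent_scores = {"animal": 2.1, "vehicle": 0.3}
leaf_scores = {"cat": 1.7, "dog": 0.9, "car": 1.2, "bike": 0.1}
print(classify(parent_scores, leaf_scores))    # -> ('animal', 'cat')
```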
-
Patent number: 11663518
Abstract: Mechanisms are provided for implementing a virtual corpus engine that receives an inquiry to be processed and analyzes the inquiry to extract one or more features of the inquiry. The virtual corpus engine selects a weight matrix associated with a virtual corpus based on the extracted one or more features of the inquiry. The virtual corpus comprises a plurality of actual corpora of information. The weight matrix comprises a separate weight value for each actual corpus in the plurality of actual corpora. The virtual corpus engine processes the inquiry using a set of selected actual corpora selected from the plurality of actual corpora based on the weight values in the weight matrix and receives results of the processing of the inquiry using the set of selected actual corpora. The virtual corpus engine outputs the results of the processing of the inquiry.
Type: Grant
Filed: April 23, 2019
Date of Patent: May 30, 2023
Assignee: International Business Machines Corporation
Inventors: Joseph N. Kozhaya, Christopher M. Madison, Sridhar Sudarsan
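A minimal sketch of weighting the actual corpora per extracted inquiry feature and searching only the highly weighted ones; WEIGHTS, the feature names, and the cutoff are assumptions of the example, not the patented weight-matrix mechanism.

```python
# Select which actual corpora to query, using per-feature weight vectors over the corpora.
WEIGHTS = {                              # feature -> {corpus: weight}; values are illustrative
    "medical": {"pubmed": 0.9, "news": 0.2, "legal": 0.1},
    "billing": {"pubmed": 0.1, "news": 0.3, "legal": 0.8},
}

def select_corpora(features, cutoff=0.5):
    scores = {}
    for feature in features:
        for corpus, weight in WEIGHTS.get(feature, {}).items():
            scores[corpus] = max(scores.get(corpus, 0.0), weight)
    return [c for c, s in scores.items() if s >= cutoff]

print(select_corpora(["medical"]))       # only the highly weighted corpora are actually searched
```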
-
Patent number: 11645496
Abstract: A calculation circuit calculates, among a plurality of neurons whose states are represented by variables each having m values (m is a positive integer of 3 or greater), two energy changes caused by a state change of a second neuron by 2^n (n is an integer of 0 or greater) in positive and negative directions, based on a state change direction of a first neuron whose state has been updated and a weighting coefficient indicating the magnitude of an interaction between the first and second neurons. A state transition determination circuit determines, based on magnitude relationships among a thermal excitation energy and the two energy changes, whether to allow updates of the state of the second neuron that cause the two energy changes, outputs the determination results, and limits updates by which the state of the second neuron exceeds an upper limit value or falls below a lower limit value.
Type: Grant
Filed: July 25, 2019
Date of Patent: May 9, 2023
Assignee: FUJITSU LIMITED
Inventors: Jumpei Koyama, Hirotaka Tamura
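A simplified, software-only illustration of evaluating both ±2^n moves for a bounded multi-valued neuron and rejecting moves that leave the allowed range; the delta_energy form and the Metropolis-style acceptance are assumptions of this sketch, not the claimed circuit.

```python
import math, random

def delta_energy(direction, coupling, first_neuron_direction):
    # The sign of the move and the interaction coefficient decide how much each move costs.
    return -direction * coupling * first_neuron_direction

def propose(state, n, coupling, first_dir, lower=0, upper=7, temperature=1.0):
    step = 2 ** n
    for direction in (+1, -1):                      # the two energy changes, positive and negative
        candidate = state + direction * step
        if candidate < lower or candidate > upper:  # limit updates that leave the allowed range
            continue
        d_e = delta_energy(direction, coupling, first_dir)
        if d_e <= 0 or random.random() < math.exp(-d_e / temperature):
            return candidate
    return state

print(propose(state=3, n=1, coupling=0.7, first_dir=+1))
```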
-
Patent number: 11640515
Abstract: A method and neural network system for human-computer interaction, and user equipment are disclosed. According to the method for human-computer interaction, a natural language question and a knowledge base are vectorized, and an intermediate result vector that is based on the knowledge base and that represents a similarity between a natural language question and a knowledge base answer is obtained by means of vector calculation, and then a fact-based correct natural language answer is obtained by means of calculation according to the question vector and the intermediate result vector. By means of this method, a dialog and knowledge base-based question-answering are combined by means of vector calculation, so that natural language interaction can be performed with a user, and a fact-based correct natural language answer can be given according to the knowledge base.
Type: Grant
Filed: May 31, 2018
Date of Patent: May 2, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Xin Jiang, Zhengdong Lu, Hang Li
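A bag-of-words toy, not the patented neural vectorization: the question and knowledge-base entries are vectorized, a similarity acts as the intermediate result, and the best-matching fact yields the natural language answer. KNOWLEDGE_BASE is invented for the example.

```python
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

KNOWLEDGE_BASE = {                      # fact -> natural-language answer (toy data)
    "paris is the capital of france": "The capital of France is Paris.",
    "everest is the highest mountain": "Mount Everest is the highest mountain.",
}

def answer(question):
    q_vec = vectorize(question)
    # The similarity scores act as the intermediate result linking question and knowledge base.
    best_fact = max(KNOWLEDGE_BASE, key=lambda fact: cosine(q_vec, vectorize(fact)))
    return KNOWLEDGE_BASE[best_fact]

print(answer("What is the capital of France?"))
```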
-
Patent number: 11636316
Abstract: Broadly speaking, the present techniques exploit the properties of correlated electron materials for artificial neural networks and neuromorphic computing. In particular, the present techniques provide apparatuses/devices that comprise at least one correlated electron switch (CES) element and which may be used as, or to form, an artificial neuron or an artificial synapse.
Type: Grant
Filed: January 31, 2018
Date of Patent: April 25, 2023
Assignee: Cerfe Labs, Inc.
Inventors: Lucian Shifren, Shidhartha Das, Naveen Suda, Carlos Alberto Paz de Araujo
-
Patent number: 11636330
Abstract: Systems and methods including one or more processing modules and one or more non-transitory storage modules storing computing instructions configured to run on the one or more processing modules and perform acts of receiving attribute data comprising a set of unstructured attribute data and a set of structured attribute data, analyzing the set of unstructured attribute data by processing through a first set of one or more Long Short Term Memory (LSTM) layers, to obtain an unstructured semantic signature, analyzing the set of structured attribute data by processing through a first set of one or more Convolutional Neural Network (CNN) layers, to obtain a structured semantic signature, analyzing the unstructured semantic signature and the structured semantic signature, and classifying the item in one or more item categories. Other embodiments are disclosed herein.
Type: Grant
Filed: January 30, 2019
Date of Patent: April 25, 2023
Assignee: WALMART APOLLO, LLC
Inventors: Abhinandan Krishnan, Abilash Amarthaluri, Venkatesh Kandaswamy
-
Patent number: 11625581
Abstract: A method in a hardware implementation of a Convolutional Neural Network (CNN) includes receiving a first subset of data having at least a portion of weight data and at least a portion of input data for a CNN layer and performing, using at least one convolution engine, a convolution of the first subset of data to generate a first partial result; receiving a second subset of data comprising at least a portion of weight data and at least a portion of input data for the CNN layer and performing, using the at least one convolution engine, a convolution of the second subset of data to generate a second partial result; and combining the first partial result and the second partial result to generate at least a portion of convolved data for a layer of the CNN.
Type: Grant
Filed: May 3, 2017
Date of Patent: April 11, 2023
Assignee: Imagination Technologies Limited
Inventors: Clifford Gibson, James Imber
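The combination of partial results can be seen concretely with a 1x1 convolution split across input channels; the sketch below (using NumPy, not the patented hardware) checks that summing the two partial convolutions reproduces the full result.

```python
import numpy as np

def conv1x1(inputs, weights):
    # inputs: (channels, height, width); weights: (out_channels, in_channels) for a 1x1 kernel
    return np.einsum("oc,chw->ohw", weights, inputs)

rng = np.random.default_rng(0)
inputs = rng.standard_normal((8, 4, 4))
weights = rng.standard_normal((16, 8))

partial_a = conv1x1(inputs[:4], weights[:, :4])     # first subset of weight and input data
partial_b = conv1x1(inputs[4:], weights[:, 4:])     # second subset of weight and input data
combined = partial_a + partial_b                    # combine the partial results

assert np.allclose(combined, conv1x1(inputs, weights))
```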
-
Patent number: 11625583
Abstract: Systems and methods for quality monitoring and hidden quantization in artificial neural network (ANN) computations are provided. An example method may include receiving a description of an ANN and input data associated with the ANN, performing, based on a quantization scheme, quantization of the ANN to obtain a quantized ANN, performing, based on the set of input data, ANN computations of the quantized ANN to obtain a result of the ANN computations for the input data, while performing the ANN computations, monitoring a measure of quality of the ANN computations of the quantized ANN, determining that the measure of quality does not satisfy quality requirements, and in response to the determination, informing a user of an external system of the measure of quality, and adjusting, based on the measure of quality, the quantization scheme to be used in the ANN computations for further input data.
Type: Grant
Filed: February 13, 2019
Date of Patent: April 11, 2023
Assignee: MIPSOLOGY SAS
Inventors: Frederic Dumoulin, Ludovic Larzul
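A minimal sketch of quantization with quality monitoring: quantize, measure quality against a float reference, and widen the scheme when the requirement is not met. The quantize function, the quality measure, and the 0.98 target are assumptions of the example, not MIPSOLOGY's method.

```python
import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantization of the weights to the given bit width.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def run_layer(x, w):
    return x @ w

rng = np.random.default_rng(1)
x = rng.standard_normal((32, 64))
w = rng.standard_normal((64, 10))
reference = run_layer(x, w)

bits = 4
while True:
    # Monitor a quality measure of the quantized computation against the float reference.
    quality = 1.0 - np.mean(np.abs(run_layer(x, quantize(w, bits)) - reference)) / np.mean(np.abs(reference))
    if quality >= 0.98 or bits >= 16:
        break
    print(f"{bits}-bit quantization below target (quality={quality:.3f}); adjusting scheme")
    bits += 2                                        # adjust the quantization scheme and retry

print(f"selected {bits}-bit quantization, quality={quality:.3f}")
```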
-
Patent number: 11625609
Abstract: During end-to-end training of a Deep Neural Network (DNN), a differentiable estimator subnetwork is operated to estimate a functionality of an external software application. Then, during inference by the trained DNN, the differentiable estimator subnetwork is replaced with the functionality of the external software application, by enabling API communication between the DNN and the external software application.
Type: Grant
Filed: June 14, 2018
Date of Patent: April 11, 2023
Assignee: International Business Machines Corporation
Inventors: Boaz Carmeli, Guy Hadash, Einat Kermany, Ofer Lavi, Guy Lev, Oren Sar-Shalom
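A toy rendering of the train/inference swap: a differentiable surrogate stands in for an external routine during training and is replaced by the real call at inference. external_app, Surrogate, and the linear approximation are illustrative assumptions, not the patented API mechanism.

```python
import math

def external_app(x):
    return math.floor(x)                 # non-differentiable external functionality

class Surrogate:
    """Differentiable estimator of external_app, e.g. a tiny trained subnetwork."""
    def __init__(self, a=1.0, b=-0.5):
        self.a, self.b = a, b            # parameters fitted during end-to-end training
    def __call__(self, x):
        return self.a * x + self.b       # smooth approximation of floor(x)

def model(x, use_surrogate=True, surrogate=Surrogate()):
    feature = 2.0 * x                    # upstream differentiable layers
    estimate = surrogate(feature) if use_surrogate else external_app(feature)
    return estimate + 1.0                # downstream layers consume the estimate

print(model(1.3, use_surrogate=True))    # training path: gradients could flow through Surrogate
print(model(1.3, use_surrogate=False))   # inference path: exact result via the external call
```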
-
Patent number: 11574193
Abstract: A method and system for training a neural network are described. The method includes providing at least one continuously differentiable model of the neural network. The at least one continuously differentiable model is specific to hardware of the neural network. The method also includes iteratively training the neural network using the at least one continuously differentiable model to provide at least one output for the neural network. Each iteration uses at least one output of a previous iteration and a current continuously differentiable model of the at least one continuously differentiable model.
Type: Grant
Filed: September 5, 2018
Date of Patent: February 7, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Borna J. Obradovic, Titash Rakshit, Jorge A. Kittl, Ryan M. Hatcher
-
Patent number: 11568270
Abstract: A generation function generates and outputs generated data from an input; a discrimination function causes each discriminator to discriminate whether the data to be discriminated is based on the training data or on the generated data, and to output a discrimination result. An update function updates the discriminator that has output the discrimination result so that the data to be discriminated is discriminated with higher accuracy, and further updates the generator to increase the probability of discriminating that the generated-data-based data to be discriminated is training-data-based data. A whole update function causes the updates to be executed for the generator and all the discriminators.
Type: Grant
Filed: September 17, 2018
Date of Patent: January 31, 2023
Assignee: PREFERRED NETWORKS, INC.
Inventor: Masaki Saito
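A deliberately crude numerical cartoon of the multi-discriminator update scheme, not PFN's method: each "discriminator" is just a threshold, each update nudges it to separate real from generated values, and the "generator" drifts toward whatever fools the ensemble.

```python
import random

def make_discriminator():
    return {"threshold": random.uniform(0.3, 0.7)}   # toy "model": a decision threshold

def discriminate(disc, sample):
    return sample > disc["threshold"]                # True means "looks like training data"

generator = {"mean": 0.2}                            # toy generator: emits values near its mean
discriminators = [make_discriminator() for _ in range(3)]
real_data = [random.uniform(0.6, 1.0) for _ in range(100)]

for step in range(300):
    fake = generator["mean"] + random.uniform(-0.05, 0.05)
    real = random.choice(real_data)
    for disc in discriminators:                      # update every discriminator so that...
        if discriminate(disc, fake):
            disc["threshold"] += 0.01                # ...generated data is rejected more reliably
        if not discriminate(disc, real):
            disc["threshold"] -= 0.01                # ...while training data is still accepted
    fooled = sum(discriminate(d, fake) for d in discriminators)
    # Update the generator to raise the chance that its output passes as training data.
    generator["mean"] += 0.02 * (len(discriminators) - fooled) / len(discriminators)

print(round(generator["mean"], 2), [round(d["threshold"], 2) for d in discriminators])
```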
-
Patent number: 11551096
Abstract: Systems and methods are described herein for generating potential feature combinations for a new item. A neural network may be utilized to identify positive and/or negative sentiment phrases from textual data. Each sentiment phrase may correspond to particular features of existing items. A machine-learning model may utilize the sentiment phrases and their corresponding features to generate a set of potential feature combinations for a new item. The potential feature combinations may be scored, for example, based on an amount by which a potential feature combination differs from known feature combinations of existing items. One or more potential feature combinations may be provided in a feature recommendation. Feedback (e.g., human feedback, sales data, page views for similar items, and the like) may be obtained and utilized to retrain the machine-learning model to better identify subsequent feature combinations that may be desirable and/or practical to manufacture.
Type: Grant
Filed: July 26, 2018
Date of Patent: January 10, 2023
Assignee: Amazon Technologies, Inc.
Inventor: Pragyana K. Mishra
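A small sketch of scoring candidate feature combinations by sentiment-weighted desirability plus distance from known combinations; KNOWN_ITEMS, FEATURE_SENTIMENT, and the additive score are invented for illustration, not the patented models.

```python
KNOWN_ITEMS = [{"waterproof", "bluetooth"}, {"bluetooth", "noise-cancelling"}]
FEATURE_SENTIMENT = {"waterproof": 0.8, "bluetooth": 0.6, "noise-cancelling": 0.9, "solar": 0.4}

def novelty(candidate):
    # How much the candidate differs from the closest known feature combination.
    return min(len(candidate - known) + len(known - candidate) for known in KNOWN_ITEMS)

def score(candidate):
    desirability = sum(FEATURE_SENTIMENT.get(f, 0.0) for f in candidate)
    return desirability + novelty(candidate)          # reward both liked and novel features

candidates = [{"waterproof", "solar"}, {"bluetooth", "noise-cancelling"}]
best = max(candidates, key=score)
print(sorted(best))                                   # recommended feature combination
```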
-
Patent number: 11520580
Abstract: A processor includes a plurality of execution units. At least one of the execution units is configured to repeatedly execute a first instruction based on a first field of the first instruction indicating that the first instruction is to be iteratively executed.
Type: Grant
Filed: March 7, 2016
Date of Patent: December 6, 2022
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Horst Diewald, Johann Zipperer