Patents Examined by Kamran Afshar
-
Patent number: 12045725
Abstract: Some embodiments provide a method for training a network including layers that each include multiple nodes. The method identifies a set of related layers of the network. Each node in one of the related layers has corresponding nodes in each of the other related layers. Each set of corresponding nodes receives the same set of inputs and applies different sets of weights to the inputs to generate an output. The method identifies an element-wise addition layer including nodes that each add outputs of a different set of corresponding nodes from the related layers to generate a sum. The method uses a set of outputs generated by the nodes of each related layer to determine batch normalization parameters specific to each layer of the set of related layers. The method uses data generated by the element-wise addition layer to determine batch normalization parameters for the set of related layers.
Type: Grant
Filed: July 7, 2020
Date of Patent: July 23, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Eric A. Sather, Steven L. Teig
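A hedged sketch of the idea in this abstract: related layers share the same inputs but apply different weights, batch-norm statistics are computed per layer, and a second set of statistics is computed from the element-wise sum of their outputs. Shapes, layer count, and variable names below are illustrative, not the patent's.

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((32, 16))          # batch of 32, 16 features
weights = [rng.standard_normal((16, 8)) for _ in range(3)]

# Each related layer applies its own weights to the shared inputs.
layer_outputs = [inputs @ w for w in weights]

# Per-layer batch-norm parameters (mean/variance over the batch axis).
per_layer_stats = [(out.mean(axis=0), out.var(axis=0)) for out in layer_outputs]

# Element-wise addition layer: corresponding nodes are summed.
summed = sum(layer_outputs)

# Batch-norm parameters for the whole set of related layers, from the summed data.
combined_mean, combined_var = summed.mean(axis=0), summed.var(axis=0)
```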
-
Patent number: 12039432
Abstract: An artificial neural network (ANN) apparatus can include processing component circuitry that receives one or more linear inputs and removes linearity from them based on an S-shaped saturating activation function that generates a continuous non-linear output. The neurons of the ANN comprise digital bit-wise components configured to transform the linear inputs into the continuous non-linear output.
Type: Grant
Filed: March 18, 2020
Date of Patent: July 16, 2024
Assignee: Infineon Technologies AG
Inventor: Andrew Stevens
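For illustration only: a common S-shaped saturating activation is the logistic sigmoid, which maps any linear input onto the open interval (0, 1). The patent describes a digital bit-wise circuit; this NumPy version just shows the non-linearity itself, not the hardware.

```python
import numpy as np

def s_shaped(x):
    """Logistic sigmoid: continuous, monotonic, saturating at 0 and 1."""
    return 1.0 / (1.0 + np.exp(-x))

linear_inputs = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
outputs = s_shaped(linear_inputs)   # saturates near 0 and 1 at the extremes
```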
-
Patent number: 12033053
Abstract: Embodiments of the invention may execute a NN by executing sub-tensor columns, each sub-tensor column including computations from portions of layers of the NN, and each sub-tensor column performing computations entirely within a first layer of cache (e.g. L2 in one embodiment) and saving its output entirely within a second layer of cache (e.g. L3 in one embodiment). Embodiments may include partitioning the execution of a NN into sub-tensor columns, each sub-tensor column including computations from portions of layers of the NN, each sub-tensor column performing computations entirely within a first layer of cache and saving its output entirely within a second layer of cache.
Type: Grant
Filed: November 23, 2022
Date of Patent: July 9, 2024
Assignee: NEURALMAGIC, INC.
Inventors: Alexander Matveev, Nir Shavit, Govind Ramnarayan
-
Patent number: 12020163
Abstract: A method includes receiving a request to solve a problem defined by input information and applying a neural network to generate an answer to the problem. The neural network includes an input level, a manager level including a first manager, a worker level including first and second workers, and an output level. Applying the neural network includes implementing the input level to provide a piece of input information to the first manager; implementing the first manager to delegate portions of the piece of information to the first and second workers; implementing the first worker to operate on its portion of information to generate a first output; implementing the second worker to operate on its portion of information to generate a second output; and implementing the output level to generate the answer to the problem, using the first and second outputs. The method also includes transmitting a response comprising the answer.
Type: Grant
Filed: February 4, 2020
Date of Patent: June 25, 2024
Assignee: Bank of America Corporation
Inventors: Garrett Thomas Botkin, Matthew Bruce Murray
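A minimal sketch of the hierarchy this abstract describes: the input level hands a piece of input information to a manager, the manager delegates portions to two workers, and the output level combines their outputs into an answer. The even split and the summing output level are our simplifications; weights are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
piece = rng.standard_normal(8)            # piece of input information

# Manager level: delegate portions of the piece to the two workers.
portion_1, portion_2 = piece[:4], piece[4:]

# Worker level: each worker operates on its own portion.
w1, w2 = rng.standard_normal((4, 2)), rng.standard_normal((4, 2))
out_1 = np.tanh(portion_1 @ w1)           # first worker's output
out_2 = np.tanh(portion_2 @ w2)           # second worker's output

# Output level: generate the answer using both workers' outputs.
answer = out_1 + out_2
```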
-
Patent number: 12020160
Abstract: A method, computer program product and system for generating a neural network. Initial neural networks are prepared, each of which includes an input layer containing one or more input nodes, a middle layer containing one or more middle nodes, and an output layer containing one or more output nodes. A new neural network is generated that includes a new middle layer containing one or more middle nodes based on the middle nodes of the middle layers of the initial neural networks.
Type: Grant
Filed: January 19, 2018
Date of Patent: June 25, 2024
Assignee: International Business Machines Corporation
Inventor: Takeshi Inagaki
-
Patent number: 12014262
Abstract: Disclosed herein are apparatus, method, and computer-readable storage device embodiments for implementing deconvolution via a set of convolutions. An embodiment includes a convolution processor that includes hardware implementing logic to perform at least one algorithm comprising a convolution algorithm. The at least one convolution processor may be further configured to perform operations including performing a first convolution and outputting a first deconvolution segment as a result of the performing the first convolution. The at least one convolution processor may be further configured to perform a second convolution and output a second deconvolution segment as a result of the performing the second convolution.
Type: Grant
Filed: October 3, 2019
Date of Patent: June 18, 2024
Assignee: SYNOPSYS, INC.
Inventors: Tom Michiels, Thomas Julian Pennello
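A hedged 1-D illustration of the general idea (not the patent's circuit): a stride-2 transposed convolution ("deconvolution") can be computed as two ordinary convolutions, one per output phase, whose segments are interleaved into the final output. The phase decomposition below is a standard construction.

```python
import numpy as np

def naive_deconv(x, k, stride=2):
    """Reference: scatter-add each input sample times the kernel."""
    y = np.zeros((len(x) - 1) * stride + len(k))
    for i, v in enumerate(x):
        y[i * stride : i * stride + len(k)] += v * k
    return y

def deconv_via_convs(x, k):
    """Stride-2 deconvolution as two convolutions (one per output phase)."""
    y = np.zeros((len(x) - 1) * 2 + len(k))
    for p in range(2):
        phase_kernel = k[p::2]              # sub-kernel: taps k[p], k[p+2], ...
        segment = np.convolve(x, phase_kernel)   # one ordinary convolution
        y[p::2] = segment                   # interleave the deconvolution segment
    return y

x = np.array([1.0, 2.0, -1.0, 0.5])
k = np.array([0.5, 1.0, -0.25, 2.0])
```

Both functions produce identical outputs; the second form only ever performs plain convolutions, which is the point of the technique.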
-
Patent number: 12001944
Abstract: A mechanism is described for facilitating smart distribution of resources for deep learning autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and introducing a library to a neural network application to determine an optimal point at which to apply frequency scaling without degrading performance of the neural network application at a computing device.
Type: Grant
Filed: July 27, 2022
Date of Patent: June 4, 2024
Assignee: INTEL CORPORATION
Inventors: Rajkishore Barik, Brian T. Lewis, Murali Sundaresan, Jeffrey Jackson, Feng Chen, Xiaoming Chen, Mike Macpherson
-
Patent number: 11995533
Abstract: Some embodiments provide a method for executing a layer of a neural network, for a circuit that restricts the number of weight values used per layer. The method applies a first set of weights to a set of inputs to generate a first set of results. The first set of weights is restricted to a first set of allowed values. For each of one or more additional sets of weights, the method applies the respective additional set of weights to the same set of inputs to generate a respective additional set of results. The respective additional set of weights is restricted to a respective additional set of allowed values that is related to the first set of allowed values and the other additional sets of allowed values. The method generates outputs for the particular layer by combining the first set of results with each respective additional set of results.
Type: Grant
Filed: November 14, 2019
Date of Patent: May 28, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Eric A. Sather, Steven L. Teig
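An illustrative sketch of the scheme (our own construction, not the patent's circuit): several weight sets, each restricted to a small set of allowed values, are applied to the same inputs and their results summed. Here the second set's allowed values are simply a scaled version of the first's.

```python
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.standard_normal((4, 6))

allowed_1 = np.array([-1.0, 0.0, 1.0])           # first set of allowed values
allowed_2 = allowed_1 / 2.0                      # related additional set

def snap(w, allowed):
    """Quantize each weight to its nearest allowed value."""
    idx = np.abs(w[..., None] - allowed).argmin(axis=-1)
    return allowed[idx]

w1 = snap(rng.standard_normal((6, 3)), allowed_1)
w2 = snap(rng.standard_normal((6, 3)), allowed_2)

# Same inputs for every weight set; per-set results are combined by addition.
outputs = inputs @ w1 + inputs @ w2
```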
-
Patent number: 11954597
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using embedding functions with a deep network. One of the methods includes receiving an input comprising a plurality of features, wherein each of the features is of a different feature type; processing each of the features using a respective embedding function to generate one or more numeric values, wherein each of the embedding functions operates independently of each other embedding function, and wherein each of the embedding functions is used for features of a respective feature type; processing the numeric values using a deep network to generate a first alternative representation of the input, wherein the deep network is a machine learning model composed of a plurality of levels of non-linear operations; and processing the first alternative representation of the input using a logistic regression classifier to predict a label for the input.
Type: Grant
Filed: October 24, 2022
Date of Patent: April 9, 2024
Assignee: Google LLC
Inventors: Gregory S. Corrado, Kai Chen, Jeffrey A. Dean, Gary R. Holt, Julian P. Grady, Sharat Chikkerur, David W. Sculley, II
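A hedged sketch of that pipeline: one embedding function per feature type, each operating independently; a small "deep network" of non-linear levels producing an alternative representation; and a logistic regression classifier on top. All weights and feature names here are placeholders of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-feature-type embedding functions (a lookup table and a numeric transform).
embed_word = {"cat": np.array([0.2, -0.1]), "dog": np.array([0.4, 0.3])}
def embed_count(n):
    return np.array([np.log1p(n)])

features = {"word": "cat", "count": 7}
numeric = np.concatenate([embed_word[features["word"]],
                          embed_count(features["count"])])

# Deep network: two levels of non-linear operations.
w1, w2 = rng.standard_normal((3, 4)), rng.standard_normal((4, 4))
rep = np.tanh(np.tanh(numeric @ w1) @ w2)        # alternative representation

# Logistic regression classifier predicting a label probability.
w_out, b = rng.standard_normal(4), 0.1
label_prob = 1.0 / (1.0 + np.exp(-(rep @ w_out + b)))
```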
-
Patent number: 11948065
Abstract: A system that uses one or more artificial intelligence models that predict an effect of a predicted event on a current state of the system. For example, a model may predict how a rate of change in time-series data may be altered throughout a first time period based on the predicted event.
Type: Grant
Filed: June 1, 2023
Date of Patent: April 2, 2024
Assignee: Citigroup Technology, Inc.
Inventors: Ernst Wilhelm Spannhake, II, Thomas Francis Gianelle, Milan Shah
-
Patent number: 11948074
Abstract: Disclosed is a processor-implemented data processing method in a neural network. A data processing apparatus includes at least one processor, and at least one memory configured to store instructions to be executed by the processor and a neural network, wherein the processor is configured to, based on the instructions, input an input activation map into a current layer included in the neural network, output an output activation map by performing a convolution operation between the input activation map and a weight quantized with a first representation bit number of the current layer, and output a quantized activation map by quantizing the output activation map with a second representation bit number based on an activation quantization parameter.
Type: Grant
Filed: April 30, 2019
Date of Patent: April 2, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Chang Kyu Choi
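A minimal sketch of the quantization step: a layer output (a plain matrix product standing in for the activation map) is quantized to a second representation bit number using a scale factor. The uniform symmetric scheme below is our assumption; the patent's activation quantization parameter may be defined differently.

```python
import numpy as np

rng = np.random.default_rng(3)
input_act = rng.standard_normal((5, 8))
weight_q = np.round(rng.standard_normal((8, 4)) * 4) / 4   # coarsely quantized weights

output_act = input_act @ weight_q        # stands in for the convolution output

bits = 4                                 # second representation bit number
# Activation quantization parameter: scale mapping the max magnitude to 2^(bits-1)-1.
scale = np.abs(output_act).max() / (2 ** (bits - 1) - 1)
quantized_act = np.clip(np.round(output_act / scale),
                        -(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale
```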
-
Patent number: 11948073
Abstract: Systems, apparatuses, and methods for adaptively mapping a machine learning model to a multi-core inference accelerator engine are disclosed. A computing system includes a multi-core inference accelerator engine with multiple inference cores coupled to a memory subsystem. The system also includes a control unit which determines how to adaptively map a machine learning model to the multi-core inference accelerator engine. In one implementation, the control unit selects a mapping scheme which minimizes the memory bandwidth utilization of the multi-core inference accelerator engine. In one implementation, this mapping scheme involves having one inference core of the multi-core inference accelerator engine fetch given data and broadcast the given data to other inference cores of the inference accelerator engine. Each inference core fetches second data unique to the respective inference core.
Type: Grant
Filed: August 30, 2018
Date of Patent: April 2, 2024
Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Lei Zhang, Sateesh Lagudu, Allen Rush
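A toy model of the bandwidth-saving mapping: one core fetches the shared data once and broadcasts it to its peers, while each core fetches only the data unique to it. We count simulated memory fetches to show the saving; the core count and data names are illustrative.

```python
NUM_CORES = 4
memory_fetches = 0

def fetch(name):
    """Simulated memory read; each call costs one fetch."""
    global memory_fetches
    memory_fetches += 1
    return f"data:{name}"

# One inference core fetches the given data once and broadcasts it to the rest.
shared = fetch("weights_shared")
cores = [{"shared": shared} for _ in range(NUM_CORES)]

# Each inference core fetches second data unique to that core.
for i, core in enumerate(cores):
    core["unique"] = fetch(f"activations_{i}")

# Without broadcasting, every core would fetch the shared data itself.
fetches_without_broadcast = NUM_CORES + NUM_CORES
```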
-
Patent number: 11922316
Abstract: A computer-implemented method includes: initializing model parameters for training a neural network; performing a forward pass and backpropagation for a first minibatch of training data; determining a new weight value for each of a plurality of nodes of the neural network using a gradient descent of the first minibatch; for each determined new weight value, determining whether to update a running mean corresponding to a weight of a particular node; based on a determination to update the running mean, calculating a new mean weight value for the particular node using the determined new weight value; updating the weight parameters for all nodes based on the calculated new mean weight values corresponding to each node; assigning the running mean as the weight for the particular node when training on the first minibatch is completed; and reinitializing running means for all nodes at a start of training a second minibatch.
Type: Grant
Filed: August 13, 2020
Date of Patent: March 5, 2024
Assignee: LG ELECTRONICS INC.
Inventors: Samarth Tripathi, Jiayi Liu, Unmesh Kurup, Mohak Shah
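A sketch of the weight-averaging idea: after each gradient step within a minibatch's training, the new weight is folded into a running mean; when the minibatch finishes, the running mean is assigned as the weight (and would be reinitialized before the next minibatch). The plain cumulative-mean update rule here is our simplification.

```python
import numpy as np

def train_minibatch(w, grads, lr=0.1):
    """Apply gradient steps while tracking a running mean of the weight."""
    running_mean, count = np.zeros_like(w), 0
    for g in grads:
        w = w - lr * g                               # gradient-descent update
        count += 1
        running_mean += (w - running_mean) / count   # fold new weight into mean
    # The running mean is assigned as the weight once the minibatch completes.
    return running_mean

w0 = np.array([1.0, -2.0])
grads = [np.array([0.5, 0.5]), np.array([-0.5, 0.5])]
w1 = train_minibatch(w0, grads)
```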
-
Patent number: 11922296
Abstract: A system includes inputs, outputs, and nodes between the inputs and the outputs. The nodes include hidden nodes. Connections between the nodes are determined based on a gradient computable using symmetric solution submatrices.
Type: Grant
Filed: July 27, 2022
Date of Patent: March 5, 2024
Assignee: Rain Neuromorphics Inc.
Inventor: Jack David Kendall
-
Patent number: 11907827
Abstract: Methods and systems include a neural network system that includes a neural network accelerator. The neural network accelerator includes multiple processing engines coupled together to perform arithmetic operations in support of an inference performed using the neural network system. The neural network accelerator also includes schedule-aware tensor data distribution circuitry or software that is configured to load tensor data into the multiple processing engines in a load phase, extract output data from the multiple processing engines in an extraction phase, reorganize the extracted output data, and store the reorganized extracted output data to memory.
Type: Grant
Filed: June 28, 2019
Date of Patent: February 20, 2024
Assignee: Intel Corporation
Inventors: Gautham Chinya, Huichu Liu, Arnab Raha, Debabrata Mohapatra, Cormac Brick, Lance Hacking
-
Patent number: 11899669
Abstract: A data processing system is configured to pre-process data for a machine learning classifier. The data processing system includes an input port that receives one or more data items, an extraction engine that extracts a plurality of data signatures and structure data, a logical rule set generation engine configured to generate a data structure, select a particular data signature of the data structure, identify each instance of the particular data signature in the data structure, segment the data structure around instances of the particular data signature, identify one or more sequences of data signatures connected to the particular data signature, and generate a logical ruleset. A classification engine executes one or more classifiers against the logical ruleset to classify the one or more data items received by the input port.
Type: Grant
Filed: March 20, 2018
Date of Patent: February 13, 2024
Assignee: Carnegie Mellon University
Inventors: Jonathan Cagan, Phil LeDuc, Mark Whiting
-
Patent number: 11900052
Abstract: The present disclosure applies trained artificial intelligence (AI) processing adapted to automatically generate transformations of formatted templates. Pre-existing formatted templates (e.g., slide-based presentation templates) are leveraged by the trained AI processing to automatically generate a plurality of high-quality template transformations. In transforming a formatted template, the trained AI processing not only generates feature transformations of its objects but may also provide style transformations, where attributes associated with a presentation theme may be modified for a formatted template or set of formatted templates. The trained AI processing is novel in that it is tailored for analysis of feature data of a specific type of formatted template.
Type: Grant
Filed: November 11, 2020
Date of Patent: February 13, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Amit Srivastava, Mingxi Cheng
-
Patent number: 11900243
Abstract: A computing core circuit, including: an encoding module, a route sending module, and a control module, wherein the control module is configured to control the encoding module to perform encoding processing on a pulse sequence determined by pulses of at least one neuron in a current computing core to be transmitted, so as to obtain an encoded pulse sequence, and control the route sending module to determine a corresponding route packet according to the encoded pulse sequence, so as to send the route packet. The present disclosure further provides a data processing method, a chip, a board, an electronic device, and a computer-readable storage medium.
Type: Grant
Filed: April 22, 2021
Date of Patent: February 13, 2024
Assignee: LYNXI TECHNOLOGIES CO., LTD.
Inventors: Zhenzhi Wu, Yaolong Zhu, Luojun Jin, Wei He, Qikun Zhang
-
Patent number: 11897066
Abstract: A simulation apparatus includes a machine learning device for learning a change in a machining route in machining of a workpiece. The machine learning device observes data indicating the changed machining route and data indicating a machining condition of the workpiece as a state variable, and also acquires determination data for determining whether or not a cycle time obtained by simulation using the changed machining route is appropriate, and learns by associating the machining condition of the workpiece with the change in the machining route, using the state variable and the determination data.
Type: Grant
Filed: May 13, 2019
Date of Patent: February 13, 2024
Assignee: FANUC CORPORATION
Inventor: Satoshi Uchida
-
Patent number: 11893492
Abstract: A neural processing device and a method for pruning thereof are provided. The neural processing device includes a processing unit configured to perform calculations; an L0 memory configured to store input and output data of the processing unit, wherein the input and output data include a two-dimensional weight matrix; and a weight manipulator configured to receive the two-dimensional weight matrix, partition it into preset sizes to generate partitioned matrices, generate a pruning matrix by pruning the partitioned matrices, and transmit the pruning matrix to the processing unit.
Type: Grant
Filed: March 25, 2022
Date of Patent: February 6, 2024
Assignee: Rebellions Inc.
Inventor: Jinwook Oh
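A hedged sketch of partition-then-prune: the two-dimensional weight matrix is split into fixed-size blocks, and within each block the smallest-magnitude weights are zeroed. The block size and the magnitude criterion are our assumptions, not the patent's.

```python
import numpy as np

def prune_partitioned(weights, block=2, keep=2):
    """Zero all but the `keep` largest-magnitude weights in each block."""
    pruned = weights.copy()
    rows, cols = weights.shape
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            sub = pruned[r:r + block, c:c + block]   # one partitioned matrix
            flat = np.abs(sub).ravel()
            # Indices of entries to drop (everything below the top `keep`).
            drop = flat.argsort()[: max(flat.size - keep, 0)]
            rr, cc = np.unravel_index(drop, sub.shape)
            sub[rr, cc] = 0.0                        # prune in place via the view
    return pruned

w = np.array([[1.0, -0.1, 2.0, 0.2],
              [0.3, -4.0, 0.1, 0.5]])
pruned = prune_partitioned(w)
```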