Patents by Inventor Swagath Venkataramani
Swagath Venkataramani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12236338
Abstract: A combined function specified by an instruction is performed. The combined function includes a plurality of operations performed as part of one invocation of the combined function. Performing the combined function includes performing a matrix multiplication of a first tensor and a second tensor to obtain one or more intermediate results. The second tensor includes an adjusted weight tensor created using a multiplier. Values of a bias tensor are added to the one or more intermediate results to obtain one or more results for the combined function. The one or more results are at least a part of an output tensor.
Type: Grant
Filed: June 17, 2021
Date of Patent: February 25, 2025
Assignee: International Business Machines Corporation
Inventors: Cedric Lichtenau, Kailash Gopalakrishnan, Vijayalakshmi Srinivasan, Sunil K. Shukla, Swagath Venkataramani
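For illustration, the fused operation this abstract describes can be pictured in a few lines of NumPy. This is a minimal sketch only; the function and argument names are illustrative, not taken from the patent:

```python
import numpy as np

def fused_matmul_bias(x, w, bias, multiplier):
    # One invocation of the combined function: a matrix multiply
    # against an adjusted (scaled) weight tensor, then a bias add.
    w_adjusted = w * multiplier      # adjusted weight tensor created using a multiplier
    intermediate = x @ w_adjusted    # matrix multiplication -> intermediate results
    return intermediate + bias       # bias values added to the intermediate results

x = np.random.rand(4, 8)             # first tensor
w = np.random.rand(8, 16)            # weight tensor
out = fused_matmul_bias(x, w, bias=np.zeros(16), multiplier=0.5)
```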
-
Publication number: 20250004736
Abstract: Provided are a computer program product, system, and method for generating program binaries for a transformer to process sequence concatenations having different input lengths. A sequence concatenation length is selected for a sequence concatenation of concatenated tokens from inputs to process in the accelerator. A determination is made of combinations of input lengths. A representation of a transformer is processed to generate program binaries for each combination of input lengths of the combinations. The program binaries, for a given combination of input lengths of the combinations, program the accelerator to decompose tokens of inputs in the sequence concatenation according to the input lengths in the given combination and compute attention scores between tokens within each input of the inputs in the sequence concatenation. The program binaries, for a combination of input lengths, are executed to process a sequence concatenation of received inputs to generate outputs.
Type: Application
Filed: June 29, 2023
Publication date: January 2, 2025
Inventors: Swagath Venkataramani, Sanchari Sen, Amrit Nagarajan, Vijayalakshmi Srinivasan, Sarada Krithivasan
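The computation the binaries encode, attention restricted to tokens within the same input of a concatenation, can be sketched in NumPy as a block-diagonal mask. This is an illustration only (the application concerns precompiled accelerator binaries, not Python), and all names here are assumptions:

```python
import numpy as np

def segment_attention_scores(q, k, input_lengths):
    # Scores are computed only between tokens that belong to the same
    # input of the sequence concatenation (block-diagonal masking).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    seg = np.repeat(np.arange(len(input_lengths)), input_lengths)
    mask = seg[:, None] == seg[None, :]      # True only within one input
    return np.where(mask, scores, -np.inf)   # cross-input pairs masked out

# A concatenation of two inputs with lengths 3 and 5 (8 tokens total).
q = k = np.random.rand(8, 16)
scores = segment_attention_scores(q, k, [3, 5])
```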
-
Publication number: 20250005326
Abstract: Provided are a computer program product, system, and method for reusing weights and biases in an artificial intelligence accelerator for a neural network for different minibatch sizes of inferences. A minibatch size is selected for inference jobs batched to process in the accelerator. A representation of a neural network is processed to determine a set of weights and biases for the selected minibatch size to load into the core. The set of weights and biases is loaded into the core for use by the array of processing elements in the core of the accelerator. The weights and the biases are reused in the processing elements for the neural network, loaded for the selected minibatch size, to apply to minibatches of inferences having minibatch sizes less than the selected minibatch size.
Type: Application
Filed: June 29, 2023
Publication date: January 2, 2025
Inventors: Swagath Venkataramani, Marcel Schaal, Sanchari Sen, Amrit Nagarajan, Vijayalakshmi Srinivasan, Shyam Ramji
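A toy model of the reuse policy, load weights once for the selected minibatch size and then serve any smaller minibatch without reloading, might look like this; the class and method names are hypothetical:

```python
class AcceleratorCore:
    # Toy model: weights/biases loaded for a selected minibatch size are
    # reused for any inference minibatch no larger than that size.
    def __init__(self):
        self.loaded_for = None   # minibatch size the weights were loaded for
        self.weights = None

    def load(self, weights, selected_minibatch):
        self.weights = weights
        self.loaded_for = selected_minibatch

    def infer(self, minibatch):
        assert len(minibatch) <= self.loaded_for, "would require a reload"
        # The smaller batch simply reuses the already-loaded weights.
        return [sum(w * x for w, x in zip(self.weights, sample))
                for sample in minibatch]

core = AcceleratorCore()
core.load([0.5, -1.0, 2.0], selected_minibatch=8)
print(core.infer([[1.0, 2.0, 3.0]]))   # batch of 1 <= 8: no reload needed
```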
-
Patent number: 12141513
Abstract: A method for improving performance of a predefined Deep Neural Network (DNN) convolution processing on a computing device includes inputting parameters as input data into a processor on a computer that formalizes a design space exploration of a convolution mapping on a predefined computer architecture that will execute the predefined convolution processing. The parameters are predefined as guided by a specification for the predefined convolution processing to be implemented by the convolution mapping and by a microarchitectural specification for the processor that will execute the predefined convolution processing. The processor calculates performance metrics for executing the predefined convolution processing on the computing device, as functions of the predefined parameters, as proxy estimates of performance of different possible design choices to implement the predefined convolution processing.
Type: Grant
Filed: October 31, 2018
Date of Patent: November 12, 2024
Assignee: International Business Machines Corporation
Inventors: Chia-Yu Chen, Jungwook Choi, Kailash Gopalakrishnan, Vijayalakshmi Srinivasan, Swagath Venkataramani, Jintao Zhang
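One way to picture the formalized design space exploration: enumerate candidate mappings and rank them by analytically computed proxy metrics instead of running each on hardware. The model functions below are assumptions, not the patent's metrics:

```python
def explore_conv_mappings(candidate_mappings, cycles_model, bytes_model):
    # Rank design choices by proxy performance estimates computed
    # as functions of the mapping parameters.
    best_metrics, best_mapping = None, None
    for mapping in candidate_mappings:
        metrics = (cycles_model(mapping), bytes_model(mapping))
        if best_metrics is None or metrics < best_metrics:
            best_metrics, best_mapping = metrics, mapping
    return best_mapping, best_metrics
```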
-
Patent number: 12094525
Abstract: A memory system, a method of assembling the memory system, and a computer system. The memory system includes a global memory device coupled to a plurality of processing elements. The global memory device is positioned external to a chip on which the plurality of processing elements reside. The memory system also includes at least one main scratchpad coupled to at least one processing element of the plurality of processing elements and the global memory device. The memory system further includes a plurality of auxiliary scratchpads coupled to the plurality of processing elements and the global memory device. The auxiliary scratchpads are configured to store static tensors. At least a portion of the plurality of auxiliary scratchpads are configured as a unitary multichannel device.
Type: Grant
Filed: July 22, 2022
Date of Patent: September 17, 2024
Assignee: International Business Machines Corporation
Inventors: Ravi Nair, Swagath Venkataramani, Vijayalakshmi Srinivasan, Arvind Kumar
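A placement policy consistent with the abstract, static tensors to the auxiliary scratchpads, transient data to the main scratchpad, spill to off-chip global memory, reduces to a small dispatch. Purely illustrative:

```python
def place_tensor(tensor_kind):
    # Static tensors (e.g., weights) live in the auxiliary scratchpads,
    # which together can act as a unitary multichannel device;
    # activations use the main scratchpad; everything else falls back
    # to the off-chip global memory device.
    if tensor_kind == "static":
        return "auxiliary_scratchpad"
    if tensor_kind == "activation":
        return "main_scratchpad"
    return "global_memory"
```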
-
Patent number: 12056594
Abstract: A compensated deep neural network (compensated-DNN) is provided. A first vector having a set of components and a second vector having a set of corresponding components are received. A component of the first vector includes a first quantized value and a first compensation instruction, and a corresponding component of the second vector includes a second quantized value and a second compensation instruction. The first quantized value is multiplied with the second quantized value to compute a raw product value. The raw product value is compensated for a quantization error according to the first and second compensation instructions to produce a compensated product value. The compensated product value is added into an accumulated value for the dot product. The accumulated value is converted into an output vector of the dot product. The output vector includes an output quantized value and an output compensation instruction.
Type: Grant
Filed: June 27, 2018
Date of Patent: August 6, 2024
Assignee: International Business Machines Corporation
Inventors: Swagath Venkataramani, Shubham Jain, Vijayalakshmi Srinivasan, Jungwook Choi, Leland Chang
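The compensated dot product can be sketched with first-order error terms. The exact compensation instructions are defined by the patent, so the arithmetic below is a plausible stand-in, not the patented encoding:

```python
def compensated_dot(xs, ys):
    # Each operand is a (quantized_value, error_estimate) pair. The raw
    # product q1*q2 is compensated with first-order error terms before
    # being accumulated into the dot product.
    acc = 0.0
    for (q1, e1), (q2, e2) in zip(xs, ys):
        raw = q1 * q2                     # raw product of quantized values
        acc += raw + q1 * e2 + q2 * e1    # compensated product value
    return acc

# Two 2-element vectors, each value carried as (quantized, error).
print(compensated_dot([(1.0, 0.05), (2.0, -0.1)],
                      [(3.0, 0.0), (0.5, 0.02)]))
```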
-
Publication number: 20240143982
Abstract: Fused channel and/or fused filter convolutions for fast deep neural network execution are provided. In one aspect, a system includes: a processor, connected to a memory, configured to: implement an approximated datapath in a deep neural network having a sequence of adders and multipliers for adding up operands to provide accumulated sums for two or more groups of neurons in the deep neural network, and multiplying the accumulated sums to obtain a product; and make an inference using the deep neural network based on the product from the approximated datapath. A method for approximation in a deep neural network is also provided.
Type: Application
Filed: October 26, 2022
Publication date: May 2, 2024
Inventors: Swagath Venkataramani, Sarada Krithivasan, Vijayalakshmi Srinivasan
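The approximated datapath, adders first and a single multiplier afterwards, trades many per-element multiplies for one multiply over group sums. A minimal sketch, with illustrative names:

```python
def approx_group_product(group_a, group_b):
    # Adders accumulate the operands of each group of neurons first;
    # a single multiplier then combines the accumulated sums.
    sum_a = sum(group_a)     # accumulated sum for group 1
    sum_b = sum(group_b)     # accumulated sum for group 2
    return sum_a * sum_b     # one multiply replaces len(a)*len(b) multiplies

print(approx_group_product([0.1, 0.2, 0.3], [0.4, 0.5]))
```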
-
Publication number: 20240118892
Abstract: Methods and apparatuses relating to processing neural networks are described. In one embodiment, an apparatus to process a neural network includes a plurality of fully connected layer chips coupled by an interconnect; a plurality of convolutional layer chips each coupled by an interconnect to a respective fully connected layer chip of the plurality of fully connected layer chips; and each of the plurality of fully connected layer chips and the plurality of convolutional layer chips including an interconnect to couple each of a forward propagation compute intensive tile, a back propagation compute intensive tile, and a weight gradient compute intensive tile of a column of compute intensive tiles between a first memory intensive tile and a second memory intensive tile.
Type: Application
Filed: December 18, 2023
Publication date: April 11, 2024
Inventors: Swagath Venkataramani, Dipankar Das, Ashish Ranjan, Subarno Banerjee, Sasikanth Avancha, Ashok Jagannathan, Ajaya V. Durg, Dheemanth Nagaraj, Bharat Kaul, Anand Raghunathan
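The chip/tile hierarchy in the abstract can be modeled as plain data for clarity. This structural sketch is an assumption about how to read the claim language, nothing more:

```python
from dataclasses import dataclass, field

@dataclass
class ComputeColumn:
    # A column of compute-intensive tiles, coupled between a first and
    # a second memory-intensive tile.
    compute_tiles: tuple = ("forward_prop", "back_prop", "weight_gradient")
    memory_tiles: tuple = ("mem_tile_first", "mem_tile_second")

@dataclass
class NeuralNetworkProcessor:
    # Fully connected layer chips, each interconnected with a
    # respective convolutional layer chip.
    fc_chips: list = field(default_factory=lambda: [ComputeColumn()])
    conv_chips: list = field(default_factory=lambda: [ComputeColumn()])
```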
-
Patent number: 11941111
Abstract: Indices of non-zero weights may be stored in an index register file included within each of a plurality of processor elements in a systolic array. Non-zero weights may be stored in a register file associated with the index register file. Input values (e.g., dense input values) corresponding to a single block in a data structure may be sent to the plurality of processor elements. Those of the input values corresponding to the indices of non-zero weights in the index register file may be selected for performing a multiply-accumulate ("MAC") operation based on sending the plurality of input values to one or more of the plurality of processor elements. The indices of the plurality of non-zero weights are stored in an index data stick. The values of the plurality of non-zero weights are stored in a value data stick.
Type: Grant
Filed: July 31, 2021
Date of Patent: March 26, 2024
Assignee: International Business Machines Corporation
Inventors: Sanchari Sen, Swagath Venkataramani, Vijayalakshmi Srinivasan, Kailash Gopalakrishnan, Sunil K. Shukla
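The index-driven selection reduces to a compact sparse MAC: only non-zero weights are stored, and their indices pick which dense input values participate. A small Python sketch:

```python
def sparse_mac(dense_inputs, nz_weights, nz_indices):
    # The index register file holds the indices of the non-zero weights;
    # each index selects the matching dense input value for the MAC.
    acc = 0.0
    for w, i in zip(nz_weights, nz_indices):
        acc += w * dense_inputs[i]
    return acc

# Weights [0, 2, 0, 5] stored compactly: values [2, 5], indices [1, 3].
print(sparse_mac([1.0, 1.5, 2.0, 2.5], [2.0, 5.0], [1, 3]))   # 15.5
```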
-
Publication number: 20240029786
Abstract: A memory system, a method of assembling the memory system, and a computer system. The memory system includes a global memory device coupled to a plurality of processing elements. The global memory device is positioned external to a chip on which the plurality of processing elements reside. The memory system also includes at least one main scratchpad coupled to at least one processing element of the plurality of processing elements and the global memory device. The memory system further includes a plurality of auxiliary scratchpads coupled to the plurality of processing elements and the global memory device. The auxiliary scratchpads are configured to store static tensors. At least a portion of the plurality of auxiliary scratchpads are configured as a unitary multichannel device.
Type: Application
Filed: July 22, 2022
Publication date: January 25, 2024
Inventors: Ravi Nair, Swagath Venkataramani, Vijayalakshmi Srinivasan, Arvind Kumar
-
Publication number: 20240028899
Abstract: Embodiments are provided for efficient realization of memory-bound operations in a computing system by a processor. Data may be read from and written to a memory at a granular level using a stickification operation. One or more regions of activation and weight tensor data on the memory may be annotated by coupling the stickification operation with padding.
Type: Application
Filed: July 25, 2022
Publication date: January 25, 2024
Applicant: International Business Machines Corporation
Inventors: Swagath Venkataramani, Vijayalakshmi Srinivasan, Shubham Jain, Sarada Krithivasan, Sanchari Sen
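Stickification can be pictured as packing a flat tensor into fixed-size sticks, with padding marking the invalid tail region. The stick size below is an assumed value, not one specified by the application:

```python
def stickify(elements, stick_size=8, pad=0.0):
    # Pack elements into fixed-size sticks; the tail stick is padded,
    # and the padded range annotates where valid data ends.
    sticks = []
    for i in range(0, len(elements), stick_size):
        stick = list(elements[i:i + stick_size])
        stick += [pad] * (stick_size - len(stick))
        sticks.append(stick)
    return sticks

print(stickify(list(range(10)), stick_size=8))  # second stick padded with 0.0
```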
-
Patent number: 11831467
Abstract: Embodiments for providing enhanced multicast data transfer for ring topology based artificial intelligence systems are disclosed. Multicast data is sent to a plurality of disjointed cores in a multicast group according to a first multicast mode, a second multicast mode, or a third multicast mode, where the first multicast mode sends a first half of the multicast data on a first multicast ring and a second half on a second multicast ring, the second multicast mode sends the multicast data on either the first multicast ring or the second multicast ring, and the third multicast mode replicates the multicast data and sends the multicast data to both the first multicast ring and the second multicast ring.
Type: Grant
Filed: May 13, 2022
Date of Patent: November 28, 2023
Assignee: International Business Machines Corporation
Inventors: Shubham Jain, Swagath Venkataramani, Vijayalakshmi Srinivasan, Sunil K. Shukla, Martin A. Lutz
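The three modes map onto a short dispatch. Rings are modeled as plain lists here, purely for illustration:

```python
def multicast(data, mode, ring_a, ring_b):
    # Mode 1: first half of the data on one ring, second half on the other.
    # Mode 2: all data on a single ring.
    # Mode 3: the data replicated onto both rings.
    if mode == "split":
        half = len(data) // 2
        ring_a.extend(data[:half])
        ring_b.extend(data[half:])
    elif mode == "single":
        ring_a.extend(data)
    elif mode == "replicate":
        ring_a.extend(data)
        ring_b.extend(data)

ring_a, ring_b = [], []
multicast([1, 2, 3, 4], "split", ring_a, ring_b)
print(ring_a, ring_b)   # [1, 2] [3, 4]
```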
-
Publication number: 20230370304
Abstract: Embodiments for providing enhanced multicast data transfer for ring topology based artificial intelligence systems are disclosed. Multicast data is sent to a plurality of disjointed cores in a multicast group according to a first multicast mode, a second multicast mode, or a third multicast mode, where the first multicast mode sends a first half of the multicast data on a first multicast ring and a second half on a second multicast ring, the second multicast mode sends the multicast data on either the first multicast ring or the second multicast ring, and the third multicast mode replicates the multicast data and sends the multicast data to both the first multicast ring and the second multicast ring.
Type: Application
Filed: May 13, 2022
Publication date: November 16, 2023
Applicant: International Business Machines Corporation
Inventors: Shubham Jain, Swagath Venkataramani, Vijayalakshmi Srinivasan, Sunil K. Shukla, Martin A. Lutz
-
Patent number: 11810340
Abstract: A system includes a determination component that determines output for successively larger neural networks of a set, and a consensus component that determines consensus between a first neural network and a second neural network of the set. A linear chain of increasingly complex neural networks trained on progressively larger inputs is utilized (increasing complexity is generally representative of increased accuracy). Outputs of progressively larger networks are computed until a consensus point is reached, where two or more successive large networks yield the same inference output. At such a point of consensus, the larger neural network of the set reaching consensus can be deemed appropriately sized (or of sufficient complexity) for the classification task at hand.
Type: Grant
Filed: November 29, 2017
Date of Patent: November 7, 2023
Assignee: International Business Machines Corporation
Inventors: Pradip Bose, Alper Buyuktosunoglu, Schuyler Eldridge, Karthik V. Swaminathan, Swagath Venkataramani
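The consensus rule is a simple early-exit loop over networks ordered from small to large. A sketch, with callables standing in for the trained networks:

```python
def consensus_inference(networks, x):
    # Run successively larger networks until two successive networks
    # yield the same inference output; that network is deemed
    # sufficiently complex for this input.
    previous = None
    for net in networks:          # ordered from smallest to largest
        output = net(x)
        if output == previous:    # consensus point reached
            return output
        previous = output
    return previous               # no consensus: fall back to the largest

nets = [lambda x: "cat", lambda x: "dog", lambda x: "dog", lambda x: "cat"]
print(consensus_inference(nets, None))   # "dog": networks 2 and 3 agree
```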
-
Publication number: 20230344667
Abstract: Embodiments for providing single-producer-multiple-consumers synchronization and multicast data transfer by a processor are disclosed. Multicast data transfer is synchronized based on an identification tag and a request from each one of a plurality of recipients for the multicast data. The multicast data is transferred to each of the plurality of recipients based on the identification tag, the request from each one of the plurality of recipients, and a list of the plurality of recipients.
Type: Application
Filed: April 22, 2022
Publication date: October 26, 2023
Applicant: International Business Machines Corporation
Inventors: Vijayalakshmi Srinivasan, Scot Rider, Swagath Venkataramani, Kailash Gopalakrishnan, Sunil K. Shukla, Brian William Curran, Martin A. Lutz
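One way to read the synchronization: the producer publishes under a tag, and the transfer fires only once every recipient on the list has requested that tag. A toy model, with all names assumed:

```python
class MulticastBuffer:
    # Single producer, multiple consumers: data under an identification
    # tag is transferred only after every recipient on the list has
    # requested it.
    def __init__(self, recipients):
        self.recipients = set(recipients)
        self.requests = {}                 # tag -> recipients that asked

    def request(self, recipient, tag):
        self.requests.setdefault(tag, set()).add(recipient)

    def try_send(self, tag, data):
        if self.requests.get(tag, set()) >= self.recipients:
            return {r: data for r in self.recipients}   # multicast fires
        return None                                     # still waiting

buf = MulticastBuffer({"core0", "core1"})
buf.request("core0", tag=7)
print(buf.try_send(7, "payload"))   # None: core1 has not requested yet
buf.request("core1", tag=7)
print(buf.try_send(7, "payload"))   # delivered to both recipients
```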
-
Publication number: 20230267003
Abstract: Processing input data for transmittal to a data consumer such as an artificial intelligence engine is performed by arranging the input data into a uniform structure made up of sticks of data combined to form pages of sticks. A stick is a fixed-size set of input data elements. A masking pattern is established for sticks of data having certain ranges of invalid data, allowing consumption of partial sticks while maintaining validity of the input data being transferred. The mask pattern is derived based on set-active-mask-and-value (SAMV) instructions. The derived mask pattern is carried forward for subsequent load instructions to the data consumer.
Type: Application
Filed: February 23, 2022
Publication date: August 24, 2023
Inventors: Cedric Lichtenau, Vijayalakshmi Srinivasan, Sunil K. Shukla, Swagath Venkataramani, Kailash Gopalakrishnan, Holger Horbach, Razvan Peter Figuli, Wei Wang, Yulong Li, Martin A. Lutz
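The SAMV idea, derive a validity mask once and carry it forward for subsequent loads, can be sketched in two small functions. The names and the fill value are assumptions:

```python
def set_active_mask(stick_size, valid_count):
    # Derive the mask pattern: which elements of a stick hold valid data.
    return [i < valid_count for i in range(stick_size)]

def masked_load(stick, mask, fill=0.0):
    # Apply the carried-forward mask on a load, so partial sticks are
    # consumed without exposing invalid data.
    return [v if m else fill for v, m in zip(stick, mask)]

mask = set_active_mask(stick_size=4, valid_count=3)
print(masked_load([9.0, 8.0, 7.0, 6.0], mask))   # [9.0, 8.0, 7.0, 0.0]
```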
-
Patent number: 11681529
Abstract: Systems, methods, and apparatuses relating to access synchronization in a shared memory are described. In one embodiment, a processor includes a decoder to decode an instruction into a decoded instruction, and an execution unit to execute the decoded instruction to: receive a first input operand of a memory address to be tracked and a second input operand of an allowed sequence of memory accesses to the memory address, and cause a block of a memory access that violates the allowed sequence of memory accesses to the memory address. In one embodiment, a circuit separate from the execution unit compares a memory address for a memory access request to one or more memory addresses in a tracking table, and blocks a memory access for the memory access request when a type of access violates a corresponding allowed sequence of memory accesses to the memory address for the memory access request.
Type: Grant
Filed: August 24, 2021
Date of Patent: June 20, 2023
Assignee: Intel Corporation
Inventors: Swagath Venkataramani, Dipankar Das, Sasikanth Avancha, Ashish Ranjan, Subarno Banerjee, Bharat Kaul, Anand Raghunathan
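The tracking behavior can be modeled as a table from address to its remaining allowed access sequence, blocking anything out of order. A software sketch of hardware behavior, with hypothetical names:

```python
class AccessTracker:
    # Tracking table: memory address -> allowed sequence of access types.
    def __init__(self):
        self.table = {}

    def track(self, address, allowed_sequence):
        self.table[address] = list(allowed_sequence)

    def access(self, address, kind):
        allowed = self.table.get(address)
        if allowed is None:
            return True                  # untracked address: not blocked
        if allowed and allowed[0] == kind:
            allowed.pop(0)               # consume the expected access
            return True
        return False                     # violates the sequence: block

t = AccessTracker()
t.track(0x1000, ["write", "read"])
print(t.access(0x1000, "read"))    # False: a read before the write is blocked
print(t.access(0x1000, "write"))   # True
```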
-
Patent number: 11669489
Abstract: A systolic array can be configured to skip distributed operands that have zero values, resulting in improved resource efficiency. A skip module is introduced to receive operands from memory, identify whether they have a zero value or not, and, if they are nonzero, generate an operand vector including an index before sending the operand vector to a processing element.
Type: Grant
Filed: September 30, 2021
Date of Patent: June 6, 2023
Assignee: International Business Machines Corporation
Inventors: Swagath Venkataramani, Sanchari Sen, Vijayalakshmi Srinivasan, Ankur Agrawal, Sunil K. Shukla, Bruce Fleischer, Kailash Gopalakrishnan
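The skip module's behavior is easy to express as a filter that tags surviving operands with their index. A minimal sketch:

```python
def skip_module(operands):
    # Drop zero-valued operands; forward the rest as (value, index)
    # operand vectors to the processing elements.
    for index, value in enumerate(operands):
        if value != 0:
            yield (value, index)

print(list(skip_module([0.0, 3.0, 0.0, 7.0])))   # [(3.0, 1), (7.0, 3)]
```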
-
Publication number: 20230109301
Abstract: A systolic array can be configured to skip distributed operands that have zero values, resulting in improved resource efficiency. A skip module is introduced to receive operands from memory, identify whether they have a zero value or not, and, if they are nonzero, generate an operand vector including an index before sending the operand vector to a processing element.
Type: Application
Filed: September 30, 2021
Publication date: April 6, 2023
Inventors: Swagath Venkataramani, Sanchari Sen, Vijayalakshmi Srinivasan, Ankur Agrawal, Sunil K. Shukla, Bruce Fleischer, Kailash Gopalakrishnan
-
Publication number: 20230099608
Abstract: A system comprises an analog resistive processing unit (RPU) system and one or more processors. The analog RPU system comprises an array of RPU cells. The one or more processors are configured to: configure the analog RPU system to implement a convolutional neural network comprising a convolutional layer that comprises at least one kernel matrix; program the array of RPU cells to store a transformed kernel matrix which is generated by applying a first transformation process to the kernel matrix using a first predefined transformation matrix; and utilize the analog RPU system to perform an analog convolution operation by performing analog matrix-vector multiplication operations using the transformed kernel matrix and input vectors of a transformed data matrix, thereby generating a transformed convolution output matrix, wherein the transformed data matrix is generated by applying a second transformation process to a data matrix using a second predefined transformation matrix.
Type: Application
Filed: September 24, 2021
Publication date: March 30, 2023
Inventors: Swagath Venkataramani, Shubham Jain, Leland Chang
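The two-sided transformation has the same shape as a Winograd-style factorization, which is one well-known instance of such predefined transformation matrices. The sketch below assumes that reading; G and B are stand-ins for the two predefined matrices, and the Winograd F(2,3) values are used only to make the example concrete:

```python
import numpy as np

def transformed_conv(kernel, data, G, B):
    # The kernel is pre-transformed (and programmed onto the RPU cells);
    # the data matrix gets its own transformation; the analog array then
    # performs matrix-vector products, one per column of the transformed
    # data matrix, yielding a transformed convolution output matrix.
    kernel_t = G @ kernel @ G.T     # first transformation process
    data_t = B.T @ data @ B         # second transformation process
    return kernel_t @ data_t        # analog matrix-vector multiplications

# Winograd F(2,3) transformation matrices, used here as concrete values.
G = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1.0]])
B = np.array([[1, 0, 0, 0], [0, 1, -1, 1], [-1, 1, 1, 0], [0, 0, 0, -1.0]])
out_t = transformed_conv(np.random.rand(3, 3), np.random.rand(4, 4), G, B)
```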