Patents by Inventor Soumitra Chatterjee
Soumitra Chatterjee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12254416
Abstract: Examples disclosed herein relate to using a compiler for implementing tensor operations in a neural network-based computing system. A compiler defines the tensor operations to be implemented. The compiler identifies a binary tensor operation receiving input operands from a first output tensor of a first tensor operation and a second output tensor of a second tensor operation from two different paths of the convolutional neural network. For the binary tensor operation, the compiler allocates a buffer space for a first input operand in the binary tensor operation based on a difference between a count of instances of the first output tensor and a count of instances of the second output tensor.
Type: Grant
Filed: April 13, 2021
Date of Patent: March 18, 2025
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Jitendra Onkar Kolhe, Soumitra Chatterjee, Vaithyalingam Nagendran, Shounak Bandopadhyay
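To make the buffer-sizing idea in this abstract concrete, here is a minimal Python sketch; the function name, the byte counts, and the "surplus plus one" sizing rule are illustrative assumptions, not the patented compiler's actual allocation scheme.

```python
# Illustrative sketch: sizing the buffer for the first input operand of a
# binary tensor operation from the difference in instance counts of the two
# producer tensors (names and sizing rule are assumptions).

def allocate_first_operand_buffer(count_first, count_second, tensor_bytes):
    """Return a buffer large enough to hold the instances of the first
    output tensor that may accumulate while the second path of the network
    catches up and produces its matching instances."""
    # If the first path produces more instances than the second, the
    # surplus must be buffered until the slower path delivers its operands.
    surplus = max(count_first - count_second, 0)
    slots = surplus + 1          # at least one slot for the current instance
    return bytearray(slots * tensor_bytes)

buf = allocate_first_operand_buffer(count_first=4, count_second=1, tensor_bytes=1024)
print(len(buf))                  # 4096 bytes: 3 surplus slots + 1 current slot
```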
-
Publication number: 20250036550
Abstract: Systems and methods sanitize computer code. In particular, fragile portions of computer code are identified based on instances of bug/defect-related churn data associated with the computer code. A control flow graph representative of the computer code may be generated, the control flow graph including nodes and edges. Nodes whose source location falls within the reported fragile sections are identified, and may be flagged as being susceptible. Thereafter, a sanitizer is run on the flagged nodes.
Type: Application
Filed: July 28, 2023
Publication date: January 30, 2025
Inventors: Soumitra Chatterjee, Ritanya Bhaskar Bharadwaj, Veena Konnanath
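A minimal sketch of the node-flagging step described above, assuming a simplified node structure with line ranges; the classes, field names, and overlap test are illustrative, not the published method's data model.

```python
# Illustrative sketch: flagging control-flow-graph nodes whose source lines
# fall inside reported fragile ranges, so a sanitizer can target only them.

from dataclasses import dataclass

@dataclass
class CfgNode:
    name: str
    start_line: int
    end_line: int
    flagged: bool = False

def flag_fragile_nodes(nodes, fragile_ranges):
    """Mark nodes that overlap any (start, end) fragile line range."""
    for node in nodes:
        for lo, hi in fragile_ranges:
            if node.start_line <= hi and node.end_line >= lo:
                node.flagged = True
                break
    return [n for n in nodes if n.flagged]

nodes = [CfgNode("parse_header", 10, 40), CfgNode("checksum", 50, 75)]
print([n.name for n in flag_fragile_nodes(nodes, fragile_ranges=[(60, 70)])])
# ['checksum'] -- only the flagged node would be handed to the sanitizer
```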
-
Publication number: 20250021273
Abstract: In some examples, a processor receives a first request to allocate a memory region for a collective operation by process entities in a plurality of computer nodes. In response to the first request, the processor creates a virtual address for the memory region and allocates the memory region in a network-attached memory coupled to the plurality of computer nodes over a network. The processor correlates the virtual address to an address of the memory region in mapping information. The processor identifies the memory region in the network-attached memory by obtaining the address of the memory region from the mapping information using the virtual address in a second request. In response to the second request, the processor performs the collective operation.
Type: Application
Filed: July 10, 2023
Publication date: January 16, 2025
Inventors: Soumitra Chatterjee, Chinmay Ghosh, Mashood Abdulla Kodavanji, Sharad Singhal
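The following sketch shows only the virtual-address bookkeeping implied by the abstract; the counter, the mapping table, and the placeholder fam_allocate() call are stand-ins for whatever fabric API the actual system uses.

```python
# Illustrative sketch: first request allocates a region and records its
# mapping; second request resolves the virtual address and runs the
# collective operation (all names here are assumptions).

import itertools

_va_counter = itertools.count(0x1000, 0x1000)    # fake virtual addresses
_mapping = {}                                     # virtual address -> FAM address

def fam_allocate(size):
    # Placeholder for allocating `size` bytes in network-attached memory.
    return 0xFA40000

def allocate_region(size):
    """First request: create a virtual address and record where the region
    actually lives in the network-attached memory."""
    va = next(_va_counter)
    _mapping[va] = fam_allocate(size)
    return va

def run_collective(va, op, values):
    """Second request: resolve the virtual address via the mapping
    information, then perform the collective operation on that region."""
    region_addr = _mapping[va]
    return op(values), hex(region_addr)

va = allocate_region(4096)
print(run_collective(va, sum, [1, 2, 3]))
```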
-
Publication number: 20240362163
Abstract: Some examples relate to providing a fabric-attached memory (FAM) for applications using a message passing procedure. In an example, a remotely accessible memory creation function of a message passing procedure is modified to include a reference to a region of memory in a FAM. A remotely accessible memory data structure representing a remotely accessible memory is created through the remotely accessible memory creation function. When an application calls a message passing function of the message passing procedure, a determination is made whether the remotely accessible memory data structure in the message passing function includes a reference to the region of memory in the FAM. In response to a determination that the remotely accessible memory data structure includes a reference to the region of memory in the FAM, the message passing function call is routed to a FAM message passing function corresponding to the message passing function.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Soumitra Chatterjee, Chinmay Ghosh, Mashood Abdulla Kodavanji, Sharad Singhal
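A small sketch of the routing decision only, under the assumption that the remotely accessible memory data structure can be modeled as an object carrying an optional FAM reference; the Window class and both "put" functions are invented stand-ins, not the real message-passing library's API.

```python
# Illustrative sketch: dispatch a message passing call to a FAM-specific
# variant when the memory data structure references a FAM region.

class Window:
    def __init__(self, fam_region=None):
        self.fam_region = fam_region     # reference to a FAM region, or None

def regular_put(window, data):
    return f"regular put of {len(data)} bytes"

def fam_put(window, data):
    return f"FAM put of {len(data)} bytes into region {window.fam_region}"

def put(window, data):
    """Route the call to the FAM variant when the remotely accessible
    memory data structure references a region in fabric-attached memory."""
    if window.fam_region is not None:
        return fam_put(window, data)
    return regular_put(window, data)

print(put(Window(), b"abcd"))                      # regular path
print(put(Window(fam_region="regionA"), b"abcd"))  # FAM path
```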
-
Publication number: 20240303075
Abstract: Systems and methods are provided for identifying and reporting possibly fragile lines of code in a code repository. In particular, some examples cluster the lines of code containing similar values of bug/defect-related churn data instances and report the lines of code with high counts of bug/defect-related churn.
Type: Application
Filed: March 9, 2023
Publication date: September 12, 2024
Inventors: Soumitra Chatterjee, Ritanya Bharadwaj, Veena Konnanath, Sunil Kuravinakop, Balaji Sankar Naga Sai Sandeep Kosuri
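A toy sketch of the clustering-and-reporting idea, assuming churn is already counted per line; the coarse bucketing used for "similar values" is an assumption standing in for whatever clustering the published method actually uses.

```python
# Illustrative sketch: group lines by similar bug/defect-related churn
# counts and report the highest-churn group as potentially fragile.

from collections import defaultdict

def report_fragile_lines(churn_per_line, bucket_size=5):
    """churn_per_line maps a line number to the number of bug/defect-related
    changes it has seen; lines are clustered into coarse churn buckets and
    the highest bucket is reported."""
    clusters = defaultdict(list)
    for line, churn in churn_per_line.items():
        clusters[churn // bucket_size].append(line)
    highest = max(clusters)
    return sorted(clusters[highest])

churn = {10: 1, 42: 12, 43: 14, 90: 2}
print(report_fragile_lines(churn))   # [42, 43]
```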
-
Patent number: 11874688
Abstract: Example techniques for identification of diagnostic messages corresponding to exceptions are described. A determination model may determine whether a set of diagnostic messages generated based on analysis of a source code includes a diagnostic message that likely corresponds to an exception. The determination may be used to identify a set of diagnostic messages including the diagnostic message that likely corresponds to an exception.
Type: Grant
Filed: November 4, 2021
Date of Patent: January 16, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Soumitra Chatterjee, Balasubramanian Viswanathan
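To illustrate the filtering role of a determination model, here is a deliberately trivial keyword-based stand-in; the patent does not disclose this heuristic, and the hint list and function names are purely assumptions.

```python
# Illustrative sketch: separate diagnostic messages that likely correspond
# to runtime exceptions from other diagnostics (heuristic is an assumption).

EXCEPTION_HINTS = ("null dereference", "out of bounds", "divide by zero")

def likely_exception(message):
    """Return True when a diagnostic message likely corresponds to an
    exception rather than, say, a purely stylistic issue."""
    text = message.lower()
    return any(hint in text for hint in EXCEPTION_HINTS)

def select_exception_diagnostics(messages):
    return [m for m in messages if likely_exception(m)]

diags = ["warning: unused variable 'tmp'",
         "warning: possible null dereference of 'ptr'"]
print(select_exception_diagnostics(diags))
```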
-
Patent number: 11645358
Abstract: In an example, a neural network program corresponding to a neural network model is received. The neural network program includes matrices, vectors, and matrix-vector multiplication (MVM) operations. A computation graph corresponding to the neural network model is generated. The computation graph includes a plurality of nodes, each node representing an MVM operation, a matrix, or a vector. Further, a class model corresponding to the neural network model is populated with a data structure pointing to the computation graph. The computation graph is traversed based on the class model. Based on the traversal, the plurality of MVM operations are assigned to MVM units of a neural network accelerator. Each MVM unit can perform an MVM operation. Based on assignment of the plurality of MVM operations, an executable file is generated for execution by the neural network accelerator.
Type: Grant
Filed: January 29, 2019
Date of Patent: May 9, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
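The sketch below shows only the assignment pass: the graph is reduced to an ordered node list and MVM operations are mapped round-robin onto MVM units. The round-robin policy and the flat graph encoding are assumptions, not the compiler's actual placement logic.

```python
# Illustrative sketch: traverse computation-graph nodes and assign every
# MVM operation to an MVM unit of the accelerator.

def assign_mvm_units(nodes, num_units):
    """nodes is an ordered list of (name, kind) pairs from a traversal;
    every node of kind 'mvm' is mapped to an MVM unit index."""
    assignment, unit = {}, 0
    for name, kind in nodes:
        if kind == "mvm":
            assignment[name] = unit % num_units
            unit += 1
    return assignment

graph = [("W1", "matrix"), ("x", "vector"), ("mvm1", "mvm"),
         ("W2", "matrix"), ("mvm2", "mvm")]
print(assign_mvm_units(graph, num_units=2))   # {'mvm1': 0, 'mvm2': 1}
```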
-
Publication number: 20230028560
Abstract: Example techniques for identification of diagnostic messages corresponding to exceptions are described. A determination model may determine whether a set of diagnostic messages generated based on analysis of a source code includes a diagnostic message that likely corresponds to an exception. The determination may be used to identify a set of diagnostic messages including the diagnostic message that likely corresponds to an exception.
Type: Application
Filed: November 4, 2021
Publication date: January 26, 2023
Inventors: Soumitra Chatterjee, Balasubramanian Viswanathan
-
Patent number: 11379712
Abstract: Disclosed is a method, system, and computer readable medium to manage (and possibly replace) cycles in graphs for a computer device. The method includes detecting a compound operation including a first tensor, the compound operation resulting from source code represented in a first graph structure as part of a compilation process from source code to binary executable code. To address a detected cycle, an instance of a proxy class may be created to store a pointer to a proxy instance of the first tensor based on the detection. In some examples, using the instance of the proxy class facilitates implementation of a level of indirection to replace a cyclical portion of the graph structure with an acyclical portion such that the second graph structure indicates assignment of a result of the compound operation to the proxy instance of the first tensor. Optimization may reduce a total number of indirection replacements.
Type: Grant
Filed: October 9, 2018
Date of Patent: July 5, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
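A minimal sketch of the level of indirection: a compound operation such as A += f(A) creates a cycle on tensor A, and the back edge is redirected to a proxy instance of A so the graph becomes acyclic. The edge-list encoding, class name, and proxy naming convention are assumptions for illustration only.

```python
# Illustrative sketch: replace the cyclical edge with an edge to a proxy
# instance of the first tensor, yielding an acyclic graph.

class TensorProxy:
    def __init__(self, tensor_name):
        self.target = tensor_name + "_proxy"   # points at a proxy instance

def rewrite_cycle(edges, compound_result, first_tensor):
    """Replace the back edge 'compound_result -> first_tensor' with an edge
    to a proxy of first_tensor."""
    proxy = TensorProxy(first_tensor)
    return [(src, proxy.target if (src, dst) == (compound_result, first_tensor) else dst)
            for src, dst in edges]

edges = [("A", "add1"), ("add1", "A")]         # A += ... creates a cycle on A
print(rewrite_cycle(edges, compound_result="add1", first_tensor="A"))
# [('A', 'add1'), ('add1', 'A_proxy')]
```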
-
Publication number: 20220198250
Abstract: Examples of performing convolution operations based on a weighted matrix are described. In an example, an input data stream vector is processed using a weighted matrix stored onto a processing unit of a neural network accelerator. The weighted matrix may correspond to a first convolution filter and a second convolution filter.
Type: Application
Filed: April 20, 2021
Publication date: June 23, 2022
Inventors: Jitendra Onkar Kolhe, Soumitra Chatterjee
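One way to read "a weighted matrix corresponding to two filters" is shown below: two 1-D filters are packed as shifted rows of a single matrix so one matrix-vector product yields both filters' outputs. Plain Python stands in for the accelerator's processing unit, and the specific packing is an assumption rather than the published layout.

```python
# Illustrative sketch: build a weighted matrix from two 1-D convolution
# filters and apply it to an input data stream in one matrix-vector product.

def conv_matrix(filt, input_len):
    """Rows of a (valid) sliding-window convolution as shifted copies of filt."""
    k, rows = len(filt), []
    for shift in range(input_len - k + 1):
        rows.append([0] * shift + list(filt) + [0] * (input_len - k - shift))
    return rows

def matvec(matrix, vec):
    return [sum(a * b for a, b in zip(row, vec)) for row in matrix]

x = [1, 2, 3, 4]                       # input data stream
weighted = conv_matrix([1, -1], len(x)) + conv_matrix([0.5, 0.5], len(x))
print(matvec(weighted, x))             # first filter's outputs, then second's
```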
-
Publication number: 20220198249
Abstract: Example techniques for causing execution of neural networks are described. A neural network includes a first part and a second part. A determination is made that a first physical resource in a first computing device is to execute the first part and that a second physical resource in a second computing device is to execute the second part. The determination is based on a latency in communication between the first physical resource and the second physical resource. The first computing device and the second computing device are part of a cluster.
Type: Application
Filed: April 15, 2021
Publication date: June 23, 2022
Inventors: Jitendra Onkar Kolhe, Soumitra Chatterjee, Mohan Parthasarathy
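A toy sketch of a latency-driven placement decision: given measured link latencies within the cluster, pick the device for the second part that minimizes latency to the device running the first part. The latency table and node names are invented for illustration.

```python
# Illustrative sketch: choose the device for the second part of the network
# based on communication latency to the device running the first part.

def place_second_part(first_device, latency_ms):
    """latency_ms maps (device_a, device_b) pairs to measured latencies;
    pick the peer with the lowest latency to first_device."""
    candidates = {b: lat for (a, b), lat in latency_ms.items() if a == first_device}
    return min(candidates, key=candidates.get)

latency = {("node0", "node1"): 0.9, ("node0", "node2"): 2.4}
print(place_second_part("node0", latency))   # node1
```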
-
Patent number: 11361050
Abstract: Example implementations relate to assigning dependent matrix-vector multiplication (MVM) operations to consecutive crossbars of a dot product engine (DPE). A method can comprise grouping a first MVM operation of a computation graph with a second MVM operation of the computation graph where the first MVM operation is dependent on a result of the second MVM operation, assigning a first crossbar of a DPE to an operand of the first MVM operation, and assigning a second crossbar of the DPE to an operand of the second MVM operation, wherein the first and second crossbars are consecutive.
Type: Grant
Filed: November 20, 2018
Date of Patent: June 14, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
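A minimal sketch of the assignment step, assuming the dependency grouping has already been done: each (dependent, producer) pair of MVM operations is placed on a consecutive pair of crossbar indices. The data structures and the simple left-to-right allocation are assumptions.

```python
# Illustrative sketch: place grouped, dependent MVM operations on
# consecutive crossbars of a dot product engine.

def assign_consecutive_crossbars(groups, num_crossbars):
    """groups is a list of (dependent_op, producer_op) pairs; each pair gets
    a consecutive pair of crossbar indices."""
    assignment, nxt = {}, 0
    for dependent, producer in groups:
        if nxt + 1 >= num_crossbars:
            raise RuntimeError("not enough crossbars")
        assignment[producer] = nxt        # producer is computed first
        assignment[dependent] = nxt + 1   # consumer sits on the next crossbar
        nxt += 2
    return assignment

print(assign_consecutive_crossbars([("mvm_out", "mvm_hidden")], num_crossbars=8))
# {'mvm_hidden': 0, 'mvm_out': 1}
```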
-
Publication number: 20220121959
Abstract: Examples disclosed herein relate to using a compiler for implementing tensor operations in a neural network-based computing system. A compiler defines the tensor operations to be implemented. The compiler identifies a binary tensor operation receiving input operands from a first output tensor of a first tensor operation and a second output tensor of a second tensor operation from two different paths of the convolutional neural network. For the binary tensor operation, the compiler allocates a buffer space for a first input operand in the binary tensor operation based on a difference between a count of instances of the first output tensor and a count of instances of the second output tensor.
Type: Application
Filed: April 13, 2021
Publication date: April 21, 2022
Inventors: Jitendra Onkar Kolhe, Soumitra Chatterjee, Vaithyalingam Nagendran, Shounak Bandopadhyay
-
Patent number: 11269973
Abstract: Repeating patterns are identified in a matrix. Based on the identification of the repeating patterns, instructions are generated, which are executable by processing cores of a dot product engine to allocate analog multiplication crossbars of the dot product engine to perform multiplication of the matrix with a vector.
Type: Grant
Filed: April 28, 2020
Date of Patent: March 8, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Mashood Abdulla Kodavanji, Soumitra Chatterjee, Chinmay Ghosh, Mohan Parthasarathy
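The sketch below shows one simple reading of "repeating patterns": identical row blocks of the matrix are deduplicated so repeated blocks can reuse the same programmed crossbar. The block granularity and the plan's output format are assumptions, not the instructions the patent's processing cores actually execute.

```python
# Illustrative sketch: detect repeated row blocks in a matrix and allocate
# one crossbar per unique block, letting repeats reuse that crossbar.

def allocate_crossbars_for_patterns(matrix, block_rows):
    """Split the matrix into row blocks, deduplicate identical blocks, and
    return, per block, the index of the crossbar that should hold it."""
    pattern_to_xbar, plan = {}, []
    for i in range(0, len(matrix), block_rows):
        block = tuple(tuple(r) for r in matrix[i:i + block_rows])
        if block not in pattern_to_xbar:
            pattern_to_xbar[block] = len(pattern_to_xbar)
        plan.append(pattern_to_xbar[block])
    return plan

m = [[1, 0], [0, 1],        # block 0
     [2, 2], [2, 2],        # block 1
     [1, 0], [0, 1]]        # block 2 repeats block 0
print(allocate_crossbars_for_patterns(m, block_rows=2))   # [0, 1, 0]
```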
-
Publication number: 20220012573
Abstract: Examples of performing tensor operations by a neural network-based computing system are described. In an example, a first output working set generated by a first operation is obtained, where the first output working set is a set of processed partitioned tensors. The first output working set is then copied to the output working set for retrieval by the second operation.
Type: Application
Filed: April 8, 2021
Publication date: January 13, 2022
Inventors: Vaithyalingam Nagendran, Jitendra Onkar Kolhe, Soumitra Chatterjee, Shounak Bandopadhyay
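A very small sketch of the hand-off between the two operations, assuming a working set can be modeled as a list of tensor partitions; the deep copy and the helper names are assumptions about the mechanism, not the disclosed implementation.

```python
# Illustrative sketch: copy the first operation's output working set into
# the output working set that the second operation will read from.

import copy

def first_operation(partitions):
    # Produce the first output working set: one processed partition per input.
    return [[x * 2 for x in part] for part in partitions]

def hand_off(first_output_working_set, output_working_set):
    # Copy the results so the second operation retrieves a stable snapshot.
    output_working_set.extend(copy.deepcopy(first_output_working_set))

output_ws = []
hand_off(first_operation([[1, 2], [3, 4]]), output_ws)
print(output_ws)                     # [[2, 4], [6, 8]]
```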
-
Publication number: 20210334335
Abstract: Repeating patterns are identified in a matrix. Based on the identification of the repeating patterns, instructions are generated, which are executable by processing cores of a dot product engine to allocate analog multiplication crossbars of the dot product engine to perform multiplication of the matrix with a vector.
Type: Application
Filed: April 28, 2020
Publication date: October 28, 2021
Inventors: Mashood Abdulla Kodavanji, Soumitra Chatterjee, Chinmay Ghosh, Mohan Parthasarathy
-
Patent number: 11132423
Abstract: According to examples, an apparatus may include a processor and a non-transitory computer readable medium having instructions that, when executed by the processor, may cause the processor to partition a matrix of elements into a plurality of sub-matrices of elements. Each sub-matrix of the plurality of sub-matrices may include elements from a set of columns of the matrix of elements that includes a nonzero element. The processor may also assign elements of the plurality of sub-matrices to a plurality of crossbar devices to maximize a number of nonzero elements of the matrix of elements assigned to the crossbar devices.
Type: Grant
Filed: October 31, 2018
Date of Patent: September 28, 2021
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Soumitra Chatterjee, Mashood Abdulla K, Chinmay Ghosh, Mohan Parthasarathy
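A minimal sketch of the column-selection idea: only columns containing a nonzero element are kept and packed into crossbar-sized sub-matrices, so crossbar capacity is not wasted on all-zero columns. The crossbar width and the greedy packing order are assumptions.

```python
# Illustrative sketch: drop all-zero columns and pack the surviving columns
# into fixed-width sub-matrices for the crossbar devices.

def pack_nonzero_columns(matrix, crossbar_width):
    cols = list(zip(*matrix))                               # column-major view
    nonzero_cols = [c for c in cols if any(v != 0 for v in c)]
    # Greedily slice the surviving columns into crossbar-sized sub-matrices.
    return [nonzero_cols[i:i + crossbar_width]
            for i in range(0, len(nonzero_cols), crossbar_width)]

m = [[0, 1, 0, 3],
     [0, 2, 0, 0]]
print(pack_nonzero_columns(m, crossbar_width=2))
# [[(1, 2), (3, 0)]] -- the two all-zero columns never reach a crossbar
```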
-
Publication number: 20200242189
Abstract: In an example, a neural network program corresponding to a neural network model is received. The neural network program includes matrices, vectors, and matrix-vector multiplication (MVM) operations. A computation graph corresponding to the neural network model is generated. The computation graph includes a plurality of nodes, each node representing an MVM operation, a matrix, or a vector. Further, a class model corresponding to the neural network model is populated with a data structure pointing to the computation graph. The computation graph is traversed based on the class model. Based on the traversal, the plurality of MVM operations are assigned to MVM units of a neural network accelerator. Each MVM unit can perform an MVM operation. Based on assignment of the plurality of MVM operations, an executable file is generated for execution by the neural network accelerator.
Type: Application
Filed: January 29, 2019
Publication date: July 30, 2020
Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
-
Patent number: 10726096
Abstract: Systems and methods are provided for sparse matrix vector multiplication with a matrix vector multiplication unit. The method includes partitioning a sparse matrix of entries into a plurality of sub-matrices; mapping each of the sub-matrices to one of a plurality of respective matrix vector multiplication engines; partitioning an input vector into a plurality of sub-vectors; computing, via each matrix vector multiplication engine, a plurality of intermediate result vectors each resulting from a multiplication of one of the sub-matrices and one of the sub-vectors; for each set of rows of the sparse matrix, adding elementwise the intermediate result vectors to produce a plurality of result sub-vectors; and concatenating the result sub-vectors to form a result vector.
Type: Grant
Filed: October 12, 2018
Date of Patent: July 28, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Soumitra Chatterjee, Chinmay Ghosh, Mashood Abdulla Kodavanji, Mohan Parthasarathy
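The blocked flow in this abstract maps directly to a short sketch: partition the matrix and vector into blocks, multiply each block (standing in for one MVM engine), add the intermediate result vectors elementwise per block row, and concatenate. Dense Python lists stand in for sparse storage, and the square block size is an assumption.

```python
# Illustrative sketch of the blocked sparse matrix-vector multiplication
# flow described in the abstract (dense lists stand in for sparse storage).

def matvec(block, subvec):
    return [sum(a * b for a, b in zip(row, subvec)) for row in block]

def blocked_spmv(matrix, vector, block):
    result = []
    for r0 in range(0, len(matrix), block):            # each set of rows
        rows = matrix[r0:r0 + block]
        partials = []
        for c0 in range(0, len(vector), block):        # one "engine" per block
            sub = [row[c0:c0 + block] for row in rows]
            partials.append(matvec(sub, vector[c0:c0 + block]))
        # Elementwise addition of the intermediate result vectors.
        summed = [sum(vals) for vals in zip(*partials)]
        result.extend(summed)                          # concatenation
    return result

A = [[1, 0, 0, 2],
     [0, 3, 0, 0],
     [0, 0, 4, 0],
     [5, 0, 0, 6]]
x = [1, 2, 3, 4]
print(blocked_spmv(A, x, block=2))                     # [9, 6, 12, 29]
```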
-
Publication number: 20200159811
Abstract: Example implementations relate to assigning dependent matrix-vector multiplication (MVM) operations to consecutive crossbars of a dot product engine (DPE). A method can comprise grouping a first MVM operation of a computation graph with a second MVM operation of the computation graph where the first MVM operation is dependent on a result of the second MVM operation, assigning a first crossbar of a DPE to an operand of the first MVM operation, and assigning a second crossbar of the DPE to an operand of the second MVM operation, wherein the first and second crossbars are consecutive.
Type: Application
Filed: November 20, 2018
Publication date: May 21, 2020
Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy