Patents by Inventor Soumitra Chatterjee

Soumitra Chatterjee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12254416
    Abstract: Examples disclosed herein relate to using a compiler for implementing tensor operations in a neural network-based computing system. A compiler defines the tensor operations to be implemented. The compiler identifies a binary tensor operation receiving input operands from a first output tensor of a first tensor operation and a second output tensor of a second tensor operation from two different paths of the convolutional neural network. For the binary tensor operation, the compiler allocates a buffer space for a first input operand in the binary tensor operation based on a difference between a count of instances of the first output tensor and a count of instances of the second output tensor.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: March 18, 2025
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Jitendra Onkar Kolhe, Soumitra Chatterjee, Vaithyalingam Nagendran, Shounak Bandopadhyay
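    Illustrative sketch (an assumption, not the patented allocator): buffer space for the first operand grows with the gap between the two producers' instance counts, since tiles from the faster path must be held until the slower path reaches the join. The function name and slot policy below are hypothetical.
      # Hypothetical sketch of difference-based buffer sizing for a binary tensor op.
      def allocate_first_operand_buffer(first_producer_instances: int,
                                        second_producer_instances: int,
                                        tensor_bytes: int) -> int:
          """Bytes reserved for the first input operand (assumed policy)."""
          # Extra slots cover instances emitted before the slower path catches up.
          outstanding = abs(first_producer_instances - second_producer_instances)
          return (1 + outstanding) * tensor_bytes

      # Example: path A emits 4 tiles and path B emits 1 tile of 64 KiB each.
      print(allocate_first_operand_buffer(4, 1, tensor_bytes=64 * 1024))  # 262144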
  • Publication number: 20250036550
    Abstract: Systems and methods sanitize computer code. In particular, fragile portions of computer code are identified based on instances of bug/defect-related churn data associated with the computer code. A control flow graph representative of the computer code may be generated, the control flow graph including nodes and edges. Nodes whose source location falls within the reported fragile sections are identified, and may be flagged as being susceptible. Thereafter, a sanitizer is run on the flagged nodes.
    Type: Application
    Filed: July 28, 2023
    Publication date: January 30, 2025
    Inventors: Soumitra Chatterjee, Ritanya Bhaskar Bharadwaj, Veena Konnanath
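    Illustrative sketch (assumed flow, not the patented sanitizer): build control flow graph nodes with source line ranges, flag the nodes that overlap the reported fragile sections, and run a stand-in sanitizer only on the flagged nodes. All types and names are hypothetical.
      from dataclasses import dataclass

      @dataclass
      class CfgNode:
          name: str
          start_line: int
          end_line: int

      def flag_fragile_nodes(nodes, fragile_sections):
          """fragile_sections: (start_line, end_line) ranges derived from churn data."""
          flagged = []
          for node in nodes:
              if any(node.start_line <= hi and node.end_line >= lo   # ranges overlap
                     for lo, hi in fragile_sections):
                  flagged.append(node)
          return flagged

      def run_sanitizer(flagged_nodes):
          for node in flagged_nodes:   # stand-in for instrumenting the flagged region
              print(f"sanitizing {node.name} (lines {node.start_line}-{node.end_line})")

      run_sanitizer(flag_fragile_nodes(
          [CfgNode("parse", 10, 40), CfgNode("emit", 41, 60)], [(35, 45)]))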
  • Publication number: 20250021273
    Abstract: In some examples, a processor receives a first request to allocate a memory region for a collective operation by process entities in a plurality of computer nodes. In response to the first request, the processor creates a virtual address for the memory region and allocates the memory region in a network-attached memory coupled to the plurality of computer nodes over a network. The processor correlates the virtual address to an address of the memory region in mapping information. The processor identifies the memory region in the network-attached memory by obtaining the address of the memory region from the mapping information using the virtual address in a second request. In response to the second request, the processor performs the collective operation.
    Type: Application
    Filed: July 10, 2023
    Publication date: January 16, 2025
    Inventors: Soumitra Chatterjee, Chinmay Ghosh, Mashood Abdulla Kodavanji, Sharad Singhal
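    Illustrative sketch (hypothetical names and data structures, not the claimed method): an allocation request creates a virtual address and maps it to a region in network-attached memory; a later request resolves the virtual address through that mapping before performing the collective operation.
      import itertools

      _next_va = itertools.count(0x1000)
      _mapping = {}          # virtual address -> region address in network-attached memory
      _fam = {}              # region address -> backing buffer (stand-in for the fabric memory)

      def allocate_collective_region(size: int) -> int:
          va = next(_next_va)
          region_addr = 0x7000_0000 + len(_fam) * size    # pretend placement in fabric memory
          _fam[region_addr] = bytearray(size)
          _mapping[va] = region_addr                      # correlate VA with the region
          return va

      def perform_collective(va: int, payload: bytes) -> None:
          region_addr = _mapping[va]                      # resolve via the mapping information
          _fam[region_addr][:len(payload)] = payload      # e.g. a gather/reduce target

      va = allocate_collective_region(64)
      perform_collective(va, b"rank-0 contribution")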
  • Publication number: 20240362163
    Abstract: Some examples relate to providing a fabric-attached memory (FAM) for applications using a message passing procedure. In an example, a remotely accessible memory creation function of a message passing procedure is modified to include a reference to a region of memory in a FAM. A remotely accessible memory data structure representing a remotely accessible memory is created through the remotely accessible memory creation function. When an application calls a message passing function of the message passing procedure, a determination is made whether the remotely accessible memory data structure in the message passing function includes a reference to the region of memory in the FAM. In response to a determination that the remotely accessible memory data structure includes a reference to the region of memory in the FAM, the message passing function call is routed to a FAM message passing function corresponding to the message passing function.
    Type: Application
    Filed: April 28, 2023
    Publication date: October 31, 2024
    Inventors: Soumitra Chatterjee, Chinmay Ghosh, Mashood Abdulla Kodavanji, Sharad Singhal
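    Illustrative sketch (hypothetical class and function names, not the library's API): the remotely accessible memory structure carries an optional FAM reference added at creation time, and a message passing call is routed to a FAM-specific variant when that reference is present.
      class RemoteMemory:
          def __init__(self, local_buffer=None, fam_region=None):
              self.local_buffer = local_buffer
              self.fam_region = fam_region      # reference added by the modified creation function

      def fam_put(region, data):
          print(f"FAM put of {len(data)} bytes to {region}")

      def local_put(buffer, data):
          buffer[:len(data)] = data

      def message_put(mem: RemoteMemory, data: bytes):
          if mem.fam_region is not None:        # route to the FAM message passing variant
              fam_put(mem.fam_region, data)
          else:
              local_put(mem.local_buffer, data)

      message_put(RemoteMemory(fam_region="fam:/seg0"), b"payload")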
  • Publication number: 20240303075
    Abstract: Systems and methods are provided for identifying and reporting possibly fragile lines of code from a code repository. In particular, some examples cluster the lines of code containing similar values of bug/defect-related churn data instances and report the lines of code with high counts of such instances.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Soumitra Chatterjee, Ritanya Bharadwaj, Veena Konnanath, Sunil Kuravinakop, Balaji Sankar Naga Sai Sandeep Kosuri
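    Illustrative sketch (an assumed stand-in for the clustering step, not the claimed method): lines are grouped by their churn counts and only the highest-churn group is reported; the quantile cutoff is a hypothetical choice.
      from collections import defaultdict

      def report_fragile_lines(churn_per_line, top_fraction=0.2):
          """churn_per_line maps a line number to its bug/defect-related churn count."""
          by_count = defaultdict(list)
          for line, churn in churn_per_line.items():
              by_count[churn].append(line)             # group lines with similar churn values
          counts = sorted(churn_per_line.values())
          cutoff = counts[int(len(counts) * (1 - top_fraction))]
          return {c: lines for c, lines in by_count.items() if c >= cutoff}

      print(report_fragile_lines({10: 1, 42: 7, 99: 8, 120: 2, 200: 1}))  # {8: [99]}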
  • Patent number: 11874688
    Abstract: Example techniques for identification of diagnostic messages corresponding to exceptions are described. A determination model may determine whether a set of diagnostic messages generated based on analysis of source code includes a diagnostic message that likely corresponds to an exception. The determination may be used to identify a set of diagnostic messages including the diagnostic message that likely corresponds to an exception.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: January 16, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Soumitra Chatterjee, Balasubramanian Viswanathan
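    Illustrative sketch (a keyword heuristic standing in for the determination model, which the patent does not specify here): score whether a static-analysis diagnostic likely corresponds to a run-time exception and collect the matching messages.
      EXCEPTION_HINTS = ("null", "out of bounds", "uninitialized", "divide by zero")

      def likely_exception(message: str) -> bool:
          text = message.lower()
          return any(hint in text for hint in EXCEPTION_HINTS)

      def select_exception_diagnostics(messages):
          return [m for m in messages if likely_exception(m)]

      print(select_exception_diagnostics([
          "variable 'p' may be null when dereferenced",
          "unused variable 'tmp'",
      ]))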
  • Patent number: 11645358
    Abstract: In an example, a neural network program corresponding to a neural network model is received. The neural network program includes matrices, vectors, and matrix-vector multiplication (MVM) operations. A computation graph corresponding to the neural network model is generated. The computation graph includes a plurality of nodes, each node representing an MVM operation, a matrix, or a vector. Further, a class model corresponding to the neural network model is populated with a data structure pointing to the computation graph. The computation graph is traversed based on the class model. Based on the traversal, the plurality of MVM operations are assigned to MVM units of a neural network accelerator. Each MVM unit can perform an MVM operation. Based on assignment of the plurality of MVM operations, an executable file is generated for execution by the neural network accelerator.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: May 9, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
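    Illustrative sketch (assumed node format and round-robin policy, not the patented assignment): traverse the computation graph in order and assign each MVM node to one of the accelerator's MVM units.
      def assign_mvm_units(nodes, num_units: int):
          """nodes: list of dicts like {'id': 'mvm1', 'kind': 'mvm' | 'matrix' | 'vector'}."""
          assignment, unit = {}, 0
          for node in nodes:                    # nodes assumed to be in traversal order
              if node["kind"] == "mvm":
                  assignment[node["id"]] = unit
                  unit = (unit + 1) % num_units
          return assignment

      graph = [{"id": "w1", "kind": "matrix"}, {"id": "x", "kind": "vector"},
               {"id": "mvm1", "kind": "mvm"}, {"id": "mvm2", "kind": "mvm"}]
      print(assign_mvm_units(graph, num_units=2))   # {'mvm1': 0, 'mvm2': 1}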
  • Publication number: 20230028560
    Abstract: Example techniques for identification of diagnostic messages corresponding to exceptions are described. A determination model may determine whether a set of diagnostic messages generated based on analysis of source code includes a diagnostic message that likely corresponds to an exception. The determination may be used to identify a set of diagnostic messages including the diagnostic message that likely corresponds to an exception.
    Type: Application
    Filed: November 4, 2021
    Publication date: January 26, 2023
    Inventors: Soumitra Chatterjee, Balasubramanian Viswanathan
  • Patent number: 11379712
    Abstract: Disclosed is a method, system, and computer readable medium to manage (and possibly replace) cycles in graphs for a computer device. The method includes detecting a compound operation including a first tensor, the compound operation resulting from source code represented in a first graph structure as part of a compilation process from source code to binary executable code. To address a detected cycle, an instance of a proxy class may be created to store a pointer to a proxy instance of the first tensor based on the detection. In some examples, using the instance of the proxy class facilitates implementation of a level of indirection to replace a cyclical portion of the graph structure with an acyclical portion, such that the resulting second graph structure indicates assignment of a result of the compound operation to the proxy instance of the first tensor. Optimization may reduce a total number of indirection replacements.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: July 5, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
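    Illustrative sketch (hypothetical class names, not the patented lowering): instead of writing a compound operation's result back into its own input tensor, which would close a cycle in the graph, the result is assigned to a proxy instance of that tensor, keeping the edge list acyclic.
      class Tensor:
          def __init__(self, name):
              self.name = name

      class TensorProxy:
          """Holds a pointer to a proxy instance standing in for the real tensor."""
          def __init__(self, original: Tensor):
              self.target = Tensor(original.name + "_proxy")

      def lower_compound_op(accum: Tensor, addend: Tensor, edges: list):
          proxy = TensorProxy(accum)
          # a += b becomes a_proxy = a + b: the result edge points at the proxy,
          # so the graph stays acyclic; a later pass re-binds the name.
          edges.append(((accum.name, addend.name), proxy.target.name))
          return proxy

      edges = []
      lower_compound_op(Tensor("a"), Tensor("b"), edges)
      print(edges)    # [(('a', 'b'), 'a_proxy')]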
  • Publication number: 20220198250
    Abstract: Examples of performing convolution operations based on a weighted matrix are described. In an example, an input data stream vector is processed using a weighted matrix stored onto a processing unit of a neural network accelerator. The weighted matrix may correspond to a first convolution filter and a second convolution filter.
    Type: Application
    Filed: April 20, 2021
    Publication date: June 23, 2022
    Inventors: Jitendra Onkar Kolhe, Soumitra Chatterjee
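    Illustrative sketch (NumPy stand-in for the processing unit; shapes and layout are assumptions): two convolution filters are stacked into one weight matrix so that a single matrix-vector product over an input patch yields both filters' responses at once.
      import numpy as np

      k = 3
      filter_a = np.random.rand(k, k)
      filter_b = np.random.rand(k, k)
      weighted = np.stack([filter_a.ravel(), filter_b.ravel()])   # 2 x 9 weighted matrix

      patch = np.random.rand(k, k)                 # one position of the input data stream
      responses = weighted @ patch.ravel()         # [filter_a response, filter_b response]
      assert np.isclose(responses[0], np.sum(filter_a * patch))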
  • Publication number: 20220198249
    Abstract: Example techniques for causing execution of neural networks are described. A neural network includes a first part and a second part. A determination is made that a first physical resource in a first computing device is to execute the first part and that a second physical resource in a second computing device is to execute the second part. The determination is based on a latency in communication between the first physical resource and the second physical resource. The first computing device and the second computing device are part of a cluster.
    Type: Application
    Filed: April 15, 2021
    Publication date: June 23, 2022
    Inventors: Jitendra Onkar Kolhe, Soumitra Chatterjee, Mohan Parthasarathy
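    Illustrative sketch (hypothetical latency table and resource names, not the claimed placement algorithm): choose the pair of physical resources for the two parts of the network that minimizes the measured communication latency between them.
      def place_two_parts(latency_us: dict) -> tuple:
          """latency_us maps (resource_for_part1, resource_for_part2) -> latency in microseconds."""
          return min(latency_us, key=latency_us.get)

      latencies = {
          ("node1/gpu0", "node1/gpu1"): 5.0,    # same computing device in the cluster
          ("node1/gpu0", "node2/gpu0"): 42.0,   # crosses the cluster network
      }
      print(place_two_parts(latencies))          # ('node1/gpu0', 'node1/gpu1')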
  • Patent number: 11361050
    Abstract: Example implementations relate to assigning dependent matrix-vector multiplication (MVM) operations to consecutive crossbars of a dot product engine (DPE). A method can comprise grouping a first MVM operation of a computation graph with a second MVM operation of the computation graph where the first MVM operation is dependent on a result of the second MVM operation, assigning a first crossbar of a DPE to an operand of the first MVM operation, and assigning a second crossbar of the DPE to an operand of the second MVM operation, wherein the first and second crossbars are consecutive.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: June 14, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
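    Illustrative sketch (assumed data layout, not the patented assignment): after grouping each dependent MVM operation with its producer, their operands are placed on consecutive crossbars of the dot product engine.
      def assign_consecutive_crossbars(groups, first_free_crossbar=0):
          """groups: list of (dependent_op, producer_op) pairs from the computation graph."""
          assignment, xbar = {}, first_free_crossbar
          for dependent, producer in groups:
              assignment[producer] = xbar          # producer's operand on one crossbar
              assignment[dependent] = xbar + 1     # dependent's operand on the next one
              xbar += 2
          return assignment

      print(assign_consecutive_crossbars([("mvm_out", "mvm_in")]))
      # {'mvm_in': 0, 'mvm_out': 1}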
  • Publication number: 20220121959
    Abstract: Examples disclosed herein relate to using a compiler for implementing tensor operations in a neural network-based computing system. A compiler defines the tensor operations to be implemented. The compiler identifies a binary tensor operation receiving input operands from a first output tensor of a first tensor operation and a second output tensor of a second tensor operation from two different paths of the convolutional neural network. For the binary tensor operation, the compiler allocates a buffer space for a first input operand in the binary tensor operation based on a difference between a count of instances of the first output tensor and a count of instances of the second output tensor.
    Type: Application
    Filed: April 13, 2021
    Publication date: April 21, 2022
    Inventors: Jitendra Onkar Kolhe, Soumitra Chatterjee, Vaithyalingam Nagendran, Shounak Bandopadhyay
  • Patent number: 11269973
    Abstract: Repeating patterns are identified in a matrix. Based on the identification of the repeating patterns, instructions are generated, which are executable by processing cores of a dot product engine to allocate analog multiplication crossbars of the dot product engine to perform multiplication of the matrix with a vector.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: March 8, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Mashood Abdulla Kodavanji, Soumitra Chatterjee, Chinmay Ghosh, Mohan Parthasarathy
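    Illustrative sketch (the instruction strings and block size are assumptions, not the patented instruction set): detect repeated row-blocks in the matrix and emit one "program crossbar" instruction per unique block plus a reuse instruction for each repeat, so fewer analog crossbars have to be programmed.
      import numpy as np

      def crossbar_instructions(matrix: np.ndarray, block_rows: int):
          seen, instructions = {}, []
          for i in range(0, matrix.shape[0], block_rows):
              key = matrix[i:i + block_rows].tobytes()
              if key not in seen:                          # first time this pattern appears
                  seen[key] = len(seen)
                  instructions.append(f"program crossbar {seen[key]} with rows {i}..{i + block_rows - 1}")
              instructions.append(f"rows {i}: multiply on crossbar {seen[key]}")
          return instructions

      m = np.vstack([np.eye(2), np.eye(2)])        # rows 0-1 repeat at rows 2-3
      print("\n".join(crossbar_instructions(m, block_rows=2)))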
  • Publication number: 20220012573
    Abstract: Examples of performing tensor operations by a neural network-based computing system are described. In an example, a first output working set generated by a first operation, where the first output working set is a set of processed partitioned tensors, is obtained. The first output working set is then copied to the output working set, for retrieval by the second operation.
    Type: Application
    Filed: April 8, 2021
    Publication date: January 13, 2022
    Inventors: Vaithyalingam Nagendran, Jitendra Onkar Kolhe, Soumitra Chatterjee, Shounak Bandopadhyay
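    Illustrative sketch (assumed structure and names, not the claimed mechanism): the first operation produces a working set of partitioned (tiled) tensors, which is copied into the working set that the second operation later retrieves.
      def first_operation(tiles):
          return [t * 2 for t in tiles]                 # processed partitioned tensors

      def copy_working_set(source, destination):
          destination.clear()
          destination.extend(list(source))              # copy for later retrieval

      def second_operation(working_set):
          return sum(working_set)

      produced = first_operation([1, 2, 3])
      staging = []
      copy_working_set(produced, staging)
      print(second_operation(staging))                  # 12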
  • Publication number: 20210334335
    Abstract: Repeating patterns are identified in a matrix. Based on the identification of the repeating patterns, instructions are generated, which are executable by processing cores of a dot product engine to allocate analog multiplication crossbars of the dot product engine to perform multiplication of the matrix with a vector.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 28, 2021
    Inventors: Mashood Abdulla Kodavanji, Soumitra Chatterjee, Chinmay Ghosh, Mohan Parthasarathy
  • Patent number: 11132423
    Abstract: According to examples, an apparatus may include a processor and a non-transitory computer readable medium having instructions that when executed by the processor, may cause the processor to partition a matrix of elements into a plurality of sub-matrices of elements. Each sub-matrix of the plurality of sub-matrices may include elements from a set of columns of the matrix of elements that includes a nonzero element. The processor may also assign elements of the plurality of sub-matrices to a plurality of crossbar devices to maximize a number of nonzero elements of the matrix of elements assigned to the crossbar devices.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: September 28, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Soumitra Chatterjee, Mashood Abdulla K, Chinmay Ghosh, Mohan Parthasarathy
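    Illustrative sketch (NumPy stand-in; the crossbar width is an assumption): keep only the columns that contain a nonzero element and pack them into crossbar-sized sub-matrices, so the crossbar devices mostly hold nonzero data.
      import numpy as np

      def pack_into_crossbars(matrix: np.ndarray, crossbar_cols: int):
          nonzero_cols = np.flatnonzero(matrix.any(axis=0))       # columns worth keeping
          packed = matrix[:, nonzero_cols]
          return [packed[:, i:i + crossbar_cols]                  # one slice per crossbar device
                  for i in range(0, packed.shape[1], crossbar_cols)]

      m = np.zeros((4, 8))
      m[1, 2] = 5.0
      m[3, 6] = 7.0
      print([xb.shape for xb in pack_into_crossbars(m, crossbar_cols=1)])  # [(4, 1), (4, 1)]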
  • Publication number: 20200242189
    Abstract: In an example, a neural network program corresponding to a neural network model is received. The neural network program includes matrices, vectors, and matrix-vector multiplication (MVM) operations. A computation graph corresponding to the neural network model is generated. The computation graph includes a plurality of nodes, each node representing an MVM operation, a matrix, or a vector. Further, a class model corresponding to the neural network model is populated with a data structure pointing to the computation graph. The computation graph is traversed based on the class model. Based on the traversal, the plurality of MVM operations are assigned to MVM units of a neural network accelerator. Each MVM unit can perform an MVM operation. Based on assignment of the plurality of MVM operations, an executable file is generated for execution by the neural network accelerator.
    Type: Application
    Filed: January 29, 2019
    Publication date: July 30, 2020
    Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
  • Patent number: 10726096
    Abstract: Systems and methods are provided for sparse matrix vector multiplication with a matrix vector multiplication unit. The method includes partitioning a sparse matrix of entries into a plurality of sub-matrices; mapping each of the sub-matrices to one of a plurality of respective matrix vector multiplication engines; partitioning an input vector into a plurality of sub-vectors; computing, via each matrix vector multiplication engine, a plurality of intermediate result vectors each resulting from a multiplication of one of the sub-matrices and one of the sub-vectors; for each set of rows of the sparse matrix, adding elementwise the intermediate result vectors to produce a plurality of result sub-vectors; and concatenating the result sub-vectors to form a result vector.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: July 28, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Soumitra Chatterjee, Chinmay Ghosh, Mashood Abdulla Kodavanji, Mohan Parthasarathy
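    Illustrative sketch (NumPy stand-in for the matrix vector multiplication engines; block size is an assumption): partition the matrix and the input vector into blocks, let each engine multiply one sub-matrix by one sub-vector, add the intermediate result vectors elementwise per row set, and concatenate the result sub-vectors.
      import numpy as np

      def blocked_spmv(matrix: np.ndarray, vector: np.ndarray, block: int) -> np.ndarray:
          row_results = []
          for r in range(0, matrix.shape[0], block):
              partials = [matrix[r:r + block, c:c + block] @ vector[c:c + block]   # one engine each
                          for c in range(0, matrix.shape[1], block)]
              row_results.append(np.sum(partials, axis=0))         # elementwise add per row set
          return np.concatenate(row_results)                       # join the result sub-vectors

      a, x = np.random.rand(4, 4), np.random.rand(4)
      assert np.allclose(blocked_spmv(a, x, block=2), a @ x)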
  • Publication number: 20200159811
    Abstract: Example implementations relate to assigning dependent matrix-vector multiplication (MVM) operations to consecutive crossbars of a dot product engine (DPE). A method can comprise grouping a first MVM operation of a computation graph with a second MVM operation of the computation graph where the first MVM operation is dependent on a result of the second MVM operation, assigning a first crossbar of a DPE to an operand of the first MVM operation, and assigning a second crossbar of the DPE to an operand of the second MVM operation, wherein the first and second crossbars are consecutive.
    Type: Application
    Filed: November 20, 2018
    Publication date: May 21, 2020
    Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy