Patents by Inventor Sakyasingha Dasgupta
Sakyasingha Dasgupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11893475
Abstract: Neural network inference may be performed by configuration of a device including an accumulation memory, a plurality of convolution modules configured to perform mathematical operations on input values, a plurality of adder modules configured to sum values output from the plurality of convolution modules, and a plurality of convolution output interconnects connecting the plurality of convolution modules, the plurality of adder modules, and the accumulation memory. The accumulation memory is an accumulation memory allocation of a writable memory block having a reconfigurable bank width, and each bank of the accumulation memory allocation is a virtual combination of consecutive banks of the writable memory block.
Type: Grant
Filed: October 11, 2021
Date of Patent: February 6, 2024
Assignee: EDGECORTIX INC.
Inventors: Nikolay Nez, Hamid Reza Zohouri, Oleg Khavin, Antonio Tomas Nevado Vilchez, Sakyasingha Dasgupta
-
Patent number: 11886988
Abstract: Adaptive exploration in deep reinforcement learning may be performed by inputting a current time frame of an action and observation sequence sequentially into a function approximator, such as a deep neural network, including a plurality of parameters, the action and observation sequence including a plurality of time frames, each time frame including action values and observation values, approximating a value function using the function approximator based on the current time frame to acquire a current value, updating an action selection policy through exploration based on an ε-greedy strategy using the current value, and updating the plurality of parameters.
Type: Grant
Filed: November 22, 2017
Date of Patent: January 30, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Sakyasingha Dasgupta
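The ε-greedy strategy named in this abstract is a standard exploration rule: act randomly with probability ε, otherwise pick the action with the highest estimated value. A minimal illustrative sketch (function name and interface are hypothetical, not taken from the patent):

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """Select an action under an epsilon-greedy policy.

    q_values: estimated action values for the current observation, e.g.
    the outputs of a deep-neural-network function approximator.
    With probability epsilon, explore by choosing a random action;
    otherwise exploit by choosing the highest-valued action.
    """
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```

In adaptive-exploration schemes, ε itself is typically adjusted over time based on the current value estimates rather than held fixed.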
-
Publication number: 20230252275
Abstract: Neural network hardware acceleration data parallelism is performed by an integrated circuit including a plurality of memory banks, each memory bank among the plurality of memory banks configured to store values and to transmit stored values, a plurality of computation units, each computation unit among the plurality of computation units including one of a channel pipeline and a multiply-and-accumulate (MAC) element configured to perform a mathematical operation on an input data value and a weight value to produce a resultant data value, and a computation controller configured to cause a value transmission to be received by more than one computation unit or memory bank.
Type: Application
Filed: April 13, 2023
Publication date: August 10, 2023
Inventors: Nikolay Nez, Oleg Khavin, Tanvir Ahmed, Jens Huthmann, Sakyasingha Dasgupta
-
Patent number: 11657260
Abstract: Neural network hardware acceleration data parallelism is performed by an integrated circuit including a plurality of memory banks, each memory bank among the plurality of memory banks configured to store values and to transmit stored values, a plurality of computation units, each computation unit among the plurality of computation units including a processor including circuitry configured to perform a mathematical operation on an input data value and a weight value to produce a resultant data value, and a computation controller configured to cause a value transmission to be received by more than one computation unit or memory bank.
Type: Grant
Filed: October 26, 2021
Date of Patent: May 23, 2023
Assignee: EDGECORTIX PTE. LTD.
Inventors: Nikolay Nez, Oleg Khavin, Tanvir Ahmed, Jens Huthmann, Sakyasingha Dasgupta
-
Publication number: 20230128600
Abstract: Neural network hardware acceleration data parallelism is performed by an integrated circuit including a plurality of memory banks, each memory bank among the plurality of memory banks configured to store values and to transmit stored values, a plurality of computation units, each computation unit among the plurality of computation units including a processor including circuitry configured to perform a mathematical operation on an input data value and a weight value to produce a resultant data value, and a computation controller configured to cause a value transmission to be received by more than one computation unit or memory bank.
Type: Application
Filed: October 26, 2021
Publication date: April 27, 2023
Inventors: Nikolay Nez, Oleg Khavin, Tanvir Ahmed, Jens Huthmann, Sakyasingha Dasgupta
-
Patent number: 11593611
Abstract: Cooperative neural networks may be implemented by providing an input to a first neural network including a plurality of first parameters, and updating at least one first parameter based on an output from a recurrent neural network provided with the input, the recurrent neural network including a plurality of second parameters.
Type: Grant
Filed: November 6, 2017
Date of Patent: February 28, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Sakyasingha Dasgupta
-
Patent number: 11574164
Abstract: Cooperative neural networks may be implemented by providing an input to a first neural network including a plurality of first parameters, and updating at least one first parameter based on an output from a recurrent neural network provided with the input, the recurrent neural network including a plurality of second parameters.
Type: Grant
Filed: March 20, 2017
Date of Patent: February 7, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Sakyasingha Dasgupta
-
Patent number: 11521052
Abstract: Hardware and neural architecture co-search may be performed by operations including obtaining a specification of a function and a plurality of hardware design parameters. The hardware design parameters include a memory capacity, a number of computational resources, a communication bandwidth, and a template configuration for performing neural architecture inference. The operations further include determining, for each neural architecture among a plurality of neural architectures, an overall latency of performance of inference of the neural architecture by an accelerator within the hardware design parameters, each neural architecture having been trained to perform the function with a corresponding accuracy. The operations further include selecting, from among the plurality of neural architectures, a neural architecture based on the overall latency and the accuracy.
Type: Grant
Filed: June 30, 2021
Date of Patent: December 6, 2022
Assignee: EDGECORTIX PTE. LTD.
Inventors: Sakyasingha Dasgupta, Weiwen Jiang, Yiyu Shi
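The final selection step described here, choosing an architecture based on overall latency and accuracy, can be illustrated with a simple sketch. This is a hypothetical example of one such selection rule (most-accurate candidate within a latency budget), not the method claimed in the patent:

```python
def select_architecture(candidates, latency_budget):
    """Pick the most accurate neural architecture whose estimated overall
    inference latency on the target accelerator fits the budget.

    candidates: list of dicts with precomputed "latency" and "accuracy"
    values (assumed measured or estimated elsewhere, per architecture).
    Returns None when no candidate meets the budget.
    """
    feasible = [c for c in candidates if c["latency"] <= latency_budget]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c["accuracy"])
```

Co-search methods typically explore accuracy and latency jointly rather than filtering after the fact; this sketch only shows the trade-off at the selection stage.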
-
Patent number: 11410042
Abstract: A computer-implemented method includes employing a dynamic Boltzmann machine (DyBM) to predict a higher-order moment of time-series datasets. The method further includes acquiring the time-series datasets transmitted from a source node to a destination node of a neural network including a plurality of nodes, learning, by the processor, a time-series generative model based on the DyBM with eligibility traces, and obtaining, by the processor, parameters of a generalized auto-regressive heteroscedasticity (GARCH) model to predict a time-varying second-order moment of the time-series datasets.
Type: Grant
Filed: October 31, 2018
Date of Patent: August 9, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rudy Raymond Harry Putra, Takayuki Osogami, Sakyasingha Dasgupta
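The time-varying second-order moment referenced in this abstract is the conditional variance that a GARCH model tracks. As background, a minimal GARCH(1,1) variance recursion looks like the following (a generic textbook sketch, not the patented DyBM-coupled method; parameter names are conventional):

```python
def garch_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
        sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}

    returns: observed series r_0, r_1, ...
    Starts from the unconditional variance omega / (1 - alpha - beta),
    which requires alpha + beta < 1 (stationarity).
    """
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

Each output element is the model's variance forecast for the corresponding time step, i.e. the time-varying second-order moment of the series.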
-
Publication number: 20220215236
Abstract: Neural network inference may be performed by configuration of a device including an accumulation memory, a plurality of convolution modules configured to perform mathematical operations on input values, a plurality of adder modules configured to sum values output from the plurality of convolution modules, and a plurality of convolution output interconnects connecting the plurality of convolution modules, the plurality of adder modules, and the accumulation memory. The accumulation memory is an accumulation memory allocation of a writable memory block having a reconfigurable bank width, and each bank of the accumulation memory allocation is a virtual combination of consecutive banks of the writable memory block.
Type: Application
Filed: October 11, 2021
Publication date: July 7, 2022
Inventors: Nikolay Nez, Hamid Reza Zohouri, Oleg Khavin, Antonio Tomas Nevado Vilchez, Sakyasingha Dasgupta
-
Patent number: 11250313
Abstract: A computer-implemented method is provided for autonomously making continuous trading decisions for assets using a first eligibility trace enabled Neural Network (NN). The method includes pretraining the first eligibility trace enabled NN, using asset price time series data, to generate predictions of future asset price time series data. The method further includes initializing a second eligibility trace enabled NN for reinforcement learning using learned parameters of the first eligibility trace enabled NN. The method also includes augmenting state information of the second eligibility trace enabled NN for reinforcement learning using an output from the first eligibility trace enabled NN. The method additionally includes performing continuous actions for trading assets at each of multiple time points.
Type: Grant
Filed: January 28, 2019
Date of Patent: February 15, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sakyasingha Dasgupta, Rudy R. Harry Putra
-
Publication number: 20220027716
Abstract: Neural network inference may be performed by an apparatus or integrated circuit configured to perform mathematical operations on activation data stored in an activation data memory and weight values stored in a weight memory, to store values resulting from the mathematical operations onto an accumulation memory, to perform activation operations on the values stored in the accumulation memory, to store resulting activation data onto the activation data memory, and to perform inference of a neural network by feeding and synchronizing instructions from an external memory.
Type: Application
Filed: October 4, 2021
Publication date: January 27, 2022
Inventors: Nikolay Nez, Antonio Tomas Nevado Vilchez, Hamid Reza Zohouri, Mikhail Volkov, Oleg Khavin, Sakyasingha Dasgupta
-
Publication number: 20220019880
Abstract: Hardware and neural architecture co-search may be performed by operations including obtaining a specification of a function and a plurality of hardware design parameters. The hardware design parameters include a memory capacity, a number of computational resources, a communication bandwidth, and a template configuration for performing neural architecture inference. The operations further include determining, for each neural architecture among a plurality of neural architectures, an overall latency of performance of inference of the neural architecture by an accelerator within the hardware design parameters, each neural architecture having been trained to perform the function with a corresponding accuracy. The operations further include selecting, from among the plurality of neural architectures, a neural architecture based on the overall latency and the accuracy.
Type: Application
Filed: June 30, 2021
Publication date: January 20, 2022
Inventors: Sakyasingha Dasgupta, Weiwen Jiang, Yiyu Shi
-
Patent number: 11195116
Abstract: A computer-implemented method includes employing a dynamic Boltzmann machine (DyBM) to solve a maximum likelihood of generalized normal distribution (GND) of time-series datasets. The method further includes acquiring the time-series datasets transmitted from a source node to a destination node of a neural network including a plurality of nodes, learning, by the processor, a time-series generative model based on the GND with eligibility traces, and performing, by the processor, online updating of internal parameters of the GND based on a gradient update to predict updated time-series datasets generated from non-Gaussian distributions.
Type: Grant
Filed: October 31, 2018
Date of Patent: December 7, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rudy Raymond Harry Putra, Takayuki Osogami, Sakyasingha Dasgupta
-
Patent number: 11188300
Abstract: Preparation and execution of quantized scaling may be performed by operations including obtaining an original array and a scaling factor representing a ratio of a size of the original array to a size of a scaled array, determining, for each column of the scaled array, a horizontal coordinate of each of two nearest elements in the horizontal dimension of the original array, and, for each row of the scaled array, a vertical coordinate of each of two nearest elements in the vertical dimension of the original array, calculating, for each row of the scaled array and each column of the scaled array, a linear interpolation coefficient, converting each value of the original array from a floating point number into a quantized number, converting each linear interpolation coefficient from a floating point number into a fixed point number, and storing, in a memory, the horizontal coordinates and vertical coordinates as integers, the values as quantized numbers, and the linear interpolation coefficients as fixed point numbers.
Type: Grant
Filed: June 18, 2021
Date of Patent: November 30, 2021
Assignee: EDGECORTIX PTE. LTD.
Inventors: Oleg Khavin, Nikolay Nez, Sakyasingha Dasgupta, Antonio Tomas Nevado Vilchez
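The per-axis precomputation this abstract describes, finding the two nearest source coordinates for each scaled index plus a fixed-point interpolation coefficient, can be sketched for one dimension as follows. This is an illustrative reconstruction under common bilinear-resize conventions (half-pixel centers, Q0.8 coefficients); the function name and layout are hypothetical, not the patented design:

```python
def precompute_axis(src_size, dst_size, frac_bits=8):
    """For each output index along one axis, compute the two nearest
    source indices and a fixed-point linear interpolation coefficient.

    Returns (lo, hi, coeff): integer coordinate lists and coefficients
    scaled by 2**frac_bits, ready to be stored in memory as integers.
    """
    scale = src_size / dst_size  # ratio of original size to scaled size
    lo, hi, coeff = [], [], []
    for i in range(dst_size):
        x = (i + 0.5) * scale - 0.5          # sample position in source coords
        x0 = max(0, min(src_size - 1, int(x)))
        x1 = min(src_size - 1, x0 + 1)       # two nearest source elements
        frac = min(max(x - x0, 0.0), 1.0)    # fractional weight toward x1
        lo.append(x0)
        hi.append(x1)
        coeff.append(int(round(frac * (1 << frac_bits))))  # fixed point
    return lo, hi, coeff
```

Precomputing these integer tables once lets the inner resize loop run entirely in quantized/fixed-point arithmetic, with no floating point at execution time.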
-
Patent number: 11182676
Abstract: Deep reinforcement learning of cooperative neural networks can be performed by obtaining an action and observation sequence including a plurality of time frames, each time frame including action values and observation values. At least some of the observation values of each time frame of the action and observation sequence can be input sequentially into a first neural network including a plurality of first parameters. The action values of each time frame of the action and observation sequence and output values from the first neural network corresponding to the at least some of the observation values of each time frame of the action and observation sequence can be input sequentially into a second neural network including a plurality of second parameters. An action-value function can be approximated using the second neural network, and the plurality of first parameters of the first neural network can be updated using backpropagation.
Type: Grant
Filed: August 4, 2017
Date of Patent: November 23, 2021
Assignee: International Business Machines Corporation
Inventors: Sakyasingha Dasgupta, Takayuki Osogami
-
Publication number: 20210357732
Abstract: Neural network accelerator hardware-specific division of inference may be performed by operations including obtaining a computational graph and a hardware chip configuration. The operations also include dividing inference of the plurality of layers into a plurality of groups. Each group includes a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers of each group. The operations further include generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
Type: Application
Filed: February 26, 2021
Publication date: November 18, 2021
Inventors: Nikolay Nez, Antonio Tomas Nevado Vilchez, Hamid Reza Zohouri, Mikhail Volkov, Oleg Khavin, Sakyasingha Dasgupta
-
Patent number: 11176449
Abstract: Neural network accelerator hardware-specific division of inference may be performed by operations including obtaining a computational graph and a hardware chip configuration. The operations also include dividing inference of the plurality of layers into a plurality of groups. Each group includes a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers of each group. The operations further include generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
Type: Grant
Filed: February 26, 2021
Date of Patent: November 16, 2021
Assignee: EDGECORTIX PTE. LTD.
Inventors: Nikolay Nez, Antonio Tomas Nevado Vilchez, Hamid Reza Zohouri, Mikhail Volkov, Oleg Khavin, Sakyasingha Dasgupta
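The core idea of dividing sequential layers into groups by an estimated per-layer cost can be illustrated with a simple greedy packing sketch. This is a hypothetical simplification (one scalar cost per layer, a fixed per-group budget), not the cost model or grouping algorithm claimed in the patent:

```python
def group_layers(layer_costs, budget):
    """Greedily pack consecutive layer indices into groups whose summed
    estimated cost (e.g. duration or energy on the target chip) stays
    within budget. Layers remain in sequential order, as inference is
    performed sequentially by layer within each group.
    """
    groups, current, total = [], [], 0.0
    for i, cost in enumerate(layer_costs):
        if current and total + cost > budget:
            groups.append(current)   # close the current group
            current, total = [], 0.0
        current.append(i)
        total += cost
    if current:
        groups.append(current)
    return groups
```

Instructions for the accelerator would then be emitted one group at a time, so inference proceeds sequentially by group.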
-
Patent number: 11144822
Abstract: Neural network inference may be performed by configuration of a device including a plurality of convolution modules, a plurality of adder modules, an accumulation memory, and a convolution output interconnect control module configured to open and close convolution output interconnects among a plurality of convolution output interconnects connecting the plurality of convolution modules, the plurality of adder modules, and the accumulation memory. Inference may be performed while the device is configured according to at least one convolution output connection scheme whereby each convolution module has no more than one open direct connection through the plurality of convolution output interconnects to the accumulation memory or one of the plurality of adder modules. The device includes a convolution output interconnect control module to configure the plurality of convolution output interconnects according to the at least one convolution output connection scheme.
Type: Grant
Filed: January 4, 2021
Date of Patent: October 12, 2021
Assignee: EDGECORTIX PTE. LTD.
Inventors: Nikolay Nez, Hamid Reza Zohouri, Oleg Khavin, Antonio Tomas Nevado Vilchez, Sakyasingha Dasgupta
-
Patent number: 11080586
Abstract: A computer-implemented method and an apparatus are provided for neural network reinforcement learning. The method includes obtaining, by a processor, an action and observation sequence. The method further includes inputting, by the processor, each of a plurality of time frames of the action and observation sequence sequentially into a plurality of input nodes of a neural network. The method also includes updating, by the processor, a plurality of parameters of the neural network by using the neural network to approximate an action-value function of the action and observation sequence.
Type: Grant
Filed: November 6, 2017
Date of Patent: August 3, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sakyasingha Dasgupta, Takayuki Osogami