Patents by Inventor Ayon Basumallik

Ayon Basumallik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220261645
    Abstract: Methods and systems for training neural networks using low-bitwidth accelerators are described. The methods described herein use moment-penalization functions. For example, a method comprises producing a modified data set by training a neural network using a moment-penalization function and the data set. The moment-penalization function is configured to penalize a moment associated with the neural network. Training the neural network in turn comprises quantizing the data set to obtain a fixed-point data set so that the fixed-point data set represents the data set in a fixed-point representation, and passing the fixed-point data set through an analog accelerator. The inventors have recognized that training a neural network using a modified objective function augments the accuracy and robustness of the neural network notwithstanding the use of low-bitwidth accelerators.
    Type: Application
    Filed: February 15, 2022
    Publication date: August 18, 2022
    Applicant: Lightmatter, Inc.
    Inventors: Nicholas Dronen, Tyler J. Kenney, Tomo Lazovich, Ayon Basumallik, Darius Bunandar
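The training scheme in this abstract can be sketched in a few lines of Python. This is a hypothetical illustration, not the patented implementation: the fixed-point grid parameters and the use of a weight-variance term as the moment-penalization function are assumptions.

```python
import numpy as np

def quantize_fixed_point(x, bits=8, frac_bits=4):
    """Round x onto a fixed-point grid with `bits` total bits,
    `frac_bits` of them fractional."""
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

def loss_with_moment_penalty(w, x, y, lam=0.1):
    """Squared error on a low-bitwidth forward pass, plus a penalty on
    a moment of the weights. The second-moment (variance-style) penalty
    here is an assumption standing in for the patent's
    moment-penalization function."""
    pred = quantize_fixed_point(x) @ w       # fixed-point data set
    data_loss = np.mean((pred - y) ** 2)
    moment_penalty = lam * np.mean(w ** 2)   # penalize second moment
    return data_loss + moment_penalty
```

The quantized forward pass models the analog accelerator's precision, while the penalty term modifies the objective so training remains robust to it.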
  • Publication number: 20220229634
    Abstract: A photonic processor uses light signals and a residue number system (RNS) to perform calculations. The processor sums two or more values by shifting the phase of a light signal with phase shifters and reading out the summed phase with a coherent detector. Because phase winds back every 2π radians, the photonic processor performs addition modulo 2π. A photonic processor may use the summation of phases to perform dot products and correct erroneous residues. A photonic processor may use the RNS in combination with a positional number system (PNS) to extend the numerical range of the photonic processor, which may be used to accelerate homomorphic encryption (HE)-based deep learning.
    Type: Application
    Filed: December 6, 2021
    Publication date: July 21, 2022
    Applicant: Lightmatter, Inc.
    Inventors: Eric Hein, Ayon Basumallik, Nicholas C. Harris, Darius Bunandar, Cansu Demirkiran
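The modular arithmetic underlying this abstract can be simulated numerically. The sketch below (a hypothetical illustration, not the optical hardware) shows phase addition wrapping modulo 2π and residue-number-system reconstruction via the Chinese Remainder Theorem; the moduli are arbitrary example values.

```python
import math

def phase_add(a, b):
    """Summing phases on a light signal wraps modulo 2*pi."""
    return math.fmod(a + b, 2 * math.pi)

def to_residues(x, moduli):
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    """Chinese Remainder Theorem reconstruction (moduli pairwise coprime)."""
    M = math.prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return total % M

moduli = [3, 5, 7]   # pairwise coprime; covers the range 0..104
a, b = 17, 40
# Per-modulus addition, as the photonic phase adders would perform it:
summed = [(ra + rb) % m
          for ra, rb, m in zip(to_residues(a, moduli),
                               to_residues(b, moduli), moduli)]
# from_residues(summed, moduli) reconstructs a + b = 57
```

Each residue channel only ever needs modular addition, which is exactly what phase accumulation provides.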
  • Publication number: 20220172052
    Abstract: Described herein are techniques of training a machine learning model and performing inference using an analog processor. Some embodiments mitigate the loss in performance of a machine learning model resulting from a lower precision of an analog processor by using an adaptive block floating-point representation of numbers for the analog processor. Some embodiments mitigate the loss in performance of a machine learning model due to noise that is present when using an analog processor. The techniques involve training the machine learning model such that it is robust to noise.
    Type: Application
    Filed: November 29, 2021
    Publication date: June 2, 2022
    Applicant: Lightmatter, Inc.
    Inventors: Darius Bunandar, Ludmila Levkova, Nicholas Dronen, Lakshmi Nair, David Widemann, David Walter, Martin B.Z. Forsythe, Tomo Lazovich, Ayon Basumallik, Nicholas C. Harris
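The block floating-point representation mentioned in this abstract can be sketched as follows. This is a simplified, hypothetical illustration (fixed block size, power-of-two shared exponent), not the adaptive scheme the patent claims.

```python
import numpy as np

def block_floating_point(x, mantissa_bits=8, block_size=4):
    """Quantize x in blocks that share one exponent per block.
    Each block is scaled by a power of two chosen from its own
    maximum magnitude, then mantissas are rounded."""
    out = np.empty_like(x, dtype=float)
    for start in range(0, len(x), block_size):
        block = x[start:start + block_size]
        max_mag = np.max(np.abs(block))
        exp = np.ceil(np.log2(max_mag)) if max_mag > 0 else 0
        scale = 2.0 ** (exp - (mantissa_bits - 1))
        out[start:start + block_size] = np.round(block / scale) * scale
    return out
```

Because every value in a block shares one exponent, values much smaller than the block maximum round to zero, which is the precision loss such a scheme trades for hardware simplicity.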
  • Publication number: 20220155996
    Abstract: Aspects of the present disclosure provide an aligned storage strategy for stripes within a long vector for a vector processor, such that the extra computation needed to track strides between input stripes and output stripes may be eliminated. As a result, the stripe locations follow a more predictable memory access pattern, so memory access bandwidth may be improved and the likelihood of memory errors may be reduced.
    Type: Application
    Filed: November 15, 2021
    Publication date: May 19, 2022
    Applicant: Lightmatter, Inc.
    Inventors: Nicholas Moore, Gongyu Wang, Bradley Dobbie, Tyler J. Kenney, Ayon Basumallik
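The aligned storage idea can be sketched as follows (a hypothetical illustration; the alignment boundary of 64 is an assumed example value): padding each stripe up to an aligned boundary makes every stripe start at a fixed, predictable stride.

```python
def aligned_stripe_offsets(stripe_len, num_stripes, align=64):
    """Place each stripe of a long vector at the next aligned boundary,
    so every stripe starts at a fixed stride instead of a
    data-dependent one that must be tracked per stripe."""
    stride = -(-stripe_len // align) * align   # round up to a multiple of align
    return [i * stride for i in range(num_stripes)]
```

With a constant stride, the address of stripe i is simply `i * stride`, so no per-stripe bookkeeping is needed.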
  • Publication number: 20220156469
    Abstract: Parallelization and pipelining techniques that can be applied to multi-core analog accelerators are described. The techniques described herein improve performance of matrix multiplication (e.g., tensor-tensor multiplication, matrix-matrix multiplication or matrix-vector multiplication). The parallelization and pipelining techniques developed by the inventors and described herein focus on maintaining a high utilization of the processing cores. A representative processing system includes an analog accelerator, a digital processor, and a controller. The controller is configured to control the analog accelerator to output data using linear operations and to control the digital processor to perform non-linear operations based on the output data.
    Type: Application
    Filed: November 15, 2021
    Publication date: May 19, 2022
    Applicant: Lightmatter, Inc.
    Inventors: Gongyu Wang, Cansu Demirkiran, Nicholas Moore, Ayon Basumallik, Darius Bunandar
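The linear/non-linear split described in this abstract can be sketched as follows. This is a hypothetical illustration: the analog accelerator is stood in for by an ordinary matrix multiply, and ReLU is an assumed choice of non-linearity.

```python
import numpy as np

def analog_matmul(x, w):
    """Placeholder for the analog accelerator: a real device would
    perform this linear operation optically."""
    return x @ w

def run_layer(x, w):
    """Split one layer the way the controller does: the linear part
    (matrix multiply) is dispatched to the analog accelerator, the
    non-linear part (here ReLU, an assumption) to the digital
    processor."""
    linear_out = analog_matmul(x, w)     # analog: linear operation
    return np.maximum(linear_out, 0.0)   # digital: non-linear operation
```

Keeping the non-linearity on the digital side lets the analog cores stay busy with the next tile of the linear operation, which is the utilization goal the abstract describes.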
  • Publication number: 20220147280
    Abstract: Aspects of the present disclosure are directed to an efficient data transfer strategy in which data transfer is scheduled based on a prediction of the internal memory utilization of the computational workload throughout its runtime. According to one aspect, the DMA transfer may be performed opportunistically: whenever internal buffer memory is available and the additional internal memory usage due to DMA transfer does not interfere with the processor's ability to complete the workload. In some embodiments, an opportunistic transfer schedule may be found by solving an optimization problem.
    Type: Application
    Filed: November 9, 2021
    Publication date: May 12, 2022
    Applicant: Lightmatter, Inc.
    Inventors: Darius Bunandar, Cansu Demirkiran, Gongyu Wang, Nicholas Moore, Ayon Basumallik
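The opportunistic strategy can be sketched with a simple greedy scheduler (a hypothetical illustration; the abstract's optimization-based formulation would replace the greedy choice with a solver).

```python
def schedule_dma(free_mem_per_step, transfer_sizes):
    """Greedy opportunistic scheduler: at each time step, start at most
    one pending DMA transfer that fits within the memory the workload
    is predicted to leave free. Returns {step: transfer index}."""
    schedule = {}
    pending = list(range(len(transfer_sizes)))
    for step, free in enumerate(free_mem_per_step):
        for idx in pending:
            if transfer_sizes[idx] <= free:
                schedule[step] = idx
                pending.remove(idx)
                break   # one transfer per step in this sketch
    return schedule
```

Transfers wait until the predicted free-memory trace shows enough headroom, so DMA never competes with the workload for buffer space.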
  • Publication number: 20220036185
    Abstract: A training system for training a machine learning model such as a neural network may have a different configuration and/or hardware components than a target device that employs the trained neural network. For example, the training system may use a higher precision format to represent neural network parameters than the target device. In another example, the target device may use analog and digital processing hardware to compute an output of the neural network whereas the training system may have used only digital processing hardware to train the neural network. The difference in configuration and/or hardware components of the target device may introduce quantization error into parameters of the neural network, and thus affect performance of the neural network on the target device. Described herein is a training system that trains a neural network for use on a target device that reduces loss in performance resulting from quantization error.
    Type: Application
    Filed: July 30, 2021
    Publication date: February 3, 2022
    Applicant: Lightmatter, Inc.
    Inventors: Nicholas Dronen, Tomo Lazovich, Ayon Basumallik, Darius Bunandar
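The core idea of training against the target device's lower precision can be sketched with fake quantization. This is a hypothetical illustration: the symmetric 8-bit grid and the straight-through gradient are assumptions, not the patented training system.

```python
import numpy as np

def fake_quantize(w, bits=8):
    """Simulate the target device's lower precision during training:
    round weights onto a bits-wide uniform grid but keep float storage."""
    max_mag = float(np.max(np.abs(w))) or 1.0
    levels = 2 ** (bits - 1) - 1
    scale = max_mag / levels
    return np.round(w / scale) * scale

def train_step(w, x, y, lr=0.1):
    """One step of least-squares training whose forward pass sees the
    quantized weights, so the loss already reflects the target device's
    quantization error. The gradient uses the straight-through
    estimator: quantization is treated as identity when differentiating."""
    wq = fake_quantize(w)
    pred = x @ wq
    grad = 2 * x.T @ (pred - y) / len(y)
    return w - lr * grad
```

Because the training loss is computed through the quantized weights, the optimizer steers toward parameter settings that remain accurate after deployment.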
  • Patent number: 11055092
    Abstract: The exemplary embodiments may provide an approach to finding and identifying the correlation between invoking code and invoked code by correlating the timestamps of contextual information in each. Developers can use this information while investigating programs to identify a region of interest and efficiently narrow down a performance problem in the invoking code. As a result, development productivity can be improved.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: July 6, 2021
    Assignee: The MathWorks, Inc.
    Inventors: Ayon Basumallik, Meng-Ju Wu
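The timestamp-correlation idea can be sketched as follows (a hypothetical illustration with events as `(timestamp, label)` pairs; the real embodiments correlate richer contextual information).

```python
def correlate_calls(invoking_events, invoked_events):
    """Match each invoked-code event to the most recent invoking-code
    event whose timestamp precedes it. Events are (timestamp, label)
    tuples; tuple comparison orders by timestamp first."""
    pairs = []
    for t_in, label_in in invoked_events:
        candidates = [(t, label) for t, label in invoking_events if t <= t_in]
        if candidates:
            pairs.append((max(candidates)[1], label_in))
    return pairs
```

Each pairing points the developer at the region of invoking code active when the invoked code ran, narrowing the search for a performance problem.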
  • Patent number: 10949181
    Abstract: Extended types are defined for functions that are called by function handles in a programming environment. The extended types can be accessed and used by a computing system to improve compile-time and run-time performance of the computing system.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: March 16, 2021
    Assignee: The MathWorks, Inc.
    Inventors: Rajeshwar Vanka, Ayon Basumallik, Brett Baker
  • Patent number: 10929160
    Abstract: Systems and methods for just-in-time compilation are disclosed. The systems and methods can be used to generate composite blocks, reducing program execution time. The systems and methods can include generating single-trace blocks during program execution. Upon satisfaction of a trigger criterion, single-trace blocks can be selected for compilation into a composite block. The trigger criterion can be a number of executions of a trigger block. Selecting the single-trace blocks can include identifying blocks reachable from the trigger block, selecting a subset of the reachable blocks, and selecting an entry point for the composite block. The composite block can be generated from the single-trace blocks and incorporated into the program control flow, such that the composite block is executed in place of the selected single-trace blocks.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: February 23, 2021
    Assignee: The MathWorks, Inc.
    Inventors: Nikolay Mateev, Ayon Basumallik, Aaditya Kalsi, Prabhakar Kumar
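The block-selection step in this abstract can be sketched as a reachability search triggered by an execution-count threshold. This is a hypothetical illustration: the control-flow graph is a plain adjacency dict, and the sketch collects all reachable blocks rather than the patent's selected subset.

```python
def maybe_build_composite(exec_counts, cfg, trigger_threshold=100):
    """When a block's execution count crosses the threshold, collect
    the blocks reachable from it in the control-flow graph `cfg`
    (adjacency dict) as candidates for one composite block, with the
    trigger block as the entry point."""
    hot = [b for b, n in exec_counts.items() if n >= trigger_threshold]
    composites = {}
    for trigger in hot:
        seen, stack = set(), [trigger]
        while stack:
            block = stack.pop()
            if block not in seen:
                seen.add(block)
                stack.extend(cfg.get(block, []))
        composites[trigger] = seen
    return composites
```

Compiling the collected blocks into one composite removes the dispatch overhead between the single-trace blocks on the hot path.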
  • Patent number: 9135027
    Abstract: A device may identify a first compiled block with an original constraint and an additional constraint. The first compiled block may be identified based on a program counter value and may include compiled information relating to a first segment of program code, linking information associated with a second compiled block, and information distinguishing the original constraint from the additional constraint. The original constraint may relate to a type of variable used in the first segment of programming code. The additional constraint may relate to a variable used in a second segment of programming code associated with the second compiled block. The device may copy information of the first compiled block to generate a third compiled block that lacks the additional constraint. The device may execute the third compiled block to execute a program associated with the programming code.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: September 15, 2015
    Assignee: The MathWorks, Inc.
    Inventors: Ayon Basumallik, Nikolay Mateev
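The copy-and-relax step in this abstract can be sketched as follows (a hypothetical illustration; blocks are plain dicts and the `kind` field distinguishing original from additional constraints is an assumed representation).

```python
def relax_block(block):
    """Copy a compiled block, keeping its original type constraint but
    dropping the additional constraints inherited from linked blocks."""
    relaxed = dict(block)
    relaxed["constraints"] = [c for c in block["constraints"]
                              if c["kind"] == "original"]
    return relaxed
```

The relaxed copy can then execute for inputs that satisfy only the original constraint, while the stricter original block remains cached.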
  • Patent number: 8943474
    Abstract: A device receives programming code, corresponding to a dynamic programming language, that is to be executed by a computing environment, and executes the programming code. When executing the programming code, the device maintains a program counter that identifies an execution location within the programming code, and select blocks of the programming code based on the program counter. The blocks correspond to segments of the programming code, and are associated with type-based constraints that relate to types of variables that are used by the block. When executing the programming code, the device also compiles the selected blocks, caches the compiled blocks along with the type-based constraints, generates linking information between certain ones of the compiled blocks based on the type-based constraints, and executes the compiled blocks in an order based on the program counter, the type-based constraints, and the linking information.
    Type: Grant
    Filed: October 24, 2012
    Date of Patent: January 27, 2015
    Assignee: The MathWorks, Inc.
    Inventors: Ayon Basumallik, Brett W. Baker, Nikolay Mateev, Hongjun Zheng
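The caching of compiled blocks under type-based constraints can be sketched with a dictionary keyed by block identity plus variable types. This is a hypothetical illustration using Python's built-in `compile` as a stand-in for the dynamic-language compiler.

```python
compiled_cache = {}

def get_compiled(block_id, source, var_types):
    """Cache a compiled block under its type-based constraint: the
    combination of variable types it was specialized for. A new
    combination of types triggers recompilation of the same source
    block."""
    key = (block_id, tuple(sorted(var_types.items())))
    if key not in compiled_cache:
        compiled_cache[key] = compile(source, f"<block{block_id}>", "exec")
    return compiled_cache[key]
```

Executing blocks then amounts to looking up the cached specialization matching the current variable types, recompiling only when an unseen type combination appears.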