Patents by Inventor Tomo Lazovich
Tomo Lazovich has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Publication number: 20220416908
Abstract: Systems and methods for performing signed matrix operations using a linear photonic processor are provided. The linear photonic processor is formed as an array of first amplitude modulators and second amplitude modulators, the first amplitude modulators configured to encode elements of a vector into first optical signals and the second amplitude modulators configured to encode a product between the vector elements and matrix elements into second optical signals. An apparatus may be used to implement a signed value of an output of the linear processor. The linear photonic processor may be configured to perform matrix-vector and/or matrix-matrix operations.
Type: Application
Filed: June 14, 2022
Publication date: December 29, 2022
Applicant: Lightmatter, Inc.
Inventors: Darius Bunandar, Nicholas C. Harris, Michael Gould, Carl Ramey, Tomo Lazovich
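The abstract does not detail how signed values are produced from intensity-only amplitude modulators. A common approach, sketched below purely as an illustration, is to split each operand into non-negative positive and negative parts and difference the resulting all-positive products, as a balanced photodetector pair might; the function name and the splitting scheme are assumptions, not the patented circuit.

```python
import numpy as np

def signed_matvec(matrix, vector):
    """Emulate a photonic matrix-vector product in which modulators
    can only encode non-negative amplitudes: split each operand into
    positive and negative parts, then difference the outputs."""
    m_pos, m_neg = np.maximum(matrix, 0), np.maximum(-matrix, 0)
    v_pos, v_neg = np.maximum(vector, 0), np.maximum(-vector, 0)
    # Each of the four partial products involves only non-negative
    # values, so each is realizable as an optical intensity.
    return (m_pos @ v_pos + m_neg @ v_neg) - (m_pos @ v_neg + m_neg @ v_pos)

M = np.array([[1.0, -2.0], [-3.0, 4.0]])
x = np.array([0.5, -1.0])
assert np.allclose(signed_matvec(M, x), M @ x)
```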
Publication number: 20220366308
Abstract: Methods and apparatus for training a matrix-based differentiable program using a photonics-based processor. The matrix-based differentiable program includes at least one matrix-valued variable associated with a matrix of values in a Euclidean vector space. The method comprises configuring components of the photonics-based processor to represent the matrix of values as an angular representation; processing, using the components of the photonics-based processor, training data to compute an error vector; determining, in parallel, at least some gradients of the parameters of the angular representation, wherein the determining is based on the error vector and a current input training vector; and updating the matrix of values by updating the angular representation based on the determined gradients.
Type: Application
Filed: July 13, 2022
Publication date: November 17, 2022
Applicant: Lightmatter, Inc.
Inventors: Tomo Lazovich, Darius Bunandar, Nicholas C. Harris, Martin B. Z. Forsythe
Patent number: 11475367
Abstract: Methods and apparatus for training a matrix-based differentiable program using a photonics-based processor. The matrix-based differentiable program includes at least one matrix-valued variable associated with a matrix of values in a Euclidean vector space. The method comprises configuring components of the photonics-based processor to represent the matrix of values as an angular representation; processing, using the components of the photonics-based processor, training data to compute an error vector; determining, in parallel, at least some gradients of the parameters of the angular representation, wherein the determining is based on the error vector and a current input training vector; and updating the matrix of values by updating the angular representation based on the determined gradients.
Type: Grant
Filed: June 29, 2020
Date of Patent: October 18, 2022
Assignee: Lightmatter, Inc.
Inventors: Tomo Lazovich, Darius Bunandar, Nicholas C. Harris, Martin B. Z. Forsythe
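An "angular representation" of a weight matrix suggests parameterization by rotation angles, as in Givens-rotation or Mach-Zehnder interferometer decompositions of orthogonal matrices. The NumPy sketch below is a hypothetical illustration of that idea: it composes a matrix from a list of angles and estimates the per-angle gradients by central finite differences, the kind of per-parameter evaluation a parallel processor could perform simultaneously. The helper names and the finite-difference scheme are assumptions, not the claimed method.

```python
import numpy as np

def givens(theta, i, j, n):
    """n x n Givens rotation acting on coordinates i and j."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j] = c, c
    G[i, j], G[j, i] = -s, s
    return G

def matrix_from_angles(thetas, pairs, n):
    """Compose an orthogonal matrix from a list of rotation angles
    (a hypothetical 'angular representation' of a weight matrix)."""
    W = np.eye(n)
    for theta, (i, j) in zip(thetas, pairs):
        W = givens(theta, i, j, n) @ W
    return W

def angle_gradients(thetas, pairs, n, x, target, eps=1e-6):
    """Central finite-difference gradients of a squared error with
    respect to each angle; each entry is independent of the others,
    so all of them could be evaluated in parallel."""
    def loss(ts):
        y = matrix_from_angles(ts, pairs, n) @ x
        return 0.5 * np.sum((y - target) ** 2)
    grads = []
    for k in range(len(thetas)):
        up, down = thetas.copy(), thetas.copy()
        up[k] += eps
        down[k] -= eps
        grads.append((loss(up) - loss(down)) / (2 * eps))
    return np.array(grads)
```

A gradient-descent step on the angles (`thetas -= lr * grads`) then updates the matrix without ever leaving the angular parameterization.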
Publication number: 20220261645
Abstract: Methods and systems for training neural networks using low-bitwidth accelerators are described. The methods described herein use moment-penalization functions. For example, a method comprises producing a modified data set by training a neural network using a moment-penalization function and the data set. The moment-penalization function is configured to penalize a moment associated with the neural network. Training the neural network in turn comprises quantizing the data set to obtain a fixed-point data set so that the fixed-point data set represents the data set in a fixed-point representation, and passing the fixed-point data set through an analog accelerator. The inventors have recognized that training a neural network using a modified objective function improves the accuracy and robustness of the neural network notwithstanding the use of low-bitwidth accelerators.
Type: Application
Filed: February 15, 2022
Publication date: August 18, 2022
Applicant: Lightmatter, Inc.
Inventors: Nicholas Dronen, Tyler J. Kenney, Tomo Lazovich, Ayon Basumallik, Darius Bunandar
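The abstract does not say which moment is penalized. The sketch below picks the fourth central moment of the weight distribution as a hypothetical example, since heavy-tailed weights waste the dynamic range of a low-bitwidth fixed-point format; the function names and the penalty weight are illustrative assumptions.

```python
import numpy as np

def moment_penalty(weights, order=4):
    """Central moment of the weight distribution. A large fourth
    moment means heavy tails, which waste the dynamic range of a
    low-bitwidth fixed-point format. (The choice of moment here is
    a hypothetical example; the abstract does not specify one.)"""
    w = weights.ravel()
    centered = w - w.mean()
    return np.mean(centered ** order)

def penalized_loss(task_loss, weights, lam=1e-3):
    """Moment-penalized objective: task loss plus a weighted moment term."""
    return task_loss + lam * moment_penalty(weights)
```

Minimizing `penalized_loss` instead of the task loss alone nudges training toward weight distributions that survive quantization better.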
Patent number: 11398871
Abstract: Systems and methods for performing signed matrix operations using a linear photonic processor are provided. The linear photonic processor is formed as an array of first amplitude modulators and second amplitude modulators, the first amplitude modulators configured to encode elements of a vector into first optical signals and the second amplitude modulators configured to encode a product between the vector elements and matrix elements into second optical signals. An apparatus may be used to implement a signed value of an output of the linear processor. The linear photonic processor may be configured to perform matrix-vector and/or matrix-matrix operations.
Type: Grant
Filed: July 28, 2020
Date of Patent: July 26, 2022
Assignee: Lightmatter, Inc.
Inventors: Darius Bunandar, Nicholas C. Harris, Michael Gould, Carl Ramey, Tomo Lazovich
Publication number: 20220172052
Abstract: Described herein are techniques of training a machine learning model and performing inference using an analog processor. Some embodiments mitigate the loss in performance of a machine learning model resulting from a lower precision of an analog processor by using an adaptive block floating-point representation of numbers for the analog processor. Some embodiments mitigate the loss in performance of a machine learning model due to noise that is present when using an analog processor. The techniques involve training the machine learning model such that it is robust to noise.
Type: Application
Filed: November 29, 2021
Publication date: June 2, 2022
Applicant: Lightmatter, Inc.
Inventors: Darius Bunandar, Ludmila Levkova, Nicholas Dronen, Lakshmi Nair, David Widemann, David Walter, Martin B. Z. Forsythe, Tomo Lazovich, Ayon Basumallik, Nicholas C. Harris
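As a rough illustration of the block floating-point idea mentioned in the abstract: each block of values shares one exponent, taken here from the block's largest magnitude, and mantissas are rounded to a fixed bit width. The block size, bit width, and exponent rule in this sketch are illustrative assumptions, and the "adaptive" selection of the format is not modeled.

```python
import numpy as np

def block_floating_point(x, mantissa_bits=8, block_size=4):
    """Quantize x so that each block of values shares one exponent
    (taken from the block's largest magnitude) while mantissas are
    rounded to a fixed bit width."""
    flat = x.astype(float).ravel()
    q = np.empty_like(flat)
    for start in range(0, flat.size, block_size):
        block = flat[start:start + block_size]
        max_abs = np.max(np.abs(block))
        if max_abs == 0:
            q[start:start + block.size] = 0.0
            continue
        exp = np.floor(np.log2(max_abs))            # shared block exponent
        scale = 2.0 ** (exp - (mantissa_bits - 1))  # mantissa step size
        q[start:start + block.size] = np.round(block / scale) * scale
    return q.reshape(x.shape)
```

Compared with per-tensor fixed point, the shared-per-block exponent lets blocks with very different magnitudes each use their full mantissa range.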
Publication number: 20220100973
Abstract: Techniques for computing matrix operations for arbitrarily large matrices on a finite-sized hybrid analog-digital matrix processor are described. Techniques for gain adjustment in a finite-sized hybrid analog-digital matrix processor are described which enable the system to obtain higher energy efficiencies, greater physical density, and improved numerical accuracy. In some embodiments, these techniques enable maximization of the predictive accuracy of a GEMM-based convolutional neural network using low-precision data representations.
Type: Application
Filed: December 8, 2021
Publication date: March 31, 2022
Applicant: Lightmatter, Inc.
Inventors: Tyler J. Kenney, Martin B. Z. Forsythe, Tomo Lazovich, Darius Bunandar
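One standard way to run arbitrarily large matrix products on a fixed-size core, consistent with the abstract's description, is block tiling: partition both operands into processor-sized tiles, compute each tile product on the (here simulated) analog core, and accumulate the partial results digitally. The tile size and helper names below are illustrative; gain adjustment is not modeled.

```python
import numpy as np

def tiled_matmul(A, B, tile=2):
    """Compute A @ B using only tile x tile products, accumulating
    the partial results digitally (the tile product stands in for a
    fixed-size analog matrix core)."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"

    def pad(M, rows, cols):
        out = np.zeros((rows, cols))
        out[:M.shape[0], :M.shape[1]] = M
        return out

    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                a = pad(A[i:i + tile, p:p + tile], tile, tile)
                b = pad(B[p:p + tile, j:j + tile], tile, tile)
                c = a @ b  # the only operation the core must support
                h, w = min(tile, n - i), min(tile, m - j)
                C[i:i + h, j:j + w] += c[:h, :w]
    return C
```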
Publication number: 20220043474
Abstract: Systems and methods for performing matrix operations using a path-number balanced optical network are provided. The optical network is formed as an array including active optical components and passive optical components arranged at a substantially central location of the array. The optical network includes at least NM active optical components which are used to implement a first matrix of any size N×M by embedding the first matrix in a second matrix of a larger size. The optical network performs matrix-vector and matrix-matrix operations by propagating one or more pluralities of optical signals corresponding to an input vector through the optical network.
Type: Application
Filed: October 21, 2021
Publication date: February 10, 2022
Applicant: Lightmatter, Inc.
Inventors: Darius Bunandar, Martin B. Z. Forsythe, Michael Gould, Tomo Lazovich
Publication number: 20220036185
Abstract: A training system for training a machine learning model such as a neural network may have a different configuration and/or hardware components than a target device that employs the trained neural network. For example, the training system may use a higher precision format to represent neural network parameters than the target device. In another example, the target device may use analog and digital processing hardware to compute an output of the neural network whereas the training system may have used only digital processing hardware to train the neural network. The difference in configuration and/or hardware components of the target device may introduce quantization error into parameters of the neural network, and thus affect performance of the neural network on the target device. Described herein is a training system that trains a neural network for use on a target device that reduces loss in performance resulting from quantization error.
Type: Application
Filed: July 30, 2021
Publication date: February 3, 2022
Applicant: Lightmatter, Inc.
Inventors: Nicholas Dronen, Tomo Lazovich, Ayon Basumallik, Darius Bunandar
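A common way to train for a lower-precision target, in the spirit of this abstract, is quantization-aware training: the forward pass sees weights rounded to the target's grid, while a full-precision master copy receives the updates (a straight-through scheme). The sketch below is a minimal scalar illustration under those assumptions, not the patented training system.

```python
import numpy as np

def fake_quantize(x, bits=8):
    """Round to a symmetric fixed-point grid and scale back, so the
    forward pass sees the rounding error the target device will add."""
    max_abs = np.max(np.abs(x))
    if max_abs == 0:
        return x
    scale = max_abs / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

# One straight-through training step on a scalar model y = w * x:
w = 0.7                                       # full-precision master weight
x_in, target, lr = 2.0, 1.0, 0.1
wq = fake_quantize(np.array([w]), bits=4)[0]  # forward pass uses quantized w
grad = (wq * x_in - target) * x_in            # gradient through the quantized value
w -= lr * grad                                # but the update hits the master copy
```

Because the loss is evaluated with the quantized weight, the model converges to parameters that still perform well after deployment-time rounding.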
Patent number: 11209856
Abstract: Systems and methods for performing matrix operations using a path-number balanced optical network are provided. The optical network is formed as an array including active optical components and passive optical components arranged at a substantially central location of the array. The optical network includes at least NM active optical components which are used to implement a first matrix of any size N×M by embedding the first matrix in a second matrix of a larger size. The optical network performs matrix-vector and matrix-matrix operations by propagating one or more pluralities of optical signals corresponding to an input vector through the optical network.
Type: Grant
Filed: February 24, 2020
Date of Patent: December 28, 2021
Assignee: Lightmatter, Inc.
Inventors: Darius Bunandar, Martin B. Z. Forsythe, Michael Gould, Tomo Lazovich
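The embedding idea, in the abstract's arithmetic, resembles zero-padding: an N×M matrix is placed in the corner of a larger square matrix that the hardware natively realizes, the input is padded to match, and the extra outputs are discarded. The sketch below illustrates only this padding arithmetic; the path-number balancing of the optical array itself is not modeled, and the function name is an assumption.

```python
import numpy as np

def embed_and_apply(A, x):
    """Apply an N x M matrix on hardware that natively realizes
    K x K square transforms: embed A in the top-left block of a
    zero-padded square matrix, pad the input, discard the padding."""
    n, m = A.shape
    k = max(n, m)
    big = np.zeros((k, k))
    big[:n, :m] = A          # the embedded 'first matrix'
    xp = np.zeros(k)
    xp[:m] = x               # pad the input vector to the native size
    return (big @ xp)[:n]    # keep only the meaningful outputs
```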
Publication number: 20210279432
Abstract: Techniques for computing matrix operations for arbitrarily large matrices on a finite-sized hybrid analog-digital matrix processor are described. Techniques for gain adjustment in a finite-sized hybrid analog-digital matrix processor are described which enable the system to obtain higher energy efficiencies, greater physical density, and improved numerical accuracy. In some embodiments, these techniques enable maximization of the predictive accuracy of a GEMM-based convolutional neural network using low-precision data representations.
Type: Application
Filed: May 3, 2021
Publication date: September 9, 2021
Applicant: Lightmatter, Inc.
Inventors: Tyler J. Kenney, Martin B. Z. Forsythe, Tomo Lazovich, Darius Bunandar
Patent number: 11023691
Abstract: Techniques for computing matrix operations for arbitrarily large matrices on a finite-sized hybrid analog-digital matrix processor are described. Techniques for gain adjustment in a finite-sized hybrid analog-digital matrix processor are described which enable the system to obtain higher energy efficiencies, greater physical density, and improved numerical accuracy. In some embodiments, these techniques enable maximization of the predictive accuracy of a GEMM-based convolutional neural network using low-precision data representations.
Type: Grant
Filed: August 17, 2020
Date of Patent: June 1, 2021
Assignee: Lightmatter, Inc.
Inventors: Tyler J. Kenney, Martin B. Z. Forsythe, Tomo Lazovich, Darius Bunandar
Publication number: 20210125066
Abstract: Described herein are techniques for determining an architecture of a machine learning model that optimizes the machine learning model. The system obtains a machine learning model configured with a first architecture of a plurality of architectures. The machine learning model has a first set of parameters. The system determines a second architecture using a quantization of the parameters of the machine learning model. The system updates the machine learning model to obtain a machine learning model configured with the second architecture.
Type: Application
Filed: October 27, 2020
Publication date: April 29, 2021
Applicant: Lightmatter, Inc.
Inventor: Tomo Lazovich
Publication number: 20210089906
Abstract: Methods and apparatus for pre-processing first data for use with a trained machine learning model. In some embodiments, the method may comprise accessing the first data, wherein the first data has a first precision; generating, based on at least a first portion of the first data, second data having a second precision lower than the first precision; and providing the second data as input to the trained machine learning model to generate model output.
Type: Application
Filed: September 22, 2020
Publication date: March 25, 2021
Applicant: Lightmatter, Inc.
Inventor: Tomo Lazovich
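A minimal sketch of precision-reducing pre-processing, assuming (purely for illustration) that the lower second precision is IEEE half precision: normalize the data into a well-scaled range, then downcast, keeping the scale so it can be undone or folded into the model's first layer. The normalization step and function name are illustrative assumptions, not the claimed method.

```python
import numpy as np

def preprocess(data):
    """Normalize into [-1, 1], then downcast to float16. Scaling
    first limits the relative error the cast introduces; the scale
    is returned so downstream code can account for it."""
    scale = float(np.max(np.abs(data)))
    if scale == 0.0:
        scale = 1.0
    return (data / scale).astype(np.float16), scale
```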
Publication number: 20210036783
Abstract: Systems and methods for performing signed matrix operations using a linear photonic processor are provided. The linear photonic processor is formed as an array of first amplitude modulators and second amplitude modulators, the first amplitude modulators configured to encode elements of a vector into first optical signals and the second amplitude modulators configured to encode a product between the vector elements and matrix elements into second optical signals. An apparatus may be used to implement a signed value of an output of the linear processor. The linear photonic processor may be configured to perform matrix-vector and/or matrix-matrix operations.
Type: Application
Filed: July 28, 2020
Publication date: February 4, 2021
Applicant: Lightmatter, Inc.
Inventors: Darius Bunandar, Nicholas C. Harris, Michael Gould, Carl Ramey, Tomo Lazovich
Patent number: 10866877
Abstract: A software instruction code repair system comprises an instruction code example pool containing a set of good instruction code examples and a set of bad instruction code examples. The system further comprises a sequence-to-sequence (seq2seq) network that is configured to generate a corrected instruction code example based on one example of the set of bad instruction code examples. The system further comprises a discriminator configured to randomly select either the corrected instruction code example or one instruction code example of the set of good instruction code examples, producing a selected instruction code example. The discriminator is further configured to determine whether the selected instruction code example most likely came from the instruction code example pool or from the seq2seq network.
Type: Grant
Filed: November 13, 2018
Date of Patent: December 15, 2020
Assignee: THE CHARLES STARK DRAPER LABORATORY, INC.
Inventors: Jacob Harer, Tomo Lazovich, Rebecca Russell, Onur Ozdemir, Louis Kim
Publication number: 20200380217
Abstract: Techniques for computing matrix operations for arbitrarily large matrices on a finite-sized hybrid analog-digital matrix processor are described. Techniques for gain adjustment in a finite-sized hybrid analog-digital matrix processor are described which enable the system to obtain higher energy efficiencies, greater physical density, and improved numerical accuracy. In some embodiments, these techniques enable maximization of the predictive accuracy of a GEMM-based convolutional neural network using low-precision data representations.
Type: Application
Filed: August 17, 2020
Publication date: December 3, 2020
Applicant: Lightmatter, Inc.
Inventors: Tyler J. Kenney, Martin B. Z. Forsythe, Tomo Lazovich, Darius Bunandar
Publication number: 20200334576
Abstract: Methods and apparatus for training a matrix-based differentiable program using a photonics-based processor. The matrix-based differentiable program includes at least one matrix-valued variable associated with a matrix of values in a Euclidean vector space. The method comprises configuring components of the photonics-based processor to represent the matrix of values as an angular representation; processing, using the components of the photonics-based processor, training data to compute an error vector; determining, in parallel, at least some gradients of the parameters of the angular representation, wherein the determining is based on the error vector and a current input training vector; and updating the matrix of values by updating the angular representation based on the determined gradients.
Type: Application
Filed: June 29, 2020
Publication date: October 22, 2020
Applicant: Lightmatter, Inc.
Inventors: Tomo Lazovich, Darius Bunandar, Nicholas C. Harris, Martin B. Z. Forsythe
Patent number: 10803258
Abstract: Techniques for computing matrix operations for arbitrarily large matrices on a finite-sized hybrid analog-digital matrix processor are described. Techniques for gain adjustment in a finite-sized hybrid analog-digital matrix processor are described which enable the system to obtain higher energy efficiencies, greater physical density, and improved numerical accuracy. In some embodiments, these techniques enable maximization of the predictive accuracy of a GEMM-based convolutional neural network using low-precision data representations.
Type: Grant
Filed: February 25, 2020
Date of Patent: October 13, 2020
Assignee: Lightmatter, Inc.
Inventors: Tyler J. Kenney, Martin B. Z. Forsythe, Tomo Lazovich, Darius Bunandar
Patent number: 10803259
Abstract: Techniques for computing matrix operations for arbitrarily large matrices on a finite-sized hybrid analog-digital matrix processor are described. Techniques for gain adjustment in a finite-sized hybrid analog-digital matrix processor are described which enable the system to obtain higher energy efficiencies, greater physical density, and improved numerical accuracy. In some embodiments, these techniques enable maximization of the predictive accuracy of a GEMM-based convolutional neural network using low-precision data representations.
Type: Grant
Filed: February 25, 2020
Date of Patent: October 13, 2020
Assignee: Lightmatter, Inc.
Inventors: Tyler J. Kenney, Martin B. Z. Forsythe, Tomo Lazovich, Darius Bunandar