Patents by Inventor Dan Alistarh
Dan Alistarh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11797855
Abstract: A system and method of accelerating execution of a NN model, by at least one processor, may include: receiving a first matrix A, representing elements of a kernel K of the NN model, and a second matrix B, representing elements of an input I to kernel K; and producing from matrix A a group-sparse matrix A′, comprising G tensors of elements. The number of elements in each tensor is defined by, or equal to, the number of entries in each index of an input tensor register used for a specific Single Instruction Multiple Data (SIMD) tensor operation, and all elements of A′ outside said G tensors are null. The system and method may further include executing kernel K on input I by performing at least one computation of the SIMD tensor operation, having as operands elements of a tensor of the G tensors and corresponding elements of the B matrix.
Type: Grant
Filed: November 4, 2021
Date of Patent: October 24, 2023
Assignee: Neuralmagic, Inc.
Inventors: Alexander Matveev, Dan Alistarh, Justin Kopinsky, Rati Gelashvili, Mark Kurtz, Nir Shavit
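As a rough illustration of the idea in this abstract, the sketch below approximates group-sparse execution in NumPy: each surviving aligned group of contiguous elements stands in for one SIMD-width tensor operation. The function names and the L1-norm pruning heuristic are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def group_sparse_prune(A, group):
    """Keep only the aligned groups of `group` contiguous elements whose
    L1 norm is at or above the per-row median; zero out everything else."""
    rows, cols = A.shape
    assert cols % group == 0
    scores = np.abs(A).reshape(rows, cols // group, group).sum(axis=2)
    keep = scores >= np.median(scores, axis=1, keepdims=True)
    mask = np.repeat(keep, group, axis=1)
    return A * mask, keep

def group_sparse_matmul(A_sparse, keep, B, group):
    """Multiply only the surviving groups; each group plays the role of
    one SIMD-width multiply-accumulate in the scheme the abstract describes."""
    rows = A_sparse.shape[0]
    C = np.zeros((rows, B.shape[1]))
    for i in range(rows):
        for g in np.flatnonzero(keep[i]):
            s = g * group
            C[i] += A_sparse[i, s:s + group] @ B[s:s + group]  # one "tensor op"
    return C
```

Because all pruned elements are null, skipping their groups gives the same product as a dense multiply against the pruned matrix, with work proportional only to the surviving groups.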
-
Patent number: 11636343
Abstract: Training a neural network (NN) may include training a NN N, and for S, a version of N to be sparsified (e.g. a copy of N), removing NN elements from S to create a sparsified version of S, and training S using outputs from N (e.g. “distillation”). A boosting or reintroduction phase may follow sparsification: training a NN may include, for a trained NN N and S, a sparsified version of N, re-introducing NN elements previously removed from S, and training S using outputs from N. The boosting phase need not use a NN sparsified by “distillation.” Training and sparsification, or training and reintroduction, may be performed iteratively or over repetitions.
Type: Grant
Filed: September 26, 2019
Date of Patent: April 25, 2023
Assignee: Neuralmagic Inc.
Inventor: Dan Alistarh
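A minimal, hedged sketch of the sparsify-distill-reintroduce loop this abstract describes, using a linear model as a stand-in for both N and S. The magnitude-based pruning rule, the learning rate, and the loop counts are all assumptions for illustration; the patent covers general NN elements and training procedures.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))
w_teacher = rng.normal(size=16)      # stands in for the trained dense network N
y_teacher = X @ w_teacher            # N's outputs, used as distillation targets

w = w_teacher.copy()                 # S starts as a copy of N
mask = np.ones(16)

for step in range(5):                # iterative sparsify-then-retrain
    # remove the two smallest-magnitude surviving weights from S
    alive = np.flatnonzero(mask)
    drop = alive[np.argsort(np.abs(w[alive]))[:2]]
    mask[drop] = 0.0
    w[drop] = 0.0
    # retrain S against N's outputs ("distillation") by gradient descent
    for _ in range(200):
        grad = X.T @ (X @ (w * mask) - y_teacher) / len(X)
        w -= 0.1 * grad * mask

kept_before_boost = int(mask.sum())  # 6 of 16 weights survive sparsification

# boosting phase: re-introduce the removed elements and retrain against N
mask[:] = 1.0
for _ in range(200):
    grad = X.T @ (X @ w - y_teacher) / len(X)
    w -= 0.1 * grad
```

The reintroduction phase lets previously removed elements recover capacity that the sparse model lost, matching the "boosting" step in the abstract.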
-
Publication number: 20220058486
Abstract: A system and method of accelerating execution of a NN model, by at least one processor, may include: receiving a first matrix A, representing elements of a kernel K of the NN model, and a second matrix B, representing elements of an input I to kernel K; and producing from matrix A a group-sparse matrix A′, comprising G tensors of elements. The number of elements in each tensor is defined by, or equal to, the number of entries in each index of an input tensor register used for a specific Single Instruction Multiple Data (SIMD) tensor operation, and all elements of A′ outside said G tensors are null. The system and method may further include executing kernel K on input I by performing at least one computation of the SIMD tensor operation, having as operands elements of a tensor of the G tensors and corresponding elements of the B matrix.
Type: Application
Filed: November 4, 2021
Publication date: February 24, 2022
Applicant: Neuralmagic Inc.
Inventors: Alexander MATVEEV, Dan ALISTARH, Justin KOPINSKY, Rati GELASHVILI, Mark KURTZ, Nir SHAVIT
-
Patent number: 11195095
Abstract: A system and method of accelerating execution of a NN model, by at least one processor, may include: receiving a first matrix A, representing elements of a kernel K of the NN model, and a second matrix B, representing elements of an input I to kernel K; and producing from matrix A a group-sparse matrix A′, comprising G tensors of elements. The number of elements in each tensor is defined by, or equal to, the number of entries in each index of an input tensor register used for a specific Single Instruction Multiple Data (SIMD) tensor operation, and all elements of A′ outside said G tensors are null. The system and method may further include executing kernel K on input I by performing at least one computation of the SIMD tensor operation, having as operands elements of a tensor of the G tensors and corresponding elements of the B matrix.
Type: Grant
Filed: August 5, 2020
Date of Patent: December 7, 2021
Assignee: NEURALMAGIC INC.
Inventors: Alexander Matveev, Dan Alistarh, Justin Kopinsky, Rati Gelashvili, Mark Kurtz, Nir Shavit
-
Publication number: 20210216872
Abstract: A system and a method of training a neural network (NN) model may include: receiving a pretrained NN model that may include a plurality of layers, each associated with an activation matrix; selecting at least one layer; and performing an iterative training process on the layer. The iterative training process may include: applying an activation threshold to the activation matrix of the layer; measuring an accuracy value of the NN model; retraining the layer while using a bimodal regularization function of one or more activation matrices of the NN model; and repeating the applying, measuring and retraining, where each repetition uses different activation threshold values. The repetition may continue until the maximal value of the activation threshold at which the NN model still converges is found.
Type: Application
Filed: January 14, 2021
Publication date: July 15, 2021
Applicant: Neuralmagic Inc.
Inventors: Mark KURTZ, Dan ALISTARH
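A toy sketch of the threshold-search loop described here, assuming a one-layer model: sweep increasing activation thresholds and keep the largest one that preserves accuracy. The model, the threshold grid, and the 95% accuracy floor are illustrative assumptions; the bimodal-regularizer retraining step between thresholds is noted but not implemented.

```python
import numpy as np

def thresholded_relu(x, tau):
    """Zero activations at or below tau, increasing activation sparsity."""
    return np.where(x > tau, x, 0.0)

def accuracy(tau, X, w, y):
    """Accuracy of a toy one-layer model under a thresholded activation."""
    act = thresholded_relu(X @ w, tau)
    return float(np.mean((act > 0.5) == y))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
w = rng.normal(size=8)
y = (X @ w) > 0.5          # labels the unthresholded model classifies correctly

# Sweep increasing thresholds and keep the largest one that preserves
# accuracy; in the full method, retraining with the bimodal regularizer
# would run between successive threshold values.
best_tau = 0.0
for tau in np.linspace(0.0, 2.0, 21):
    if accuracy(tau, X, w, y) >= 0.95:
        best_tau = tau
```

Raising the threshold zeroes more activations (cheaper inference) at the cost of accuracy, which is why the method searches for the maximal threshold at which the model still converges.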
-
Publication number: 20210042624
Abstract: A system and method of accelerating execution of a NN model, by at least one processor, may include: receiving a first matrix A, representing elements of a kernel K of the NN model, and a second matrix B, representing elements of an input I to kernel K; and producing from matrix A a group-sparse matrix A′, comprising G tensors of elements. The number of elements in each tensor is defined by, or equal to, the number of entries in each index of an input tensor register used for a specific Single Instruction Multiple Data (SIMD) tensor operation, and all elements of A′ outside said G tensors are null. The system and method may further include executing kernel K on input I by performing at least one computation of the SIMD tensor operation, having as operands elements of a tensor of the G tensors and corresponding elements of the B matrix.
Type: Application
Filed: August 5, 2020
Publication date: February 11, 2021
Applicant: Neuralmagic Inc.
Inventors: Alexander MATVEEV, Dan ALISTARH, Justin KOPINSKY, Rati GELASHVILI, Mark KURTZ, Nir SHAVIT
-
Publication number: 20200104717
Abstract: Training a neural network (NN) may include training a NN N, and for S, a version of N to be sparsified (e.g. a copy of N), removing NN elements from S to create a sparsified version of S, and training S using outputs from N (e.g. “distillation”). A boosting or reintroduction phase may follow sparsification: training a NN may include, for a trained NN N and S, a sparsified version of N, re-introducing NN elements previously removed from S, and training S using outputs from N. The boosting phase need not use a NN sparsified by “distillation.” Training and sparsification, or training and reintroduction, may be performed iteratively or over repetitions.
Type: Application
Filed: September 26, 2019
Publication date: April 2, 2020
Applicant: Neuralmagic Inc.
Inventor: Dan ALISTARH
-
Patent number: 9980149
Abstract: Techniques for distributed selection of white space channels are described. According to one or more embodiments, techniques described herein enable fair allocation of available white spaces among entities seeking access to the white spaces, such as base stations and client devices in a particular geographical region. According to one or more embodiments, techniques for distributed selection of white space channels enable individual network components to detect white space network attributes and distribute white space channels based on the detected attributes. Alternatively or additionally, multiple base stations can collaborate to share information about white spaces in a particular region.
Type: Grant
Filed: January 29, 2016
Date of Patent: May 22, 2018
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Bozidar Radunovic, Thomas Karagiannis, Dan A. Alistarh, Ghufran Baig
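The abstract gives no concrete allocation rule, so the sketch below is only a generic stand-in for "fair allocation" of channels: a greedy load-balancing pass where each station takes the least-loaded channel. The function name, inputs, and greedy rule are all assumptions, not the patented mechanism.

```python
from collections import Counter

def select_channels(stations, channels, demand):
    """Greedy stand-in for fair allocation: each base station in turn
    takes the channel carrying the least demand so far. In the patent,
    stations instead decide from locally detected white space attributes,
    optionally sharing information with neighboring base stations."""
    load = Counter({ch: 0 for ch in channels})
    assignment = {}
    for s in stations:
        ch = min(channels, key=lambda c: load[c])
        assignment[s] = ch
        load[ch] += demand[s]
    return assignment
```

With equal per-station demand, this spreads stations evenly across the available channels, which is the fairness property the abstract emphasizes.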
-
Publication number: 20180075347
Abstract: A computation node of a neural network training system is described. The node has a memory storing a plurality of gradients of a loss function of the neural network and an encoder. The encoder encodes the plurality of gradients by setting individual ones of the gradients either to zero or to a quantization level according to a probability related to at least the magnitude of the individual gradient. The node has a processor which sends the encoded plurality of gradients to one or more other computation nodes of the neural network training system over a communications network.
Type: Application
Filed: September 15, 2016
Publication date: March 15, 2018
Inventors: Dan Alistarh, Jerry Zheng Li, Ryota Tomioka, Milan Vojnovic
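A hedged sketch of the stochastic gradient encoding this abstract describes: each entry is mapped to zero or to a quantization level with probability tied to its magnitude. The normalization by the gradient's Euclidean norm and the number of levels are illustrative assumptions; the key property shown is that the encoding is unbiased in expectation.

```python
import numpy as np

def quantize_gradient(g, levels=4, rng=None):
    """Stochastically map each gradient entry to zero or to one of
    `levels` quantization levels, rounding up with probability equal to
    the fractional part so the encoding is unbiased in expectation."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return g.copy()
    scaled = np.abs(g) / norm * levels        # position in [0, levels]
    lower = np.floor(scaled)
    up = rng.random(g.shape) < (scaled - lower)
    return np.sign(g) * (lower + up) / levels * norm
```

Because each encoded entry is one of a few discrete levels (or zero), it can be transmitted to the other computation nodes with far fewer bits than a full-precision float, while gradient averaging still converges in expectation.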
-
Publication number: 20170223549
Abstract: Techniques for distributed selection of white space channels are described. According to one or more embodiments, techniques described herein enable fair allocation of available white spaces among entities seeking access to the white spaces, such as base stations and client devices in a particular geographical region. According to one or more embodiments, techniques for distributed selection of white space channels enable individual network components to detect white space network attributes and distribute white space channels based on the detected attributes. Alternatively or additionally, multiple base stations can collaborate to share information about white spaces in a particular region.
Type: Application
Filed: January 29, 2016
Publication date: August 3, 2017
Inventors: Bozidar Radunovic, Thomas Karagiannis, Dan A. Alistarh, Ghufran Baig