Patents by Inventor Aliasger Tayeb Zaidy
Aliasger Tayeb Zaidy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250036950
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory. A computing device running a compiler can interact with and/or probe an integrated circuit device to identify hardware characteristics of the integrated circuit device in performing matrix computations. The compiler can generate and optimize a result of compilation from a description of an artificial neural network based at least in part on the hardware characteristics of the integrated circuit device. The result of compilation can include first data representative of parameters of the artificial neural network and second data representative of instructions executable by the integrated circuit device to generate an output of the artificial neural network based on the first data and an input to the artificial neural network.
Type: Application
Filed: October 10, 2024
Publication date: January 30, 2025
Inventors: Aliasger Tayeb Zaidy, Marko Vitez, Eugenio Culurciello, Jaime Cummins, Andre Xian Ming Chang
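As a rough illustration of the probe-then-compile flow this abstract describes, the sketch below models a compiler that queries a device descriptor for its matrix-unit dimensions and emits a two-part result: parameter records ("first data") plus a tiled instruction stream ("second data"). All names here (DeviceProfile, probe_device, compile_network) are invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical device profile, standing in for the hardware
# characteristics a compiler might probe from the accelerator.
@dataclass
class DeviceProfile:
    matmul_rows: int   # rows the matrix unit processes per instruction
    matmul_cols: int   # columns per instruction

def probe_device() -> DeviceProfile:
    """Stand-in for interacting with / probing the integrated circuit."""
    return DeviceProfile(matmul_rows=16, matmul_cols=16)

def compile_network(layer_shapes, profile: DeviceProfile):
    """Split each layer's matrix multiply into hardware-sized tiles.

    Returns (first_data, second_data): parameter placement records and
    an instruction list, mirroring the two-part result in the abstract.
    """
    params, instructions = [], []
    for layer, (rows, cols) in enumerate(layer_shapes):
        params.append({"layer": layer, "shape": (rows, cols)})
        # Emit one MATMUL instruction per tile that fits the matrix unit.
        for r in range(0, rows, profile.matmul_rows):
            for c in range(0, cols, profile.matmul_cols):
                instructions.append(("MATMUL", layer, r, c))
    return params, instructions

params, instrs = compile_network([(32, 48), (48, 16)], probe_device())
print(len(instrs), "matrix instructions")  # tiling depends on the probed profile
```

The point of the probe step is visible in the last line: a device with a larger matrix unit would yield fewer, coarser instructions from the same network description.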
-
Publication number: 20240428853
Abstract: Systems, devices, and methods related to a deep learning accelerator and memory are described. For example, the accelerator can have processing units to perform at least matrix computations of an artificial neural network via execution of instructions. The processing units have a local memory to store operands of the instructions. The accelerator can access a random access memory via a system buffer, or without going through the system buffer. A fetch instruction can request an item, available at a memory address in the random access memory, to be loaded into the local memory at a local address. The fetch instruction can include a hint for the caching of the item in the system buffer. During execution of the instruction, the hint can be used to determine whether to load the item through the system buffer or to bypass the system buffer in loading the item.
Type: Application
Filed: September 5, 2024
Publication date: December 26, 2024
Inventors: Aliasger Tayeb Zaidy, Patrick Alan Estep, David Andrew Roberts
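To make the hinted-fetch mechanism concrete, here is a minimal sketch of the two load paths the abstract distinguishes. The class and field names (Accelerator, system_buffer, local_memory, Hint) are hypothetical stand-ins, not the patent's actual structures.

```python
from enum import Enum

class Hint(Enum):
    CACHE = "cache"    # stage the item in the system buffer
    BYPASS = "bypass"  # load directly, leaving the buffer untouched

class Accelerator:
    """Toy model of the hinted fetch path: `ram` stands in for the
    random access memory, `system_buffer` for the shared staging
    buffer, and `local_memory` for the processing units' operand
    store. Purely illustrative."""
    def __init__(self, ram):
        self.ram = ram
        self.system_buffer = {}
        self.local_memory = {}

    def fetch(self, mem_addr, local_addr, hint: Hint):
        if hint is Hint.CACHE:
            # Load through the system buffer so later fetches can hit it.
            if mem_addr not in self.system_buffer:
                self.system_buffer[mem_addr] = self.ram[mem_addr]
            item = self.system_buffer[mem_addr]
        else:
            # Bypass: one-time data shouldn't evict useful buffer contents.
            item = self.ram[mem_addr]
        self.local_memory[local_addr] = item

acc = Accelerator(ram={0x100: "weights", 0x200: "activations"})
acc.fetch(0x100, 0, Hint.CACHE)   # reused weights go through the buffer
acc.fetch(0x200, 1, Hint.BYPASS)  # streamed input skips the buffer
print(acc.local_memory, acc.system_buffer)
```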
-
Patent number: 12118460
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory. A computing device running a compiler can interact with and/or probe an integrated circuit device to identify hardware characteristics of the integrated circuit device in performing matrix computations. The compiler can generate and optimize a result of compilation from a description of an artificial neural network based at least in part on the hardware characteristics of the integrated circuit device. The result of compilation can include first data representative of parameters of the artificial neural network and second data representative of instructions executable by the integrated circuit device to generate an output of the artificial neural network based on the first data and an input to the artificial neural network.
Type: Grant
Filed: November 6, 2020
Date of Patent: October 15, 2024
Assignee: Micron Technology, Inc.
Inventors: Aliasger Tayeb Zaidy, Marko Vitez, Eugenio Culurciello, Jaime Cummins, Andre Xian Ming Chang
-
Publication number: 20240330190
Abstract: Disclosed in some examples are improved address prediction and memory preloading that leverage next-delta prediction and/or far-delta prediction for scheduling using a DNN. Previous memory access sequence data that identify one or more memory addresses previously accessed by one or more processors of a system may be processed and then converted into a sequence of delta values. The sequence of delta values is then mapped to one or more classes that are then input to a DNN. The DNN then outputs a predicted future class identifier sequence that represents addresses that the DNN predicts will be accessed by the processor in the future. The predicted future class identifier sequence is then converted back to a predicted delta value sequence and back into a set of one or more predicted addresses.
Type: Application
Filed: June 7, 2024
Publication date: October 3, 2024
Inventors: Aliasger Tayeb Zaidy, David Andrew Roberts, Patrick Michael Sheridan, Lukasz Burzawa
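The abstract walks through a concrete pipeline: addresses → deltas → classes → DNN → predicted classes → deltas → addresses. The sketch below traces those conversions end to end; the DNN itself is replaced by a trivial most-frequent-class stand-in (a real system would use a trained sequence model), and all function names are invented for illustration.

```python
import numpy as np

def addresses_to_deltas(addresses):
    """Convert an absolute address trace into successive differences."""
    return [b - a for a, b in zip(addresses, addresses[1:])]

def deltas_to_classes(deltas, vocab):
    """Map each delta onto a small class vocabulary (the DNN's tokens)."""
    return [vocab.setdefault(d, len(vocab)) for d in deltas]

def predict_classes(class_seq, num_future=3):
    """Stand-in for the DNN: repeat the most common class seen so far."""
    counts = np.bincount(class_seq)
    return [int(counts.argmax())] * num_future

def classes_to_addresses(pred_classes, vocab, last_address):
    """Invert the class and delta mappings back into absolute addresses."""
    inv = {c: d for d, c in vocab.items()}
    out = []
    for c in pred_classes:
        last_address += inv[c]
        out.append(last_address)
    return out

trace = [0x1000, 0x1040, 0x1080, 0x10C0]   # strided access pattern
vocab = {}
classes = deltas_to_classes(addresses_to_deltas(trace), vocab)
preds = classes_to_addresses(predict_classes(classes), vocab, trace[-1])
print([hex(a) for a in preds])  # -> ['0x1100', '0x1140', '0x1180']
```

Working in delta space is what makes the approach practical: a handful of recurring stride classes can cover an address space far too large to predict over directly.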
-
Patent number: 12094531
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, the accelerator can have processing units to perform at least matrix computations of an artificial neural network via execution of instructions. The processing units have a local memory to store operands of the instructions. The accelerator can access a random access memory via a system buffer, or without going through the system buffer. A fetch instruction can request an item, available at a memory address in the random access memory, to be loaded into the local memory at a local address. The fetch instruction can include a hint for the caching of the item in the system buffer. During execution of the instruction, the hint can be used to determine whether to load the item through the system buffer or to bypass the system buffer in loading the item.
Type: Grant
Filed: January 11, 2021
Date of Patent: September 17, 2024
Assignee: Micron Technology, Inc.
Inventors: Aliasger Tayeb Zaidy, Patrick Alan Estep, David Andrew Roberts
-
Patent number: 12007899
Abstract: Disclosed in some examples are improved address prediction and memory preloading that leverage next-delta prediction and/or far-delta prediction for scheduling using a DNN. Previous memory access sequence data that identify one or more memory addresses previously accessed by one or more processors of a system may be processed and then converted into a sequence of delta values. The sequence of delta values is then mapped to one or more classes that are then input to a DNN. The DNN then outputs a predicted future class identifier sequence that represents addresses that the DNN predicts will be accessed by the processor in the future. The predicted future class identifier sequence is then converted back to a predicted delta value sequence and back into a set of one or more predicted addresses.
Type: Grant
Filed: July 18, 2022
Date of Patent: June 11, 2024
Assignee: Micron Technology, Inc.
Inventors: Aliasger Tayeb Zaidy, David Andrew Roberts, Patrick Michael Sheridan, Lukasz Burzawa
-
Patent number: 11829627
Abstract: Various embodiments provide for one or more processor instructions and memory instructions that enable a memory sub-system to predict a schedule for migrating data between memory devices, which can be part of a memory sub-system.
Type: Grant
Filed: August 16, 2021
Date of Patent: November 28, 2023
Assignee: Micron Technology, Inc.
Inventors: David Andrew Roberts, Aliasger Tayeb Zaidy
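The abstract is terse, but the end product of such prediction is a migration schedule between memory tiers. The sketch below shows only that last step, assuming predicted-hot pages are already known; the prediction mechanism itself, the FIFO demotion policy, and all names are invented for illustration.

```python
# Hypothetical two-tier layout: a small fast memory and a large slow one.
FAST_CAPACITY = 2

def plan_migrations(predicted_pages, fast_pages):
    """Given pages predicted to be accessed soon, schedule promotions
    into fast memory (demoting old residents to make room)."""
    schedule = []
    fast = list(fast_pages)
    for page in predicted_pages:
        if page in fast:
            continue  # already resident in the fast tier
        if len(fast) >= FAST_CAPACITY:
            victim = fast.pop(0)          # demote the oldest resident
            schedule.append(("demote", victim))
        fast.append(page)
        schedule.append(("promote", page))
    return schedule

print(plan_migrations(predicted_pages=[7, 3], fast_pages=[1, 2]))
# -> [('demote', 1), ('promote', 7), ('demote', 2), ('promote', 3)]
```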
-
Publication number: 20230100328
Abstract: Disclosed in some examples are improved address prediction and memory preloading that leverage next-delta prediction and/or far-delta prediction for scheduling using a DNN. Previous memory access sequence data that identify one or more memory addresses previously accessed by one or more processors of a system may be processed and then converted into a sequence of delta values. The sequence of delta values is then mapped to one or more classes that are then input to a DNN. The DNN then outputs a predicted future class identifier sequence that represents addresses that the DNN predicts will be accessed by the processor in the future. The predicted future class identifier sequence is then converted back to a predicted delta value sequence and back into a set of one or more predicted addresses.
Type: Application
Filed: July 18, 2022
Publication date: March 30, 2023
Inventors: Aliasger Tayeb Zaidy, David Andrew Roberts, Patrick Michael Sheridan, Lukasz Burzawa
-
Publication number: 20230051103
Abstract: Various embodiments provide for one or more processor instructions and memory instructions that enable a memory sub-system to predict a schedule for migrating data between memory devices, which can be part of a memory sub-system.
Type: Application
Filed: August 16, 2021
Publication date: February 16, 2023
Inventors: David Andrew Roberts, Aliasger Tayeb Zaidy
-
Publication number: 20220223201
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, the accelerator can have processing units to perform at least matrix computations of an artificial neural network via execution of instructions. The processing units have a local memory to store operands of the instructions. The accelerator can access a random access memory via a system buffer, or without going through the system buffer. A fetch instruction can request an item, available at a memory address in the random access memory, to be loaded into the local memory at a local address. The fetch instruction can include a hint for the caching of the item in the system buffer. During execution of the instruction, the hint can be used to determine whether to load the item through the system buffer or to bypass the system buffer in loading the item.
Type: Application
Filed: January 11, 2021
Publication date: July 14, 2022
Inventors: Aliasger Tayeb Zaidy, Patrick Alan Estep, David Andrew Roberts
-
Publication number: 20220147809
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory. A compiler can convert a description of an artificial neural network into a compiler output through optimization and/or selection of hardware options of the integrated circuit device. The compiler output can include parameters of the artificial neural network, instructions executable by processing units of the Deep Learning Accelerator to generate an output of the artificial neural network responsive to an input to the artificial neural network, and hardware options to be stored in registers connected to control hardware configurations of the processing units.
Type: Application
Filed: November 6, 2020
Publication date: May 12, 2022
Inventors: Aliasger Tayeb Zaidy, Marko Vitez, Eugenio Culurciello, Jaime Cummins, Andre Xian Ming Chang
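The distinctive detail here is the third component of the compiler output: hardware options destined for control registers. A minimal sketch of that three-part output follows; the register map, option flags, and function names are all hypothetical.

```python
# Hypothetical control-register map for configurable processing units.
REG_PRECISION = 0x00   # e.g. 0 = int8, 1 = fp16
REG_ROUNDING  = 0x04   # e.g. 0 = truncate, 1 = round-to-nearest

def compile_with_hw_options(description, use_fp16, round_nearest):
    """Return the three-part output sketched in the abstract: network
    parameters, executable instructions, and the hardware options to
    be written into control registers before execution."""
    parameters = {"weights": description["weights"]}
    instructions = [("MATMUL", layer) for layer in range(description["layers"])]
    register_writes = [
        (REG_PRECISION, 1 if use_fp16 else 0),
        (REG_ROUNDING, 1 if round_nearest else 0),
    ]
    return parameters, instructions, register_writes

_, instrs, regs = compile_with_hw_options(
    {"weights": [0.5, -0.25], "layers": 2}, use_fp16=True, round_nearest=True)
print(instrs, regs)
```

Baking the option selection into register writes lets the same instruction stream run under different hardware configurations without recompiling the instructions themselves.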
-
Publication number: 20220147812
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory (RAM). A compiler has an artificial neural network configured to identify an optimized compilation option for an artificial neural network to be compiled by the compiler and/or for a hardware platform of Deep Learning Accelerators. The artificial neural network of the compiler can be trained via machine learning to identify the optimized compilation option based on the features of the artificial neural network to be compiled and/or features of the hardware platform on which the compiler output will be executed.
Type: Application
Filed: November 6, 2020
Publication date: May 12, 2022
Inventors: Andre Xian Ming Chang, Aliasger Tayeb Zaidy, Marko Vitez, Michael Cody Glapa, Abhishek Chaurasia, Eugenio Culurciello
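In other words, the compiler itself embeds a learned model that maps features of the network-to-be-compiled and of the target platform to a compilation option. The sketch below shrinks that idea to a single linear scoring layer with made-up weights; the option names and feature layout are invented, and a real embedded network would be trained on (features → best option) examples.

```python
import numpy as np

# Tiny stand-in for the compiler's embedded neural network: one linear
# layer scoring each candidate compilation option. Weights are made up.
OPTIONS = ["unroll_loops", "tile_large", "fuse_layers"]
W = np.array([[0.9, 0.1, -0.3],
              [0.2, 0.8, 0.1],
              [-0.1, 0.3, 0.7]])

def pick_option(network_features, platform_features):
    """Score options from features of the artificial neural network to
    be compiled and of the hardware platform; pick the best-scoring one."""
    x = np.asarray(network_features + platform_features)
    scores = W @ x
    return OPTIONS[int(scores.argmax())]

# features: [depth_norm, width_norm] + [buffer_size_norm]
print(pick_option([0.2, 0.9], [0.4]))  # -> 'tile_large' for a wide network
```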
-
Publication number: 20220147811
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory (RAM). A compiler can identify a plurality of portions of an artificial neural network for implementation on a plurality of such integrated circuit devices respectively. The compiler converts a description of the artificial neural network into a plurality of compiler outputs executable on the plurality of devices to generate an output of the artificial neural network responsive to an input to the artificial neural network. Intermediate results are communicated among the devices in generating the output of the artificial neural network.
Type: Application
Filed: November 6, 2020
Publication date: May 12, 2022
Inventors: Jaime Cummins, Marko Vitez, Eugenio Culurciello, Andre Xian Ming Chang, Aliasger Tayeb Zaidy
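A minimal way to picture this multi-device scheme is a pipeline: the compiler assigns contiguous portions of the layer list to devices, and each device forwards its intermediate result to the next. The even split and stubbed layer computation below are simplifications for illustration only; a real compiler would balance portions by cost.

```python
def partition_layers(layer_names, num_devices):
    """Split an ordered layer list into contiguous portions, one per
    device, as in the abstract's multi-device compilation."""
    per = -(-len(layer_names) // num_devices)  # ceiling division
    return [layer_names[i:i + per] for i in range(0, len(layer_names), per)]

def run_pipeline(portions, x):
    """Chain devices: each portion's output is the intermediate result
    communicated to the next device. Layers are stubbed as +1 here."""
    for device_id, portion in enumerate(portions):
        for _layer in portion:
            x = x + 1  # stand-in for the layer's real computation
        print(f"device {device_id} forwards intermediate result {x}")
    return x

portions = partition_layers(["conv1", "conv2", "fc1", "fc2"], num_devices=2)
print(run_pipeline(portions, x=0))  # -> 4 after both devices run
```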
-
Publication number: 20220147808
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory (RAM). A compiler can convert a description of an artificial neural network into a generic result of compilation according to a specification of a generic Deep Learning Accelerator and then map that generic result of compilation into a platform-specific result according to a specification of a specific hardware platform of Deep Learning Accelerators. The platform-specific result can be stored into the RAM of the integrated circuit device to enable the integrated circuit device to autonomously perform the computation of the artificial neural network in generating an output in response to an input to the artificial neural network.
Type: Application
Filed: November 6, 2020
Publication date: May 12, 2022
Inventors: Andre Xian Ming Chang, Aliasger Tayeb Zaidy, Eugenio Culurciello, Jaime Cummins, Marko Vitez
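This is a two-stage lowering: compile once to a generic accelerator instruction set, then map that generic result onto each concrete platform. The sketch below shows the second stage as a simple opcode-expansion table; the generic program, platform mapping, and all opcode names are invented for illustration.

```python
# Hypothetical generic program for a "generic Deep Learning
# Accelerator", plus one platform-specific opcode mapping.
GENERIC_PROGRAM = [("GEMM", "layer0"), ("RELU", "layer0")]

PLATFORM_A = {
    # generic opcode -> sequence of platform-native opcodes
    "GEMM": ["A_LOAD_TILES", "A_MAC_ARRAY", "A_STORE_ACC"],
    "RELU": ["A_VMAX0"],
}

def lower_to_platform(generic_program, mapping):
    """Second compilation stage: map the generic result onto one
    specific hardware platform's instruction set."""
    native = []
    for opcode, operand in generic_program:
        native.extend((op, operand) for op in mapping[opcode])
    return native

# The platform-specific result is what would be stored into the
# device's RAM for autonomous execution.
for instr in lower_to_platform(GENERIC_PROGRAM, PLATFORM_A):
    print(instr)
```

The benefit of the split is reuse: the expensive front-end work over the network description happens once, and only the cheap mapping stage is repeated per hardware platform.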
-
Publication number: 20220147813
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory (RAM). A compiler is configured to generate instructions executable by the Deep Learning Accelerator from a description of a target artificial neural network. The instructions may call routines in a runtime library that has an embedded artificial neural network configured to predict optimized execution options available to implement the routines. The prediction is based at least in part on a pattern of data being processed in the target artificial neural network and/or a pattern of usages of the routines by the instructions.
Type: Application
Filed: November 6, 2020
Publication date: May 12, 2022
Inventors: Andre Xian Ming Chang, Aliasger Tayeb Zaidy, Marko Vitez, Eugenio Culurciello
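Here the learned selection happens at runtime, inside the library the compiled instructions call into. The sketch below shows a routine with two kernel variants and a predictor consulted on each call; the predictor's threshold rule stands in for the embedded neural network, and every name and rule is invented for illustration.

```python
def conv_direct(size):       # stand-in kernel variants for one routine
    return f"direct conv on {size}"

def conv_winograd(size):
    return f"winograd conv on {size}"

VARIANTS = [conv_direct, conv_winograd]

def predict_variant(data_size, recent_calls):
    """Stand-in for the library's embedded neural network: picks an
    execution option from the data pattern and routine-usage pattern.
    The hand-set weights below are invented purely for illustration."""
    score = 0.7 * (data_size > 64) + 0.3 * (recent_calls > 10)
    return VARIANTS[int(score > 0.5)]

class Runtime:
    """Routines called from compiled instructions; each call consults
    the predictor before choosing how to execute."""
    def __init__(self):
        self.calls = 0

    def conv(self, size):
        self.calls += 1
        impl = predict_variant(size, self.calls)
        return impl(size)

rt = Runtime()
print(rt.conv(32))    # small input -> direct variant
print(rt.conv(128))   # large input -> winograd variant
```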
-
Publication number: 20220147810
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory. A computing device running a compiler can interact with and/or probe an integrated circuit device to identify hardware characteristics of the integrated circuit device in performing matrix computations. The compiler can generate and optimize a result of compilation from a description of an artificial neural network based at least in part on the hardware characteristics of the integrated circuit device. The result of compilation can include first data representative of parameters of the artificial neural network and second data representative of instructions executable by the integrated circuit device to generate an output of the artificial neural network based on the first data and an input to the artificial neural network.
Type: Application
Filed: November 6, 2020
Publication date: May 12, 2022
Inventors: Aliasger Tayeb Zaidy, Marko Vitez, Eugenio Culurciello, Jaime Cummins, Andre Xian Ming Chang