Patents by Inventor Jaime Cummins
Jaime Cummins has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250117659
Abstract: Systems, devices, and methods related to a deep learning accelerator and memory are described. An integrated circuit may be configured with: a central processing unit; a deep learning accelerator configured to execute instructions with matrix operands; random access memory configured to store first instructions of an artificial neural network executable by the deep learning accelerator and second instructions of an application executable by the central processing unit; one or more connections among the random access memory, the deep learning accelerator and the central processing unit; and an input/output interface to an external peripheral bus. While the deep learning accelerator is executing the first instructions to convert sensor data according to the artificial neural network to inference results, the central processing unit may execute the application that uses inference results from the artificial neural network.
Type: Application
Filed: December 17, 2024
Publication date: April 10, 2025
Inventors: Poorna Kale, Jaime Cummins
-
Patent number: 12237846
Abstract: Examples described herein utilize multi-layer neural networks, such as multi-layer recurrent neural networks, to estimate an error-reduced version of encoded data based on a retrieved version of encoded data (e.g., data encoded using one or more encoding techniques) from a memory. The neural networks and/or recurrent neural networks may have nonlinear mapping and distributed processing capabilities which may be advantageous in many systems employing a neural network or recurrent neural network to estimate an error-reduced version of encoded data for an error correction coding (ECC) decoder, e.g., to facilitate decoding of the error-reduced version of encoded data at the decoder. In this manner, neural networks or recurrent neural networks described herein may be used to improve or facilitate aspects of decoding at ECC decoders, e.g., by reducing errors present in encoded data due to storage or transmission.
Type: Grant
Filed: January 23, 2023
Date of Patent: February 25, 2025
Assignee: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
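The error-reduction stage this abstract describes sits in front of a conventional ECC decoder. The sketch below is illustrative only: a fixed saturating nonlinearity stands in for the trained multi-layer (recurrent) network, and the `weight` constant is a hypothetical placeholder rather than a learned parameter.

```python
import math

def denoise(soft_bits, weight=2.5):
    # Toy stand-in for the neural "error reduction" stage: a saturating
    # nonlinearity pushes noisy soft read values back toward the ideal
    # levels (+1/-1) before they reach the ECC decoder. The patented
    # design uses a trained network; this weight is illustrative.
    return [math.tanh(weight * x) for x in soft_bits]

def hard_decision(soft_bits):
    # Conventional slicer an ECC decoder would apply afterwards.
    return [1 if x >= 0 else 0 for x in soft_bits]

noisy = [0.8, -0.3, 0.1, -0.9]   # soft values read from memory
cleaned = denoise(noisy)
bits = hard_decision(cleaned)
```

The point of the preprocessing is that the denoised soft values sit closer to the nominal levels, which lowers the error rate the downstream decoder has to absorb.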
-
Patent number: 12237918
Abstract: Examples described herein include systems and methods which include wireless devices and systems with examples of mixing input data with coefficient data specific to a processing mode selection. For example, a computing system with processing units may mix the input data for a transmission in a radio frequency (RF) wireless domain with the coefficient data to generate output data that is representative of the transmission being processed according to a specific processing mode selection. The processing mode selection may include a single processing mode, a multi-processing mode, or a full processing mode. The processing mode selection may be associated with an aspect of a wireless protocol. Examples of systems and methods described herein may facilitate the processing of data for 5G wireless communications in a power-efficient and time-efficient manner.
Type: Grant
Filed: May 16, 2023
Date of Patent: February 25, 2025
Inventors: Fa-Long Luo, Jaime Cummins, Jeremy Chritz, Tamara Schmitz
-
Patent number: 12229044
Abstract: Methods, apparatuses, and systems for tensor memory access are described. Multiple data located in different physical addresses of memory may be concurrently read or written by, for example, employing various processing patterns of tensor or matrix related computations. A memory controller, which may comprise a data address generator, may be configured to generate a sequence of memory addresses for a memory access operation based on a starting address and a dimension of a tensor or matrix. At least one dimension of a tensor or matrix may correspond to a row, a column, a diagonal, a determinant, or an Nth dimension of the tensor or matrix. The memory controller may also comprise a buffer configured to read and write the data generated from or according to a sequence of memory addresses.
Type: Grant
Filed: August 16, 2022
Date of Patent: February 18, 2025
Assignee: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins, Tamara Schmitz, Jeremy Chritz
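The address-generation idea can be sketched in a few lines: given a base address and matrix dimensions, emit the address sequence for a row, column, or diagonal access pattern of a row-major matrix. The function and parameter names are invented for illustration; the patented generator is a hardware circuit inside the memory controller.

```python
def matrix_addresses(base, rows, cols, elem_size, pattern, index=0):
    """Generate the memory-address sequence for one access pattern of a
    row-major matrix. Illustrative sketch of a data address generator;
    names and the pattern set are assumptions, not from the patent."""
    if pattern == "row":
        # All elements of row `index`, contiguous in row-major layout.
        return [base + (index * cols + j) * elem_size for j in range(cols)]
    if pattern == "column":
        # Column `index`: stride of one full row between elements.
        return [base + (i * cols + index) * elem_size for i in range(rows)]
    if pattern == "diagonal":
        # Main diagonal: stride of (cols + 1) elements.
        return [base + i * (cols + 1) * elem_size for i in range(min(rows, cols))]
    raise ValueError("unknown pattern")
```

For a 4x4 matrix of 4-byte elements at `0x1000`, the row pattern yields contiguous addresses while the column and diagonal patterns yield strided sequences, which is exactly the kind of non-unit-stride traffic tensor workloads generate.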
-
Publication number: 20250053342
Abstract: Methods, systems, and apparatuses related to computational storage are described. For example, storage accessible to an accelerator may be shared between, and accessible to either of, a host and the accelerator. A computational storage system may include storage providing a portion of a shared file system accessible by a host and by accelerator logic of the computational storage system. Host interface logic may be configured to receive a storage command from the host to store data on the storage at a time the data is created. The host interface logic may be further configured to receive a storage command from the host for the accelerator logic to perform a computational task using the stored data on the storage. The accelerator logic can perform the computational task using the stored data on the storage.
Type: Application
Filed: October 25, 2024
Publication date: February 13, 2025
Applicant: Micron Technology, Inc.
Inventors: Shanyuan Gao, Sen Ma, Moon Mark Hur, Jaime Cummins
-
Publication number: 20250047320
Abstract: A system includes a first wireless communication device comprising a first baseband processor neural network configured to process at least part of data for transmission to a second wireless communication device according to a collaborative processing configuration while collaborative processing is enabled to generate a first radio frequency (RF) signal. The first wireless communication device is configured to transmit the first RF signal. The system further includes a third wireless communication device comprising a second baseband processor neural network configured to, while the collaborative processing is enabled, process at least part of the data for transmission to the second wireless communication device according to a collaborative processing configuration to generate a second RF signal. The third wireless communication device is configured to transmit the second RF signal in collaboration with transmission of the first RF signal by the first baseband processor.
Type: Application
Filed: February 21, 2024
Publication date: February 6, 2025
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
-
Publication number: 20250047309
Abstract: A wireless device includes a sensor configured to receive input data, an antenna configured to transmit a radio frequency (RF) signal that is based at least in part on the input data, and one or more processing units coupled with the sensor and operable, during an active time period, to process the input data using a single neural network for a plurality of processing stages. The plurality of processing stages includes a source data processing stage and communication processing stages.
Type: Application
Filed: February 21, 2024
Publication date: February 6, 2025
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
-
Publication number: 20250036950
Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory. A computing device running a compiler can interact with and/or probe an integrated circuit device to identify hardware characteristics of the integrated circuit device in performing matrix computations. The compiler can generate and optimize a result of compilation from a description of an artificial neural network based at least in part on the hardware characteristics of the integrated circuit device. The result of compilation can include first data representative of parameters of the artificial neural network and second data representative of instructions executable by the integrated circuit device to generate an output of the artificial neural network based on the first data and an input to the artificial neural network.
Type: Application
Filed: October 10, 2024
Publication date: January 30, 2025
Inventors: Aliasger Tayeb Zaidy, Marko Vitez, Eugenio Culurciello, Jaime Cummins, Andre Xian Ming Chang
-
Patent number: 12182704
Abstract: Systems, devices, and methods related to a deep learning accelerator and memory are described. An integrated circuit may be configured with: a central processing unit; a deep learning accelerator configured to execute instructions with matrix operands; random access memory configured to store first instructions of an artificial neural network executable by the deep learning accelerator and second instructions of an application executable by the central processing unit; one or more connections among the random access memory, the deep learning accelerator and the central processing unit; and an input/output interface to an external peripheral bus. While the deep learning accelerator is executing the first instructions to convert sensor data according to the artificial neural network to inference results, the central processing unit may execute the application that uses inference results from the artificial neural network.
Type: Grant
Filed: September 8, 2022
Date of Patent: December 31, 2024
Assignee: Micron Technology, Inc.
Inventors: Poorna Kale, Jaime Cummins
-
Patent number: 12175130
Abstract: Methods, systems, and apparatuses related to computational storage are described. For example, storage accessible to an accelerator may be shared between, and accessible to either of, a host and the accelerator. A computational storage system may include storage providing a portion of a shared file system accessible by a host and by accelerator logic of the computational storage system. Host interface logic may be configured to receive a storage command from the host to store data on the storage at a time the data is created. The host interface logic may be further configured to receive a storage command from the host for the accelerator logic to perform a computational task using the stored data on the storage. The accelerator logic can perform the computational task using the stored data on the storage.
Type: Grant
Filed: January 9, 2023
Date of Patent: December 24, 2024
Assignee: Micron Technology, Inc.
Inventors: Shanyuan Gao, Sen Ma, Moon Mark Hur, Jaime Cummins
-
Patent number: 12177167
Abstract: Examples described herein include apparatuses and methods for full duplex device-to-device cooperative communication. Example systems described herein may include self-interference noise calculators. The output of a self-interference noise calculator may be used to compensate for the interference experienced due to signals transmitted by another antenna of the same wireless device or system. In implementing such a self-interference noise calculator, a selected wireless relaying device or wireless destination device may operate in a full-duplex mode, such that relayed messages may be transmitted as well as information from other sources or destinations during a common time period (e.g., symbol, slot, subframe, etc.).
Type: Grant
Filed: September 11, 2020
Date of Patent: December 24, 2024
Assignee: Micron Technology, Inc.
Inventors: Fa-Long Luo, Tamara Schmitz, Jeremy Chritz, Jaime Cummins
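The compensation step can be pictured as subtracting an estimate of the device's own transmit leakage from what its receiver hears. In the sketch below a single linear coupling coefficient stands in for the self-interference noise calculator; all names and the coefficient value are illustrative assumptions, not the patented estimator.

```python
def cancel_self_interference(received, own_tx, coupling=0.5):
    # Subtract an estimate of the device's own transmit leakage from the
    # received samples, leaving (approximately) only the remote signal.
    # A fixed linear `coupling` coefficient stands in for the calculator
    # described in the abstract; it is an illustrative placeholder.
    return [r - coupling * t for r, t in zip(received, own_tx)]

own_tx = [1.0, -1.0, 1.0]      # the device's own full-duplex transmission
remote = [0.2, 0.3, -0.1]      # the signal we actually want to receive
# Model the receive path: remote signal plus leaked self-interference.
received = [d + 0.5 * t for d, t in zip(remote, own_tx)]
recovered = cancel_self_interference(received, own_tx)
```

With an accurate coupling estimate the recovered samples match the remote signal, which is what lets the device transmit and receive in the same time period.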
-
Publication number: 20240412796
Abstract: A memory includes a receiver circuit configured to receive write data via a data terminal, and a neural network based preconditioning circuit configured to receive a write data signal according to the write data. A neural network of the preconditioning circuit is configured to precondition the write data signal based on a characteristic of a write data path to provide a modified write data signal. The memory further includes a memory array configured to store the write data based on the modified write data signal.
Type: Application
Filed: June 5, 2024
Publication date: December 12, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
-
Publication number: 20240412050
Abstract: The present disclosure relates to signal processing systems that employ various techniques to enhance data transfer quality. In some cases, a memory controller uses a neural network (e.g., a time delay neural network (TDNN)) to enable nonlinear processing to improve equalization. In some other cases, the memory controller uses an activation function to enable nonlinear processing to improve equalization. The systems may incorporate a finite impulse response (FIR) filter with the activation function applied to its output. A memory controller including a cache may store precomputed values of the activation function. Various types of activation functions or neural network configurations may be employed to introduce nonlinearity and adapt to different application requirements. The present disclosure is applicable in communication systems, control systems, and other digital signal processing systems requiring efficient processing of complex data transmission patterns.
Type: Application
Filed: June 5, 2024
Publication date: December 12, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
-
Publication number: 20240411471
Abstract: The present invention relates to a memory controller and a memory device that are configured to communicate with each other using multiple input multiple output (MIMO) technology. The memory controller includes a precoder that precodes data for transmission. The precoding is based on channel state information, a neural network, or both. The memory device receives the precoded data and decodes it to retrieve the original data. In some cases, the precoder uses the channel state information to optimize the precoding matrix for the given channel conditions. In some cases, a neural network is trained to predict the optimal precoding matrix for the current channel state. The precoding matrix is then used to encode the data, which is then transmitted to the memory device. The use of MIMO and precoding improves the reliability and efficiency of the communication between the memory controller and memory device.
Type: Application
Filed: June 5, 2024
Publication date: December 12, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
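The precode-then-decode round trip can be illustrated with a 2x2 example: the controller multiplies the data by a precoding matrix, and the receiver applies the inverse to recover the original symbols. The matrix values below are illustrative; per the abstract, a real precoder would derive them from channel state information or a neural network's prediction.

```python
def matvec(m, v):
    # Square matrix times vector: the precoding (or decoding) step.
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def inverse_2x2(m):
    # Closed-form 2x2 inverse used by the receiver to undo the precoding.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

# Illustrative precoding matrix; in practice chosen from channel state
# information or predicted by a trained neural network.
P = [[1.0, 0.2],
     [0.2, 1.0]]

data = [1.0, -1.0]
transmitted = matvec(P, data)                     # controller precodes
recovered = matvec(inverse_2x2(P), transmitted)   # memory device decodes
```

The off-diagonal terms of `P` model cross-coupling between the MIMO lanes; inverting the same matrix at the receiver removes that coupling.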
-
Publication number: 20240411710
Abstract: An example of a compute express link (CXL) system includes a memory, and a tensor access circuit having a memory mapper configured to configure a memory map based on a compute express link (CXL) command associated with an access operation of the memory. The memory map includes a specific sequence of CXL instructions to access the memory via a CXL bus.
Type: Application
Filed: June 5, 2024
Publication date: December 12, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
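The "specific sequence of instructions" idea can be sketched as a mapper that expands one access request into an ordered list of per-line commands. The `("MEM_RD", addr)` tuples and the opcode name are invented for illustration; actual CXL.mem message formats and opcodes differ.

```python
def build_memory_map(base_addr, num_lines, line_size=64):
    """Expand a single tensor-access request into the ordered command
    sequence a memory mapper might emit over a CXL bus. Opcode name and
    tuple layout are hypothetical placeholders, not real CXL.mem messages."""
    return [("MEM_RD", base_addr + i * line_size) for i in range(num_lines)]

# One request for four consecutive 64-byte lines starting at 0x4000.
commands = build_memory_map(0x4000, 4)
```

The value of precomputing such a map is that the tensor access circuit can stream the whole command sequence without per-element address arithmetic on the critical path.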
-
Publication number: 20240385927
Abstract: Apparatuses and methods for error correction based on data characteristics are disclosed. Data characteristics can include importance of the data. Data is received at a memory controller from a host device, and a characteristic of the received data is determined. A level of error correction is selected from a plurality of error correction levels for the received data based on the determined characteristic. The received data and an error correction code are written to a memory. The error correction code is generated based on the selected level of error correction. In some implementations, the characteristic of the received data is determined using a neural network.
Type: Application
Filed: May 16, 2024
Publication date: November 21, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
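The selection step amounts to mapping a data characteristic to one of several protection levels. In this sketch, a scalar importance score (e.g., the output of the classifier or neural network the abstract mentions) picks the level; the thresholds and level names are illustrative assumptions, not values from the patent.

```python
def select_ecc_level(importance):
    """Map a data-importance score in [0, 1] to an error-correction
    level. Thresholds and level names are illustrative placeholders."""
    if importance >= 0.8:
        return "strong"   # most parity bits, highest storage overhead
    if importance >= 0.4:
        return "medium"
    return "light"        # minimal parity for low-value data

# The controller would then generate the ECC for the write accordingly.
level = select_ecc_level(0.9)
```

Tiering protection this way spends parity overhead only where the data warrants it, instead of applying the strongest code uniformly.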
-
Publication number: 20240385977
Abstract: Apparatuses and methods for determining a channel characteristic are disclosed. The channel characteristic can be a characteristic of a channel between a memory controller and a memory. The channel characteristic is determined at the memory controller relating to logic levels of data written to or read from the memory over the channel, and transceiver settings of a transceiver of the memory controller are modified according to the determined characteristic. The channel characteristic can be determined based on storing a pilot signal at the memory controller, causing the pilot signal to be written to the memory, and comparing a read pilot signal corresponding to the written pilot signal with the stored pilot signal.
Type: Application
Filed: May 16, 2024
Publication date: November 21, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
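The pilot-comparison loop can be sketched as: write a known pilot, read it back, measure the deviation, and adjust the transceiver if the deviation is too large. The error metric, threshold, and action names below are illustrative assumptions; a real controller would tune concrete settings such as equalizer taps or drive strength.

```python
def pilot_error(stored, read_back):
    # Mean absolute deviation between the stored pilot and the copy
    # read back over the channel: a simple channel-quality metric.
    return sum(abs(a - b) for a, b in zip(stored, read_back)) / len(stored)

def adjust_transceiver(error, threshold=0.1):
    # Illustrative policy: the threshold and the returned action names
    # are placeholders, not from the patent.
    return "retune" if error > threshold else "keep"

pilot = [1.0, -1.0, 1.0, -1.0]       # known pattern stored at the controller
read_back = [0.9, -0.8, 1.1, -0.9]   # same pattern after the write/read trip
decision = adjust_transceiver(pilot_error(pilot, read_back))
```

Because the controller knows exactly what it wrote, any difference in the read-back copy is attributable to the channel, which is what makes the pilot a usable probe.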
-
Publication number: 20240379181
Abstract: A memory includes a read/write amplifier configured to retrieve read data from a memory array, and a neural network based preconditioning circuit configured to receive a read data signal according to the read data. A neural network of the preconditioning circuit is configured to precondition the read data signal based on a characteristic of a read data transmission path to provide a modified read data signal. The memory further includes an output driver configured to transmit the modified read data signal.
Type: Application
Filed: April 30, 2024
Publication date: November 14, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
-
Patent number: 12143266
Abstract: Examples described herein include methods, devices, and systems which may implement different processing stages for wireless communication in processing units. Such data processing may include a source data processing stage, a baseband processing stage, a digital front-end processing stage, and a radio frequency (RF) processing stage. Data may be received from a sensor of a device and then processed in the stages to generate output data for transmission. Processing the data in the various stages may occur during an active time period of a discontinuous operating mode. During the active time period, a reconfigurable hardware platform may allocate all or a portion of the processing units to implement the processing stages. Examples of systems and methods described herein may facilitate the processing of data for 5G (e.g., New Radio (NR)) wireless communications in a power-efficient and time-efficient manner.
Type: Grant
Filed: May 14, 2021
Date of Patent: November 12, 2024
Inventors: Fa-Long Luo, Jaime Cummins, Tamara Schmitz, Jeremy Chritz
-
Publication number: 20240369632
Abstract: A memory controller and a physical interface layer may accommodate multiple memory types. In some examples, the memory controller and/or PHY may include a register that includes operating parameters for multiple operating modes. Different operating modes may be compatible with different memory types. In some examples, the memory controller and physical interface may be included in a system for testing multiple memory types. The system may provide multiple interfaces for communicating with the memory. The different communication types may be used for performing different tests and/or simulating different types of devices that may utilize the memory.
Type: Application
Filed: July 15, 2024
Publication date: November 7, 2024
Applicant: Micron Technology, Inc.
Inventors: Kenneth M. Curewitz, Jaime Cummins, John D. Porter, Bryce D. Cook, Jeffrey P. Wright
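The multi-mode register can be pictured as a lookup from mode to a parameter set that the controller loads when the mode is selected. The mode names and parameter values below are invented placeholders, not real JEDEC timings or actual register contents.

```python
# Hypothetical per-mode operating parameters; names and numbers are
# illustrative placeholders, not real memory-device timings.
MODE_PARAMS = {
    "mode_a": {"burst_length": 8,  "read_latency": 16},
    "mode_b": {"burst_length": 16, "read_latency": 20},
}

class ModeRegister:
    """Sketch of a controller/PHY register holding operating parameters
    for multiple modes, each compatible with a different memory type."""
    def __init__(self):
        self.mode = None
        self.params = {}

    def select(self, mode):
        # Switching modes swaps in the full parameter set at once.
        self.params = MODE_PARAMS[mode]
        self.mode = mode

reg = ModeRegister()
reg.select("mode_b")
```

In a test system, swapping the active mode this way lets one controller exercise several memory types, or emulate several host device types, without reconfiguring hardware.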