Patents by Inventor Fa-Long Luo
Fa-Long Luo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12267612
Abstract: Methods and apparatus for performing multi-step image processing using a reconfigurable fabric device (RFD) in place of multiple discrete ICs. In one embodiment, the methods and apparatus operate according to a flexible time-divided schedule, and the processing is configured to process image sensor data by at least: (i) receiving RAW image data, programming an RFD to operate as a first functional unit such as an image signal processor (ISP), using the programmed RFD to perform image signal processing on the RAW image data, and storing the ISP-result in temporary memory; and (ii) programming the RFD to operate as a second functional unit (e.g., a deep learning accelerator (DLA)), using the programmed RFD to read out the ISP-result from the temporary memory, perform deep learning processing on the ISP-result, and store the DLA-result back into the temporary memory. In one variant, an on-die controller and memory are used in support of the RFD operations, thereby enabling a single-die processing solution.
Type: Grant
Filed: August 10, 2020
Date of Patent: April 1, 2025
Assignee: Micron Technology, Inc.
Inventor: Fa-Long Luo
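The time-divided flow described above can be illustrated with a minimal Python sketch, in which ordinary functions stand in for the fabric's two programmed "personalities" and a dictionary stands in for the temporary memory. The simple normalization ISP, the single-layer DLA, and all names here are illustrative assumptions, not the patented implementation.

```python
import numpy as np

# Stand-ins for the two functional units the reconfigurable fabric is
# programmed as at different points in the time-divided schedule.
def isp_stage(raw):
    """Toy image-signal-processing step: normalize RAW sensor data to [0, 1]."""
    raw = raw.astype(np.float32)
    return (raw - raw.min()) / (raw.max() - raw.min() + 1e-9)

def dla_stage(image, weights):
    """Toy deep-learning-accelerator step: one fully connected layer + ReLU."""
    return np.maximum(image.reshape(-1) @ weights, 0.0)

temporary_memory = {}                                   # shared scratch buffer between steps

raw_frame = np.random.randint(0, 4096, size=(8, 8))     # 12-bit RAW sensor data
# Step (i): fabric programmed as an ISP, result parked in temporary memory.
temporary_memory["isp_result"] = isp_stage(raw_frame)
# Step (ii): fabric reprogrammed as a DLA, which reads the ISP-result back out.
weights = np.random.randn(raw_frame.size, 4).astype(np.float32)
temporary_memory["dla_result"] = dla_stage(temporary_memory["isp_result"], weights)

print(temporary_memory["dla_result"])
```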
-
Patent number: 12249377
Abstract: Systems, apparatuses, and methods related to organizing data to correspond to a matrix at a memory device are described. Data can be organized by circuitry coupled to an array of memory cells prior to the processing resources executing instructions on the data. The organization of data may thus occur on a memory device, rather than at an external processor. A controller coupled to the array of memory cells may direct the circuitry to organize the data in a matrix configuration to prepare the data for processing by the processing resources. The circuitry may be or include column decode circuitry that organizes the data based on a command from the host associated with the processing resource. For example, data read in a prefetch operation may be selected to correspond to rows or columns of a matrix configuration.
Type: Grant
Filed: July 24, 2023
Date of Patent: March 11, 2025
Inventors: Glen E. Hush, Aaron P. Boehm, Fa-Long Luo
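A rough Python sketch of the idea in the abstract above: words returned by a flat prefetch are selected so they land in row or column order of the target matrix before any processor touches them. The function name, the prefetch-as-list model, and the row-major assumption are illustrative only.

```python
import numpy as np

def organize_prefetch(prefetch, rows, cols, by="row"):
    """Select words from a flat prefetch so they arrive in matrix order.

    by="row"    -> words grouped as consecutive rows of the matrix
    by="column" -> words grouped as consecutive columns of the matrix
    """
    m = np.asarray(prefetch[: rows * cols]).reshape(rows, cols)
    return m if by == "row" else m.T.copy()

prefetch = list(range(12))                      # 12 words returned by one prefetch operation
print(organize_prefetch(prefetch, 3, 4, by="row"))
print(organize_prefetch(prefetch, 3, 4, by="column"))
```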
-
Patent number: 12237862
Abstract: Examples described herein include systems and methods which include wireless devices and systems with examples of mixing input data and delayed versions of at least a portion of the respective processing results with coefficient data specific to a processing mode selection. For example, a computing system with processing units may mix the input data and delayed versions of respective outputs of various layers of multiplication/accumulation processing units (MAC units) for a transmission in a radio frequency (RF) wireless domain with the coefficient data to generate output data that is representative of the transmission being processed according to a wireless processing mode selection. In another example, such mixing of input data with delayed versions of processing results may be used to receive and process noisy wireless input data. Examples of systems and methods described herein may facilitate the processing of data for 5G wireless communications in a power-efficient and time-efficient manner.
Type: Grant
Filed: November 7, 2022
Date of Patent: February 25, 2025
Assignee: Micron Technology, Inc.
Inventor: Fa-Long Luo
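The tapped-delay mixing that the abstract describes can be sketched as a cascade of MAC layers, each combining the current sample with delayed copies of its own earlier outputs using mode-specific coefficients. The two-layer cascade, the coefficient values, and the mode names are assumptions made purely for illustration.

```python
import numpy as np
from collections import deque

def mac_layer(samples, coefficients, depth):
    """Mix each input sample with delayed versions of the layer's own output."""
    delays = deque([0.0] * depth, maxlen=depth)
    out = []
    for x in samples:
        y = coefficients[0] * x + sum(c * d for c, d in zip(coefficients[1:], delays))
        delays.appendleft(y)                    # newest result becomes the first delay tap
        out.append(y)
    return np.array(out)

# Coefficient sets selected by a (hypothetical) wireless processing-mode choice.
mode_coefficients = {"modeA": [0.9, 0.05, 0.05], "modeB": [0.6, 0.3, 0.1]}

rx = np.random.randn(16)                        # noisy input samples
layer1 = mac_layer(rx, mode_coefficients["modeA"], depth=2)
layer2 = mac_layer(layer1, mode_coefficients["modeA"], depth=2)   # cascaded MAC layers
print(layer2[:4])
```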
-
Patent number: 12237918
Abstract: Examples described herein include systems and methods which include wireless devices and systems with examples of mixing input data with coefficient data specific to a processing mode selection. For example, a computing system with processing units may mix the input data for a transmission in a radio frequency (RF) wireless domain with the coefficient data to generate output data that is representative of the transmission being processed according to a specific processing mode selection. The processing mode selection may include a single processing mode, a multi-processing mode, or a full processing mode. The processing mode selection may be associated with an aspect of a wireless protocol. Examples of systems and methods described herein may facilitate the processing of data for 5G wireless communications in a power-efficient and time-efficient manner.
Type: Grant
Filed: May 16, 2023
Date of Patent: February 25, 2025
Inventors: Fa-Long Luo, Jaime Cummins, Jeremy Chritz, Tamara Schmitz
-
Patent number: 12237846
Abstract: Examples described herein utilize multi-layer neural networks, such as multi-layer recurrent neural networks to estimate an error-reduced version of encoded data based on a retrieved version of encoded data (e.g., data encoded using one or more encoding techniques) from a memory. The neural networks and/or recurrent neural networks may have nonlinear mapping and distributed processing capabilities which may be advantageous in many systems employing a neural network or recurrent neural network to estimate an error-reduced version of encoded data for an error correction coding (ECC) decoder, e.g., to facilitate decoding of the error-reduced version of encoded data at the decoder. In this manner, neural networks or recurrent neural networks described herein may be used to improve or facilitate aspects of decoding at ECC decoders, e.g., by reducing errors present in encoded data due to storage or transmission.
Type: Grant
Filed: January 23, 2023
Date of Patent: February 25, 2025
Assignee: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
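A minimal sketch of the "denoise first, then hand to the ECC decoder" flow described above. A tiny recurrent cell with untrained, randomly initialized weights stands in for the trained network, so this shows only the data flow, not the error-reduction quality; the weight shapes, sigmoid output, and hard-decision step are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_denoise(noisy_bits, Wx, Wh, Wo):
    """Run a toy recurrent cell over the noisy codeword and re-estimate each bit."""
    h = np.zeros(Wh.shape[0])
    estimates = []
    for x in noisy_bits:
        h = np.tanh(Wx * x + Wh @ h)                        # recurrent state update
        estimates.append(1 / (1 + np.exp(-(Wo @ h))))        # per-bit probability
    return np.array(estimates)

codeword = rng.integers(0, 2, 16).astype(float)              # stored encoded data
retrieved = codeword + rng.normal(0, 0.4, 16)                # read back with noise
Wx = rng.normal(size=4)
Wh = rng.normal(size=(4, 4)) * 0.1
Wo = rng.normal(size=4)

soft_bits = rnn_denoise(retrieved, Wx, Wh, Wo)
error_reduced = (soft_bits > 0.5).astype(int)                # handed to the ECC decoder next
print(error_reduced)
```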
-
Patent number: 12229044
Abstract: Methods, apparatuses, and systems for tensor memory access are described. Multiple data located in different physical addresses of memory may be concurrently read or written by, for example, employing various processing patterns of tensor or matrix related computations. A memory controller, which may comprise a data address generator, may be configured to generate a sequence of memory addresses for a memory access operation based on a starting address and a dimension of a tensor or matrix. At least one dimension of a tensor or matrix may correspond to a row, a column, a diagonal, a determinant, or an Nth dimension of the tensor or matrix. The memory controller may also comprise a buffer configured to read and write the data generated from or according to a sequence of memory addresses.
Type: Grant
Filed: August 16, 2022
Date of Patent: February 18, 2025
Assignee: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins, Tamara Schmitz, Jeremy Chritz
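A short sketch of the address-generation idea: given a starting address and the matrix dimensions, emit the address sequence for a row, a column, or the main diagonal. Row-major storage, the word-size parameter, and the function name are assumptions for the sake of the example.

```python
def tensor_addresses(start, rows, cols, pattern, index=0, word_size=1):
    """Generate a memory-address sequence for one dimension of a row-major matrix.

    pattern: "row" (row `index`), "column" (column `index`), or "diagonal".
    """
    if pattern == "row":
        return [start + (index * cols + j) * word_size for j in range(cols)]
    if pattern == "column":
        return [start + (i * cols + index) * word_size for i in range(rows)]
    if pattern == "diagonal":
        return [start + (i * cols + i) * word_size for i in range(min(rows, cols))]
    raise ValueError(f"unknown access pattern: {pattern}")

base = 0x1000
print([hex(a) for a in tensor_addresses(base, 4, 4, "row", index=1, word_size=4)])
print([hex(a) for a in tensor_addresses(base, 4, 4, "diagonal", word_size=4)])
```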
-
Publication number: 20250047320
Abstract: A system includes a first wireless communication device comprising a first baseband processor neural network configured to process at least part of data for transmission to a second wireless communication device according to a collaborative processing configuration, while collaborative processing is enabled, to generate a first radio frequency (RF) signal. The first wireless communication device is configured to transmit the first RF signal. The system further includes a third wireless communication device comprising a second baseband processor neural network configured to, while the collaborative processing is enabled, process at least part of the data for transmission to the second wireless communication device according to the collaborative processing configuration to generate a second RF signal. The third wireless communication device is configured to transmit the second RF signal in collaboration with transmission of the first RF signal by the first wireless communication device.
Type: Application
Filed: February 21, 2024
Publication date: February 6, 2025
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
-
Publication number: 20250047309
Abstract: A wireless device includes a sensor configured to receive input data, an antenna configured to transmit a radio frequency (RF) signal that is based at least in part on the input data, and one or more processing units coupled with the sensor and operable, during an active time period, to process the input data using a single neural network for a plurality of processing stages. The plurality of processing stages includes a source data processing stage and communication processing stages.
Type: Application
Filed: February 21, 2024
Publication date: February 6, 2025
Applicant: Micron Technology, Inc.
Inventors: Fa-Long LUO, Jaime CUMMINS
-
Publication number: 20250036718
Abstract: Methods and apparatus for performing matrix transforms within a memory fabric. Various embodiments of the present disclosure are directed to converting a memory array into a matrix fabric for matrix transformations and performing matrix operations therein. Exemplary embodiments described herein perform matrix transformations within a memory device that includes a matrix fabric and matrix multiplication unit (MMU). In one exemplary embodiment, the matrix fabric uses a “crossbar” construction of resistive elements. Each resistive element stores a level of impedance that represents the corresponding matrix coefficient value. The crossbar connectivity can be driven with an electrical signal representing the input vector as an analog voltage. The resulting signals can be converted from analog voltages to digital values by an MMU to yield a vector-matrix product. In some cases, the MMU may additionally perform various other logical operations within the digital domain.
Type: Application
Filed: October 11, 2024
Publication date: January 30, 2025
Inventor: Fa-Long Luo
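A behavioral sketch of the crossbar vector-matrix multiply described above: matrix coefficients are modeled directly as conductances (the reciprocal of the stored impedances), input voltages drive the lines, and the summed currents are quantized as an MMU's ADC would. The conductance values, drive voltages, and ADC step size are assumptions.

```python
import numpy as np

def crossbar_vmm(voltages, conductances, adc_step=0.05):
    """Behavioral model of a resistive crossbar vector-matrix multiply.

    Each output line sums currents I = V * G (Ohm's law per element, Kirchhoff
    summation per line); the MMU's ADC then quantizes the analog sums.
    """
    currents = voltages @ conductances                  # analog accumulation
    return np.round(currents / adc_step).astype(int)    # digitized by the MMU

matrix = np.array([[0.2, 0.5], [0.7, 0.1], [0.4, 0.9]])   # coefficients as conductances (siemens)
input_vector = np.array([1.0, 0.5, 0.25])                  # input vector as drive voltages (volts)
print(crossbar_vmm(input_vector, matrix))
```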
-
Patent number: 12199650
Abstract: Examples described herein include methods, devices, and systems which may compensate input data for nonlinear power amplifier noise to generate compensated input data. In compensating the noise, during an uplink transmission time interval (TTI), a switch path is activated to provide amplified input data to a receiver stage including a recurrent neural network (RNN). The RNN may calculate an error representative of the noise based partly on the input signal to be transmitted and a feedback signal to generate filter coefficient data associated with the power amplifier noise. The feedback signal is provided, after processing through the receiver, to the RNN. During an uplink TTI, the amplified input data may also be transmitted as the RF wireless transmission via an RF antenna. During a downlink TTI, the switch path may be deactivated and the receiver stage may receive an additional RF wireless transmission to be processed in the receiver stage.
Type: Grant
Filed: March 6, 2023
Date of Patent: January 14, 2025
Inventor: Fa-Long Luo
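The closed-loop idea in the abstract above, sketched with a classical LMS adaptive filter standing in for the recurrent network: the intended signal is compared with the amplified signal observed over the feedback path, and the error drives the update of compensation filter coefficients. The cubic amplifier model, the filter length, and the step size are assumptions, and an RNN would replace the linear update shown here.

```python
import numpy as np

rng = np.random.default_rng(1)

def power_amp(x):
    """Toy nonlinear power-amplifier model (mild third-order distortion)."""
    return x + 0.1 * x ** 3

tx = rng.standard_normal(2000)              # signal intended for transmission
coeffs = np.zeros(3)                        # compensation filter coefficients
mu = 0.01                                   # adaptation step size

for n in range(2, len(tx)):
    frame = tx[n - 2:n + 1][::-1]           # current sample plus two delayed taps
    predistorted = tx[n] + frame @ coeffs   # apply the current compensation
    feedback = power_amp(predistorted)      # observed via the receive/feedback path
    error = tx[n] - feedback                # how far the output is from the intent
    coeffs += mu * error * frame            # LMS-style coefficient update

print("learned compensation coefficients:", coeffs)
```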
-
Patent number: 12177167
Abstract: Examples described herein include apparatuses and methods for full duplex device-to-device cooperative communication. Example systems described herein may include self-interference noise calculators. The output of a self-interference noise calculator may be used to compensate for the interference experienced due to signals transmitted by another antenna of the same wireless device or system. In implementing such a self-interference noise calculator, a selected wireless relaying device or wireless destination device may operate in a full-duplex mode, such that relayed messages may be transmitted as well as information from other sources or destinations during a common time period (e.g., symbol, slot, subframe, etc.).
Type: Grant
Filed: September 11, 2020
Date of Patent: December 24, 2024
Assignee: Micron Technology, Inc.
Inventors: Fa-Long Luo, Tamara Schmitz, Jeremy Chritz, Jaime Cummins
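A minimal sketch of self-interference compensation for full-duplex operation: the coupling from the device's own transmitter is estimated from the known transmitted samples by least squares, the interference is reconstructed, and it is subtracted from the received signal. The flat (single-coefficient) coupling model and the signal lengths are assumptions; the patented calculator is not limited to this form.

```python
import numpy as np

rng = np.random.default_rng(2)

own_tx = rng.standard_normal(1000)                   # what this device is transmitting
remote = rng.standard_normal(1000)                   # signal of interest from the far device
h_self = 0.8                                         # unknown self-interference coupling
received = remote + h_self * own_tx + 0.05 * rng.standard_normal(1000)

# Self-interference noise calculation: estimate the coupling from the known
# transmit samples (least squares), reconstruct the interference, subtract it.
h_estimate = np.dot(own_tx, received) / np.dot(own_tx, own_tx)
cleaned = received - h_estimate * own_tx

print(f"estimated coupling {h_estimate:.3f}, "
      f"residual interference power {np.mean((cleaned - remote) ** 2):.4f}")
```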
-
Publication number: 20240412796
Abstract: A memory includes a receiver circuit configured to receive write data via a data terminal, and a neural network based preconditioning circuit configured to receive a write data signal according to the write data. A neural network of the preconditioning circuit is configured to precondition the write data signal based on a characteristic of a write data path to provide a modified write data signal. The memory further includes a memory array configured to store the write data based on the modified write data signal.
Type: Application
Filed: June 5, 2024
Publication date: December 12, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
-
Publication number: 20240411471
Abstract: The present invention relates to a memory controller and a memory device that are configured to communicate with each other using multiple input multiple output (MIMO) technology. The memory controller includes a precoder that precodes data for transmission. The precoding is based on channel state information, a neural network, or both. The memory device receives the precoded data and decodes them to retrieve the original data. In some cases, the precoder uses the channel state information to optimize the precoding matrix for the given channel conditions. In some cases, a neural network is trained to predict the optimal precoding matrix for the current channel state. The precoding matrix is then used to encode the data, which is then transmitted to the memory device. The use of MIMO and precoding improves the reliability and efficiency of the communication between the memory controller and memory device.
Type: Application
Filed: June 5, 2024
Publication date: December 12, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
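One standard way to derive a precoding matrix from channel state information, shown as a sketch of the CSI-based case above, is the singular value decomposition of the channel: precode with the right singular vectors, combine with the left ones, and equalize per stream. The 2x2 channel, the SVD choice, and the symbol values are assumptions, and the neural-network-predicted precoder mentioned in the abstract is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

H = rng.standard_normal((2, 2))             # channel state information (2x2 MIMO link)
U, s, Vh = np.linalg.svd(H)                 # SVD-based precoder/combiner design

symbols = np.array([1.0, -1.0])             # data the controller wants to send
precoded = Vh.conj().T @ symbols            # precode with the right singular vectors
received = H @ precoded                     # propagate over the channel
combined = U.conj().T @ received            # combiner at the receiving side
recovered = combined / s                    # per-stream equalization

print(np.round(recovered, 6))               # ~ [1, -1]: original data retrieved
```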
-
Publication number: 20240412050
Abstract: The present disclosure relates to signal processing systems that employ various techniques to enhance data transfer quality. In some cases, a memory controller uses a neural network (e.g., a time delay neural network (TDNN)) to enable nonlinear processing to improve equalization. In some other cases, the memory controller uses an activation function to enable nonlinear processing to improve equalization. The systems may incorporate a finite impulse response (FIR) filter with the activation function applied to its output. A memory controller including a cache may store precomputed values of the activation function. Various types of activation functions or neural network configurations may be employed to introduce nonlinearity and adapt to different application requirements. The present disclosure is applicable in communication systems, control systems, and other digital signal processing systems requiring efficient processing of complex data transmission patterns.
Type: Application
Filed: June 5, 2024
Publication date: December 12, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
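A sketch of the "FIR filter with the activation function applied to its output, with precomputed values cached" arrangement described above. The tap values, the choice of tanh, and the table resolution are assumptions made for illustration.

```python
import numpy as np

taps = np.array([0.05, 0.2, 0.5, 0.2, 0.05])           # FIR equalizer coefficients

# Precompute the activation function once, as a controller cache would store it.
lut_inputs = np.linspace(-4.0, 4.0, 1024)
lut_values = np.tanh(lut_inputs)
lut_step = lut_inputs[1] - lut_inputs[0]

def activation_lut(x):
    """Nearest-entry lookup into the precomputed tanh table."""
    idx = np.clip(np.round((x - lut_inputs[0]) / lut_step), 0, len(lut_values) - 1)
    return lut_values[idx.astype(int)]

signal = np.random.default_rng(4).standard_normal(64)   # samples on the data channel
equalized = np.convolve(signal, taps, mode="same")       # linear FIR stage
nonlinear_out = activation_lut(equalized)                # nonlinear stage via cached values

print(np.max(np.abs(nonlinear_out - np.tanh(equalized))))   # LUT approximation error
```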
-
Publication number: 20240411710
Abstract: An example of a compute express link (CXL) system includes a memory, and a tensor access circuit having a memory mapper configured to configure a memory map based on a compute express link (CXL) command associated with an access operation of the memory. The memory map includes a specific sequence of CXL instructions to access the memory via a CXL bus.
Type: Application
Filed: June 5, 2024
Publication date: December 12, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
-
Publication number: 20240385927
Abstract: Apparatuses and methods for error correction based on data characteristics are disclosed. Data characteristics can include importance of the data. Data is received at a memory controller from a host device, and a characteristic of the received data is determined. A level of error correction is selected from a plurality of error correction levels for the received data based on the determined characteristic. The received data and an error correction code are written to a memory. The error correction code is generated based on the selected level of error correction. In some implementations, the characteristic of the received data is determined using a neural network.
Type: Application
Filed: May 16, 2024
Publication date: November 21, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
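A toy sketch of the selection logic above: a simple importance heuristic stands in for the classifier (which the abstract notes may be a neural network), and the result picks one of several error-correction strengths before data and code are written together. The thresholds, level names, and repetition-style parity stand-in are all hypothetical.

```python
def classify_importance(data: bytes) -> str:
    """Toy stand-in for the data-characteristic classifier."""
    return "high" if data.startswith(b"META") else "normal"

def ecc_for(data: bytes, level: str) -> bytes:
    """Illustrative codes of different strength: more parity bytes at higher levels."""
    parity = bytes([sum(data) % 256])
    return parity * {"light": 1, "strong": 4}[level]

ECC_LEVELS = {"normal": "light", "high": "strong"}       # characteristic -> ECC level

def write_with_ecc(data: bytes) -> bytes:
    level = ECC_LEVELS[classify_importance(data)]
    return data + ecc_for(data, level)                    # data and ECC written together

print(write_with_ecc(b"META:fs-table").hex())
print(write_with_ecc(b"bulk payload").hex())
```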
-
Publication number: 20240385977
Abstract: Apparatuses and methods for determining a channel characteristic are disclosed. The channel characteristic can be a characteristic of a channel between a memory controller and a memory. The channel characteristic is determined at the memory controller relating to logic levels of data written to or read from the memory over the channel, and transceiver settings of a transceiver of the memory controller are modified according to the determined characteristic. The channel characteristic can be determined based on storing a pilot signal at the memory controller, causing the pilot signal to be written to the memory, and comparing a read pilot signal corresponding to the written pilot signal with the stored pilot signal.
Type: Application
Filed: May 16, 2024
Publication date: November 21, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
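A sketch of the pilot-based probe described above: a known pilot pattern is passed through a modeled channel, the read-back copy is compared with the stored copy, and a transceiver setting is adjusted from the measured mismatch. The attenuation-plus-noise channel model, the estimation rule, and the drive-strength adjustment are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

stored_pilot = np.tile([1.0, -1.0, 1.0, 1.0, -1.0], 20)   # pilot kept at the controller

def channel(tx, attenuation=0.7, noise=0.1):
    """Toy model of the controller-to-memory data channel."""
    return attenuation * tx + noise * rng.standard_normal(tx.shape)

read_pilot = channel(stored_pilot)                   # pilot written to and read back from memory
estimated_gain = np.dot(read_pilot, stored_pilot) / np.dot(stored_pilot, stored_pilot)
noise_power = np.mean((read_pilot - estimated_gain * stored_pilot) ** 2)

# Adjust transceiver settings from the measured characteristic (illustrative rule).
tx_drive_strength = 1.0 / max(estimated_gain, 1e-3)
print(f"estimated channel gain {estimated_gain:.2f}, noise power {noise_power:.3f}, "
      f"new drive strength {tx_drive_strength:.2f}")
```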
-
Publication number: 20240379181
Abstract: A memory includes a read/write amplifier configured to retrieve read data from a memory array, and a neural network based preconditioning circuit configured to receive a read data signal according to the read data. A neural network of the preconditioning circuit is configured to precondition the read data signal based on a characteristic of a read data transmission path to provide a modified read data signal. The memory further includes an output driver configured to transmit the modified read data signal.
Type: Application
Filed: April 30, 2024
Publication date: November 14, 2024
Applicant: Micron Technology, Inc.
Inventors: Fa-Long Luo, Jaime Cummins
-
Patent number: 12143266
Abstract: Examples described herein include methods, devices, and systems which may implement different processing stages for wireless communication in processing units. Such data processing may include a source data processing stage, a baseband processing stage, a digital front-end processing stage, and a radio frequency (RF) processing stage. Data may be received from a sensor of a device and then processed in the stages to generate output data for transmission. Processing the data in the various stages may occur during an active time period of a discontinuous operating mode. During the active time period, a reconfigurable hardware platform may allocate all or a portion of the processing units to implement the processing stages. Examples of systems and methods described herein may facilitate the processing of data for 5G (e.g., New Radio (NR)) wireless communications in a power-efficient and time-efficient manner.
Type: Grant
Filed: May 14, 2021
Date of Patent: November 12, 2024
Inventors: Fa-Long Luo, Jaime Cummins, Tamara Schmitz, Jeremy Chritz
-
Patent number: 12118056
Abstract: Methods and apparatus for performing matrix transforms within a memory fabric. Various embodiments of the present disclosure are directed to converting a memory array into a matrix fabric for matrix transformations and performing matrix operations therein. Exemplary embodiments described herein perform matrix transformations within a memory device that includes a matrix fabric and matrix multiplication unit (MMU). In one exemplary embodiment, the matrix fabric uses a “crossbar” construction of resistive elements. Each resistive element stores a level of impedance that represents the corresponding matrix coefficient value. The crossbar connectivity can be driven with an electrical signal representing the input vector as an analog voltage. The resulting signals can be converted from analog voltages to digital values by an MMU to yield a vector-matrix product. In some cases, the MMU may additionally perform various other logical operations within the digital domain.
Type: Grant
Filed: May 3, 2019
Date of Patent: October 15, 2024
Assignee: Micron Technology, Inc.
Inventor: Fa-Long Luo