Patents by Inventor Mehran Nekuii

Mehran Nekuii has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11729640
    Abstract: Methods and apparatus for configuring a front end to process multiple sectors with multiple radio frequency frames. In an exemplary embodiment, a method includes decoding instructions included in a job description list, and configuring one or more processing functions of a transceiver to process a radio signal associated with a selected sector based on the decoded instructions. The configuration of the processing functions is synchronized according to time control instructions included in the job description list.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: August 15, 2023
    Assignee: Marvell Asia Pte. Ltd.
    Inventors: Mehran Nekuii, Frank H. Worrell, Hong Jik Kim
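The job-description-list mechanism in the entry above lends itself to a simple illustration. The Python sketch below shows how a dispatcher might decode per-sector configuration jobs and apply them in the order dictated by their time-control fields; all names (JobDescriptor, FrontEndDispatcher, the function labels) are illustrative assumptions and not taken from the patent.

```python
# Hypothetical sketch of a "job description list" dispatcher; names and fields
# are illustrative assumptions, not the patented front-end implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class JobDescriptor:
    sector_id: int          # which sector the job applies to
    function: str           # processing function to configure, e.g. "fft", "filter"
    params: Dict[str, int]  # configuration parameters for that function
    start_time: int         # time-control instruction: when to apply the config


class FrontEndDispatcher:
    """Decodes job descriptors and applies them in time order."""

    def __init__(self, configure: Callable[[int, str, Dict[str, int]], None]):
        self._configure = configure  # callback that programs the transceiver

    def run(self, job_list: List[JobDescriptor]) -> None:
        # Synchronize configuration by sorting on the time-control field,
        # then apply each processing-function configuration per sector.
        for job in sorted(job_list, key=lambda j: j.start_time):
            self._configure(job.sector_id, job.function, job.params)


if __name__ == "__main__":
    def stub_configure(sector, function, params):
        print(f"sector {sector}: configure {function} with {params}")

    jobs = [
        JobDescriptor(sector_id=1, function="fft", params={"size": 2048}, start_time=20),
        JobDescriptor(sector_id=0, function="filter", params={"taps": 64}, start_time=10),
    ]
    FrontEndDispatcher(stub_configure).run(jobs)
```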
  • Patent number: 11526731
    Abstract: A new approach is proposed to support efficient convolution for deep learning by vectorizing multi-dimensional input data for multi-dimensional fast Fourier transform (FFT) and direct memory access (DMA) for data transfer. Specifically, a deep learning processor (DLP) includes a plurality of tensor engines each configured to perform convolution operations by applying one or more kernels on multi-dimensional input data for pattern recognition and classification based on a neural network, wherein each tensor engine includes, among other components, one or more vector processing engines each configured to vectorize the multi-dimensional input data at each layer of the neural network to generate a plurality of vectors and to perform multi-dimensional FFT on the generated vectors and/or the kernels to create output for the convolution operations. Each tensor engine further includes a data engine configured to prefetch the multi-dimensional data and/or the kernels to both on-chip and external memories via DMA.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: December 13, 2022
    Assignee: Marvell Asia Pte. Ltd.
    Inventor: Mehran Nekuii
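The deep-learning convolution entries in this listing build on the convolution theorem: convolving in the spatial domain is equivalent to an element-wise product in the frequency domain. The numpy sketch below illustrates that underlying math for a 2-D case; it is not the patented tensor-engine or DMA implementation, only a hedged illustration of FFT-based convolution.

```python
# Minimal numpy sketch of FFT-based convolution (the general technique the
# abstract above builds on), not the patented DLP tensor-engine design.
import numpy as np


def fft_conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Circular 2-D convolution of `image` with `kernel` via the FFT."""
    # Zero-pad the kernel to the image size so the element-wise product is valid.
    padded = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    # Convolution theorem: conv(x, k) = IFFT( FFT(x) * FFT(k) ).
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))


if __name__ == "__main__":
    img = np.random.rand(8, 8)
    kern = np.array([[1.0, 0.0], [0.0, -1.0]])
    out = fft_conv2d(img, kern)
    print(out.shape)  # (8, 8) -- circular convolution keeps the input size
```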
  • Publication number: 20220060912
    Abstract: Methods and apparatus for configuring a front end to process multiple sectors with multiple radio frequency frames. In an exemplary embodiment, a method includes decoding instructions included in a job description list, and configuring one or more processing functions of a transceiver to process a radio signal associated with a selected sector based on the decoded instructions. The configuration of the processing functions is synchronized according to time control instructions included in the job description list.
    Type: Application
    Filed: November 1, 2021
    Publication date: February 24, 2022
    Applicant: Marvell Asia Pte. Ltd.
    Inventors: Mehran Nekuii, Frank H. Worrell, Hong Jik Kim
  • Patent number: 11172379
    Abstract: Methods and apparatus for configuring a front end to process multiple sectors with multiple radio frequency frames. In an exemplary embodiment, a method includes decoding instructions included in a job description list, and configuring one or more processing functions of a transceiver to process a radio signal associated with a selected sector based on the decoded instructions. The configuration of the processing functions is synchronized according to time control instructions included in the job description list.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: November 9, 2021
    Assignee: Marvell Asia Pte. Ltd.
    Inventors: Mehran Nekuii, Frank Henry Worrell, Hong Jik Kim
  • Publication number: 20200401872
    Abstract: A new approach is proposed to support efficient convolution for deep learning by vectorizing multi-dimensional input data for multi-dimensional fast Fourier transform (FFT) and direct memory access (DMA) for data transfer. Specifically, a deep learning processor (DLP) includes a plurality of tensor engines each configured to perform convolution operations by applying one or more kernels on multi-dimensional input data for pattern recognition and classification based on a neural network, wherein each tensor engine includes, among other components, one or more vector processing engines each configured to vectorize the multi-dimensional input data at each layer of the neural network to generate a plurality of vectors and to perform multi-dimensional FFT on the generated vectors and/or the kernels to create output for the convolution operations. Each tensor engine further includes a data engine configured to prefetch the multi-dimensional data and/or the kernels to both on-chip and external memories via DMA.
    Type: Application
    Filed: September 1, 2020
    Publication date: December 24, 2020
    Inventor: Mehran Nekuii
  • Patent number: 10796220
    Abstract: A new approach is proposed to support efficient convolution for deep learning by vectorizing multi-dimensional input data for multi-dimensional fast Fourier transform (FFT) and direct memory access (DMA) for data transfer. Specifically, a deep learning processor (DLP) includes a plurality of tensor engines each configured to perform convolution operations by applying one or more kernels on multi-dimensional input data for pattern recognition and classification based on a neural network, wherein each tensor engine includes, among other components, one or more vector processing engines each configured to vectorize the multi-dimensional input data at each layer of the neural network to generate a plurality of vectors and to perform multi-dimensional FFT on the generated vectors and/or the kernels to create output for the convolution operations. Each tensor engine further includes a data engine configured to prefetch the multi-dimensional data and/or the kernels to both on-chip and external memories via DMA.
    Type: Grant
    Filed: May 11, 2017
    Date of Patent: October 6, 2020
    Assignee: Marvell Asia Pte. Ltd.
    Inventor: Mehran Nekuii
  • Patent number: 10140250
    Abstract: Methods and apparatus for providing an FFT engine using a reconfigurable single delay feedback architecture. In one aspect, an apparatus includes a radix-2 (R2) single delay feedback (SDF) stage that generates a radix-2 output and a radix-3 (R3) SDF stage that generates a radix-3 output. The apparatus also includes one or more radix-2 squared (R2^2) SDF stages that generate a radix-4 output. The apparatus further includes a controller that configures a sequence of radix stages selected from the R2, R3, and R2^2 stages based on an FFT point size to form an FFT engine. The FFT engine receives input samples at a first stage of the sequence and generates an FFT output result that is output from a last stage of the sequence. The sequence includes no more than one R3 stage.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: November 27, 2018
    Assignee: Cavium, Inc.
    Inventors: Mehran Nekuii, Hong Jik Kim
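The reconfigurable FFT engine above constrains its stage sequence to at most one radix-3 stage. The sketch below shows one hypothetical way a controller could derive such a sequence from an FFT point size; the stage names and selection policy are assumptions for illustration only, not the patented controller logic.

```python
# Hypothetical planner that maps an FFT point size to a sequence of radix
# stages (R2, R3, R2^2), honoring the "at most one R3 stage" constraint
# stated in the abstract above. The policy here is an illustrative assumption.
def plan_stages(n: int) -> list:
    stages = []
    if n % 3 == 0:          # at most one radix-3 stage
        stages.append("R3")
        n //= 3
    # The remaining size must be a power of two for R2 / R2^2 stages.
    if n & (n - 1):
        raise ValueError("unsupported FFT point size")
    log2n = n.bit_length() - 1
    stages += ["R2^2"] * (log2n // 2)   # each R2^2 stage covers a factor of 4
    stages += ["R2"] * (log2n % 2)      # one leftover factor of 2, if any
    return stages


if __name__ == "__main__":
    print(plan_stages(1536))  # ['R3', 'R2^2', 'R2^2', 'R2^2', 'R2^2', 'R2']
```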
  • Patent number: 9882678
    Abstract: A process capable of employing a compression and decompression mechanism to receive and decode soft information is disclosed. The process, in one aspect, is able to receive a data stream formatted with soft information from a communication network such as a wireless network. After identifying a set of bits representing a first logic value from a portion of the data stream in accordance with a predefined soft encoding scheme, the set of bits is compressed into a compressed set of bits. The compressed set of bits, which represents the first logic value, is subsequently stored in a local memory.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: January 30, 2018
    Assignee: Cavium, Inc.
    Inventor: Mehran Nekuii
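As a loose illustration of the idea in the entry above, the sketch below run-length encodes a bit stream so that a set of bits encoding the same logic value collapses into a shorter stored form; the encoding choice is an assumption for illustration, not the patented soft encoding or compression scheme.

```python
# Illustrative run-length compression of a bit stream; this stands in for the
# patent's (unspecified) compressed representation and is an assumption.
from typing import List, Tuple


def compress_bits(bits: List[int]) -> List[Tuple[int, int]]:
    """Collapse each run of identical bits into a (value, run-length) pair."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))  # compressed set of bits for one logic value
        i = j
    return runs


if __name__ == "__main__":
    stream = [1, 1, 1, 1, 0, 1, 1, 0, 0]
    print(compress_bits(stream))  # [(1, 4), (0, 1), (1, 2), (0, 2)]
```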
  • Publication number: 20170344880
    Abstract: A new approach is proposed to support efficient convolution for deep learning by vectorizing multi-dimensional input data for multi-dimensional fast Fourier transform (FFT) and direct memory access (DMA) for data transfer. Specifically, a deep learning processor (DLP) includes a plurality of tensor engines each configured to perform convolution operations by applying one or more kernels on multi-dimensional input data for pattern recognition and classification based on a neural network, wherein each tensor engine includes, among other components, one or more vector processing engines each configured to vectorize the multi-dimensional input data at each layer of the neural network to generate a plurality of vectors and to perform multi-dimensional FFT on the generated vectors and/or the kernels to create output for the convolution operations. Each tensor engine further includes a data engine configured to prefetch the multi-dimensional data and/or the kernels to both on-chip and external memories via DMA.
    Type: Application
    Filed: May 11, 2017
    Publication date: November 30, 2017
    Inventor: Mehran Nekuii
  • Patent number: 9768912
    Abstract: A process capable of employing a compression and decompression mechanism to receive and decode soft information is disclosed. The process, in one aspect, is able to receive a data stream formatted with soft information from a communication network such as a wireless network. After identifying a set of bits representing a first logic value from a portion of the data stream in accordance with a predefined soft encoding scheme, the set of bits is compressed into a compressed set of bits. The compressed set of bits, which represents the first logic value, is subsequently stored in a local memory.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: September 19, 2017
    Assignee: Cavium, Inc.
    Inventor: Mehran Nekuii
  • Publication number: 20170220523
    Abstract: Methods and apparatus for providing an FFT engine using a reconfigurable single delay feedback architecture. In one aspect, an apparatus includes a radix-2 (R2) single delay feedback (SDF) stage that generates a radix-2 output and a radix-3 (R3) SDF stage that generates a radix-3 output. The apparatus also includes one or more radix-2 squared (R2^2) SDF stages that generate a radix-4 output. The apparatus further includes a controller that configures a sequence of radix stages selected from the R2, R3, and R2^2 stages based on an FFT point size to form an FFT engine. The FFT engine receives input samples at a first stage of the sequence and generates an FFT output result that is output from a last stage of the sequence. The sequence includes no more than one R3 stage.
    Type: Application
    Filed: December 14, 2016
    Publication date: August 3, 2017
    Applicant: Cavium, Inc.
    Inventors: Mehran Nekuii, Hong Jik Kim
  • Publication number: 20170195900
    Abstract: Methods and apparatus for configuring a front end to process multiple sectors with multiple radio frequency frames. In an exemplary embodiment, a method includes decoding instructions included in a job description list, and configuring one or more processing functions of a transceiver to process a radio signal associated with a selected sector based on the decoded instructions. The configuration of the processing functions is synchronized according to time control instructions included in the job description list.
    Type: Application
    Filed: January 4, 2017
    Publication date: July 6, 2017
    Applicant: Cavium, Inc.
    Inventors: Mehran Nekuii, Frank Henry Worrell, Hong Jik Kim
  • Patent number: 9503218
    Abstract: A process capable of employing a compression and decompression mechanism to receive and decode soft information is disclosed. Upon receiving a set of signals representing a logic value from a transmitter via a physical communication channel, the set of signals is demodulated in accordance with a soft decoding scheme and, subsequently, a Log Likelihood Ratio (“LLR”) value representing the logic value is generated. After generating a quantized LLR value in response to the LLR value via a non-linear LLR quantizer, the quantized LLR value representing the compressed logic value is stored in a local storage.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: November 22, 2016
    Assignee: Cavium, Inc.
    Inventor: Mehran Nekuii
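The non-linear LLR quantization described in the entry above can be illustrated with a mu-law-style compander: LLRs near zero are quantized finely and large-magnitude LLRs coarsely, so each value fits in a few stored bits. The compander curve, bit width, and clipping range below are assumptions for illustration, not the patented quantizer.

```python
# Illustrative non-linear LLR quantizer using a mu-law-style compander; the
# curve, 4-bit width, and clipping range are assumptions, not the patent's.
import numpy as np


def quantize_llr(llr: np.ndarray, bits: int = 4, llr_max: float = 16.0, mu: float = 8.0) -> np.ndarray:
    """Compress LLRs to `bits`-bit signed codes with a mu-law-like curve."""
    x = np.clip(llr / llr_max, -1.0, 1.0)                  # normalize to [-1, 1]
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    levels = 2 ** (bits - 1) - 1                           # e.g. 7 for 4-bit signed codes
    return np.round(compressed * levels).astype(np.int8)   # compressed values to store


def dequantize_llr(codes: np.ndarray, bits: int = 4, llr_max: float = 16.0, mu: float = 8.0) -> np.ndarray:
    """Invert the compander to recover approximate LLRs from stored codes."""
    levels = 2 ** (bits - 1) - 1
    y = codes.astype(float) / levels
    x = np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu
    return x * llr_max


if __name__ == "__main__":
    llrs = np.array([-12.3, -1.5, -0.2, 0.1, 2.0, 15.0])
    codes = quantize_llr(llrs)
    print(codes, dequantize_llr(codes))
```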
  • Publication number: 20160085615
    Abstract: A process capable of employing a compression and decompression mechanism to receive and decode soft information is disclosed. The process, in one aspect, is able to receive a data stream formatted with soft information from a communication network such as a wireless network. After identifying a set of bits representing a first logic value from a portion of the data stream in accordance with a predefined soft encoding scheme, the set of bits is compressed into a compressed set of bits. The compressed set of bits, which represents the first logic value, is subsequently stored in a local memory.
    Type: Application
    Filed: September 23, 2014
    Publication date: March 24, 2016
    Applicant: Cavium, Inc.
    Inventor: Mehran Nekuii
  • Publication number: 20160087758
    Abstract: A process capable of employing a compression and decompression mechanism to receive and decode soft information is disclosed. The process, in one aspect, is able to receive a data stream formatted with soft information from a communication network such as a wireless network. After identifying a set of bits representing a first logic value from a portion of the data stream in accordance with a predefined soft encoding scheme, the set of bits is compressed into a compressed set of bits. The compressed set of bits, which represents the first logic value, is subsequently stored in a local memory.
    Type: Application
    Filed: September 23, 2014
    Publication date: March 24, 2016
    Applicant: Cavium, Inc.
    Inventor: Mehran Nekuii
  • Publication number: 20160087757
    Abstract: A process capable of employing a compression and decompression mechanism to receive and decode soft information is disclosed. Upon receiving a set of signals representing a logic value from a transmitter via a physical communication channel, the set of signals is demodulated in accordance with a soft decoding scheme and, subsequently, a Log Likelihood Ratio (“LLR”) value representing the logic value is generated. After generating a quantized LLR value in response to the LLR value via a non-linear LLR quantizer, the quantized LLR value representing the compressed logic value is stored in a local storage.
    Type: Application
    Filed: September 23, 2014
    Publication date: March 24, 2016
    Applicant: Cavium, Inc.
    Inventor: Mehran Nekuii