Patents by Inventor An Pang Li

An Pang Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220197836
    Abstract: A non-volatile memory control technology. In response to a read command, a non-volatile memory interface controller temporarily stores data read from a non-volatile memory to a system memory and, accordingly, asserts a flag in the system memory. Through a write channel provided by the interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory. A master computing unit reads the system memory through a read channel provided by the interconnect bus, without being delayed by the status checking of the flag. The host bridge controller executes a data detection command or a preset vendor command to issue a write request for programming data in a virtual address, to trigger a handshake between the host bridge controller and the system memory through the write channel. During the handshake, flag checking is achieved.
    Type: Application
    Filed: March 9, 2022
    Publication date: June 23, 2022
    Inventor: An-Pang LI
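An illustrative aside on the flag mechanism described in this family of filings: the Python sketch below models data being staged in system memory, a flag asserted afterwards, a host-side path that waits on the flag before returning data, and a computing unit that reads without checking it. The class names, the polling loop, and the dictionary-backed memory are assumptions made for illustration, not the patented design.

```python
# Illustrative sketch (assumed names): the NVM interface controller stages the
# data first and then asserts a flag; the host bridge checks the flag before
# returning data, while the master computing unit never waits on the flag.

class SystemMemory:
    def __init__(self):
        self.data = {}      # buffer address -> data read from the NVM
        self.flags = {}     # buffer address -> "data ready" flag

class NvmInterfaceController:
    def __init__(self, sysmem):
        self.sysmem = sysmem

    def handle_read(self, buf_addr, data):
        self.sysmem.data[buf_addr] = data    # stage the data first ...
        self.sysmem.flags[buf_addr] = True   # ... then assert the flag

class HostBridgeController:
    def __init__(self, sysmem):
        self.sysmem = sysmem

    def read_for_host(self, buf_addr):
        # Stand-in for the handshake over the write channel: the flag is
        # checked before the data is handed back to the host.
        while not self.sysmem.flags.get(buf_addr, False):
            pass                             # wait until the flag is asserted
        return self.sysmem.data[buf_addr]    # safe: the data is known to be ready

class MasterComputingUnit:
    def __init__(self, sysmem):
        self.sysmem = sysmem

    def read(self, buf_addr):
        # Reads through its own channel, never delayed by flag checking.
        return self.sysmem.data.get(buf_addr)

sysmem = SystemMemory()
NvmInterfaceController(sysmem).handle_read(0x1000, b"sector-data")
print(HostBridgeController(sysmem).read_for_host(0x1000))  # b'sector-data'
print(MasterComputingUnit(sysmem).read(0x1000))            # b'sector-data'
```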
  • Publication number: 20220197835
    Abstract: A non-volatile memory control technology. In response to a read command, a non-volatile memory interface controller temporarily stores data read from a non-volatile memory to the system memory and, accordingly, asserts a flag in the system memory. Through a flag reading channel provided by an interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory. A master computing unit reads the system memory through a data reading channel provided by the interconnect bus, without being delayed by the status checking of the flag. The interconnect bus further provides a flag writing channel and a data writing channel.
    Type: Application
    Filed: March 9, 2022
    Publication date: June 23, 2022
    Inventor: An-Pang LI
  • Patent number: 11366775
    Abstract: An efficient control technology for non-volatile memory. In a controller, a host bridge controller and a master computing unit are coupled to a system memory via an interconnect bus, and then coupled to a non-volatile memory interface controller. In response to a read command issued by a host, the non-volatile memory interface controller temporarily stores data read from a non-volatile memory to the system memory and, accordingly, asserts a flag in the system memory. Through a first channel provided by the interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory and returns the data to the host. The master computing unit reads the system memory through a second channel provided by the interconnect bus, without being delayed by the status checking of the flag.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: June 21, 2022
    Assignee: SILICON MOTION, INC.
    Inventor: An-Pang Li
  • Patent number: 11334415
    Abstract: A data storage device and a method for sharing memory of a controller thereof are provided. The data storage device comprises a non-volatile memory and a controller, which is electrically coupled to the non-volatile memory and comprises an access interface, a redundant array of independent disks (RAID) error correcting code (ECC) engine and a central processing unit (CPU). The CPU has a first memory for storing temporary data, and the RAID ECC engine has a second memory. When the second memory is not fully used, the controller maps the unused memory space of the second memory to the first memory to be virtualized as part of the first memory, so that the CPU can also use the unused memory space of the second memory to store the temporary data.
    Type: Grant
    Filed: July 4, 2019
    Date of Patent: May 17, 2022
    Assignee: Silicon Motion, Inc.
    Inventor: An-Pang Li
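To make the memory-sharing idea of patent 11334415 concrete, here is a rough Python sketch in which addresses beyond the CPU's own memory fall through to the unused tail of a second memory. The sizes, names, and flat address model are assumptions for illustration, not the claimed implementation.

```python
# Illustrative sketch (assumed sizes and names): the CPU address space is
# backed by its own memory first, and the unused tail of the RAID ECC engine
# memory is virtualized as an extension of it.

class SharedMemoryMap:
    def __init__(self, cpu_mem_size, ecc_mem_size, ecc_mem_used):
        self.cpu_mem = bytearray(cpu_mem_size)
        self.ecc_mem = bytearray(ecc_mem_size)
        self.ecc_unused_base = ecc_mem_used          # start of the unused region
        self.virtual_size = cpu_mem_size + (ecc_mem_size - ecc_mem_used)

    def _resolve(self, addr):
        # Addresses below the CPU memory size hit the first memory; the rest
        # are redirected into the unused part of the second memory.
        if addr < len(self.cpu_mem):
            return self.cpu_mem, addr
        return self.ecc_mem, self.ecc_unused_base + (addr - len(self.cpu_mem))

    def write(self, addr, value):
        mem, off = self._resolve(addr)
        mem[off] = value

    def read(self, addr):
        mem, off = self._resolve(addr)
        return mem[off]

m = SharedMemoryMap(cpu_mem_size=64, ecc_mem_size=128, ecc_mem_used=96)
m.write(70, 0xAB)          # lands in the unused tail of the ECC engine memory
print(hex(m.read(70)))     # 0xab
```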
  • Patent number: 11314571
    Abstract: A multi-processor system with a distributed mailbox architecture and a communication method thereof are provided. The multi-processor system comprises a plurality of processors, each of which is correspondingly configured with an exclusive mailbox and an exclusive channel, and the communication method comprises the following steps. When a first processor of the processors needs to communicate with a second processor, the first processor writes data into the exclusive mailbox of the second processor through a public bus. When the writing of the data has been completed, the exclusive mailbox of the second processor sends an interrupt signal to the second processor. After receiving the interrupt signal, the second processor reads the data in the exclusive mailbox through the corresponding exclusive channel.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: April 26, 2022
    Assignee: Silicon Motion, Inc.
    Inventor: An-Pang Li
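The distributed-mailbox flow above lends itself to a short sketch: a write over the shared (public) path lands in the receiver's exclusive mailbox, which then signals its owner to drain it over the owner's exclusive channel. The class names and the method call standing in for the interrupt are illustrative assumptions.

```python
# Illustrative sketch (assumed names): each processor owns an exclusive
# mailbox; a write arrives over the public bus, the mailbox interrupts its
# owner, and the owner reads it back over its exclusive channel.

class Mailbox:
    def __init__(self, owner):
        self.owner = owner
        self.buffer = []

    def write(self, data):
        self.buffer.append(data)      # the write over the public bus completes
        self.owner.interrupt(self)    # then the mailbox signals its owner

class Processor:
    def __init__(self, name):
        self.name = name
        self.mailbox = Mailbox(self)  # exclusive mailbox of this processor
        self.received = []

    def interrupt(self, mailbox):
        # Interrupt handler: drain the mailbox over the exclusive channel.
        while mailbox.buffer:
            self.received.append(mailbox.buffer.pop(0))

    def send(self, other, data):
        other.mailbox.write(data)     # the only inter-processor path: public bus

p1, p2 = Processor("P1"), Processor("P2")
p1.send(p2, "hello")
print(p2.received)                    # ['hello']
```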
  • Publication number: 20220121614
    Abstract: The present invention discloses an SoC including a first CPU, a first tightly-coupled memory, a second CPU and a second tightly-coupled memory. The first CPU includes a first core circuit, a first level one memory interface and a first level two memory interface. The first tightly-coupled memory is directly coupled to the first level one memory interface, and the first tightly-coupled memory includes a first mailbox. The second CPU includes a second core circuit, a second level one memory interface and a second level two memory interface. The second tightly-coupled memory is directly coupled to the second level one memory interface, and the second tightly-coupled memory includes a second mailbox. When the first CPU sends a command to the second mailbox within the second tightly-coupled memory, the second core circuit directly reads the command from the second mailbox, without going through the second level two memory interface.
    Type: Application
    Filed: October 15, 2020
    Publication date: April 21, 2022
    Inventor: An-Pang Li
  • Patent number: 11216207
    Abstract: The invention introduces a method for programming data of page groups into flash units, including steps for: obtaining, by a host interface (I/F) controller, user data of a page group from a host side, wherein the page group comprises multiple pages; storing, by the host I/F controller, the user data on the pages in a random access memory (RAM) through a bus architecture, outputting the user data on the pages to an engine via an I/F, and enabling the engine to calculate a parity of the page group according to the user data on the pages; obtaining, by a direct memory access (DMA) controller, the parity of the page group from the engine and storing the parity of the page group in the RAM through the bus architecture; and obtaining, by a flash I/F controller, the user data on the pages and the parity of the page group from the RAM through the bus architecture, and programming the user data on the pages and the parity of the page group into a flash module.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: January 4, 2022
    Assignee: SILICON MOTION, INC.
    Inventor: An-Pang Li
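A minimal sketch of the page-group flow above, assuming the engine's parity is a byte-wise XOR across the group's pages (the abstract does not name the parity rule) and using plain Python lists in place of the RAM, the DMA controller, and the flash interface.

```python
# Illustrative sketch: XOR across the pages of a group stands in for the
# engine's parity computation; lists stand in for RAM and the flash module.
from functools import reduce

PAGE_SIZE = 16          # toy page size for the example

def xor_pages(pages):
    """Fold the pages of a group into a single byte-wise XOR parity page."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pages)

def program_page_group(pages, flash):
    parity = xor_pages(pages)          # the engine computes the group parity
    staged = list(pages) + [parity]    # DMA stores pages and parity in RAM
    flash.extend(staged)               # flash I/F programs the whole group
    return parity

flash_module = []
group = [bytes([i]) * PAGE_SIZE for i in range(1, 5)]   # a 4-page group
parity = program_page_group(group, flash_module)
print(parity[0])        # 4 == 1 ^ 2 ^ 3 ^ 4
```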
  • Patent number: 11194515
    Abstract: The present disclosure provides a memory system, a method of operating memory, and a non-transitory computer readable storage medium. The memory system includes a memory chip and a controller. The controller is coupled with the memory chip, wherein the controller is configured to: receive a first data corresponding to a first version from a file system in order to store the first data corresponding to the first version in a first page of the flash memory chip; and program the first data corresponding to a second version in the first page in response to the first data of the second version, wherein the second version is newer than the first version.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: December 7, 2021
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Ping-Hsien Lin, Wei-Chen Wang, Hsiang-Pang Li, Shu-Hsien Liao, Che-Wei Tsao, Yuan-Hao Chang, Tei-Wei Kuo
  • Publication number: 20210373798
    Abstract: A technology for controlling non-volatile memory with a multi-stage controller is shown. The multi-stage controller uses an upper on-chip interconnect and a lower on-chip interconnect and includes a serial peripheral interface (SPI) loader, a front-end central processing unit (FE CPU), and an arbitrator. When connected to the lower on-chip interconnect, the SPI loader performs code loading for the multi-stage controller. After the SPI loader finishes the code loading, the SPI loader is disconnected from the lower on-chip interconnect, and the arbitrator connects the FE CPU to the lower on-chip interconnect. This way, the communication channel between the upper on-chip interconnect and the lower on-chip interconnect is not occupied by the FE CPU.
    Type: Application
    Filed: March 17, 2021
    Publication date: December 2, 2021
    Inventor: An-Pang LI
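The two-stage hand-over above can be pictured with a small sketch in which an arbitrator decides which master currently owns the lower interconnect; the names and the string-valued masters are assumptions made only for illustration.

```python
# Illustrative sketch (assumed names): the arbitrator grants the lower
# interconnect to the SPI loader during code loading, then hands it to the
# front-end CPU once loading is done.

class Arbitrator:
    def __init__(self):
        self.connected_master = None

    def connect(self, master):
        self.connected_master = master

class LowerInterconnect:
    def __init__(self, arbitrator):
        self.arbitrator = arbitrator
        self.code_memory = []

    def write(self, master, word):
        if master != self.arbitrator.connected_master:
            raise PermissionError(f"{master} is not connected")
        self.code_memory.append(word)

arb = Arbitrator()
bus = LowerInterconnect(arb)

arb.connect("SPI_LOADER")                     # stage 1: the SPI loader loads code
for word in ("boot0", "boot1"):
    bus.write("SPI_LOADER", word)

arb.connect("FE_CPU")                         # stage 2: loader disconnected,
bus.write("FE_CPU", "runtime")                # the FE CPU now owns the channel
print(bus.code_memory)                        # ['boot0', 'boot1', 'runtime']
```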
  • Publication number: 20210240642
    Abstract: An efficient control technology for non-volatile memory. In a controller, a host bridge controller and a master computing unit are coupled to a system memory via an interconnect bus, and then coupled to a non-volatile memory interface controller. In response to a read command issued by a host, the non-volatile memory interface controller temporarily stores data read from a non-volatile memory to the system memory and, accordingly, asserts a flag in the system memory. Through a first channel provided by the interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory and returns the data to the host. The master computing unit reads the system memory through a second channel provided by the interconnect bus, without being delayed by the status checking of the flag.
    Type: Application
    Filed: January 19, 2021
    Publication date: August 5, 2021
    Inventor: An-Pang LI
  • Publication number: 20210240443
    Abstract: A computation method and a computation apparatus exploiting weight sparsity, adapted for a processor to perform multiply-and-accumulate operations on a memory including multiple input and output lines crossing each other. In the method, weights are mapped to the cells of each operation unit (OU) in the memory. The rows of the cells of each OU are compressed by removing at least one row of the cells each mapped with a weight of 0, and an index including values each indicating a distance between every two rows of the cells including at least one cell mapped with a non-zero weight for each OU is encoded. Inputs are inputted to the input lines corresponding to the rows of each OU excluding the rows of the cells with the weight of 0 according to the index and outputs are sensed from the output lines corresponding to the OU to compute a computation result.
    Type: Application
    Filed: February 4, 2020
    Publication date: August 5, 2021
    Applicant: MACRONIX International Co., Ltd.
    Inventors: Hung-Sheng Chang, Han-Wen Hu, Hsiang-Pang Li, Tzu-Hsien Yang, I-Ching Tseng, Hsiang-Yun Cheng, Chia-Lin Yang
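A compact sketch of the row-compression idea above: all-zero rows of an operation unit are dropped, an index of distances between the remaining rows is encoded, and only the matching inputs are applied to the input lines. The particular index encoding used here is an assumption; the application defines its own.

```python
# Illustrative sketch: rows of an operation unit (OU) whose weights are all
# zero are removed, a gap index between the kept rows is encoded, and inputs
# are applied only to the kept rows.
import numpy as np

def compress_ou(weights):
    """weights: (rows, cols) array mapped onto one OU."""
    keep = [r for r in range(weights.shape[0]) if np.any(weights[r] != 0)]
    # Distance to the previous kept row (the first entry counts from row -1),
    # one possible encoding of the index described in the abstract.
    index = [keep[0] + 1] + [b - a for a, b in zip(keep, keep[1:])]
    return weights[keep], index

def ou_compute(compressed_weights, index, inputs):
    """Apply only the inputs that correspond to the kept (nonzero) rows."""
    rows = np.cumsum(index) - 1                 # recover the original row numbers
    return inputs[rows] @ compressed_weights    # multiply-and-accumulate

w = np.array([[0, 0], [3, 1], [0, 0], [2, 5]])  # rows 0 and 2 are all-zero
x = np.array([7, 1, 9, 2])
cw, idx = compress_ou(w)
print(idx)                      # [2, 2]: kept rows are 1 and 3
print(ou_compute(cw, idx, x))   # [ 7 11] == x @ w with the zero rows skipped
```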
  • Patent number: 11050440
    Abstract: An encoding method includes: receiving, by an encoder, an information for encoding; generating, by the encoder, a first portion codeword according to a first encoding rule and the information for encoding, wherein the first encoding rule is an encoding rule configured to generate LDPC code; generating, by the encoder, a second portion codeword according to a second encoding rule different from the first encoding rule and a double check region of the first portion codeword; and concatenating, by the encoder, the first portion codeword and the second portion codeword to generate a codeword. A plurality of trapping sets corresponding to the first encoding rule include at least one error bit within the double check region.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: June 29, 2021
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Chih-Huai Shih, Yu-Ming Huang, Hsiang-Pang Li, Hsi-Chia Chang
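A structural sketch of the concatenation described above. A placeholder routine stands in for the first (LDPC) encoding rule, a single even-parity bit stands in for the second encoding rule, and the double-check region is an arbitrary slice; only the two-stage concatenation pattern is meant to be illustrative.

```python
# Illustrative sketch of the concatenation structure only: stand-in encoders
# replace the real first and second encoding rules, and the double-check
# region is an assumed slice of the first-portion codeword.
import numpy as np

def ldpc_encode(info_bits, parity_len=4):
    # Placeholder for the first encoding rule (a real LDPC encoder would use a
    # generator matrix); here: running parity over prefixes of the information.
    parity = [int(np.sum(info_bits[: i + 1]) % 2) for i in range(parity_len)]
    return np.concatenate([info_bits, parity])

def second_encode(region_bits):
    # Placeholder for the second encoding rule: one even-parity bit over the
    # double-check region, so errors there are covered by both codes.
    return np.array([int(np.sum(region_bits) % 2)])

def encode(info_bits, dcheck_slice=slice(2, 6)):
    first = ldpc_encode(np.asarray(info_bits))
    second = second_encode(first[dcheck_slice])   # covers the double-check region
    return np.concatenate([first, second])        # concatenated final codeword

print(encode([1, 0, 1, 1]))   # first-portion codeword followed by one check bit
```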
  • Publication number: 20210181972
    Abstract: The invention introduces a method for programming data of page groups into flash units, including steps for: obtaining, by a host interface (I/F) controller, user data of a page group from a host side, wherein the page group comprises multiple pages; storing, by the host I/F controller, the user data on the pages in a random access memory (RAM) through a bus architecture, outputting the user data on the pages to an engine via an I/F, and enabling the engine to calculate a parity of the page group according to the user data on the pages; obtaining, by a direct memory access (DMA) controller, the parity of the page group from the engine and storing the parity of the page group in the RAM through the bus architecture; and obtaining, by a flash I/F controller, the user data on the pages and the parity of the page group from the RAM through the bus architecture, and programming the user data on the pages and the parity of the page group into a flash module.
    Type: Application
    Filed: September 2, 2020
    Publication date: June 17, 2021
    Applicant: Silicon Motion, Inc.
    Inventor: An-Pang LI
  • Publication number: 20210119645
    Abstract: The present invention discloses an encoder, a decoder, an encoding method and a decoding method based on Low-Density Parity-Check (LDPC) code. The encoding method comprises: receiving, by an encoder, an information for encoding; generating, by the encoder, a first portion codeword according to a first encoding rule and the information for encoding, wherein the first encoding rule is an encoding rule configured to generate LDPC code; generating, by the encoder, a second portion codeword according to a second encoding rule different from the first encoding rule and a double check region of the first portion codeword; and concatenating, by the encoder, the first portion codeword and the second portion codeword to generate a codeword. A plurality of trapping sets corresponding to the first encoding rule include at least one error bit within the double check region.
    Type: Application
    Filed: October 21, 2019
    Publication date: April 22, 2021
    Inventors: Chih-Huai SHIH, Yu-Ming HUANG, Hsiang-Pang LI, Hsi-Chia CHANG
  • Publication number: 20210081140
    Abstract: The present disclosure provides a memory system, a method of operating memory, and a non-transitory computer readable storage medium. The memory system includes a memory chip and a controller. The controller is coupled with the memory chip, wherein the controller is configured to: receive a first data corresponding to a first version from a file system in order to store the first data corresponding to the first version in a first page of the flash memory chip; and program the first data corresponding to a second version in the first page in response to the first data of the second version, wherein the second version is newer than the first version.
    Type: Application
    Filed: September 16, 2019
    Publication date: March 18, 2021
    Inventors: Ping-Hsien LIN, Wei-Chen WANG, Hsiang-Pang LI, Shu-Hsien LIAO, Che-Wei TSAO, Yuan-Hao CHANG, Tei-Wei KUO
  • Publication number: 20200402426
    Abstract: The invention introduces an apparatus for encrypting and decrypting user data, including a memory, a bypass-flag writing circuit and a flash interface controller. The bypass-flag writing circuit writes a bypass flag in a remaining bit of space of the memory that is originally allocated for storing an End-to-End Data Path Protection (E2E DPP), where the bypass flag indicates whether user data has been encrypted. The flash interface controller reads the user data, the E2E DPP and the bypass flag from the memory and programs the user data, the E2E DPP and the bypass flag into the flash device.
    Type: Application
    Filed: December 5, 2019
    Publication date: December 24, 2020
    Applicant: Silicon Motion, Inc.
    Inventor: An-Pang LI
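A rough sketch of the packing idea above: the bypass flag is tucked into leftover space of the field already reserved for the end-to-end data path protection bytes, so no extra storage is needed to mark whether the data was encrypted. The field size and the use of CRC32 as a stand-in for the E2E DPP are assumptions.

```python
# Illustrative sketch (assumed layout): CRC32 stands in for the E2E DPP, and
# the bypass flag occupies a leftover bit of the space reserved for it.
import zlib

DPP_FIELD_BYTES = 8                    # assumed size reserved for the E2E DPP

def build_dpp_field(user_data: bytes, encrypted: bool) -> bytes:
    crc = zlib.crc32(user_data)                  # stand-in protection value
    field = bytearray(DPP_FIELD_BYTES)
    field[:4] = crc.to_bytes(4, "little")        # the DPP uses part of the field
    field[4] |= 1 if encrypted else 0            # bypass flag in a remaining bit
    return bytes(field)

def parse_dpp_field(field: bytes):
    crc = int.from_bytes(field[:4], "little")
    encrypted = bool(field[4] & 1)               # was the user data encrypted?
    return crc, encrypted

field = build_dpp_field(b"user-sector", encrypted=True)
print(parse_dpp_field(field))                    # (CRC32 of the data, True)
```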
  • Publication number: 20200312405
    Abstract: A method and an apparatus for neural network computation using adaptive data representation, adapted for a processor to perform multiply-and-accumulate operations on a memory having a crossbar architecture, are provided. The memory comprises multiple input and output lines crossing each other, multiple cells respectively disposed at intersections of the input and output lines, and multiple sense amplifiers respectively connected to the output lines. In the method, an input cycle of the kth bits of the input data is adaptively divided into multiple sub-cycles, wherein the number of the divided sub-cycles is determined according to a value of k. The kth bits of the input data are inputted to the input lines over the sub-cycles and computation results on the output lines are sensed by the sense amplifiers. The computation results sensed in each sub-cycle are combined to obtain the output data corresponding to the kth bits of the input data.
    Type: Application
    Filed: February 21, 2020
    Publication date: October 1, 2020
    Applicant: MACRONIX International Co., Ltd.
    Inventors: SHU-YIN HO, Hsiang-Pang Li, Yao-Wen Kang, Chun-Feng Wu, Yuan-Hao Chang, Tei-Wei Kuo
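An illustrative sketch of the sub-cycle idea above: the kth input bits are applied over a k-dependent number of sub-cycles, partial results are sensed per sub-cycle, and the pieces are combined into the output for that bit position. The rule of 2**k sub-cycles per bit position is an assumed policy, not the one claimed.

```python
# Illustrative sketch: the k-th input bits are split across sub-cycles (an
# assumed policy of 2**k sub-cycles), partial sums are "sensed" per sub-cycle,
# and the partial results are combined and weighted by 2**k.
import numpy as np

def sub_cycles_for_bit(k):
    return 1 << k          # assumed rule: bit position k gets 2**k sub-cycles

def compute_bit_plane(bit_inputs, weights, k):
    """bit_inputs: 0/1 vector of the k-th bits; weights: (inputs, outputs)."""
    n = len(bit_inputs)
    groups = np.array_split(np.arange(n), min(sub_cycles_for_bit(k), n))
    total = np.zeros(weights.shape[1], dtype=int)
    for g in groups:                              # one sub-cycle per group
        total += bit_inputs[g] @ weights[g]       # sensed by the sense amplifiers
    return total << k                             # weight the plane by 2**k

w = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
x = np.array([5, 1, 7, 2])                        # 3-bit unsigned inputs
out = sum(compute_bit_plane((x >> k) & 1, w, k) for k in range(3))
print(out, x @ w)                                 # both print [57 72]
```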
  • Patent number: 10671296
    Abstract: Disclosed is a management system for managing a memory device having sub-chips each having a container area and a data area. A CPU selects a target sub-chip according to the respective temperatures of the sub-chips. When the CPU intends to access a first original data in one of the data areas, a hot data tracking device acquires a first original address of the first original data from the CPU. When the first original address is recorded in one of a plurality of tracking layers, the CPU is directed to access a first copied data corresponding to the first original data in the container area of the target sub-chip according to a current tracking layer recording the first original address. When the first original address is not recorded in the tracking layers, the CPU accesses the first original data in the data area according to the first original address.
    Type: Grant
    Filed: August 9, 2017
    Date of Patent: June 2, 2020
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Hung-Sheng Chang, Hsiang-Pang Li, Yuan-Hao Chang, Tei-Wei Kuo
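A small sketch of the tracking-layer lookup described above: if an original address appears in a tracking layer, the access is redirected to the copied data in the container area of a temperature-selected sub-chip; otherwise the original data area is used. The layer structure and the coolest-chip policy are assumptions for illustration.

```python
# Illustrative sketch (assumed structure): tracking layers map original
# addresses to copies in the container area of the target sub-chip; untracked
# addresses fall back to the original data area.

class SubChip:
    def __init__(self, temperature):
        self.temperature = temperature
        self.container = {}            # container area: copied (hot) data
        self.data_area = {}            # data area: original data

class HotDataTracker:
    def __init__(self, sub_chips, num_layers=2):
        self.sub_chips = sub_chips
        self.layers = [dict() for _ in range(num_layers)]  # addr -> container key

    def target_sub_chip(self):
        # Assumed policy: pick the coolest sub-chip to hold the copied data.
        return min(self.sub_chips, key=lambda c: c.temperature)

    def access(self, addr):
        for layer in self.layers:                 # is this address tracked?
            if addr in layer:
                return self.target_sub_chip().container[layer[addr]]
        for chip in self.sub_chips:               # fall back to the original data
            if addr in chip.data_area:
                return chip.data_area[addr]
        return None

hot, cool = SubChip(temperature=70), SubChip(temperature=40)
hot.data_area[0x10] = "original data"
hot.data_area[0x20] = "untracked data"
cool.container["c-0x10"] = "tracked copy"
tracker = HotDataTracker([hot, cool])
tracker.layers[0][0x10] = "c-0x10"        # address 0x10 is recorded in layer 0
print(tracker.access(0x10))               # 'tracked copy' (from the container)
print(tracker.access(0x20))               # 'untracked data' (original data area)
```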
  • Publication number: 20200081780
    Abstract: A data storage device and a parity code processing method thereof are provided. The data storage device includes a non-volatile memory and a controller. The controller includes a RAID ECC engine. The RAID ECC engine has a memory. After completing an encoding operation on each N pages of user data to generate a corresponding parity code, the RAID ECC engine compresses the parity code and stores the compressed parity code in the memory. After all K parity codes of the K×N pages of the user data are compressed and stored in the memory, the RAID ECC engine writes the compressed K parity codes to the non-volatile memory. As such, the frequency of switching the state of the RAID ECC engine is reduced, and the number and duration of writes to the non-volatile memory are reduced.
    Type: Application
    Filed: August 7, 2019
    Publication date: March 12, 2020
    Inventor: An-Pang LI
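A minimal sketch of the batching idea above, assuming a byte-wise XOR parity per page group and zlib as a stand-in for the engine's compression: K compressed parity codes accumulate in the engine memory and go to the non-volatile memory in a single write.

```python
# Illustrative sketch: zlib stands in for whatever compression the RAID ECC
# engine applies; K compressed parity codes are collected in the engine memory
# and written to the non-volatile memory as one batch.
import zlib
from functools import reduce

def page_group_parity(pages):
    # Assumed parity rule: byte-wise XOR across the N pages of a group.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pages)

def encode_and_batch(groups, K, nvm):
    engine_memory = []
    for pages in groups:
        compressed = zlib.compress(page_group_parity(pages))
        engine_memory.append(compressed)          # keep it in the engine memory
        if len(engine_memory) == K:               # K parity codes collected ...
            nvm.append(b"".join(engine_memory))   # ... one write to the NVM
            engine_memory.clear()

nvm_writes = []
groups = [[bytes([g + p]) * 32 for p in range(4)] for g in range(6)]  # 6 groups
encode_and_batch(groups, K=3, nvm=nvm_writes)
print(len(nvm_writes))    # 2 batched writes instead of 6 separate ones
```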
  • Publication number: 20200065190
    Abstract: A data storage device and a method for sharing memory of a controller thereof are provided. The data storage device comprises a non-volatile memory and a controller, which is electrically coupled to the non-volatile memory and comprises an access interface, a redundant array of independent disks (RAID) error correcting code (ECC) engine and a central processing unit (CPU). The CPU has a first memory for storing temporary data, and the RAID ECC engine has a second memory. When the second memory is not fully used, the controller maps the unused memory space of the second memory to the first memory to be virtualized as part of the first memory, so that the CPU can also use the unused memory space of the second memory to store the temporary data.
    Type: Application
    Filed: July 4, 2019
    Publication date: February 27, 2020
    Inventor: An-Pang LI