Patents by Inventor An Pang Li

An Pang Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135118
    Abstract: A hybrid analog and digital computational system is created by receiving equations in which a set of solution values is unknown. A residual iterative algorithm is implemented to solve the set of solution values for the equations. The residual iterative algorithm includes an outer update loop computed using a digital computing device with a set of residue values initially set to a first initial value and a set of solution update values set to a second initial value. The residual iterative algorithm includes an inner residual loop, which is iteratively computed using an analog accelerator until one or more inner residual loop stopping criteria are met, and which returns the set of solution update values. Next, the set of solution update values is used to update the set of residue values and a range of a next set of solution update values, thereby adjusting a computational precision of the inner residual loop.
    Type: Application
    Filed: February 19, 2021
    Publication date: April 25, 2024
    Inventors: Pang SHUO, Guifang LI, Zheyuan ZHU
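
    The inner/outer split described in this entry's abstract closely resembles classic iterative refinement: an imprecise inner solver produces a correction against the current residual, and a precise outer loop applies the correction and recomputes the residual, which also rescales the range the inner loop must resolve next. Below is a minimal Python sketch of that pattern, with a deliberately noisy solver standing in for the analog accelerator; all names and the noise model are illustrative assumptions, not details from the filing.

      import numpy as np

      def noisy_inner_solve(A, r, noise=1e-3):
          # Stand-in for the analog accelerator: a low-precision solve of A @ dx = r.
          dx = np.linalg.solve(A, r)
          return dx + noise * np.linalg.norm(dx) * np.random.randn(*dx.shape)

      def residual_iterative_solve(A, b, outer_iters=50, tol=1e-10):
          x = np.zeros_like(b)              # solution values, initially zero
          r = b.copy()                      # residue values, initially b
          for _ in range(outer_iters):      # outer update loop (digital)
              dx = noisy_inner_solve(A, r)  # inner residual loop returns solution update values
              x = x + dx                    # apply the solution update values
              r = b - A @ x                 # update the residue values at full precision
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  break
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(residual_iterative_solve(A, b))  # close to np.linalg.solve(A, b)
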
  • Publication number: 20240105848
    Abstract: A semiconductor device structure is provided. The semiconductor device structure includes multiple semiconductor nanostructures, and the semiconductor nanostructures include a first semiconductor material. The semiconductor device structure also includes multiple epitaxial structures extending from edges of the semiconductor nanostructures. The epitaxial structures include a second semiconductor material that is different than the first semiconductor material. The semiconductor device structure further includes a gate stack wrapped around the semiconductor nanostructures.
    Type: Application
    Filed: November 29, 2023
    Publication date: March 28, 2024
    Applicant: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventors: Shuen-Shin LIANG, Pang-Yen TSAI, Keng-Chu LIN, Sung-Li WANG, Pinyen LIN
  • Patent number: 11704054
    Abstract: A method for performing access management of a memory device with aid of buffer usage reduction control and associated apparatus are provided. The method includes: determining whether any host command among a plurality of host commands from a host device is a trim-related read command, wherein the trim-related read command represents a read command indicating that reading from at least one trimmed location is required; in response to the any host command being the trim-related read command, determining an estimated trim-related read operation count regarding a data buffer according to a trimmed range of the at least one trimmed location and a predetermined unit size of accessing the data buffer; writing predetermined trimmed data having the predetermined unit size into the data buffer; and controlling a transmission interface circuit to read the predetermined trimmed data from the data buffer multiple times, for being returned to the host device.
    Type: Grant
    Filed: January 5, 2022
    Date of Patent: July 18, 2023
    Assignee: Silicon Motion, Inc.
    Inventor: An-Pang Li
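
    The buffer-usage reduction in the entry above comes down to writing a single unit of predetermined trimmed data into the data buffer and returning it repeatedly, so the estimated trim-related read operation count is the trimmed range divided by the buffer's unit size, rounded up. A small sketch of that count, with byte-based units chosen purely for illustration:

      def estimated_trim_read_count(trimmed_range_bytes: int, unit_size_bytes: int) -> int:
          # Ceiling of trimmed_range / unit_size: how many times the one unit of
          # predetermined trimmed data is read back from the data buffer.
          return -(-trimmed_range_bytes // unit_size_bytes)

      # Example: a 1 MiB trimmed range served from a 4 KiB buffer unit needs 256 reads.
      print(estimated_trim_read_count(1 << 20, 4096))  # 256
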
  • Patent number: 11704246
    Abstract: A memory system for maintaining data consistency and an operation method thereof are provided. The operation method includes: receiving a first data in a first cache of a first memory from a processor; reading the first data from the first cache and writing the first data as a redo log into a log buffer of the first memory; writing the redo log from the log buffer into a memory controller of the processor; performing an in-memory copy in a second memory to copy a second data as an undo log, wherein the second data is an old version of the first data; and writing the redo log from the memory controller into the second memory for covering the second data by the redo log as a third data, wherein the redo log, the third data and the first data are the same.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: July 18, 2023
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Bo-Rong Lin, Ming-Liang Wei, Hsiang-Pang Li, Nai-Jia Dong, Hsiang-Yun Cheng, Chia-Lin Yang
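
    Taken in order, the steps in this entry's abstract form a small write-ahead-logging protocol: stage the new data in the first memory's cache, persist it as a redo log, snapshot the old version in the second memory as an undo log via an in-memory copy, then overwrite the old data with the redo log. A hypothetical Python sketch of that ordering follows; the dictionary-based memories and names are illustrative only.

      def consistent_update(first_memory, second_memory, addr, first_data):
          first_memory["cache"][addr] = first_data              # 1. first data arrives in the first cache
          redo_log = first_memory["cache"][addr]                # 2. read it back and
          first_memory["log_buffer"].append((addr, redo_log))   #    write it as a redo log into the log buffer
          addr_logged, logged_data = first_memory["log_buffer"][-1]          # 3. redo log reaches the memory controller
          second_memory["undo_log"][addr] = second_memory["data"].get(addr)  # 4. in-memory copy of the old version
          second_memory["data"][addr_logged] = logged_data      # 5. cover the second data with the redo log
          return second_memory["data"][addr]                    # third data == redo log == first data

      first_memory = {"cache": {}, "log_buffer": []}
      second_memory = {"data": {0x10: "old"}, "undo_log": {}}
      print(consistent_update(first_memory, second_memory, 0x10, "new"))  # "new"
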
  • Publication number: 20230214154
    Abstract: A method for performing access management of a memory device with aid of buffer usage reduction control and associated apparatus are provided. The method includes: determining whether any host command among a plurality of host commands from a host device is a trim-related read command, wherein the trim-related read command represents a read command indicating that reading from at least one trimmed location is required; in response to the any host command being the trim-related read command, determining an estimated trim-related read operation count regarding a data buffer according to a trimmed range of the at least one trimmed location and a predetermined unit size of accessing the data buffer; writing predetermined trimmed data having the predetermined unit size into the data buffer; and controlling a transmission interface circuit to read the predetermined trimmed data from the data buffer multiple times, for being returned to the host device.
    Type: Application
    Filed: January 5, 2022
    Publication date: July 6, 2023
    Applicant: Silicon Motion, Inc.
    Inventor: An-Pang Li
  • Patent number: 11651707
    Abstract: The invention introduces an apparatus for encrypting and decrypting user data, including a memory, a bypass-flag writing circuit and a flash interface controller. The bypass-flag writing circuit writes a bypass flag in a remaining bit of space of the memory that is originally allocated for storing an End-to-End Data Path Protection (E2E DPP), where the bypass flag indicates whether user data has been encrypted. The flash interface controller reads the user data, the E2E DPP and the bypass flag from the memory and programs the user data, the E2E DPP and the bypass flag into the flash device.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: May 16, 2023
    Assignee: SILICON MOTION, INC.
    Inventor: An-Pang Li
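
    The mechanism in the entry above hinges on a spare bit in the metadata space reserved for the End-to-End Data Path Protection (E2E DPP) field: that bit records whether the accompanying user data has already been encrypted, and the flash interface controller carries it along when programming. A hypothetical packing sketch follows; the field sizes and layout are assumptions for illustration, not the layout used by the patent.

      import struct

      def pack_sector(user_data: bytes, dpp_value: int, encrypted: bool) -> bytes:
          # Assumed layout: user data, a 4-byte E2E DPP value, then a 1-byte bypass flag
          # stored in the remaining metadata space.
          bypass_flag = 1 if encrypted else 0
          return user_data + struct.pack("<IB", dpp_value, bypass_flag)

      def read_bypass_flag(sector: bytes) -> bool:
          # The flash interface controller checks the flag before deciding whether to
          # treat the payload as already-encrypted data.
          return bool(sector[-1])

      sector = pack_sector(b"\x00" * 512, dpp_value=0xDEADBEEF, encrypted=True)
      print(read_bypass_flag(sector))  # True
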
  • Patent number: 11594277
    Abstract: A method for neural network computation using adaptive data representation, adapted for a processor to perform multiply-and-accumulate operations on a memory having a crossbar architecture, is provided. The memory comprises multiple input and output lines crossing each other, multiple cells respectively disposed at intersections of the input and output lines, and multiple sense amplifiers respectively connected to the output lines. In the method, an input cycle of kth bits respectively in an input data is adaptively divided into multiple sub-cycles, wherein a number of the divided sub-cycles is determined according to a value of k. The kth bits of the input data are inputted to the input lines with the sub-cycles and computation results of the output lines are sensed by the sense amplifiers. The computation results sensed in each sub-cycle are combined to obtain the output data corresponding to the kth bits of the input data.
    Type: Grant
    Filed: July 22, 2022
    Date of Patent: February 28, 2023
    Assignee: MACRONIX International Co., Ltd.
    Inventors: Shu-Yin Ho, Hsiang-Pang Li, Yao-Wen Kang, Chun-Feng Wu, Yuan-Hao Chang, Tei-Wei Kuo
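
    The motivation for the per-bit sub-cycles in the entry above is that a crossbar column sums contributions from every active input line at once, and the more significant input bits both carry more weight and are costlier to sense incorrectly. Splitting the k-th bit's input cycle into several sub-cycles keeps each sensed partial sum within the sense amplifiers' resolvable range; the partial results are then combined digitally. The Python sketch below illustrates that splitting; the specific sub-cycle policy (more sub-cycles for more significant bits) and all names are assumptions, not the patented scheme.

      import numpy as np

      def subcycles_for_bit(k: int) -> int:
          # Assumed policy: the more significant the input bit, the more sub-cycles.
          return 1 << k

      def crossbar_mac_for_bit(input_bits_k: np.ndarray, weights: np.ndarray, k: int) -> np.ndarray:
          # input_bits_k: the k-th bit of every input value, one per input line.
          # weights: cell values at the crossbar intersections, shape (input_lines, output_lines).
          groups = np.array_split(np.arange(len(input_bits_k)), subcycles_for_bit(k))
          total = np.zeros(weights.shape[1])
          for rows in groups:                               # one sub-cycle per group of input lines
              total += input_bits_k[rows] @ weights[rows]   # combine what the sense amplifiers read
          return total

      rng = np.random.default_rng(0)
      weights = rng.integers(0, 2, size=(8, 4)).astype(float)
      bits = rng.integers(0, 2, size=8).astype(float)
      print(crossbar_mac_for_bit(bits, weights, k=2))
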
  • Patent number: 11573730
    Abstract: A technology for controlling non-volatile memory with a multi-stage controller is shown. The multi-stage controller uses an upper on-chip interconnect and a lower on-chip interconnect and includes a serial peripheral interface (SPI) loader, a front-end central processing unit (FE CPU), and an arbitrator. When connected to the lower on-chip interconnect, the SPI loader performs code loading for the multi-stage controller. After the SPI loader finishes the code loading, the SPI loader is disconnected from the lower on-chip interconnect, and the arbitrator connects the FE CPU to the lower on-chip interconnect. This way, the communication channel between the upper on-chip interconnect and the lower on-chip interconnect is not occupied by the FE CPU.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: February 7, 2023
    Assignee: SILICON MOTION, INC.
    Inventor: An-Pang Li
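
    The sequencing in the entry above is a boot-time ownership hand-off on the lower on-chip interconnect: the SPI loader holds the connection only while code is being loaded, after which the arbitrator hands the connection to the front-end CPU, so the FE CPU never ties up the channel between the two interconnects. A toy sketch of that hand-off; the classes and methods are invented purely to make the ordering concrete.

      class LowerInterconnect:
          def __init__(self):
              self.master = None            # which block currently owns the lower interconnect

          def connect(self, master: str):
              self.master = master

      def load_code_over_spi() -> str:
          return "controller firmware image"   # placeholder for the SPI transfer

      def boot(lower_bus: LowerInterconnect) -> str:
          lower_bus.connect("SPI loader")      # SPI loader owns the lower interconnect during code loading
          firmware = load_code_over_spi()
          lower_bus.connect("FE CPU")          # arbitrator then connects the front-end CPU instead
          return firmware

      print(boot(LowerInterconnect()))
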
  • Publication number: 20230033998
    Abstract: A memory system for maintaining data consistency and an operation method thereof are provided. The operation method includes: receiving a first data in a first cache of a first memory from a processor; reading the first data from the first cache and writing the first data as a redo log into a log buffer of the first memory; writing the redo log from the log buffer into a memory controller of the processor; performing an in-memory copy in a second memory to copy a second data as an undo log, wherein the second data is an old version of the first data; and writing the redo log from the memory controller into the second memory for covering the second data by the redo log as a third data, wherein the redo log, the third data and the first data are the same.
    Type: Application
    Filed: December 1, 2021
    Publication date: February 2, 2023
    Inventors: Bo-Rong LIN, Ming-Liang WEI, Hsiang-Pang LI, Nai-Jia DONG, Hsiang-Yun CHENG, Chia-Lin YANG
  • Publication number: 20230011874
    Abstract: A device combining positioning and injection systems for injection within a body cavity comprises: a tubular housing surrounding an accommodating space and having an opening hole fluidly connected to the accommodating space; a curved channel fluidly connected to the accommodating space and the opening hole; an injection needle disposed inside the accommodating space and the curved channel, the injection needle having a piercing portion penetrating through the opening hole and configured to extend outward from or retract inward into the opening hole; a plurality of ultrasonic transducers installed at the tubular housing and at two opposite sides of the opening hole, wherein at least one of the ultrasonic transducers is used to transmit a detection signal and at least one of the ultrasonic transducers is used to receive the detection signal; and a monitoring unit connected to the ultrasonic transducers via a transmission unit for information transmission therebetween.
    Type: Application
    Filed: July 11, 2021
    Publication date: January 12, 2023
    Inventors: CHENG-SHING CHEN, WEI-HUNG CHEN, PANG-LI YANG
  • Patent number: 11550740
    Abstract: A non-volatile memory control technology. In response to a read command, a non-volatile memory interface controller temporarily stores data read from a non-volatile memory to a system memory and, accordingly, asserts a flag in the system memory. Through a write channel provided by the interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory. A master computing unit reads the system memory through a read channel provided by the interconnect bus, without being delayed by the status checking of the flag. The host bridge controller executes a data detection command or a preset vendor command to issue a write request for programming data in a virtual address, to trigger a handshake between the host bridge controller and the system memory through the write channel. During the handshake, flag checking is achieved.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: January 10, 2023
    Assignee: SILICON MOTION, INC.
    Inventor: An-Pang Li
  • Patent number: 11526454
    Abstract: A non-volatile memory control technology. In response to a read command, a non-volatile memory interface controller temporarily stores data read from a non-volatile memory to the system memory and, accordingly, asserts a flag in the system memory. Through a flag reading channel provided by an interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory. A master computing unit reads the system memory through a data reading channel provided by the interconnect bus, without being delayed by the status checking of the flag. The interconnect bus further provides a flag writing channel and a data writing channel.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: December 13, 2022
    Assignee: SILICON MOTION, INC.
    Inventor: An-Pang Li
  • Patent number: 11526328
    Abstract: A computation method and a computation apparatus exploiting weight sparsity, adapted for a processor to perform multiply-and-accumulate operations on a memory including multiple input and output lines crossing each other. In the method, weights are mapped to the cells of each operation unit (OU) in the memory. The rows of the cells of each OU are compressed by removing at least one row of the cells each mapped with a weight of 0, and an index including values each indicating a distance between every two rows of the cells including at least one cell mapped with a non-zero weight for each OU is encoded. Inputs are inputted to the input lines corresponding to the rows of each OU excluding the rows of the cells with the weight of 0 according to the index and outputs are sensed from the output lines corresponding to the OU to compute a computation result.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: December 13, 2022
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Hung-Sheng Chang, Han-Wen Hu, Hsiang-Pang Li, Tzu-Hsien Yang, I-Ching Tseng, Hsiang-Yun Cheng, Chia-Lin Yang
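
    The compression in the entry above works per operation unit (OU): rows whose weights are all zero are dropped, and the index stores only the distance from one retained row to the next, which is enough to steer the inputs onto the surviving input lines. A hypothetical Python sketch of that encode-and-route step follows; the names and the exact index convention (first entry as an offset from row 0) are assumptions.

      import numpy as np

      def compress_ou(weights: np.ndarray):
          # Drop the all-zero rows of one OU and encode the distances between retained rows.
          nonzero_rows = [i for i in range(weights.shape[0]) if np.any(weights[i] != 0)]
          compressed = weights[nonzero_rows]
          index = [nonzero_rows[0]] + [b - a for a, b in zip(nonzero_rows, nonzero_rows[1:])]
          return compressed, index

      def select_inputs(inputs: np.ndarray, index) -> np.ndarray:
          # Route only the inputs whose rows survived compression, reconstructed from the index.
          positions = np.cumsum(index)            # absolute positions of the retained rows
          return inputs[positions]

      weights = np.array([[0, 0], [3, 1], [0, 0], [0, 0], [2, 5]], dtype=float)
      inputs = np.array([1.0, 0.0, 1.0, 1.0, 1.0])
      compressed, index = compress_ou(weights)
      print(index)                                       # [1, 3]: row 1 is kept, then the row 3 further down
      print(select_inputs(inputs, index) @ compressed)   # matches inputs @ weights
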
  • Publication number: 20220359003
    Abstract: A method for neural network computation using adaptive data representation, adapted for a processor to perform multiply-and-accumulate operations on a memory having a crossbar architecture, is provided. The memory comprises multiple input and output lines crossing each other, multiple cells respectively disposed at intersections of the input and output lines, and multiple sense amplifiers respectively connected to the output lines. In the method, an input cycle of kth bits respectively in an input data is adaptively divided into multiple sub-cycles, wherein a number of the divided sub-cycles is determined according to a value of k. The kth bits of the input data are inputted to the input lines with the sub-cycles and computation results of the output lines are sensed by the sense amplifiers. The computation results sensed in each sub-cycle are combined to obtain the output data corresponding to the kth bits of the input data.
    Type: Application
    Filed: July 22, 2022
    Publication date: November 10, 2022
    Applicant: MACRONIX International Co., Ltd.
    Inventors: Shu-Yin Ho, Hsiang-Pang Li, Yao-Wen Kang, Chun-Feng Wu, Yuan-Hao Chang, Tei-Wei Kuo
  • Patent number: 11443797
    Abstract: A method and an apparatus for neural network computation using adaptive data representation, adapted for a processor to perform multiply-and-accumulate operations on a memory having a crossbar architecture, are provided. The memory comprises multiple input and output lines crossing each other, multiple cells respectively disposed at intersections of the input and output lines, and multiple sense amplifiers respectively connected to the output lines. In the method, an input cycle of kth bits respectively in an input data is adaptively divided into multiple sub-cycles, wherein a number of the divided sub-cycles is determined according to a value of k. The kth bits of the input data are inputted to the input lines with the sub-cycles and computation results of the output lines are sensed by the sense amplifiers. The computation results sensed in each sub-cycle are combined to obtain the output data corresponding to the kth bits of the input data.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: September 13, 2022
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Shu-Yin Ho, Hsiang-Pang Li, Yao-Wen Kang, Chun-Feng Wu, Yuan-Hao Chang, Tei-Wei Kuo
  • Patent number: 11372800
    Abstract: The present invention provides an SoC including a first CPU, a first tightly-coupled memory, a second CPU and a second tightly-coupled memory. The first CPU includes a first core circuit, a first level one memory interface and a first level two memory interface. The first tightly-coupled memory is directly coupled to the first level one memory interface, and the first tightly-coupled memory includes a first mailbox. The second CPU includes a second core circuit, a second level one memory interface and a second level two memory interface. The second tightly-coupled memory is directly coupled to the second level one memory interface, and the second tightly-coupled memory includes a second mailbox. When the first CPU sends a command to the second mailbox within the second tightly-coupled memory, the second core circuit directly reads the command from the second mailbox, without going through the second level two memory interface.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: June 28, 2022
    Assignee: Silicon Motion, Inc.
    Inventor: An-Pang Li
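
    The reason the mailboxes in the entry above live inside each CPU's tightly-coupled memory is latency: a command dropped into the peer's mailbox can be read over the level-one memory interface, so the receiving core never has to go through its level-two interface for inter-processor messages. A toy Python sketch of that message path; the class and method names are invented for illustration.

      class CPU:
          def __init__(self, name: str):
              self.name = name
              self.tcm = {"mailbox": []}   # tightly-coupled memory on the level-one interface

          def send_command(self, peer: "CPU", command: str):
              peer.tcm["mailbox"].append(command)   # write into the peer's mailbox

          def read_mailbox(self) -> str:
              # Read directly from the tightly-coupled memory, i.e. without going
              # through the level-two memory interface.
              return self.tcm["mailbox"].pop(0)

      first_cpu, second_cpu = CPU("first CPU"), CPU("second CPU")
      first_cpu.send_command(second_cpu, "flush cache")
      print(second_cpu.read_mailbox())   # "flush cache"
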
  • Publication number: 20220197836
    Abstract: A non-volatile memory control technology. In response to a read command, a non-volatile memory interface controller temporarily stores data read from a non-volatile memory to a system memory and, accordingly, asserts a flag in the system memory. Through a write channel provided by the interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory. A master computing unit reads the system memory through a read channel provided by the interconnect bus, without being delayed by the status checking of the flag. The host bridge controller executes a data detection command or a preset vendor command to issue a write request for programming data in a virtual address, to trigger a handshake between the host bridge controller and the system memory through the write channel. During the handshake, flag checking is achieved.
    Type: Application
    Filed: March 9, 2022
    Publication date: June 23, 2022
    Inventor: An-Pang LI
  • Publication number: 20220197835
    Abstract: A non-volatile memory control technology. In response to a read command, a non-volatile memory interface controller temporarily stores data read from a non-volatile memory to the system memory and, accordingly, asserts a flag in the system memory. Through a flag reading channel provided by an interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory. A master computing unit reads the system memory through a data reading channel provided by the interconnect bus, without being delayed by the status checking of the flag. The interconnect bus further provides a flag writing channel and a data writing channel.
    Type: Application
    Filed: March 9, 2022
    Publication date: June 23, 2022
    Inventor: An-Pang LI
  • Patent number: 11366775
    Abstract: An efficient control technology for non-volatile memory. In a controller, a host bridge controller and a master computing unit are coupled to a system memory via an interconnect bus, and then coupled to a non-volatile memory interface controller. In response to a read command issued by a host, the non-volatile memory interface controller temporarily stores data read from a non-volatile memory to the system memory and, accordingly, asserts a flag in the system memory. Through a first channel provided by the interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from system memory and returns the data to the host. The master computing unit reads the system memory through a second channel provided by the interconnect bus, without being delayed by the status checking of the flag.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: June 21, 2022
    Assignee: SILICON MOTION, INC.
    Inventor: An-Pang Li
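
    The common thread across this family of filings is a producer/consumer flag in system memory: the non-volatile memory interface controller deposits the read data and only then asserts the flag; the host bridge controller's channel waits on that flag before returning data to the host, while the master computing unit's channel reads without the flag check and so is never delayed by it. A minimal Python sketch of that flow; the threads and names are illustrative only.

      import threading, time

      system_memory = {"data": None, "flag": False}

      def nvm_interface_controller(read_data: bytes):
          system_memory["data"] = read_data   # temporarily store data read from the non-volatile memory
          system_memory["flag"] = True        # then assert the flag

      def host_bridge_controller() -> bytes:
          while not system_memory["flag"]:    # first channel: confirm the flag before reading
              time.sleep(0.001)
          return system_memory["data"]        # data that will be returned to the host

      def master_computing_unit():
          return system_memory["data"]        # second channel: read without any flag check

      t = threading.Thread(target=nvm_interface_controller, args=(b"sector 0",))
      t.start()
      print(host_bridge_controller())         # b'sector 0'
      t.join()
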
  • Patent number: 11334415
    Abstract: A data storage device and a method for sharing memory of controller thereof are provided. The data storage device comprises a non-volatile memory and a controller, which is electrically coupled to the non-volatile memory and comprises an access interface, a redundant array of independent disks (RAID) error correcting code (ECC) engine and a central processing unit (CPU). The CPU has a first memory for storing temporary data, the RAID ECC engine has a second memory, and the controller maps the unused memory space of the second memory to the first memory to be virtualized as part of the first memory when the second memory is not fully used so that the CPU can also use the unused memory space of the second memory to store the temporary data.
    Type: Grant
    Filed: July 4, 2019
    Date of Patent: May 17, 2022
    Assignee: Silicon Motion, Inc.
    Inventor: An-Pang Li
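
    The sharing in the entry above is an address-mapping trick: when the RAID ECC engine's second memory is not fully occupied, the controller maps its unused space just past the end of the CPU's first memory, so the CPU sees one larger scratch area for temporary data. A hypothetical sketch of that mapping; the sizes, names, and flat byte-addressed model are assumptions for illustration.

      class SharedScratchpad:
          # CPU temporary-data space backed by the first memory plus the unused
          # portion of the RAID ECC engine's second memory.
          def __init__(self, first_size: int, second_size: int, second_used: int):
              self.first = bytearray(first_size)
              self.second = bytearray(second_size)
              self.second_free_base = second_used    # start of the unused region of the second memory

          def write(self, addr: int, value: int):
              if addr < len(self.first):
                  self.first[addr] = value           # falls inside the CPU's first memory
              else:                                  # mapped into the second memory's unused space
                  self.second[self.second_free_base + (addr - len(self.first))] = value

          def read(self, addr: int) -> int:
              if addr < len(self.first):
                  return self.first[addr]
              return self.second[self.second_free_base + (addr - len(self.first))]

      pad = SharedScratchpad(first_size=1024, second_size=4096, second_used=1024)
      pad.write(2048, 0xAB)          # lands in the unused space of the second memory
      print(hex(pad.read(2048)))     # 0xab
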