Patents by Inventor Xiaochen PENG

Xiaochen PENG has filed patent applications to protect the following inventions. This listing includes applications that are still pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240284553
    Abstract: Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a user equipment (UE) may disconnect from a first radio access technology (RAT) service, independent of signaling from the first RAT service, after sensing that the UE has entered a coverage hole. The UE may use a connection to a second RAT service while in the coverage hole in association with disconnecting from the first RAT service. The UE may disconnect from the second RAT service, independent of signaling from the second RAT service, after sensing that the UE has exited the coverage hole and before expiration of a radio link failure (RLF) timer for the second RAT service. The UE may reconnect to the first RAT service before expiration of an RLF timer for the first RAT service. Numerous other aspects are described.
    Type: Application
    Filed: December 2, 2021
    Publication date: August 22, 2024
    Inventors: Jing DAI, Xianwei ZHU, Manisha PRIYADARSHINI, Qin Xue FRANTTI, Xinning SHEN, Yiming XU, Xiao PENG, Hewu GU, Yunjia NIU, Shan QING, Sumit Kumar SINGH, Xiaochen CHEN, Thomas CHRISTOL, Shanshan WANG, Arvind Vardarajan SANTHANAM, Yue HONG, Xiaoning LU, Xuqiang ZHANG, Jiming GUO, Tom CHIN, Jun DENG, Peng HU
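
The abstract above describes a timer-bounded switching behavior rather than a particular implementation. The following Python sketch only illustrates that behavior under stated assumptions: the class CoverageAwareUE, its method names, and the concrete timer values are invented here for clarity and are not taken from the application.

    import time

    # Illustrative sketch only: class, method, and timer names are assumptions
    # made for readability; they do not come from the patent application.
    class CoverageAwareUE:
        """Models a UE that switches RATs around a coverage hole without
        waiting for network signaling, as summarized in publication 20240284553."""

        def __init__(self, rlf_timer_first_rat_s=10.0, rlf_timer_second_rat_s=10.0):
            self.rlf_timer_first_rat_s = rlf_timer_first_rat_s
            self.rlf_timer_second_rat_s = rlf_timer_second_rat_s
            self.active_rat = "RAT1"      # currently connected to the first RAT service
            self.hole_entered_at = None

        def on_coverage_hole_entered(self):
            # Disconnect from the first RAT locally (no signaling from that RAT)
            # and use the second RAT while inside the coverage hole.
            self.hole_entered_at = time.monotonic()
            self.active_rat = "RAT2"

        def on_coverage_hole_exited(self):
            # Leave the second RAT before its RLF timer expires and reconnect
            # to the first RAT before that RAT's RLF timer expires.
            elapsed = time.monotonic() - self.hole_entered_at
            if elapsed < min(self.rlf_timer_first_rat_s, self.rlf_timer_second_rat_s):
                self.active_rat = "RAT1"
            else:
                # An RLF timer has already expired; a full re-establishment
                # would be needed instead of a silent reconnect.
                self.active_rat = None

    if __name__ == "__main__":
        ue = CoverageAwareUE()
        ue.on_coverage_hole_entered()
        print("In hole, serving RAT:", ue.active_rat)      # RAT2
        ue.on_coverage_hole_exited()
        print("Out of hole, serving RAT:", ue.active_rat)  # RAT1
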
  • Publication number: 20240242071
    Abstract: The present disclosure provides an accelerator circuit, a semiconductor device, and a method for accelerating convolution in a convolutional neural network. The accelerator circuit includes a plurality of sub processing-element (PE) arrays, and each of the plurality of sub PE arrays includes a plurality of processing elements. The processing elements in each of the plurality of sub PE arrays implement a standard convolutional layer during a first configuration applied to the accelerator circuit, and implement a depth-wise convolutional layer during a second configuration applied to the accelerator circuit.
    Type: Application
    Filed: January 18, 2023
    Publication date: July 18, 2024
    Inventors: Xiaochen PENG, Murat Kerem AKARVARDAR, Xiaoyu SUN
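
For readers unfamiliar with the distinction the abstract draws between the two configurations, the sketch below shows in plain NumPy what a standard convolutional layer and a depth-wise convolutional layer each compute. The mapping of these loops onto the sub PE arrays is abstracted away, and the function names and tensor shapes are illustrative assumptions.

    import numpy as np

    def standard_conv(x, w):
        """Standard convolution: every output channel sums over all input
        channels. x: (C_in, H, W), w: (C_out, C_in, K, K), stride 1, no padding."""
        c_in, h, wd = x.shape
        c_out, _, k, _ = w.shape
        out = np.zeros((c_out, h - k + 1, wd - k + 1))
        for co in range(c_out):
            for i in range(h - k + 1):
                for j in range(wd - k + 1):
                    out[co, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[co])
        return out

    def depthwise_conv(x, w):
        """Depth-wise convolution: each input channel is filtered independently
        by its own kernel. x: (C, H, W), w: (C, K, K)."""
        c, h, wd = x.shape
        _, k, _ = w.shape
        out = np.zeros((c, h - k + 1, wd - k + 1))
        for ch in range(c):
            for i in range(h - k + 1):
                for j in range(wd - k + 1):
                    out[ch, i, j] = np.sum(x[ch, i:i + k, j:j + k] * w[ch])
        return out

    if __name__ == "__main__":
        x = np.random.rand(4, 8, 8)
        print(standard_conv(x, np.random.rand(8, 4, 3, 3)).shape)  # (8, 6, 6)
        print(depthwise_conv(x, np.random.rand(4, 3, 3)).shape)    # (4, 6, 6)

The relevant difference for the accelerator is the inner multiply-accumulate pattern: the standard layer reduces across all input channels, while the depth-wise layer keeps channels separate, so the same processing elements must be wired and scheduled differently in each configuration.
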
  • Publication number: 20240203463
    Abstract: A 3D memory device is provided. The 3D memory device includes a first logic base layer, a second layer, and a third layer. The first logic base layer comprises a first type DEMUX, a plurality of second type DEMUXs coupled to the first type DEMUX, a first type MUX, and a plurality of second type MUXs coupled to the first type MUX. The second layer comprises a first group of memory units. Each of the first group of memory units is respectively coupled to a corresponding DEMUX of the plurality of second type DEMUXs and a corresponding MUX of the plurality of second type MUXs. The third layer comprises a second group of memory units. Each of the second group of memory units is respectively coupled to a corresponding DEMUX of the plurality of second type DEMUXs and a corresponding MUX of the plurality of second type MUXs.
    Type: Application
    Filed: January 17, 2023
    Publication date: June 20, 2024
    Inventors: Murat Kerem AKARVARDAR, Xiaochen PENG
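
Below is a minimal software model of the routing structure described in the abstract above, assuming one first-level DEMUX that fans out to several second-level DEMUXs, each serving one memory unit on the second layer and one on the third. The class names, tree size, and memory-unit size are assumptions made for illustration, not details from the application.

    class MemoryUnit:
        def __init__(self, layer, index, size=16):
            self.layer = layer
            self.index = index
            self.cells = [0] * size

    class StackedMemory:
        """A logic base layer routes accesses through a first-level DEMUX and
        several second-level DEMUXs to memory units on two stacked layers;
        the MUX side mirrors the same tree on the read path."""

        def __init__(self, second_level_demuxes=4):
            self.units = {}
            for demux in range(second_level_demuxes):
                for layer in (2, 3):  # second and third layers hold the memory units
                    self.units[(demux, layer)] = MemoryUnit(layer, demux)

        def write(self, demux_sel, layer_sel, offset, value):
            # The first-level DEMUX picks a second-level DEMUX (demux_sel);
            # that DEMUX picks the target layer and unit (layer_sel).
            self.units[(demux_sel, layer_sel)].cells[offset] = value

        def read(self, demux_sel, layer_sel, offset):
            # The MUX tree mirrors the DEMUX path on the way back out.
            return self.units[(demux_sel, layer_sel)].cells[offset]

    if __name__ == "__main__":
        mem = StackedMemory()
        mem.write(demux_sel=2, layer_sel=3, offset=5, value=0xAB)
        print(hex(mem.read(2, 3, 5)))  # 0xab
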
  • Publication number: 20240069971
    Abstract: An artificial intelligence (AI) accelerator device may include a plurality of on-chip mini buffers that are associated with a processing element (PE) array. Each mini buffer is associated with a subset of rows or a subset of columns of the PE array. Partitioning an on-chip buffer of the AI accelerator device into the mini buffers described herein may reduce the size and complexity of the on-chip buffer. The reduced size of the on-chip buffer may reduce the wire routing complexity of the on-chip buffer, which may reduce latency and may reduce access energy for the AI accelerator device. This may increase the operating efficiency and/or may increase the performance of the AI accelerator device. Moreover, the mini buffers may increase the overall bandwidth that is available for the mini buffers to transfer data to and from the PE array.
    Type: Application
    Filed: August 31, 2022
    Publication date: February 29, 2024
    Inventors: Xiaoyu SUN, Xiaochen PENG, Murat Kerem AKARVARDAR
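
To make the partitioning idea concrete, the sketch below models an on-chip buffer split into mini buffers, each bound to a fixed group of PE rows. The array dimensions, the rows-per-buffer grouping, and all names are illustrative assumptions rather than details taken from the application.

    class MiniBufferedPEArray:
        """A PE array whose input buffer is split into mini buffers, each
        feeding a fixed subset of PE rows so that fetches stay local to
        that subset and the groups can transfer data in parallel."""

        def __init__(self, pe_rows=16, pe_cols=16, rows_per_buffer=4):
            self.pe_rows, self.pe_cols = pe_rows, pe_cols
            self.rows_per_buffer = rows_per_buffer
            num_buffers = pe_rows // rows_per_buffer
            # Each mini buffer stages operands only for its own group of PE rows.
            self.mini_buffers = [[] for _ in range(num_buffers)]

        def buffer_index(self, pe_row):
            return pe_row // self.rows_per_buffer

        def load(self, pe_row, operand):
            # Data destined for a PE row is staged in that row group's mini
            # buffer, so the wiring between a buffer and its rows is short and
            # independent of the other groups.
            self.mini_buffers[self.buffer_index(pe_row)].append((pe_row, operand))

        def drain(self, group):
            # Every mini buffer can drain to its row group in parallel; this
            # helper simply returns and clears one group's staged operands.
            staged, self.mini_buffers[group] = self.mini_buffers[group], []
            return staged

    if __name__ == "__main__":
        arr = MiniBufferedPEArray()
        arr.load(pe_row=5, operand=3.0)
        arr.load(pe_row=6, operand=7.0)
        print(arr.drain(group=1))  # [(5, 3.0), (6, 7.0)]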