Patents by Inventor Liang Han

Liang Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11769688
    Abstract: A method for manufacturing a flash memory device is provided. The method includes: providing a substrate structure including a substrate, a plurality of active regions and a plurality of first isolation regions alternately arranged in a first direction and extending in a second direction different from the first direction, a plurality of gate structures on the substrate that are spaced apart from each other and extend in the second direction, and a gap structure between the gate structures; forming an overhang surrounding an upper portion of the gate structures; and forming a second isolation region filling an upper portion of the gap structure and leaving a first air gap in the gap structure.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: September 26, 2023
    Assignees: Semiconductor Manufacturing International (Shanghai) Corporation, Semiconductor Manufacturing International (Beijing) Corporation
    Inventors: Shengfen Chiu, Liang Chen, Liang Han
  • Patent number: 11755892
    Abstract: Improved convolutional layers for neural networks can obtain an input feature map comprising groups of channels. Each group can include one or more channels of a predetermined size, and the predetermined sizes can differ between groups. For each group of channels, the convolutional layer can generate an output channel: the channels in the remaining groups are resized to match the predetermined size of that group, the resized channels are combined with the channels of that group, and the combined channels are applied to a convolutional sub-layer to generate the output channel. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: September 12, 2023
    Assignee: Alibaba Group Holding Limited
    Inventor: Liang Han
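The following is a minimal sketch of the grouped, multi-size convolution idea described in patent 11755892, written in Python with PyTorch as a stand-in framework. The class name, layer shapes, kernel size, and the bilinear resizing mode are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of a multi-size grouped convolution; names, shapes, and
# resizing choices are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSizeGroupConv(nn.Module):
    def __init__(self, group_channels, out_channels_per_group=1):
        # group_channels: number of channels in each group, e.g. [4, 4, 8]
        super().__init__()
        total = sum(group_channels)
        # One convolutional sub-layer per group; each sees all (resized) channels.
        self.convs = nn.ModuleList([
            nn.Conv2d(total, out_channels_per_group, kernel_size=3, padding=1)
            for _ in range(len(group_channels))
        ])

    def forward(self, groups):
        # groups: list of tensors [N, C_g, H_g, W_g]; spatial sizes differ per group.
        outputs = []
        for i, target in enumerate(groups):
            h, w = target.shape[-2:]
            # Resize every other group's channels to this group's size.
            resized = [
                g if j == i else
                F.interpolate(g, size=(h, w), mode="bilinear", align_corners=False)
                for j, g in enumerate(groups)
            ]
            combined = torch.cat(resized, dim=1)     # combine along the channel axis
            outputs.append(self.convs[i](combined))  # output channel(s) for this group
        return outputs

# Example: three groups at different spatial resolutions.
x = [torch.randn(1, 4, 32, 32), torch.randn(1, 4, 16, 16), torch.randn(1, 8, 8, 8)]
y = MultiSizeGroupConv([4, 4, 8])(x)   # one output per group, at that group's size
```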
  • Publication number: 20230267095
    Abstract: A system includes a high-bandwidth inter-chip network (ICN) that allows communication between parallel processing units (PPUs) in the system. For example, the ICN allows a PPU to communicate with other PPUs on the same compute node or server and also with PPUs on other compute nodes or servers. In embodiments, communication may occur at the command level (e.g., the direct memory access level) and at the instruction level (e.g., the finer-grained load/store instruction level). The ICN allows PPUs in the system to communicate without using a PCIe bus, thereby avoiding its bandwidth limitations and relative lack of speed. Each PPU's routing tables include information about multiple paths to any given destination PPU. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: July 15, 2022
    Publication date: August 24, 2023
    Inventors: Liang HAN, Yunxiao ZOU
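A toy illustration of the two communication granularities mentioned in publication 20230267095: large transfers are issued as DMA-style commands and small ones as individual load/store operations. The message format, the names, and the size threshold are assumptions made for this sketch; the publication does not specify them.

```python
# Purely illustrative: choose between command-level (DMA) and instruction-level
# (load/store) ICN traffic based on transfer size. Threshold and names assumed.
from dataclasses import dataclass

LOAD_STORE_LIMIT = 64  # bytes; illustrative cutoff between the two levels

@dataclass
class IcnMessage:
    dest_ppu: int
    kind: str            # "dma_command" or "load_store"
    address: int
    payload: bytes

def build_messages(dest_ppu: int, address: int, payload: bytes):
    """Split one transfer into ICN messages at the appropriate granularity."""
    if len(payload) > LOAD_STORE_LIMIT:
        # Command level: a single DMA descriptor covers the whole buffer.
        return [IcnMessage(dest_ppu, "dma_command", address, payload)]
    # Instruction level: finer-grained stores, one per 8-byte word.
    return [IcnMessage(dest_ppu, "load_store", address + off, payload[off:off + 8])
            for off in range(0, len(payload), 8)]

msgs = build_messages(dest_ppu=2, address=0x1000, payload=b"\x00" * 32)
print([m.kind for m in msgs])   # four load/store messages for a 32-byte transfer
```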
  • Publication number: 20230259486
    Abstract: Systems and methods for exchanging synchronization information between processing units using a synchronization network are disclosed. The disclosed systems and methods include a device comprising a host and associated neural processing units. Each of the neural processing units can include a command communication module and a synchronization communication module. The command communication module can include circuitry for communicating with the host device over a host network. The synchronization communication module can include circuitry enabling communication between neural processing units over a synchronization network. The neural processing units can each obtain a synchronized update for a machine learning model, at least in part by exchanging synchronization information over the synchronization network. Each neural processing unit can maintain its own version of the machine learning model and synchronize it using the synchronized update. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: November 2, 2020
    Publication date: August 17, 2023
    Inventors: Liang HAN, Chengyuan WU, Ye LU
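The short simulation below illustrates the end result described in publication 20230259486: every processing unit holds its own model copy, exchanges synchronization information (here, local gradients), and applies the same synchronized update. The classes, the averaging rule, and the in-process "network" are assumptions for illustration; the publication describes dedicated hardware, not this code.

```python
# Toy simulation: peers exchange local gradients and apply one synchronized
# update so that all model copies stay identical. Not the patented design.
import numpy as np

class SimulatedNPU:
    def __init__(self, model_size, seed):
        self.model = np.zeros(model_size)
        self.rng = np.random.default_rng(seed)

    def local_gradient(self):
        # Stand-in for a gradient computed on this unit's shard of the data.
        return self.rng.normal(size=self.model.shape)

def synchronize(npus, lr=0.01):
    # "Synchronization network" stand-in: gather every unit's gradient,
    # average, and let each unit apply the same synchronized update.
    grads = [npu.local_gradient() for npu in npus]
    synced_update = sum(grads) / len(grads)
    for npu in npus:
        npu.model -= lr * synced_update

npus = [SimulatedNPU(model_size=8, seed=i) for i in range(4)]
for _ in range(10):
    synchronize(npus)
assert all(np.allclose(npus[0].model, n.model) for n in npus[1:])
```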
  • Publication number: 20230257135
    Abstract: A micro-cathode arc propulsion system. By replacing the inductive circuit of a traditional micro-cathode arc propulsion system with a capacitor circuit, the stability of operation of the micro-cathode arc thruster can be improved owing to the capacitor's stable discharging mode. Because the internal resistance of the capacitor during operation is small, the additional power consumption of the circuit is reduced and the efficiency of the system is improved. In addition, because a pulse power supply delivers power in a pulsed manner, the average power input to the micro-cathode arc thruster is greatly reduced. (An illustrative back-of-envelope relation follows this entry.)
    Type: Application
    Filed: September 14, 2020
    Publication date: August 17, 2023
    Inventors: Liqiu Wei, Yongjie Ding, Hong Li, Liang Han, Daren Yu
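As a rough, illustrative aid to the last sentence of the abstract for publication 20230257135 (not a relation stated in the publication), the average power drawn by a capacitor-fed pulsed thruster scales with the pulse repetition rate, assuming the capacitor is recharged to the same voltage before each discharge:

```latex
% Back-of-envelope only: capacitance C charged to voltage V and discharged
% once per pulse at repetition rate f gives an average input power of
P_{\mathrm{avg}} \approx f \cdot \tfrac{1}{2} C V^{2},
% so a modest repetition rate keeps the average input power low even when the
% instantaneous discharge power is high.
```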
  • Publication number: 20230255024
    Abstract: A memory structure is provided in the present disclosure. The memory structure includes: a substrate; a plurality of discrete memory gate structures on the substrate, where each of the plurality of memory gate structures includes a floating gate layer and a control gate layer on the floating gate layer; an isolation layer formed between adjacent memory gate structures, where a top surface of the isolation layer is lower than a top surface of the control gate layer and higher than a bottom surface of the control gate layer; an opening formed on an exposed sidewall of the control gate layer, where a bottom of the opening is lower than or coplanar with the top surface of the isolation layer; and a metal silicide layer on an exposed surface of the control gate layer and on the top surface of the isolation layer.
    Type: Application
    Filed: April 17, 2023
    Publication date: August 10, 2023
    Inventors: Liang HAN, Hai Ying WANG
  • Patent number: 11720521
    Abstract: An accelerator system can include one or more clusters of eight processing units, each processing unit having seven communication ports. Each cluster of eight processing units can be organized into two subsets of four processing units. Each processing unit can be coupled to each of the other processing units in the same subset by a respective set of two bi-directional communication links, and to the corresponding processing unit in the other subset by a respective single bi-directional communication link. Input data can be divided into one or more groups of four subsets of data. Each processing unit can be configured to sum the corresponding subsets of input data received on the two bi-directional communication links from the other processing units in the same subset with its own input data to generate a respective set of intermediate data. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: August 8, 2023
    Assignee: Alibaba Singapore Holding Private Limited
    Inventor: Liang Han
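Below is a heavily simplified simulation of the intra-subset reduction step described in patent 11720521. It models only the arithmetic (each unit sums its own slice with the corresponding slices from its three subset peers); the link topology, the data layout, and all names are assumptions, and the inter-subset exchange is left out.

```python
# Simplified illustration: eight processing units in two subsets of four; each
# unit reduces one slice of the input across its subset. Not the patented design.
import numpy as np

NUM_PUS, SUBSET = 8, 4
rng = np.random.default_rng(0)

# Each processing unit holds input data divided into four subsets (slices).
inputs = [rng.normal(size=(SUBSET, 16)) for _ in range(NUM_PUS)]

def intra_subset_reduce(inputs, subset_ids):
    intermediate = {}
    for rank, pu in enumerate(subset_ids):
        # Unit `pu` owns slice `rank`: sum its own slice with the corresponding
        # slices received from the other units in the same subset.
        received = [inputs[peer][rank] for peer in subset_ids if peer != pu]
        intermediate[pu] = inputs[pu][rank] + sum(received)
    return intermediate

first = intra_subset_reduce(inputs, subset_ids=list(range(0, 4)))
second = intra_subset_reduce(inputs, subset_ids=list(range(4, 8)))
# A later step (not shown) would exchange intermediate data across subsets over
# the single inter-subset links to complete the reduction.
```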
  • Publication number: 20230244626
    Abstract: The presented systems enable efficient and effective network communications. In one embodiment, a system comprises a parallel processing unit (PPU) included in a chip and a plurality of interconnects in an inter-chip network (ICN) configured to communicatively couple a plurality of PPUs that communicate with one another via the ICN. Corresponding communications are configured in accordance with routing tables. The routing tables can be stored in registers of an ICN subsystem included in the PPU and include indications of the minimum links available to forward a communication from the PPU to another PPU. Each routing table correlates those minimum links with the egress ports available for communicatively coupling to each possible destination PPU. The routing tables can indicate a single path between two PPUs for each communication flow. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: May 25, 2022
    Publication date: August 3, 2023
    Inventors: Liang HAN, ChengYuan WU, Guoyu ZHU, Yang JIAO, Rong ZHONG, Yunxiao ZOU
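The sketch below is one possible reading of the routing tables in publication 20230244626: per-destination entries that record the minimum links to a destination along with the egress ports that reach it, with each communication flow pinned to a single path. The field names, data layout, and pinning rule are assumptions for illustration.

```python
# Illustrative toy routing table: per-destination minimum-link count plus the
# egress ports on minimum-link paths; each flow is pinned to one egress port
# (a single path per flow). Field names and the pinning rule are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class RouteEntry:
    min_links: int              # fewest links needed to reach the destination PPU
    egress_ports: List[int]     # ports whose next hop lies on such a path

@dataclass
class IcnRoutingTable:
    entries: Dict[int, RouteEntry] = field(default_factory=dict)
    flow_pins: Dict[Tuple[int, str], int] = field(default_factory=dict)

    def egress_for(self, dest: int, flow_id: str) -> int:
        key = (dest, flow_id)
        if key not in self.flow_pins:
            entry = self.entries[dest]
            # Pin the flow to one candidate port (stable within a run).
            self.flow_pins[key] = entry.egress_ports[hash(flow_id) % len(entry.egress_ports)]
        return self.flow_pins[key]

table = IcnRoutingTable(entries={7: RouteEntry(min_links=2, egress_ports=[1, 4, 5])})
assert table.egress_for(7, "flow-a") == table.egress_for(7, "flow-a")  # one path per flow
```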
  • Publication number: 20230244273
    Abstract: An electronic apparatus includes a first body, a second body, and a flexible screen. The second body is rotatably connected to the first body through a rotation shaft device so that an opening and closing angle of the first body relative to the second body is adjustable. The flexible screen is configured to display images. A first end of the flexible screen is slidably arranged at the first body along a connection line direction of the first body and the second body, and a second end is slidably arranged at the second body along the same connection line direction; the first end and the second end are the two ends of the flexible screen in the connection line direction. The first end and the second end of the flexible screen slide synchronously along the connection line direction to adjust the size of the part of the flexible screen located at the first body.
    Type: Application
    Filed: January 30, 2023
    Publication date: August 3, 2023
    Inventors: Liang HAN, Ran ZHANG, Tianshui TAN
  • Publication number: 20230185749
    Abstract: A system includes a high-bandwidth inter-chip network (ICN) that allows communication between neural network processing units (NPUs) in the system. For example, the ICN allows an NPU to communicate with other NPUs on the same compute node (server) and also with NPUs on other compute nodes (servers). Communication can be at the direct memory access (DMA) command level and at the finer-grained load/store instruction level. The ICN system and the programming model allow NPUs in the system to communicate without using a traditional network (e.g., Ethernet) that uses a relatively narrow and slow Peripheral Component Interconnect Express (PCIe) bus.
    Type: Application
    Filed: May 25, 2022
    Publication date: June 15, 2023
    Inventors: Liang HAN, ChengYuan WU, Guoyu ZHU, Rong ZHONG, Yang JIAO, Ye LU, Wei WU, Yunxiao ZOU, Li YIN
  • Publication number: 20230176737
    Abstract: The time required to read from and write to multi-terabyte memory chips in an inter-chip network can be reduced by dividing each memory chip into a number of memory spaces and then individually addressing each memory space with an address that identifies the memory space, a line number within the memory space, and the number of transmit/receive ports to be used to access that line. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: May 25, 2022
    Publication date: June 8, 2023
    Inventors: Liang HAN, Yunxiao ZOU
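A small illustration of the addressing idea in publication 20230176737: the three fields the abstract names are packed into, and recovered from, a single integer address. The field widths and helper names are arbitrary assumptions; the publication does not give an encoding.

```python
# Illustrative address encoding only: pack (memory-space id, line number,
# number of transmit/receive ports) into one integer. Field widths are assumed.
SPACE_BITS, LINE_BITS, PORT_BITS = 8, 48, 8

def pack_address(space_id: int, line_number: int, num_ports: int) -> int:
    assert space_id < (1 << SPACE_BITS)
    assert line_number < (1 << LINE_BITS)
    assert num_ports < (1 << PORT_BITS)
    return (space_id << (LINE_BITS + PORT_BITS)) | (line_number << PORT_BITS) | num_ports

def unpack_address(addr: int):
    num_ports = addr & ((1 << PORT_BITS) - 1)
    line_number = (addr >> PORT_BITS) & ((1 << LINE_BITS) - 1)
    space_id = addr >> (LINE_BITS + PORT_BITS)
    return space_id, line_number, num_ports

addr = pack_address(space_id=3, line_number=0x2000, num_ports=4)
assert unpack_address(addr) == (3, 0x2000, 4)
```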
  • Publication number: 20230170927
    Abstract: A radio frequency device has a multifunctional tuner that stores measurements of reflection coefficient parameters in a register. The radio frequency device also has a transceiver that has a transmitter. The transceiver may detect a transmitter signal from the transmitter to an antenna in an initial tuning state and then determine whether the transmitter signal is stable. In response to the transmitter signal being stable, the transceiver may measure the reflection coefficient parameters at the multifunctional tuner. Furthermore, the radio frequency device has a baseband controller that has a memory to store instructions and a processor to execute the instructions. The instructions cause the processor to determine an antenna impedance based on the reflection coefficient parameters and, in response to determining that the antenna impedance is greater than or less than a threshold antenna impedance, iteratively tune the antenna using the multifunctional tuner. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: January 23, 2023
    Publication date: June 1, 2023
    Inventors: Liang Han, Enrique Ayala Vazquez, Thomas E. Biedka, Hongfei Hu, Erdinc Irci, Nanbo Jin, James G. Judkins, Victor C. Lee, Matthew A. Mow, Mattia Pascolini, Ming-Ju Tsai, Yiren Wang, Yuancheng Xu, Yijun Zhou
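The loop below is a hypothetical rendering of the control flow in publication 20230170927: wait for a stable transmitter signal, read the reflection coefficient from the tuner, convert it to an impedance estimate, and keep adjusting the tuner while the impedance lies outside a threshold band. The tuner/transceiver interfaces, the 50-ohm reference, and the threshold band are assumptions; the conversion formula is the standard reflection-coefficient-to-impedance relation.

```python
# Hypothetical antenna-tuning loop; interfaces and thresholds are assumptions.
Z0 = 50.0   # reference impedance in ohms (assumed)

def impedance_from_gamma(gamma: complex) -> complex:
    # Standard relation between reflection coefficient and load impedance.
    return Z0 * (1 + gamma) / (1 - gamma)

def tune_antenna(transceiver, tuner, z_low=35.0, z_high=70.0, max_steps=16):
    for _ in range(max_steps):
        if not transceiver.transmitter_signal_stable():
            continue                                 # only measure a stable signal
        gamma = tuner.read_reflection_coefficient()  # value stored in tuner registers
        z = abs(impedance_from_gamma(gamma))
        if z_low <= z <= z_high:
            return z                                 # impedance within the threshold band
        tuner.step_toward_match()                    # iteratively adjust the tuner state
    return None

class StubTuner:
    """Toy tuner whose each step halves the mismatch, for demonstration only."""
    def __init__(self, gamma=0.4 + 0.2j):
        self.gamma = gamma
    def read_reflection_coefficient(self):
        return self.gamma
    def step_toward_match(self):
        self.gamma *= 0.5

class StubTransceiver:
    def transmitter_signal_stable(self):
        return True

print(tune_antenna(StubTransceiver(), StubTuner()))  # converges into the band
```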
  • Patent number: 11659710
    Abstract: A memory structure and its fabrication method are provided in the present disclosure. The method includes: providing a substrate; forming a plurality of discrete memory gate structures on the substrate, where an isolation trench is between adjacent memory gate structures and a memory gate structure includes a floating gate layer and a control gate layer; forming an isolation layer in the isolation trench, where a top surface of the isolation layer is lower than a top surface of the control gate layer and higher than a bottom surface of the control gate layer; forming an opening on an exposed sidewall of the control gate layer, where a bottom of the opening is lower than or coplanar with the top surface of the isolation layer; and forming an initial metal silicide layer on an exposed surface of the control gate layer and the top surface of the isolation layer.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: May 23, 2023
    Assignees: Semiconductor Manufacturing International (Shanghai) Corporation, Semiconductor Manufacturing International (Beijing) Corporation
    Inventors: Liang Han, Hai Ying Wang
  • Patent number: 11654945
    Abstract: A method, a device and a system for safely and reliably performing real-time speed measurement and continuous positioning are provided. With the method, inertial navigation data from an inertial navigation signal source arranged in a train is detected, and correction data from a correction signal source is detected. In a case that no correction data is detected, a current speed and a current position of the train are determined based on the inertial navigation data; in a case that the correction data is detected, the inertial navigation data is corrected with the correction data, and the current speed and position of the train are determined based on the corrected inertial navigation data. Therefore, even when no correction data is detected, real-time speed measurement and continuous positioning can be performed safely and reliably based on the inertial navigation data. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: May 23, 2023
    Assignee: CRRC ZHUZHOU ELECTRIC LOCOMOTIVE RESEARCH INSTITUTE CO., LTD.
    Inventors: Gaohua Chen, Jianghua Feng, Shu Cheng, Rongjun Ding, Chaoqun Xiang, Yijing Xu, Liang Han
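A deliberately simplified illustration of the decision logic in patent 11654945: integrate speed and position from inertial data, and whenever correction data is available, correct the estimate before integrating further. The update rule, data format, and the simple override used as the "correction" are assumptions; a real system would fuse the two sources rather than replace one with the other.

```python
# Simplified dead-reckoning with optional correction; not the patented algorithm.
def update_state(speed, position, accel, dt, correction=None):
    """One estimation step for train speed/position along the track."""
    if correction is not None:
        # Correction data detected: correct the drifting inertial estimate
        # (modeled here as a simple override).
        speed, position = correction["speed"], correction["position"]
    # Integrate the (possibly corrected) state with the inertial measurement.
    speed += accel * dt
    position += speed * dt
    return speed, position

speed, position = 0.0, 0.0
samples = [(0.5, None), (0.5, None), (0.4, {"speed": 1.1, "position": 0.8})]
for accel, correction in samples:
    speed, position = update_state(speed, position, accel, dt=0.1, correction=correction)
print(round(speed, 3), round(position, 3))
```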
  • Publication number: 20230153164
    Abstract: The present disclosure provides a system comprising a first group of computing nodes and a second group of computing nodes, wherein the first and second groups are neighboring devices and each of the first and second groups comprises: a set of computing nodes A-D and a set of intra-group interconnects, wherein the set of intra-group interconnects communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C; and a set of inter-group interconnects, wherein the set of inter-group interconnects communicatively couple computing node A of the first group with computing node A of the second group, computing node B of the first group with computing node B of the second group, computing node C of the first group with computing node C of the second group, and computing node D of the first group with computing node D of the second group. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: January 6, 2023
    Publication date: May 18, 2023
    Inventors: Liang HAN, Yang JIAO
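The snippet below builds the interconnect pattern described in publication 20230153164 for a configurable number of groups: within each group, node A links to B and C and node D links to B and C, and same-letter nodes of neighboring groups are linked. The function name and the representation of links as pairs are assumptions for illustration.

```python
# Illustrative construction of the described topology; names are assumptions.
def build_topology(num_groups):
    intra = [("A", "B"), ("A", "C"), ("D", "B"), ("D", "C")]
    links = set()
    for g in range(num_groups):
        # Intra-group interconnects: A-B, A-C, D-B, D-C.
        for u, v in intra:
            links.add(((g, u), (g, v)))
        # Inter-group interconnects: same-letter nodes of neighboring groups.
        if g + 1 < num_groups:
            for n in "ABCD":
                links.add(((g, n), (g + 1, n)))
    return links

links = build_topology(num_groups=2)
print(len(links))   # 2 x 4 intra-group links + 4 inter-group links = 12
```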
  • Publication number: 20230147692
    Abstract: A method, a device and a system for performing multi-carrying of a linear motor for magnetic levitation transportation are provided. With the method, linear motor traction power information and other linear-motor-carried information are generated, and the other carried information is transmitted through a channel that carries the linear motor traction power information and that is constructed based on the linear motor structure.
    Type: Application
    Filed: April 16, 2020
    Publication date: May 11, 2023
    Applicant: CRRC ZHUZHOU ELECTRIC LOCOMOTIVE RESEARCH INSTITUTE CO., LTD.
    Inventors: Gaohua CHEN, Jianghua FENG, Rongjun DING, Yijing XU, Yu SHI, Liang HAN, Yanhui WEN, Yonghui NAN, Anfeng ZHAO, Haojiong LV, Kai FANG, Huadong LIU, Hui SHEN, Shu CHENG, Jungui SU, Zhenbang ZHOU, Cheng LI
  • Patent number: 11640563
    Abstract: A device may obtain first data relating to a machine learning model. The device may pre-process the first data to alter it and generate second data. The device may process the second data to select a set of features from it. The device may analyze the set of features to evaluate a plurality of types of machine learning models with respect to the set of features, and may select a particular type of machine learning model for the set of features based on that analysis. The device may tune a set of parameters of the particular type of machine learning model to train the machine learning model. The device may then receive third data and provide a prediction for it using the particular type of machine learning model. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: May 2, 2023
    Assignee: Accenture Global Solutions Limited
    Inventors: Luke Higgins, Liang Han, Koushik M Vijayaraghavan, Rajendra T. Prasad, Aditi Kulkarni, Gayathri Pallail, Charles Grenet, Jean-Francois Depoitre, Xiwen Sun, Jérémy Aeck, Yuqing Xi, Srikanth Prasad, Pankaj Jetley, Jayashri Sridevi, Easwer Chinnadurai, Niju Prabha
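A condensed, illustrative pipeline that follows the sequence of steps in patent 11640563 (pre-process, select features, compare model types, tune the selected type, predict). scikit-learn, the two candidate model types, and all parameter grids are stand-ins chosen for this sketch; the patent does not name a library or specific models.

```python
# Illustrative end-to-end pipeline; libraries, models, and grids are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_first, X_third, y_first, _ = train_test_split(X, y, random_state=0)

# 1) Pre-process the first data to generate second data.
scaler = StandardScaler().fit(X_first)
X_second = scaler.transform(X_first)

# 2) Select a set of features from the second data.
selector = SelectKBest(f_classif, k=8).fit(X_second, y_first)
X_selected = selector.transform(X_second)

# 3) Evaluate several model types on the selected features and pick one.
candidates = {"logreg": LogisticRegression(max_iter=1000),
              "forest": RandomForestClassifier(random_state=0)}
scores = {name: cross_val_score(model, X_selected, y_first, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)

# 4) Tune the parameters of the selected model type.
grids = {"logreg": {"C": [0.1, 1.0, 10.0]}, "forest": {"n_estimators": [50, 100]}}
search = GridSearchCV(candidates[best], grids[best], cv=5).fit(X_selected, y_first)

# 5) Receive third data and provide a prediction.
predictions = search.predict(selector.transform(scaler.transform(X_third)))
print(best, round(scores[best], 3), predictions[:5])
```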
  • Patent number: 11620502
    Abstract: The present disclosure provides a method for syncing data of a computing task across a plurality of groups of computing nodes. Each group includes a set of computing nodes A-D; a set of intra-group interconnects that communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C; and a set of inter-group interconnects that communicatively couple each of computing nodes A-D with the corresponding computing nodes A-D in each of a plurality of neighboring groups. The method comprises syncing data at a computing node of the plurality of groups of computing nodes using inter-group interconnects and intra-group interconnects along four different directions relative to that node, and broadcasting the synced data from that node to the plurality of groups of computing nodes using inter-group interconnects and intra-group interconnects along four different directions relative to that node.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: April 4, 2023
    Assignee: Alibaba Group Holding Limited
    Inventors: Liang Han, Yang Jiao
  • Publication number: 20230088237
    Abstract: The present disclosure provides a method for syncing data of a computing task across a plurality of groups of computing nodes, each group comprising a set of computing nodes A-D, a set of intra-group interconnects that communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C, and a set of inter-group interconnects that communicatively couple a computing node A of a first group of the plurality of groups with a computing node A of a second group neighboring the first group, a computing node B of the first group with a computing node B of the second group, a computing node C of the first group with a computing node C of the second group, and a computing node D of the first group with a computing node D of the second group, the method comprising: syncing across a first dimension of computing nodes using a first set of ring connections, wherein the first set of ring connections are formed using inter-group and intra-group interconnects that communicatively couple the computing nodes.
    Type: Application
    Filed: November 28, 2022
    Publication date: March 23, 2023
    Inventors: Liang HAN, Yang JIAO
  • Publication number: 20230090604
    Abstract: Virtualization techniques can include determining virtual function routing tables for the virtual parallel processing units (PPUs) from a logical topology of a virtual function. A first mapping of the virtual PPUs to a first set of a plurality of physical PPUs can be generated. Virtualization can also include generating a first set of physical function routing tables for the first set of physical PPUs based on the virtual function routing tables and the first virtual-PPU-to-physical-PPU mapping. An application can be migrated from the first set of physical PPUs to a second set of physical PPUs by generating a second mapping of the virtual PPUs to the second set of physical PPUs. A second set of physical function routing tables for the second set of physical PPUs can then be generated based on the virtual function routing tables and the second virtual-PPU-to-physical-PPU mapping. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: September 16, 2021
    Publication date: March 23, 2023
    Inventors: Liang HAN, Guoyu ZHU, ChengYuan WU, Rong ZHONG
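The sketch below captures the remapping idea in publication 20230090604 under a strong simplification: a routing table is reduced to a destination-to-next-hop dictionary, so physical tables can be derived by relabeling the virtual tables through a virtual-to-physical PPU mapping, and migration amounts to regenerating them with a new mapping. The table format and all names are assumptions, not the publication's data structures.

```python
# Illustrative only: derive physical routing tables from virtual ones via a
# virtual-to-physical PPU mapping; migration regenerates them with a new map.
def physical_tables(virtual_tables, v2p):
    """virtual_tables: {vppu: {dest_vppu: next_hop_vppu}}; v2p: {vppu: pppu}."""
    return {v2p[vppu]: {v2p[dest]: v2p[hop] for dest, hop in table.items()}
            for vppu, table in virtual_tables.items()}

# Logical topology of the virtual function: a three-PPU chain 0 - 1 - 2.
virtual_tables = {0: {1: 1, 2: 1}, 1: {0: 0, 2: 2}, 2: {0: 1, 1: 1}}

first_mapping = {0: 10, 1: 11, 2: 12}    # initial placement on physical PPUs
second_mapping = {0: 20, 1: 21, 2: 22}   # placement after migration

print(physical_tables(virtual_tables, first_mapping)[10])   # {11: 11, 12: 11}
print(physical_tables(virtual_tables, second_mapping)[20])  # {21: 21, 22: 21}
```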