Patents by Inventor Hua Ma

Hua Ma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250068914
    Abstract: A system and method of performing tensor operations with a multi-step operation processing system in a memory-efficient manner. The method includes the stages of dividing an N-dimensional tensor into a set of tensor slices. The tensor slices consist of one or more consecutive rows. The tensor slices may further be segmented. The tensor slice segments, along with dependency data formed based on the tensor dependencies, are used in a tensor operation computation to generate a first result. Each processed slice segment is fused into a result slice by removing extra data used in the computation. This process is repeated for each slice to be processed, and the result slices are combined into a final processed tensor result.
    Type: Application
    Filed: October 30, 2024
    Publication date: February 27, 2025
    Inventors: Suhail Ibrahim Alnahari, Kai-Er Chuang, Siyad Chih-Hua Ma, Shang-Tse Chuang, Sharad Vasantrao Chole
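
The slicing scheme this abstract describes can be illustrated with a small sketch (hypothetical code, not from the patent): a 1-D zero-padded filter stands in for the tensor operation, and the halo of neighboring elements each slice is padded with plays the role of the dependency data.

```python
def conv_same(seq, kernel):
    """'Same'-size 1-D filtering with zero padding at the sequence ends."""
    h = len(kernel) // 2
    padded = [0.0] * h + list(seq) + [0.0] * h
    return [sum(padded[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(len(seq))]

def process_in_slices(seq, kernel, n_slices):
    """Divide the input into slices, pad each slice with the neighboring
    elements its computation depends on (the dependency data), filter the
    padded segment, trim the extra elements, and concatenate the result
    slices into the final output."""
    halo = len(kernel) // 2              # dependency data per side
    n = len(seq)
    bounds = [round(i * n / n_slices) for i in range(n_slices + 1)]
    out = []
    for start, stop in zip(bounds[:-1], bounds[1:]):
        lo, hi = max(0, start - halo), min(n, stop + halo)
        processed = conv_same(seq[lo:hi], kernel)
        # fuse into a result slice by removing the extra halo elements
        out.extend(processed[start - lo : start - lo + (stop - start)])
    return out
```

Because each slice carries its own halo, only one padded segment needs to be resident at a time, which is where the memory efficiency would come from.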
  • Publication number: 20250064641
    Abstract: The present application discloses an ocular implant delivery device comprising a housing, a rotating wheel, a push needle linkage assembly, a retracting linkage assembly, a sleeve, and a puncture needle. A through hole is provided at a distal end of the housing, and the sleeve extends from an inside of the housing to an outside of the housing through hole and is fixedly connected to the housing. The puncture needle runs through the sleeve, and an end of the puncture needle facing away from a needle tip of the puncture needle is fixedly connected to the retracting linkage assembly. A push needle of the push needle linkage assembly extends through an inner hole of the puncture needle, the rotating wheel is connected to the housing via a rotating shaft, and an opening corresponding to the rotating wheel is provided on the housing.
    Type: Application
    Filed: November 5, 2024
    Publication date: February 27, 2025
    Applicant: HEALTH GUARD (SUZHOU) BIOMED. TECHNOLOGY CO., LTD.
    Inventors: Hua LIU, Man Ma
  • Patent number: 12229589
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is an extremely computationally intensive field such that performing artificial intelligence calculations can be expensive, time consuming, and energy consuming. Fortunately, many of the calculations required for artificial intelligence applications can be performed in parallel such that specialized linear algebra matrix processors can greatly increase computational performance. But even with linear algebra matrix processors, performance can be limited due to complex data dependencies. Without proper coordination, linear algebra matrix processors may end up idle or spending large amounts of time moving data around. Thus, this document discloses methods for efficiently scheduling linear algebra matrix processors.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: February 18, 2025
    Assignee: Expedera, Inc.
    Inventors: Shang-Tse Chuang, Sharad Vasantrao Chole, Siyad Chih-Hua Ma
  • Publication number: 20250053614
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is an extremely computationally intensive field such that it can be expensive, time consuming, and energy consuming. Fortunately, many of the calculations required for artificial intelligence can be performed in parallel such that specialized processors can greatly increase computational performance. Specifically, artificial intelligence generally requires large numbers of matrix operations to implement neural networks such that specialized Matrix Processor circuits can improve performance. But a neural network is more than a collection of matrix operations; it is a set of specifically coordinated matrix operations with complex data dependencies. Without proper coordination, Matrix Processor circuits may end up idle or spending large amounts of time loading in different weight matrix data.
    Type: Application
    Filed: October 29, 2024
    Publication date: February 13, 2025
    Inventors: Ramteja Tadishetti, Vaibhav Vivek Kamat, Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma
  • Publication number: 20250045569
    Abstract: Disclosed are systems and methods for processing a multilayer neural network incorporating skip connections while reducing the memory footprint and processing time of the neural network. The method comprises loading a memory partition with a portion of an input tensor and a portion of layer weights associated with computing a portion of the one or more intermediate layer tensors associated with a first portion of the skip connection tensor. Next, a neural processing unit is used to recompute portions of the skip connection tensor using the portion of the input tensor and associated weights. Upon completion, the memory utilized for the recomputing is freed for further computations.
    Type: Application
    Filed: July 10, 2024
    Publication date: February 6, 2025
    Inventors: Ramteja Tadishetti, Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma
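
A minimal sketch of the recompute-instead-of-store idea described above (hypothetical code; a scalar multiply stands in for a real layer and its weights):

```python
def layer(x, w):
    """Toy stand-in for a neural network layer: scale every element."""
    return [w * v for v in x]

def forward_with_recompute(input_tensor, w_skip, w_deep, part_size):
    """Process the network portion by portion. The skip-connection tensor
    is recomputed from the matching portion of the input and its weights
    right before it is added back, instead of staying resident in memory
    for the whole forward pass."""
    output = []
    for i in range(0, len(input_tensor), part_size):
        part = input_tensor[i:i + part_size]   # portion of the input tensor
        hidden = layer(part, w_skip)           # intermediate layer tensor
        deep = layer(hidden, w_deep)           # deeper branch of the network
        skip = layer(part, w_skip)             # recompute the skip portion
        output.extend(d + s for d, s in zip(deep, skip))
        # memory for hidden/deep/skip is now free for the next portion
    return output
```

The trade is extra compute for a smaller resident footprint: the skip tensor never has to be held across the whole forward pass.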
  • Publication number: 20250045122
    Abstract: Machine learning model scalability with distributed multi-layer processing is disclosed herein. A method for processing and deploying machine learning models enhances scalability and efficiency by executing a subset of a neural network on each of a plurality of interconnected processing units. The method involves partitioning compute tasks across these processing units to reduce latency, including broadcast and reduction processes for inputs and outputs. It also includes managing the allocation of samples in a batch to specific master processing units within the distributed arrangement and synchronizing computation between fully connected layers within each processing unit. Additionally, the method implements data reduction during the transfer of data across the processing units, wherein data is accumulated with a current processing unit's partial sum as it is transferred to the destination processing unit.
    Type: Application
    Filed: July 30, 2024
    Publication date: February 6, 2025
    Inventors: Shang-Tse Chuang, Siyad Chih-Hua Ma, Sharad Vasantrao Chole, Costas Calamvokis
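
The reduction-during-transfer step can be sketched as follows (hypothetical code: each inner list is one processing unit's partial sum, listed in hop order toward the destination):

```python
def reduce_during_transfer(partial_sums):
    """Sketch of data reduction during transfer: as the message hops from
    processing unit to processing unit toward the destination, each unit
    accumulates its own partial sum into the message before forwarding,
    so the destination receives the fully reduced result."""
    message = [0.0] * len(partial_sums[0])
    for unit_partial in partial_sums:          # one hop per unit
        message = [m + p for m, p in zip(message, unit_partial)]
    return message
```

Folding the addition into the transfer means no unit ever has to hold every partial sum at once.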
  • Publication number: 20250032079
    Abstract: Image data of contrast dye in a vessel in a body is acquired for determining flow rate. Based on images acquired under an angle relative to one another, a three-dimensional model of the vessel is constructed and length of a vessel section is determined. A series of at least two images, apart in time, under a first angle is assessed for determining progress of a front of the dye bolus in the vessel in time. In the images, the vessel may be segmented and brightness or a derivative thereof over at least one of time and distance may be assessed to determine the front. Progress distance is mapped to the three-dimensional model, for example by mapping segments from the image to the model, to obtain a more accurate and natural distance of progress over time. Flow rate is determined from progress distance over progress time.
    Type: Application
    Filed: October 20, 2021
    Publication date: January 30, 2025
    Inventors: Catalina TOBÓN GÓMEZ, Hua MA, Johannes Petrus JANSSEN, Gianni PEDRIZZETTI, Johan Hendrikus Christiaan REIBER
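
The final step, flow rate from progress distance over progress time, can be sketched as a least-squares line fit (hypothetical code; the units are whatever the vessel model's distances and the frame timestamps use):

```python
def flow_rate(progress, times):
    """Sketch: fit front-progress distance (mapped onto the 3-D vessel
    model) against acquisition time with a least-squares line; the slope
    is the flow rate in distance per unit time."""
    n = len(times)
    mt = sum(times) / n
    mp = sum(progress) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(times, progress))
    den = sum((t - mt) ** 2 for t in times)
    return num / den
```

Fitting over the whole series, rather than dividing the last two points, damps frame-to-frame noise in the detected bolus front.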
  • Publication number: 20250032579
    Abstract: A formulation, comprising erythropoietin, a buffer ingredient, a surfactant and an osmotic pressure regulator. The present invention also relates to a method for preparing the formulation and the use of the formulation. The formulation is safe and stable and has a long validity period, and the preparation method is simple and easy to implement.
    Type: Application
    Filed: December 2, 2022
    Publication date: January 30, 2025
    Inventors: Jianwei ZHU, Yueqing XIE, Hua JIANG, Zhenyu WANG, Jin MA, Lei HAN
  • Patent number: 12213158
    Abstract: Aspects of the present application provide methods and device for use in User Equipment (UE) cooperation. UE cooperation may include the cooperating UEs (CUEs) forwarding traffic to or from one or more target UEs (TUEs) with redundant signal transmissions or receptions. Methods involve a base station transmitting configuration information to at least one cooperative user equipment (CUE) and to a target user equipment (TUE). The configuration information includes an indication of resources for a sidelink (SL) transmission and a redundancy parameter for the SL transmission. The SL transmission is a transmission for the at least one CUE to forward a packet intended for the TUE. The base station also transmits the packet intended for the TUE, to a plurality of UEs comprising the at least one CUE and the TUE.
    Type: Grant
    Filed: November 9, 2023
    Date of Patent: January 28, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Liqing Zhang, Jianglei Ma, Hua Xu, Seyedarvin Ayoughi
  • Publication number: 20250024635
    Abstract: Managing thermal capabilities of an information handling system, including identifying a maximum supported ambient temperature of the information handling system; identifying a current ambient temperature of the information handling system; calculating a temperature delta based on the maximum supported ambient temperature of the information handling system and the current ambient temperature of the information handling system; adjusting, based on the temperature delta, one or more thermal control trigger points; adjusting, based on the adjusted thermal control trigger points, a fan speed of a fan of the information handling system; and determining, based on the adjusted fan speed of the fan, an updated cooling capacity associated with the information handling system.
    Type: Application
    Filed: July 13, 2023
    Publication date: January 16, 2025
    Inventors: Xin Zhi Ma, Seth Weber, Ying Hua Huang, Jianguo Zhang
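
A rough sketch of the trigger-point adjustment described above (hypothetical code and example thresholds; real platforms would read these from thermal tables):

```python
def adjust_trigger_points(max_ambient, current_ambient, trigger_points):
    """Shift each thermal control trigger point by the delta between the
    maximum supported and the current ambient temperature."""
    delta = max_ambient - current_ambient
    return [t + delta for t in trigger_points]

def fan_speed(component_temp, adjusted_points, speed_levels):
    """Pick the speed for the highest adjusted trigger point reached;
    speed_levels has one more entry than there are trigger points."""
    level = sum(1 for t in adjusted_points if component_temp >= t)
    return speed_levels[level]
```

A cooler-than-worst-case room raises every trigger point by the delta, so the fan can run slower at the same component temperature.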
  • Publication number: 20250023234
    Abstract: A frame sealing adhesive of the liquid-crystal phase shifter is disposed between two transparent substrates, the frame sealing adhesive encloses a first cavity, a first part of the metal-trace layer is located inside the first cavity, and a second part of the metal-trace layer is located outside the first cavity. The second part is disposed on first surfaces or second surfaces of the two transparent substrates. If the second part is disposed on the first surfaces of the two transparent substrates, metal cushion layers are provided between the frame sealing adhesive and the first surfaces of the two transparent substrates. If the second part is disposed on the second surfaces of the two transparent substrates, the first part and the second part are electrically connected by metal via holes provided in the transparent substrates, and the frame sealing adhesive contacts the first surfaces of the two transparent substrates.
    Type: Application
    Filed: July 27, 2022
    Publication date: January 16, 2025
    Applicant: BOE Technology Group Co., Ltd.
    Inventors: Yong Ma, Hua Huang, Xin Gu, Zhao Kang, Shulei Li, Changhan Hsieh, Chengtan Zhao, Zhao Cui
  • Publication number: 20250013259
    Abstract: Disclosed are semiconductor devices that implement relaxed clock forwarding between logic blocks. In one embodiment, the system includes a set of logic blocks forming a first processing path. Another set of logic blocks forms additional processing paths. A clock is configured to forward the data asynchronously between the logic blocks in the first processing path. This forwarding is asynchronous with respect to the data and clocks of the additional processing paths. The ends, or last logic blocks, of the paths can be synchronized using a synchronizer component. The synchronizer can be a plurality of asynchronous FIFOs. In one embodiment, the logic blocks form a matrix and the processing paths run along the rows or columns of the matrix.
    Type: Application
    Filed: June 28, 2024
    Publication date: January 9, 2025
    Inventors: Siyad Chih-Hua Ma, Shang-Tse Chuang, Sharad Vasantrao Chole
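
The synchronizer-of-asynchronous-FIFOs arrangement can be modeled in software (hypothetical code; lists stand in for the data each path produces at its own rate):

```python
from collections import deque

def synchronize_paths(paths):
    """Sketch: each path forwards data through its logic blocks on its own
    (asynchronous) schedule; the last block of every path pushes into an
    asynchronous FIFO, and the synchronizer pops one item from each FIFO
    only when all FIFOs are non-empty, realigning the paths."""
    if not paths:
        return []
    fifos = [deque() for _ in paths]
    pending = [list(p) for p in paths]   # items still inside each path
    synced = []
    while any(pending) or all(fifos):
        for fifo, rem in zip(fifos, pending):
            if rem:
                fifo.append(rem.pop(0))  # last logic block emits into FIFO
        if all(fifos):                   # synchronizer waits for every path
            synced.append(tuple(f.popleft() for f in fifos))
    return synced
```

The FIFOs absorb the rate mismatch, so the paths never need a shared clock until their very last stage.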
  • Patent number: 12182717
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. One of the most important applications for artificial intelligence is object recognition and classification from digital images. Convolutional neural networks have proven to be a very effective tool for object recognition and classification from digital images. However, convolutional neural networks are extremely computationally intensive, thus requiring high-performance processors, significant computation time, and significant energy consumption. To reduce the computation time and energy consumption, “cone of dependency” and “cone of influence” processing techniques are disclosed. These two techniques arrange the required computations in a manner that minimizes memory accesses such that computations may be performed in local cache memory. These techniques significantly reduce the time to perform the computations and the energy consumed by the hardware implementing a convolutional neural network.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: December 31, 2024
    Assignee: Expedera, Inc.
    Inventors: Shang-Tse Chuang, Sharad Vasantrao Chole, Siyad Chih-Hua Ma
  • Publication number: 20240427839
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is an extremely computationally intensive field such that it can be expensive, time consuming, and energy consuming. Fortunately, many of the calculations required for artificial intelligence can be performed in parallel such that specialized processors can greatly increase computational performance. Specifically, artificial intelligence generally requires large numbers of matrix operations to implement neural networks such that specialized matrix processor circuits can improve performance. Thus, this document discloses apparatus and methods for efficiently processing matrix operations.
    Type: Application
    Filed: June 21, 2023
    Publication date: December 26, 2024
    Inventors: Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma
  • Publication number: 20240428046
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is an extremely computationally intensive field such that it can be expensive, time consuming, and energy consuming. Fortunately, many of the calculations required for artificial intelligence can be performed in parallel such that specialized processors can greatly increase computational performance. Specifically, artificial intelligence generally requires large numbers of matrix operations to implement neural networks such that specialized matrix processor circuits can improve performance. To perform all these matrix operations, the neural processing circuits must be quickly and efficiently supplied with data to process or else the matrix processor circuits end up idle or spending large amounts of time loading weight matrix data.
    Type: Application
    Filed: June 21, 2023
    Publication date: December 26, 2024
    Inventors: Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma
  • Publication number: 20240403166
    Abstract: Provided is a method comprising obtaining information about a detected memory error in a memory device, the memory device being connected to a first host via a Compute Express Link (CXL) interface. The method further comprises recording the memory error information into a firmware of the memory device.
    Type: Application
    Filed: June 6, 2024
    Publication date: December 5, 2024
    Inventors: Zhonghua SUN, Pei GAO, Yue LIU, Liangqi ZHU, Yue YAO, Junyu TONG, Hua MA, Cong ZHANG
  • Patent number: 12141226
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is an extremely computationally intensive field such that it can be expensive, time consuming, and energy consuming. Fortunately, many of the calculations required for artificial intelligence can be performed in parallel such that specialized processors can greatly increase computational performance. Specifically, artificial intelligence generally requires large numbers of matrix operations to implement neural networks such that specialized Matrix Processor circuits can improve performance. But a neural network is more than a collection of matrix operations; it is a set of specifically coordinated matrix operations with complex data dependencies. Without proper coordination, Matrix Processor circuits may end up idle or spending large amounts of time loading in different weight matrix data.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: November 12, 2024
    Assignee: Expedera, Inc.
    Inventors: Siyad Chih-Hua Ma, Shang-Tse Chuang, Sharad Vasantrao Chole
  • Publication number: 20240285808
    Abstract: Disclosed herein are methods of molecular magnetic resonance (MR) imaging and positron emission tomography using extracellular probes that target extracellular allysine aldehyde and act as a noninvasive biomarker of fibrogenesis with high sensitivity and specificity in detecting fibrogenesis, for example, in rodent models and human fibrotic tissues.
    Type: Application
    Filed: May 13, 2022
    Publication date: August 29, 2024
    Inventors: Peter Caravan, Yingying Ning, Hua MA, Sergey Shuvaev, Eman Akam
  • Patent number: 12073520
    Abstract: Provided is an augmented reality implementing method applied to a server that includes a plurality of augmented reality objects and a plurality of setting records corresponding respectively to the augmented reality objects. Firstly, the server receives an augmented reality request from a mobile device, where the augmented reality request is related to a target device. Then, the server communicates with the target device to access current information. Then, the server determines which one of the setting records the current information corresponds to, and selects one of the augmented reality objects based on the determined setting record as a virtual object provided to the mobile device.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: August 27, 2024
    Assignee: ASUSTEK COMPUTER INC.
    Inventors: Kuo-Chung Chiu, Hsuan-Wu Wei, Yen-Ting Liu, Shang-Chih Liang, Shih-Hua Ma, Yi-Hsuan Tsai, Jun-Ting Chen, Kuan-Ling Chen
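
A sketch of the server-side selection step (hypothetical code; dictionary-based records stand in for whatever the setting records actually contain):

```python
def select_ar_object(current_info, setting_records, ar_objects):
    """Sketch: the server checks which setting record the target device's
    current information matches, then returns the corresponding augmented
    reality object as the virtual object for the mobile device."""
    for record_id, condition in setting_records.items():
        if all(current_info.get(k) == v for k, v in condition.items()):
            return ar_objects[record_id]
    return None                          # no setting record matched
```

Keying the virtual object off the target device's live state, rather than off the request alone, is what lets the overlay reflect the device's current condition.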
  • Publication number: 20240265234
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is a very computationally intensive field. Fortunately, many of the required calculations can be performed in parallel such that specialized processors can greatly increase computational performance. In particular, Graphics Processor Units (GPUs) are often used in artificial intelligence. Although GPUs have helped, they are not ideal for artificial intelligence. Specifically, GPUs are used to compute matrix operations in one direction with a pipelined architecture. However, artificial intelligence is a field that uses both forward propagation computations and back propagation calculations. To efficiently perform artificial intelligence calculations, a symmetric matrix processing element is introduced. The symmetric matrix processing element can perform forward propagation and backward propagation calculations just as easily.
    Type: Application
    Filed: April 17, 2024
    Publication date: August 8, 2024
    Inventors: Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma