Patents by Inventor Hua Ma

Hua Ma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250023234
    Abstract: A frame sealing adhesive of a liquid-crystal phase shifter is disposed between two transparent substrates, the frame sealing adhesive encloses a first cavity, a first part of the metal-trace layer is located inside the first cavity, and a second part of the metal-trace layer is located outside the first cavity. The second part is disposed on first surfaces or second surfaces of the two transparent substrates. If the second part is disposed on the first surfaces of the two transparent substrates, metal cushion layers are provided between the frame sealing adhesive and the first surfaces of the two transparent substrates. If the second part is disposed on the second surfaces of the two transparent substrates, the first part and the second part are electrically connected by metal via holes provided in the transparent substrates, and the frame sealing adhesive contacts the first surfaces of the two transparent substrates.
    Type: Application
    Filed: July 27, 2022
    Publication date: January 16, 2025
    Applicant: BOE Technology Group Co., Ltd.
    Inventors: Yong Ma, Hua Huang, Xin Gu, Zhao Kang, Shulei Li, Changhan Hsieh, Chengtan Zhao, Zhao Cui
  • Publication number: 20250024635
    Abstract: Managing thermal capabilities of an information handling system, including identifying a maximum supported ambient temperature of the information handling system; identifying a current ambient temperature of the information handling system; calculating a temperature delta based on the maximum supported ambient temperature of the information handling system and the current ambient temperature of the information handling system; adjusting, based on the temperature delta, one or more thermal control trigger points; adjusting, based on the adjusted thermal control trigger points, a fan speed of a fan of the information handling system; and determining, based on the adjusted fan speed of the fan, an updated cooling capacity associated with the information handling system.
    Type: Application
    Filed: July 13, 2023
    Publication date: January 16, 2025
    Inventors: Xin Zhi Ma, Seth Weber, Ying Hua Huang, Jianguo Zhang
  • Publication number: 20250013259
    Abstract: Disclosed are semiconductor devices that implement relaxed clock forwarding between logic blocks. In one embodiment, the system includes a set of logic blocks forming a first processing path, and additional sets of logic blocks forming additional processing paths. A clock is configured to forward data asynchronously between the logic blocks in the first processing path; this forwarding is asynchronous with respect to the data and clocks of the additional processing paths. The last logic block in each path can be synchronized using a synchronizer component, which can be a plurality of asynchronous FIFOs. In one embodiment, the logic blocks form a matrix and the processing paths run along the rows or columns of the matrix.
    Type: Application
    Filed: June 28, 2024
    Publication date: January 9, 2025
    Inventors: Siyad Chih-Hua Ma, Shang-Tse Chuang, Sharad Vasantrao Chole
  • Patent number: 12182717
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. One of the most important applications for artificial intelligence is object recognition and classification from digital images. Convolutional neural networks have proven to be a very effective tool for object recognition and classification from digital images. However, convolutional neural networks are extremely computationally intensive, thus requiring high-performance processors, significant computation time, and significant energy consumption. To reduce computation time and energy consumption, “cone of dependency” and “cone of influence” processing techniques are disclosed. These two techniques arrange the required computations in a manner that minimizes memory accesses, so that computations may be performed in local cache memory. These techniques significantly reduce the time to perform the computations and the energy consumed by the hardware implementing a convolutional neural network.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: December 31, 2024
    Assignee: Expedera, Inc.
    Inventors: Shang-Tse Chuang, Sharad Vasantrao Chole, Siyad Chih-Hua Ma
  • Publication number: 20240427839
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is an extremely computationally intensive field, which makes it expensive, time consuming, and energy consuming. Fortunately, many of the calculations required for artificial intelligence can be performed in parallel, so specialized processors can greatly increase computational performance. Specifically, artificial intelligence generally requires large numbers of matrix operations to implement neural networks, such that specialized matrix processor circuits can improve performance. Thus, this document discloses apparatus and methods for efficiently processing matrix operations.
    Type: Application
    Filed: June 21, 2023
    Publication date: December 26, 2024
    Inventors: Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma
  • Publication number: 20240428046
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is an extremely computationally intensive field, which makes it expensive, time consuming, and energy consuming. Fortunately, many of the calculations required for artificial intelligence can be performed in parallel, so specialized processors can greatly increase computational performance. Specifically, artificial intelligence generally requires large numbers of matrix operations to implement neural networks, such that specialized matrix processor circuits can improve performance. To perform all these matrix operations, the matrix processor circuits must be quickly and efficiently supplied with data to process, or else they end up idle or spend large amounts of time loading weight matrix data.
    Type: Application
    Filed: June 21, 2023
    Publication date: December 26, 2024
    Inventors: Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma
  • Publication number: 20240403166
    Abstract: Provided is a method comprising obtaining information about a detected memory error in a memory device, the memory device being connected to a first host via a compute express link (CXL) interface. The method further comprises recording the memory error information into a firmware of the memory device.
    Type: Application
    Filed: June 6, 2024
    Publication date: December 5, 2024
    Inventors: Zhonghua SUN, Pei GAO, Yue LIU, Liangqi ZHU, Yue YAO, Junyu TONG, Hua MA, Cong ZHANG
  • Patent number: 12141226
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is an extremely computationally intensive field, which makes it expensive, time consuming, and energy consuming. Fortunately, many of the calculations required for artificial intelligence can be performed in parallel, so specialized processors can greatly increase computational performance. Specifically, artificial intelligence generally requires large numbers of matrix operations to implement neural networks, such that specialized Matrix Processor circuits can improve performance. But a neural network is more than a collection of matrix operations; it is a set of specifically coordinated matrix operations with complex data dependencies. Without proper coordination, Matrix Processor circuits may end up idle or spend large amounts of time loading in different weight matrix data.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: November 12, 2024
    Assignee: Expedera, Inc.
    Inventors: Siyad Chih-Hua Ma, Shang-Tse Chuang, Sharad Vasantrao Chole
  • Publication number: 20240285808
    Abstract: Disclosed herein are methods of molecular magnetic resonance (MR) imaging and positron emission tomography using extracellular probes that target extracellular allysine aldehyde and act as noninvasive biomarkers of fibrogenesis, detecting fibrogenesis with high sensitivity and specificity, for example, in rodent models and human fibrotic tissues.
    Type: Application
    Filed: May 13, 2022
    Publication date: August 29, 2024
    Inventors: Peter Caravan, Yingying Ning, Hua MA, Sergey Shuvaev, Eman Akam
  • Patent number: 12073520
    Abstract: An augmented reality implementing method applied to a server is provided, the server including a plurality of augmented reality objects and a plurality of setting records respectively corresponding to the augmented reality objects. First, the server receives an augmented reality request from a mobile device, where the augmented reality request is related to a target device. The server then communicates with the target device to access current information. The server then determines which of the setting records the current information corresponds to, and selects one of the augmented reality objects based on the determined setting record as a virtual object provided to the mobile device.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: August 27, 2024
    Assignee: ASUSTEK COMPUTER INC.
    Inventors: Kuo-Chung Chiu, Hsuan-Wu Wei, Yen-Ting Liu, Shang-Chih Liang, Shih-Hua Ma, Yi-Hsuan Tsai, Jun-Ting Chen, Kuan-Ling Chen
  • Publication number: 20240265234
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is a very computationally intensive field. Fortunately, many of the required calculations can be performed in parallel, so specialized processors can greatly increase computational performance. In particular, Graphics Processor Units (GPUs) are often used in artificial intelligence. Although GPUs have helped, they are not ideal for artificial intelligence. Specifically, GPUs are used to compute matrix operations in one direction with a pipelined architecture. However, artificial intelligence is a field that uses both forward propagation computations and back propagation calculations. To perform artificial intelligence calculations efficiently, a symmetric matrix processing element is introduced. The symmetric matrix processing element can perform forward propagation and backward propagation calculations equally easily.
    Type: Application
    Filed: April 17, 2024
    Publication date: August 8, 2024
    Inventors: Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma
  • Publication number: 20240242078
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. One of the most important applications for artificial intelligence is object recognition and classification from digital images. Convolutional neural networks have proven to be a very effective tool for object recognition and classification from digital images. However, convolutional neural networks are extremely computationally intensive, thus requiring high-performance processors, significant computation time, and significant energy consumption. To reduce computation time and energy consumption, “cone of dependency” and “cone of influence” processing techniques are disclosed. These two techniques arrange the required computations in a manner that minimizes memory accesses, so that computations may be performed in local cache memory. These techniques significantly reduce the time to perform the computations and the energy consumed by the hardware implementing a convolutional neural network.
    Type: Application
    Filed: October 18, 2021
    Publication date: July 18, 2024
    Inventors: Shang-Tse Chuang, Sharad Vasantrao Chole, Siyad Chih-Hua Ma
  • Patent number: 11983616
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is a very computationally intensive field. Fortunately, many of the required calculations can be performed in parallel, so specialized processors can greatly increase computational performance. In particular, Graphics Processor Units (GPUs) are often used in artificial intelligence. Although GPUs have helped, they are not ideal for artificial intelligence. Specifically, GPUs are used to compute matrix operations in one direction with a pipelined architecture. However, artificial intelligence is a field that uses both forward propagation computations and back propagation calculations. To perform artificial intelligence calculations efficiently, a symmetric matrix processing element is introduced. The symmetric matrix processing element can perform forward propagation and backward propagation calculations equally easily.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: May 14, 2024
    Assignee: Expedera, Inc.
    Inventors: Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma
  • Publication number: 20240152761
    Abstract: Artificial intelligence is an increasingly important sector of the computer industry. However, artificial intelligence is an extremely computationally intensive field, which makes it expensive, time consuming, and energy consuming. Fortunately, many of the calculations required for artificial intelligence can be performed in parallel, so specialized processors can greatly increase computational performance for AI applications. Specifically, artificial intelligence generally requires large numbers of matrix operations, such that specialized matrix processor circuits can greatly improve performance. To execute all these matrix operations efficiently, the matrix processor circuits must be quickly and efficiently supplied with a stream of data and instructions to process, or else they end up idle. Thus, this document discloses a packet architecture for efficiently creating and supplying neural network processors with work packets to process.
    Type: Application
    Filed: October 20, 2022
    Publication date: May 9, 2024
    Applicant: Expedera, Inc.
    Inventors: Sharad Vasantrao Chole, Shang-Tse Chuang, Siyad Chih-Hua Ma
  • Publication number: 20230401775
    Abstract: A face model building method is provided. The face model building method includes: obtaining a plurality of facial feature animation objects and a plurality of object parameters respectively corresponding to the facial feature animation objects, where the facial feature animation objects include a plurality of two-dimensional facial feature animation objects and at least one three-dimensional facial feature animation object; and integrating the facial feature animation objects according to the object parameters to generate a three-dimensional face model. A face model building system is further provided.
    Type: Application
    Filed: December 20, 2022
    Publication date: December 14, 2023
    Inventors: Yi-Hsuan TSAI, Kuan-Ling CHEN, Jo-Hsuan HUANG, Jun-Ting CHEN, Shih-Hua MA, Chieh-Han CHUANG
  • Publication number: 20230401776
    Abstract: A face model editing method adapted to a face model editing system having a modeling platform and an editing platform is provided. The modeling platform has a plurality of face feature animation objects and a plurality of object parameters thereof. The face model editing method includes: receiving an object selection instruction by using the editing platform, and accessing the object parameter of the face feature animation object from the modeling platform according to the object selection instruction; receiving an adjusting instruction by using the editing platform, and adjusting the accessed object parameter; transmitting, by the editing platform, the adjusted object parameter to the modeling platform to update the object parameters; and generating, by the modeling platform, a three-dimensional face model by using the updated object parameters in combination with the face feature animation objects, and transmitting the three-dimensional face model to the editing platform for demonstration.
    Type: Application
    Filed: December 20, 2022
    Publication date: December 14, 2023
    Inventors: Kuan-Ling CHEN, Yi-Hsuan TSAI, Jo-Hsuan HUANG, Chieh-Han CHUANG, Jun-Ting CHEN, Shih-Hua MA
  • Publication number: 20230338268
    Abstract: Ethylcellulose dispersions in water are film forming compositions that have found use in personal care applications. Conventional methods can require that aqueous dispersions of water-insoluble polymers such as ethylcellulose be obtained using surfactants such as sodium lauryl sulfate (SLS). There is a desire to reduce the degree of consumer exposure to SLS in personal care products. The present invention describes ethylcellulose dispersions comprising surfactants to substantially replace sodium lauryl sulfate. The present invention also describes methods of reducing or substantially eliminating SLS in personal care products that comprise ethylcellulose dispersions.
    Type: Application
    Filed: June 30, 2023
    Publication date: October 26, 2023
    Inventors: Hui S. Yang, Hua Ma
  • Publication number: 20230326146
    Abstract: An augmented reality implementing method applied to a server is provided, the server including a plurality of augmented reality objects and a plurality of setting records respectively corresponding to the augmented reality objects. First, the server receives an augmented reality request from a mobile device, where the augmented reality request is related to a target device. The server then communicates with the target device to access current information. The server then determines which of the setting records the current information corresponds to, and selects one of the augmented reality objects based on the determined setting record as a virtual object provided to the mobile device.
    Type: Application
    Filed: October 4, 2022
    Publication date: October 12, 2023
    Inventors: Kuo-Chung CHIU, Hsuan-Wu WEI, Yen-Ting LIU, Shang-Chih LIANG, Shih-Hua MA, Yi-Hsuan TSAI, Jun-Ting CHEN, Kuan-Ling CHEN
  • Publication number: 20230309943
    Abstract: Methods and systems are provided for dynamically visualizing an object of interest that includes part of the vasculature of a patient, which employ a 3D model of the object to generate at least one roadmap that includes information that characterizes properties of the object (such as centerlines, contours, and an image mask). Reference location(s) corresponding to the roadmap(s) are determined for an interventional device used to treat the object. In an online phase, non-contrast-enhanced x-ray image data of the object are obtained and processed to determine the location of the interventional device in the image data, and a particular roadmap is selected or accessed. The reference location corresponding to the particular roadmap and the determined location of the interventional device are used to transform the particular roadmap. A visual representation of the transformed roadmap is overlaid on the image data for display.
    Type: Application
    Filed: June 7, 2023
    Publication date: October 5, 2023
    Applicant: Pie Medical Imaging B.V.
    Inventors: Theo van Walsum, Hua Ma, Jean-Paul Aben, Dennis Koehn
  • Publication number: 20230300050
    Abstract: Some embodiments of the present disclosure relate to methods, devices and computer-readable media for measuring traffic hit time during path switch. The method performed at a first device includes: sending, to a second device, operation administration and maintenance (OAM) frames with continuous sequence numbers; receiving from the second device response OAM frames; determining traffic hit start time and traffic hit stop time based on the received OAM frames; detecting an occurrence of the path switch, and determining that the path switch is completed before the traffic hit stop time; and in response to determining that the path switch is completed before the traffic hit stop time, calculating traffic hit time based at least on the traffic hit start time and the traffic hit stop time.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 21, 2023
    Applicant: Nokia Solutions and Networks Oy
    Inventors: Xiao Hua MA, Ming ZHONG, Jin Jiang CHEN
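
The thermal-management method of publication 20250024635 (temperature delta → adjusted trigger points → fan speed → updated cooling capacity) can be sketched as follows. This is a minimal illustration only; the function name, the linear fan curve, and all thresholds are hypothetical, not the patented implementation.

```python
def adjust_cooling(max_supported_ambient_c: float,
                   current_ambient_c: float,
                   base_trigger_points_c: list) -> dict:
    """Shift thermal trigger points by the ambient-temperature delta,
    then derive a fan speed and an updated cooling capacity."""
    # Headroom between the maximum supported ambient temperature and
    # the current ambient temperature.
    delta = max_supported_ambient_c - current_ambient_c

    # Shift every trigger point by the delta: more headroom relaxes the
    # triggers (they fire later); less headroom tightens them.
    adjusted_triggers = [t + delta for t in base_trigger_points_c]

    # Hypothetical linear fan curve: tighter (lower) triggers -> faster fan,
    # clamped to a 20-100 percent duty-cycle range.
    fan_pct = max(20.0, min(100.0, 100.0 - min(adjusted_triggers)))

    # Updated cooling capacity scales with fan speed (illustrative constant).
    cooling_capacity_w = 5.0 * fan_pct
    return {"delta_c": delta,
            "triggers_c": adjusted_triggers,
            "fan_pct": fan_pct,
            "cooling_capacity_w": cooling_capacity_w}
```

For example, a system rated for 45 °C ambient that currently sees 25 °C has 20 °C of headroom, so its trigger points shift up by 20 °C and the fan can run at its floor speed.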
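
The symmetric matrix processing element of patent 11983616 and publication 20240265234 exploits the fact that forward propagation multiplies activations by a weight matrix W while back propagation multiplies gradients by W transposed, so one element that can consume W in either orientation serves both phases. The sketch below shows only that mathematical duality; the function names are illustrative and this is not Expedera's circuit design.

```python
import numpy as np

def forward(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Forward pass: y = W @ x (weights consumed row by row)."""
    return W @ x

def backward(W: np.ndarray, grad_y: np.ndarray) -> np.ndarray:
    """Backward pass: grad_x = W.T @ grad_y -- the same weight array
    consumed column by column, so a symmetric element handles both
    directions without a separate transposed copy of W."""
    return W.T @ grad_y
```

A GPU pipelined for the forward direction would need to materialize or re-stream the transpose; a symmetric element simply swaps its row/column access pattern.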
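
The traffic-hit measurement of publication 20230300050 rests on OAM frames carrying continuous sequence numbers: a gap in the returned responses marks the hit window, and the hit time is derived from the hit start and stop times. A minimal sketch of that bookkeeping, assuming a simple loss-run model (the frame format, timing model, and function name are hypothetical):

```python
def traffic_hit_time(sent, received_seqs):
    """Given (sequence number, send timestamp) pairs in send order and
    the set of sequence numbers whose responses came back, return the
    traffic hit time, or 0.0 if no frames were lost."""
    # Frames whose responses never arrived form the hit window.
    lost = [(seq, ts) for seq, ts in sent if seq not in received_seqs]
    if not lost:
        return 0.0

    hit_start = lost[0][1]        # send time of the first lost frame
    last_lost_seq = lost[-1][0]

    # Hit stop: send time of the first frame after the loss run whose
    # response did come back (i.e., when traffic resumed).
    hit_stop = next((ts for seq, ts in sent
                     if seq > last_lost_seq and seq in received_seqs),
                    sent[-1][1])
    return hit_stop - hit_start
```

With frames sent every millisecond and responses 4-6 missing, the measured hit time is the 3 ms between the first lost frame and the first frame after the loss run; the abstract's additional check that the path switch completed before the hit stop time is omitted here.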