Patents by Inventor Haobo Xu

Haobo Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240112971
    Abstract: An integrated circuit (IC) device comprises a substrate comprising a glass core. The glass core comprises a first surface and a second surface opposite the first surface, and a first sidewall between the first surface and the second surface. The glass core may include a conductor within a through-glass via extending from the first surface to the second surface and a build-up layer. The glass core comprises a plurality of first areas of the glass core and a plurality of laser-treated areas on the first sidewall. A first one of the plurality of laser-treated areas may be spaced away from a second one of the plurality of laser-treated areas. A first area may comprise a first nanoporosity and a laser-treated area may comprise a second nanoporosity, wherein the second nanoporosity is greater than the first nanoporosity.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 4, 2024
    Applicant: Intel Corporation
    Inventors: Yiqun Bai, Dingying Xu, Srinivas Pietambaram, Hongxia Feng, Gang Duan, Xiaoying Guo, Ziyin Lin, Bai Nie, Haobo Chen, Kyle Arrington, Bohan Shan
  • Publication number: 20240088052
    Abstract: A die assembly is disclosed. The die assembly includes a die, one or more die pads on a first surface of the die and a die attach film on the die where the die attach film includes one or more openings that expose the one or more die pads and that extend to one or more edges of the die.
    Type: Application
    Filed: November 17, 2023
    Publication date: March 14, 2024
    Inventors: Bai NIE, Gang DUAN, Srinivas PIETAMBARAM, Jesse JONES, Yosuke KANAOKA, Hongxia FENG, Dingying XU, Rahul MANEPALLI, Sameer PAITAL, Kristof DARMAWIKARTA, Yonggang LI, Meizi JIAO, Chong ZHANG, Matthew TINGEY, Jung Kyu HAN, Haobo CHEN
  • Patent number: 11923312
    Abstract: A die assembly is disclosed. The die assembly includes a die, one or more die pads on a first surface of the die and a die attach film on the die where the die attach film includes one or more openings that expose the one or more die pads and that extend to one or more edges of the die.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: March 5, 2024
    Assignee: Intel Corporation
    Inventors: Bai Nie, Gang Duan, Srinivas Pietambaram, Jesse Jones, Yosuke Kanaoka, Hongxia Feng, Dingying Xu, Rahul Manepalli, Sameer Paital, Kristof Darmawikarta, Yonggang Li, Meizi Jiao, Chong Zhang, Matthew Tingey, Jung Kyu Han, Haobo Chen
  • Patent number: 11551068
    Abstract: The present invention provides a processing system for a binary weight convolutional neural network. The system comprises: at least one storage unit for storing data and instructions; at least one control unit for acquiring the instructions stored in the storage unit and sending out a control signal; and, at least one calculation unit for acquiring, from the storage unit, node values of a layer in a convolutional neural network and corresponding binary weight value data and obtaining node values of a next layer by performing addition and subtraction operations. With the system of the present invention, the data bit width during the calculation process of a convolutional neural network is reduced, the convolutional operation speed is improved, and the storage capacity and operational energy consumption are reduced.
    Type: Grant
    Filed: February 11, 2018
    Date of Patent: January 10, 2023
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Yinhe Han, Haobo Xu, Ying Wang
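
The core idea of the abstract above — that binary (+1/-1) weights let a layer's dot products be computed with only additions and subtractions — can be sketched as follows. This is an illustrative model, not code from the patent; the function name and data layout are assumptions.

```python
# Illustrative sketch (not from the patent): computing one output node of a
# binary-weight layer. Because each weight is +1 or -1, the dot product
# reduces to adding inputs whose weight is +1 and subtracting those whose
# weight is -1 -- no multiplications are needed.

def binary_weight_node(inputs, weights):
    """inputs: list of numbers; weights: list of +1/-1 values."""
    total = 0
    for x, w in zip(inputs, weights):
        total = total + x if w > 0 else total - x
    return total

# Example: inputs [2, 3, 5] with weights [+1, -1, +1] -> 2 - 3 + 5 = 4
```

Replacing multiplies with add/subtract is what reduces the data bit width and energy consumption the abstract refers to.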
  • Patent number: 11531889
    Abstract: Disclosed are a weight data storage method and a convolution computation method that may be implemented in a neural network. The weight data storage method comprises searching for effective weights in a weight convolution kernel matrix and acquiring an index of effective weights. The effective weights are non-zero weights, and the index of effective weights is used to mark the position of the effective weights in the weight convolution kernel matrix. The weight data storage method further comprises storing the effective weights and the index of effective weights. According to the weight data storage method and the convolution computation method of the present disclosure, storage space can be saved, and computation efficiency can be improved.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: December 20, 2022
    Assignee: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES
    Inventors: Yinhe Han, Feng Min, Haobo Xu, Ying Wang
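
The storage scheme described above — keep only the non-zero ("effective") weights plus an index marking their positions in the kernel matrix — can be illustrated with a small sketch. The function names and the (row, col) index encoding are assumptions for illustration only.

```python
# Hypothetical sketch of the storage idea: store only the non-zero
# ("effective") weights of a convolution kernel together with an index
# recording where each one sits in the kernel matrix.

def compress_kernel(kernel):
    """kernel: 2-D list. Returns (effective_weights, index), where index
    holds the (row, col) position of each non-zero weight."""
    weights, index = [], []
    for r, row in enumerate(kernel):
        for c, w in enumerate(row):
            if w != 0:
                weights.append(w)
                index.append((r, c))
    return weights, index

def sparse_dot(patch, weights, index):
    """Convolve one input patch using only the effective weights,
    skipping every zero entry of the original kernel."""
    return sum(w * patch[r][c] for w, (r, c) in zip(weights, index))

kernel = [[0, 1, 0],
          [2, 0, 0],
          [0, 0, 3]]
w, idx = compress_kernel(kernel)   # only 3 of 9 weights are stored
```

Both savings claimed in the abstract follow directly: the zero weights consume no storage, and the convolution loop touches only the effective entries.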
  • Patent number: 11521048
    Abstract: The present invention relates to a weight management method and system for neural network processing. The method includes two stages, i.e., an off-chip encryption stage and an on-chip decryption stage: encrypting trained neural network weight data in advance, inputting the encrypted weights into a neural network processor chip, and decrypting the weights in real time by a decryption unit inside the neural network processor chip to perform the related neural network calculation. The method and system realize the protection of weight data without affecting the normal operation of a neural network processor.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: December 6, 2022
    Assignee: Institute of Computing Technology, Chinese Academy of Sciences
    Inventors: Yinhe Han, Haobo Xu, Ying Wang
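
The two-stage flow above (encrypt weights off-chip, decrypt them on-chip just before use) can be sketched with a toy cipher. The patent abstract does not specify a cipher; XOR with a SHA-256-derived keystream is used here purely for illustration, and all names are assumptions.

```python
# Minimal stand-in for the off-chip/on-chip weight-protection flow. The
# cipher choice is NOT from the patent: a keystream XOR is used only to
# show that decryption can be done symmetrically, weight by weight,
# inside the processor without changing the computation itself.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from the key (counter-mode hashing)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_weights(weights: bytes, key: bytes) -> bytes:
    """Off-chip stage: XOR the serialized weights with the keystream."""
    ks = keystream(key, len(weights))
    return bytes(a ^ b for a, b in zip(weights, ks))

# On-chip stage: XOR is its own inverse, so decryption reuses the same code.
decrypt_weights = encrypt_weights
```

Because the decryption is a cheap per-byte operation, it can in principle run in real time ahead of the compute units, which is the property the abstract emphasizes.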
  • Patent number: 11510354
    Abstract: The present invention discloses a multifunctional folding spade, which comprises a spade plate and a spade handle. A turntable is welded at the tail end of the spade plate, a spade handle sleeve is provided at the head end of the spade handle, a socket is provided at the head end of the spade handle sleeve, a part of the turntable body is inserted into the socket, the turntable and the spade handle sleeve are provided with matching rotation holes, and a pin shaft passes through the rotation holes to realize the rotating connection between the turntable and the spade handle. The turntable body is further provided with a plurality of angle adjusting holes circumferentially, a corresponding locking hole is provided at the tail end of the spade handle sleeve, and a locking bolt is provided at one side of the locking hole.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: November 29, 2022
    Assignee: Taigu Changlin Shovel Co., Ltd.
    Inventors: Qiang Wang, Baojun Zhang, Dingbo Zhang, Haobo Xu, Yixiao Wang, Xinyin Zhang, Zhiying Du
  • Publication number: 20210227740
    Abstract: The present invention discloses a multifunctional folding spade, which comprises a spade plate and a spade handle. A turntable is welded at the tail end of the spade plate, a spade handle sleeve is provided at the head end of the spade handle, a socket is provided at the head end of the spade handle sleeve, a part of the turntable body is inserted into the socket, the turntable and the spade handle sleeve are provided with matching rotation holes, and a pin shaft passes through the rotation holes to realize the rotating connection between the turntable and the spade handle. The turntable body is further provided with a plurality of angle adjusting holes circumferentially, a corresponding locking hole is provided at the tail end of the spade handle sleeve, and a locking bolt is provided at one side of the locking hole.
    Type: Application
    Filed: October 15, 2019
    Publication date: July 29, 2021
    Inventors: Qiang Wang, Baojun Zhang, Dingbo Zhang, Haobo Xu, Yixiao Wang, Xinyin Zhang, Zhiying Du
  • Publication number: 20210182666
    Abstract: Disclosed are a weight data storage method and a convolution computation method that may be implemented in a neural network. The weight data storage method comprises searching for effective weights in a weight convolution kernel matrix and acquiring an index of effective weights. The effective weights are non-zero weights, and the index of effective weights is used to mark the position of the effective weights in the weight convolution kernel matrix. The weight data storage method further comprises storing the effective weights and the index of effective weights. According to the weight data storage method and the convolution computation method of the present disclosure, storage space can be saved, and computation efficiency can be improved.
    Type: Application
    Filed: February 28, 2018
    Publication date: June 17, 2021
    Applicant: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES
    Inventors: Yinhe HAN, Feng MIN, Haobo XU, Ying WANG
  • Publication number: 20210089871
    Abstract: The present invention provides a processing system for a binary weight convolutional neural network. The system comprises: at least one storage unit for storing data and instructions; at least one control unit for acquiring the instructions stored in the storage unit and sending out a control signal; and, at least one calculation unit for acquiring, from the storage unit, node values of a layer in a convolutional neural network and corresponding binary weight value data and obtaining node values of a next layer by performing addition and subtraction operations. With the system of the present invention, the data bit width during the calculation process of a convolutional neural network is reduced, the convolutional operation speed is improved, and the storage capacity and operational energy consumption are reduced.
    Type: Application
    Filed: February 11, 2018
    Publication date: March 25, 2021
    Inventors: Yinhe HAN, Haobo XU, Ying WANG
  • Patent number: 10581982
    Abstract: The disclosure is directed to moving an application, e.g., a messenger service in a social networking application, to various locations in a distributed computing system, e.g., to improve an efficiency of the application. For example, the application can be moved to a data center that is closer to a location of a user to decrease a latency associated with accessing the application. In another example, the application can be moved to a data center that is closer to a location of a storage system that stores data associated with the application to improve a throughput of the application, e.g., a rate at which data is read and/or written.
    Type: Grant
    Filed: April 8, 2016
    Date of Patent: March 3, 2020
    Assignee: Facebook, Inc.
    Inventors: Thomas Apostolos Georgiou, Haobo Xu, Jason Curtis Jenks, Hairong Kuang
  • Publication number: 20200019843
    Abstract: The present invention relates to a weight management method and system for neural network processing. The method includes two stages, i.e., an off-chip encryption stage and an on-chip decryption stage: encrypting trained neural network weight data in advance, inputting the encrypted weights into a neural network processor chip, and decrypting the weights in real time by a decryption unit inside the neural network processor chip to perform the related neural network calculation. The method and system realize the protection of weight data without affecting the normal operation of a neural network processor.
    Type: Application
    Filed: March 22, 2018
    Publication date: January 16, 2020
    Inventors: Yinhe HAN, Haobo XU, Ying WANG
  • Patent number: 10387416
    Abstract: Technology is disclosed for retrieving data from a specific storage layer of a storage system (“the technology”). A query application programming interface (API) is provided that allows an application to specify a storage layer on which the query should be executed. The query API can be used in a multi-threaded environment which employs a combination of fast threads and slow threads to serve read/write requests from applications. The fast threads are configured to query on a first set of storage layers, e.g., storage layers in a primary storage, while the slow threads are configured to query on a second set of storage layers, e.g., storage layers in a secondary storage. If a fast thread does not find the requested data in the first set, the request is transferred to a slow thread and the fast thread is allocated to another request while the slow thread is serving the current request.
    Type: Grant
    Filed: November 14, 2013
    Date of Patent: August 20, 2019
    Assignee: Facebook, Inc.
    Inventors: Mayank Agarwal, Dhrubajyoti Borthakur, Nagavamsi Ponnekanti, Haobo Xu
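
The fast-thread/slow-thread split described above can be modeled in miniature: a fast lookup covers the primary-storage layers and, on a miss, hands the request to a slow path covering the secondary-storage layers. This single-threaded sketch only models the layer routing, not the actual thread pools; the layer contents and names are invented for illustration.

```python
# Hypothetical sketch of a layer-aware query. Primary layers (e.g. an
# in-memory table and cache) are cheap to search; secondary layers (e.g.
# on-disk files) are slow, so misses are handed off rather than blocking
# the fast path. The data here is purely illustrative.

PRIMARY_LAYERS = [{"a": 1}, {"b": 2}]    # fast threads search these
SECONDARY_LAYERS = [{"c": 3}]            # slow threads search these

def fast_query(key):
    for layer in PRIMARY_LAYERS:
        if key in layer:
            return layer[key]
    # Miss in primary storage: transfer the request to the slow path.
    # In the patented scheme the fast thread is then freed for other work.
    return slow_query(key)

def slow_query(key):
    for layer in SECONDARY_LAYERS:
        if key in layer:
            return layer[key]
    return None
```

The query API the abstract describes additionally lets the caller name the layer to search, which this sketch approximates by exposing the two paths as separate functions.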
  • Patent number: 10346381
    Abstract: Technology is disclosed for performing atomic update operations in a storage system (“the technology”). The technology can receive an update command to update a value associated with a key stored in the storage system as a function of an input value; store the input value in a log at the storage system without updating the value stored in the storage system; and update the value associated with the key with the received input value based on the function to generate an updated value, the updating occurring asynchronously with respect to receiving the update command.
    Type: Grant
    Filed: November 14, 2013
    Date of Patent: July 9, 2019
    Assignee: Facebook, Inc.
    Inventors: Deon Chris Nicholas, Haobo Xu, Dhrubajyoti Borthakur
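
The deferred-update scheme above — append the operand to a log instead of rewriting the value, and fold the logged operands in later — can be sketched as a tiny merge-style store. The class and method names are assumptions; this only models the asynchrony as lazy evaluation at read time.

```python
# Sketch of the log-based atomic update: an update command appends its
# operand to a per-key log (cheap, no read-modify-write), and the stored
# value is brought up to date lazily by folding the log through the
# update function.

class MergeStore:
    def __init__(self, merge_fn):
        self.base = {}           # key -> last materialized value
        self.log = {}            # key -> list of pending operands
        self.merge_fn = merge_fn

    def merge(self, key, operand):
        """Handle an update command: append only, no value rewrite."""
        self.log.setdefault(key, []).append(operand)

    def get(self, key):
        """Materialize the value by applying pending operands in order."""
        value = self.base.get(key, 0)
        for operand in self.log.pop(key, []):
            value = self.merge_fn(value, operand)
        self.base[key] = value
        return value

counters = MergeStore(lambda v, x: v + x)   # e.g. an increment counter
counters.merge("hits", 1)
counters.merge("hits", 1)
```

The payoff is that updates never pay for a read: the expensive fold happens asynchronously (here, at the next read), which is the behavior the abstract claims.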
  • Publication number: 20170293540
    Abstract: The disclosure is directed to a failover mechanism for failing over an application service, e.g., a messaging service, from servers in a first region to servers in a second region. Data is stored as shards in which each shard contains data associated with a subset of the users. Data access requests are served by a primary region of the shard. A global shard manager manages failing over the application service from a current primary region to a secondary region of the shard. A leader service in the application service replicates data associated with the application service from the primary to the secondary region, and ensures that the state of various other services of the application service in the secondary region is consistent. The leader service confirms that there is no replication lag between the primary and secondary regions and fails over the application service to the secondary region.
    Type: Application
    Filed: April 8, 2016
    Publication date: October 12, 2017
    Inventors: Vikas Mehta, Haobo Xu, Jason Curtis Jenks, Hairong Kuang
  • Publication number: 20170295246
    Abstract: The disclosure is directed to moving an application, e.g., a messenger service in a social networking application, to various locations in a distributed computing system, e.g., to improve an efficiency of the application. For example, the application can be moved to a data center that is closer to a location of a user to decrease a latency associated with accessing the application. In another example, the application can be moved to a data center that is closer to a location of a storage system that stores data associated with the application to improve a throughput of the application, e.g., a rate at which data is read and/or written.
    Type: Application
    Filed: April 8, 2016
    Publication date: October 12, 2017
    Inventors: Thomas Apostolos Georgiou, Haobo Xu, Jason Curtis Jenks, Hairong Kuang
  • Publication number: 20170206148
    Abstract: The disclosure is directed to a failover mechanism for failing over an application service, e.g., a messaging service, from servers in a first region to servers in a second region. Data is stored as shards in which each shard contains data associated with a subset of the users. Data access requests are served by a primary region of the shard. A global shard manager manages failing over the application service from a current primary region of a shard to a secondary region of the shard. The current primary determines whether a criterion for failing over, e.g., a replication lag between the primary and the secondary regions is within a threshold, and if it is within the threshold, the failover process waits until the lag is zero. After the replication lag is zero, the application service is failed over to the second region, which then becomes the primary for the shard.
    Type: Application
    Filed: January 20, 2016
    Publication date: July 20, 2017
    Inventors: Vikas Mehta, Haobo Xu, Jason Curtis Jenks, Hairong Kuang, Pierre-Luc Bertrand, Andrei Lutsenko, Zhenhua Guo, Jun Ying
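
The failover criterion described above — reject if the replication lag exceeds a threshold, otherwise wait for the lag to drain to zero and then promote the secondary region — can be sketched as follows. The shard representation and the lag units are invented for illustration; the decrement loop merely stands in for replication catching up over time.

```python
# Sketch of the lag-gated failover. A shard fails over only if its
# replication lag is already within a threshold; the process then waits
# until the lag reaches zero before swapping primary and secondary.

def try_failover(shard, lag_threshold):
    if shard["replication_lag"] > lag_threshold:
        return "rejected"                  # too far behind to fail over safely
    while shard["replication_lag"] > 0:    # wait for replication to catch up
        shard["replication_lag"] -= 1      # stand-in for ongoing replication
    # Lag is zero: the secondary has all the data, so promote it.
    shard["primary"], shard["secondary"] = shard["secondary"], shard["primary"]
    return "failed_over"
```

The two-step check matters: the threshold bounds how long the wait can take, and the wait-for-zero step guarantees no acknowledged writes are lost at the moment of promotion.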
  • Patent number: 9569514
    Abstract: Techniques for replicating data in database systems are described. In an example embodiment, a set of changes is received at a destination database, where the set of changes has been applied at a source database and is being replicated from the source database to the destination database. The set of changes is analyzed and it is determined that the set of changes includes two or more of: a subset of row-level changes, a subset of statement-level changes, and a subset of procedure-level changes. A set of dependencies is determined at least between the changes that are included in the subsets of changes. The changes, in the subsets of changes, are assigned to two or more processing elements. The set of changes is applied to the destination database by executing the two or more processing elements in parallel to each other and based on the set of dependencies.
    Type: Grant
    Filed: October 11, 2013
    Date of Patent: February 14, 2017
    Assignee: Oracle International Corporation
    Inventors: Edwina M. Lu, James W. Stamos, Nimar S. Arora, Lik Wong, Haobo Xu, Thuvan Hoang, Byron Wang, Lakshminarayanan Chidambaran
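
The parallel-apply idea above — changes are assigned to processing elements that run in parallel, constrained by the computed dependencies — can be illustrated with a scheduling sketch. This models only the ordering, not the actual apply processes, and the change identifiers are made up.

```python
# Sketch of dependency-constrained parallel apply: a change is eligible
# to run only once every change it depends on has been applied. Changes
# grouped into the same batch have no mutual dependencies and could be
# executed by parallel processing elements.

def schedule(changes, deps):
    """changes: list of change ids; deps: id -> set of prerequisite ids.
    Returns batches of ids; each batch is safe to apply in parallel."""
    remaining, done, batches = set(changes), set(), []
    while remaining:
        ready = {c for c in remaining if deps.get(c, set()) <= done}
        if not ready:
            raise ValueError("dependency cycle among changes")
        batches.append(sorted(ready))
        done |= ready
        remaining -= ready
    return batches

# e.g. a statement-level change s1 and a procedure-level change p1 that
# both depend on a row-level change r1:
plan = schedule(["r1", "s1", "p1"], {"s1": {"r1"}, "p1": {"r1"}})
```

With the example dependencies, r1 applies first and s1/p1 then run in parallel, mirroring how the destination database can exploit parallelism while preserving correctness.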
  • Patent number: 9230002
    Abstract: A method for sharing information between a publisher and multiple subscribers is provided. The publisher uses a latch-free, single publisher, multiple subscriber shared queue to share information. Logical change records representing changes made to a database are enqueued in the shared queue as messages in a stream of messages, and subscribers read the logical change records. Subscribers may filter logical change records before sending to apply processes for processing. An identifying property of the source instance of a change encapsulated in a logical change record may be included with each message enqueued.
    Type: Grant
    Filed: January 30, 2009
    Date of Patent: January 5, 2016
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Lik Wong, Nimar Arora, Lei Gao, Thuvan Hoang, Haobo Xu
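
The latch-free single-publisher, multiple-subscriber queue above relies on a simple structural trick: the publisher only ever appends and advances a write cursor, while each subscriber owns a private read cursor, so neither side mutates the other's state. This single-threaded sketch shows that cursor discipline; the class name and message format are assumptions, and real implementations would use atomic cursor updates.

```python
# Sketch of a single-publisher, multiple-subscriber shared queue. The
# stream of messages (e.g. logical change records) is append-only; each
# subscriber tracks its own position, so subscribers never contend with
# the publisher or with each other.

class SharedQueue:
    def __init__(self):
        self.messages = []      # append-only message stream
        self.cursors = {}       # subscriber id -> next index to read

    def publish(self, message):
        """Publisher-only operation: append to the stream."""
        self.messages.append(message)

    def subscribe(self, sub_id):
        self.cursors[sub_id] = 0

    def poll(self, sub_id):
        """Return the subscriber's next message, or None if caught up."""
        i = self.cursors[sub_id]
        if i >= len(self.messages):
            return None
        self.cursors[sub_id] = i + 1
        return self.messages[i]
```

Because each subscriber reads at its own pace, one can also filter its messages (as the abstract describes) without affecting the others.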
  • Publication number: 20150134602
    Abstract: Technology is disclosed for performing atomic update operations in a storage system (“the technology”). The technology can receive an update command to update a value associated with a key stored in the storage system as a function of an input value; store the input value in a log at the storage system without updating the value stored in the storage system; and update the value associated with the key with the received input value based on the function to generate an updated value, the updating occurring asynchronously with respect to receiving the update command.
    Type: Application
    Filed: November 14, 2013
    Publication date: May 14, 2015
    Inventors: Deon Chris Nicholas, Haobo Xu, Dhrubajyoti Borthakur