Patents by Inventor Jichuan Chang
Jichuan Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11442729
Abstract: A method and system for processing a bit-packed array using one or more processors, including determining a data element size of the bit-packed array, determining a lane configuration of a single-instruction multiple-data (SIMD) unit for processing the bit-packed array based at least in part on the determined data element size, the lane configuration being determined from among a plurality of candidate lane configurations, each candidate lane configuration having a different number of vector register lanes and a corresponding bit capacity per vector register lane, configuring the SIMD unit according to the determined lane configuration, and loading one or more data elements into each vector register lane of the SIMD unit. SIMD instructions may be executed on the loaded one or more data elements of each vector register lane in parallel, and a result of the SIMD instruction may be stored in memory.
Type: Grant
Filed: October 26, 2020
Date of Patent: September 13, 2022
Assignee: Google LLC
Inventors: Junwhan Ahn, Jichuan Chang, Andrew McCormick, Yuanwei Fang, Yixin Luo
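The lane-selection step this abstract describes can be sketched roughly as follows. This is a hypothetical software analogue, not the patented implementation: the 128-bit register width, the candidate table, and the one-element-per-lane loading are all illustrative assumptions.

```python
CANDIDATE_CONFIGS = [  # (num_lanes, bits_per_lane) for an assumed 128-bit register
    (16, 8),
    (8, 16),
    (4, 32),
    (2, 64),
]

def choose_lane_config(element_bits):
    """Pick the narrowest lane that still holds one data element."""
    for num_lanes, bits_per_lane in CANDIDATE_CONFIGS:
        if bits_per_lane >= element_bits:
            return num_lanes, bits_per_lane
    raise ValueError(f"no lane wide enough for {element_bits}-bit elements")

def load_lanes(packed_words, element_bits):
    """Unpack elements from 64-bit words and place one per lane (simplified)."""
    num_lanes, _ = choose_lane_config(element_bits)
    mask = (1 << element_bits) - 1
    bitstream, total_bits = 0, 0
    for w in packed_words:
        bitstream |= w << total_bits
        total_bits += 64
    return [(bitstream >> (i * element_bits)) & mask for i in range(num_lanes)]

# e.g. 5-bit elements land in 8-bit lanes: 16 lanes per 128-bit register
print(choose_lane_config(5))  # (16, 8)
```

On real hardware the per-lane operations would then run as a single SIMD instruction; here the point is only the mapping from element size to lane configuration.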
-
Publication number: 20220129269
Abstract: A method and system for processing a bit-packed array using one or more processors, including determining a data element size of the bit-packed array, determining a lane configuration of a single-instruction multiple-data (SIMD) unit for processing the bit-packed array based at least in part on the determined data element size, the lane configuration being determined from among a plurality of candidate lane configurations, each candidate lane configuration having a different number of vector register lanes and a corresponding bit capacity per vector register lane, configuring the SIMD unit according to the determined lane configuration, and loading one or more data elements into each vector register lane of the SIMD unit. SIMD instructions may be executed on the loaded one or more data elements of each vector register lane in parallel, and a result of the SIMD instruction may be stored in memory.
Type: Application
Filed: October 26, 2020
Publication date: April 28, 2022
Applicant: Google LLC
Inventors: Junwhan Ahn, Jichuan Chang, Andrew McCormick, Yuanwei Fang, Yixin Luo
-
Patent number: 10817178
Abstract: A method for compressing and compacting memory on a memory device is described. The method includes organizing a number of compressed memory pages referenced in a number of compaction table entries based on a size of the number of compressed memory pages and compressing the number of compaction table entries, in which a compaction table entry comprises a number of fields.
Type: Grant
Filed: October 31, 2013
Date of Patent: October 27, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Jichuan Chang, Sheng Li, Parthasarathy Ranganathan
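The "organizing by size" step might look like the sketch below, which groups compressed pages into power-of-two size classes so equal-class pages can be packed together. The size classes are an assumed detail for illustration; the patent does not specify the grouping rule.

```python
from collections import defaultdict

def compact_pages(pages):
    """pages: {page_id: compressed_size_bytes}.
    Returns size-class buckets, rounding each size up to a power of two."""
    buckets = defaultdict(list)
    for page_id, size in pages.items():
        cls = 1 << (size - 1).bit_length()  # next power-of-two class
        buckets[cls].append(page_id)
    return dict(buckets)

print(compact_pages({1: 900, 2: 1000, 3: 3000}))  # {1024: [1, 2], 4096: [3]}
```

A compaction-table entry would then record the class and slot of its page; compressing those entries is a separate step not modeled here.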
-
Patent number: 10691344
Abstract: A first memory controller receives an access command from a second memory controller, where the access command is timing non-deterministic with respect to a timing specification of a memory. The first memory controller sends at least one access command signal corresponding to the access command to the memory, wherein the at least one access command signal complies with the timing specification. The first memory controller determines a latency of access of the memory. The first memory controller sends feedback information relating to the latency to the second memory controller.
Type: Grant
Filed: May 30, 2013
Date of Patent: June 23, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Doe Hyun Yoon, Sheng Li, Jichuan Chang, Ke Chen, Parthasarathy Ranganathan, Norman Paul Jouppi
-
Patent number: 10585602
Abstract: An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.
Type: Grant
Filed: June 18, 2018
Date of Patent: March 10, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Doe Hyun Yoon, Naveen Muralimanohar, Jichuan Chang, Parthasarathy Ranganathan
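Sending both the new data and the overwritten old data lets the parity node update itself without the device computing anything. Assuming a RAID-5-style XOR parity (the abstract does not name the scheme), the update is:

```python
def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    """new_parity = old_parity XOR old_data XOR new_data, byte-wise."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# If parity initially covers old_data (parity = other ^ old_data), then after
# the update it covers new_data instead -- no device involvement needed.
other = bytes([0xAA] * 4)
old = bytes([0x0F] * 4)
new = bytes([0xF0] * 4)
parity = bytes(a ^ b for a, b in zip(other, old))
parity = update_parity(parity, old, new)
assert parity == bytes(a ^ b for a, b in zip(other, new))
```

The identity works because XOR-ing out the old data and XOR-ing in the new data leaves the contribution of every other node unchanged.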
-
Patent number: 10572378
Abstract: Dynamic memory expansion based on data compression is described. Data represented in at least one page to be written to a main memory of a computing device is received. The data is compressed in the at least one page to generate at least one compressed physical page and a metadata entry corresponding to each page of the at least one compressed physical page. The metadata entry is cached in a metadata cache including metadata entries and pointers to the uncompressed region of the at least one compressed physical page.
Type: Grant
Filed: March 20, 2014
Date of Patent: February 25, 2020
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Sheng Li, Jichuan Chang, Jishen Zhao
-
Patent number: 10474584
Abstract: A technique includes using a cache controller of an integrated circuit to control a cache including cached data content and associated cache metadata. The technique includes storing the metadata and the cached data content off of the integrated circuit and organizing the storage of the metadata relative to the cached data content such that a bus operation initiated by the cache controller to target the cached data content also targets the associated metadata.
Type: Grant
Filed: April 30, 2012
Date of Patent: November 12, 2019
Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventors: Jichuan Chang, Justin James Meza, Parthasarathy Ranganathan
-
Patent number: 10331560
Abstract: Methods and systems for providing cache coherence in multi-compute-engine systems are described herein. In one example, a concise cache coherency directory (CDir) for providing cache coherence in the multi-compute-engine systems is described. The CDir comprises a common pattern aggregated entry for one or more cache lines from amongst a plurality of cache lines of a shared memory. The one or more cache lines that correspond to the common pattern aggregated entry are associated with a common sharing pattern from amongst a predetermined number of sharing patterns that repeat most frequently in a region of the shared memory.
Type: Grant
Filed: January 31, 2014
Date of Patent: June 25, 2019
Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventors: Jichuan Chang, Sheng Li
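The aggregation idea can be sketched as follows: lines in a region whose sharer bit-vector matches one of the most frequent patterns share a single aggregated entry, while the rest keep exact entries. The data layout and the two-pattern cutoff are illustrative assumptions, not the patented encoding.

```python
from collections import Counter

def build_cdir(line_patterns, num_common=2):
    """line_patterns: {line_addr: sharer bit-vector (int)}.
    Returns (aggregated entries keyed by pattern, exact per-line entries)."""
    common = [p for p, _ in Counter(line_patterns.values()).most_common(num_common)]
    aggregated = {p: [a for a, q in line_patterns.items() if q == p] for p in common}
    exact = {a: q for a, q in line_patterns.items() if q not in common}
    return aggregated, exact

agg, exact = build_cdir({0x00: 0b0011, 0x40: 0b0011, 0x80: 0b0100, 0xC0: 0b0011})
print(agg)    # {3: [0, 64, 192], 4: [128]}
print(exact)  # {}
```

Three of the four lines here share the sharer pattern 0b0011, so a single aggregated entry covers them, which is where the directory space saving comes from.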
-
Patent number: 10270652
Abstract: A system and method for network management are described herein. The system includes a number of servers and a first network coupling the servers to each other and configured to connect the servers to one or more client computing devices. The system also includes a second network coupling the servers to each other, wherein data transferred between the servers is transferred through the second network. Network management requests for configuring the second network are communicated to the servers through the first network.
Type: Grant
Filed: April 25, 2012
Date of Patent: April 23, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Jichuan Chang, Parthasarathy Ranganathan
-
Patent number: 10152247
Abstract: A technique includes acquiring a plurality of write requests from at least one memory controller and logging information associated with the plurality of write requests in persistent storage. The technique includes applying the plurality of write requests atomically as a group to persistent storage.
Type: Grant
Filed: January 23, 2014
Date of Patent: December 11, 2018
Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventors: Sheng Li, Jishen Zhao, Jichuan Chang, Parthasarathy Ranganathan, Alistair Veitch, Kevin T. Lim, Mark Lillibridge
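The two steps in the abstract (log, then apply atomically) resemble redo logging; the sketch below is a software analogue under that assumption, with the class and method names invented for illustration.

```python
class WriteGroup:
    def __init__(self):
        self.log = []  # stands in for the persistent redo log

    def acquire(self, addr, value):
        """Log a write request before it touches memory."""
        self.log.append((addr, value))

    def apply_atomically(self, memory):
        """Apply the whole group; in hardware the commit point would be a
        single durable flag, so the group is all-or-nothing on recovery."""
        for addr, value in self.log:
            memory[addr] = value
        self.log.clear()

mem = {}
g = WriteGroup()
g.acquire(0x10, b"a")
g.acquire(0x20, b"b")
g.apply_atomically(mem)
print(mem)  # {16: b'a', 32: b'b'}
```

Because every request is logged before any is applied, a crash mid-group can be rolled forward from the log rather than leaving memory half-updated.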
-
Publication number: 20180307420
Abstract: An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.
Type: Application
Filed: June 18, 2018
Publication date: October 25, 2018
Inventors: Doe Hyun Yoon, Naveen Muralimanohar, Jichuan Chang, Parthasarathy Ranganathan
-
Patent number: 10108239
Abstract: Systems and methods for operating based on recovered waste heat are described. In one example, the method includes receiving recovered waste heat power and operating at least one system component of a recovered waste heat based computing device based on the recovered waste heat power, where the at least one system component is coupled to a non-volatile memory of the recovered waste heat based computing device. The method further includes preserving operational states of the at least one system component in the non-volatile memory based on a current power level associated with the recovered waste heat power.
Type: Grant
Filed: January 31, 2014
Date of Patent: October 23, 2018
Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventors: Chandrakant Patel, Jichuan Chang, Cullen E. Bash
-
Patent number: 10025663
Abstract: Local checkpointing using a multi-level cell is described herein. An example method includes storing a first datum in a first level of a multi-level cell. A second datum is stored in a second level of the multi-level cell, the second datum representing a checkpoint of the first datum. The first datum is copied from the first level to the second level of the multi-level cell to create the checkpoint.
Type: Grant
Filed: April 27, 2012
Date of Patent: July 17, 2018
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Doe Hyun Yoon, Robert Schreiber, Paolo Faraboschi, Jichuan Chang, Naveen Muralimanohar, Parthasarathy Ranganathan
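The scheme above keeps the working datum and its checkpoint in different levels of the same cell, so a checkpoint is just an in-cell copy. The class below is a hypothetical software analogue of that behavior, not a model of the hardware; the `rollback` method is an assumed use of the checkpoint that the abstract implies but does not spell out.

```python
class MultiLevelCell:
    def __init__(self, value=0):
        self.level1 = value  # working datum
        self.level2 = value  # checkpoint copy

    def write(self, value):
        self.level1 = value

    def checkpoint(self):
        """Copy the working level into the checkpoint level, in place."""
        self.level2 = self.level1

    def rollback(self):
        """Restore the working datum from the checkpoint."""
        self.level1 = self.level2

cell = MultiLevelCell(7)
cell.write(42)
cell.checkpoint()   # checkpoint level now holds 42
cell.write(99)
cell.rollback()     # discards 99, restores 42
print(cell.level1)  # 42
```

Because the copy never leaves the cell, checkpointing avoids the memory traffic of writing a separate checkpoint region.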
-
Patent number: 10019176
Abstract: An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.
Type: Grant
Filed: October 30, 2012
Date of Patent: July 10, 2018
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Doe Hyun Yoon, Naveen Muralimanohar, Jichuan Chang, Parthasarathy Ranganathan
-
Patent number: 9934085
Abstract: A detector detects, using an error code, an error in data stored in a memory. The detector determines whether the error is uncorrectable using the error code. In response to determining that the error is uncorrectable, an error handler associated with an application is invoked to handle the error in the data by recovering the data to an application-wide consistent state.
Type: Grant
Filed: May 29, 2013
Date of Patent: April 3, 2018
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Doe-Hyun Yoon, Jichuan Chang, Naveen Muralimanohar, Parthasarathy Ranganathan, Robert Schreiber, Norman Paul Jouppi
-
Patent number: 9846653
Abstract: Write operations on main memory comprise predicting a last write in a dirty cache line. The predicted last write indicates a predicted pattern of the dirty cache line before the dirty cache line is evicted from a cache memory. Further, the predicted pattern is compared with a pattern of original data bits stored in the main memory for identifying changes to be made in the original data bits. Based on the comparison, an optimization operation to be performed on the original data bits is determined. The optimization operation modifies the original data bits based on the predicted pattern of a last write cache line before the last write cache line is evicted from the cache memory.
Type: Grant
Filed: February 21, 2014
Date of Patent: December 19, 2017
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Jichuan Chang, Doe Hyun Yoon, Robert Schreiber
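The comparison step above can be illustrated with a flip-n-write-style optimization (an assumption; the patent does not name the operation): if the predicted pattern differs from the stored bits in more than half the positions, store the complement plus a flip flag so fewer bits actually change.

```python
def plan_write(stored: int, predicted: int, width: int = 64):
    """Decide whether to write the predicted pattern or its complement.
    Returns (bits to store, flip flag)."""
    mask = (1 << width) - 1
    diff = bin((stored ^ predicted) & mask).count("1")
    if diff > width // 2:
        # storing the complement changes width - diff bits instead of diff
        return (~predicted) & mask, True
    return predicted, False

bits, flipped = plan_write(0x0, (1 << 64) - 1, 64)
print(flipped)  # True: the complement of all-ones is all-zeros, so zero bits change
```

Fewer changed bits means fewer physical writes, which matters for write-limited memories such as PCM; the flip flag is read back to undo the complement.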
-
Patent number: 9773531
Abstract: A disclosed example method involves performing simultaneous data accesses on at least first and second independently selectable logical sub-ranks to access first data via a wide internal data bus in a memory device. The memory device includes a translation buffer chip, memory chips in independently selectable logical sub-ranks, a narrow external data bus to connect the translation buffer chip to a memory controller, and the wide internal data bus between the translation buffer chip and the memory chips. A data access is performed on only the first independently selectable logical sub-rank to access second data via the wide internal data bus. The example method also involves locating a first portion of the first data, a second portion of the first data, and the second data on the narrow external data bus during separate data transfers.
Type: Grant
Filed: June 8, 2012
Date of Patent: September 26, 2017
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Doe Hyun Yoon, Naveen Muralimanohar, Jichuan Chang, Parthasarathy Ranganathan
-
Patent number: 9767070
Abstract: One embodiment is a storage system having one or more compute blades to generate and use data and one or more memory blades to generate a computational result. The computational result is generated by a computational function that transforms the data generated and used by the one or more compute blades. One or more storage devices are in communication with and remotely located from the one or more compute blades. The one or more storage devices store and serve the data for the one or more compute blades.
Type: Grant
Filed: November 6, 2009
Date of Patent: September 19, 2017
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Jichuan Chang, Kevin T Lim, Parthasarathy Ranganathan
-
Patent number: 9710335
Abstract: According to an example, versioned memory implementation may include comparing a global memory version to a block memory version. The global memory version may correspond to a plurality of memory blocks, and the block memory version may correspond to one of the plurality of memory blocks. A subblock-bit-vector (SBV) corresponding to a plurality of subblocks of the one of the plurality of memory blocks may be evaluated. Based on the comparison and the evaluation, a determination may be made as to which level in a cell of one of the plurality of subblocks of the one of the plurality of memory blocks checkpoint data is stored.
Type: Grant
Filed: July 31, 2013
Date of Patent: July 18, 2017
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Doe Hyun Yoon, Terence P. Kelly, Jichuan Chang, Naveen Muralimanohar, Robert Schreiber, Parthasarathy Ranganathan
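One possible reading of that lookup is sketched below: a block whose version lags the global version has not been written since the last checkpoint, so its working level still holds the checkpoint; otherwise a per-subblock bit vector (SBV) records which subblocks were saved to the checkpoint level. The level names and the exact decision rule are illustrative assumptions.

```python
def checkpoint_level(global_version, block_version, sbv, subblock):
    """Return which cell level holds the checkpoint for a given subblock."""
    if block_version < global_version:
        # block untouched since the last checkpoint epoch:
        # the working level still IS the checkpoint
        return "working-level"
    # block written in the current epoch: the SBV bit says whether this
    # particular subblock was copied to the checkpoint level before the write
    return "checkpoint-level" if sbv[subblock] else "working-level"

print(checkpoint_level(3, 2, [0, 0, 0, 0], 1))  # working-level
print(checkpoint_level(3, 3, [1, 0, 1, 0], 2))  # checkpoint-level
```

Keeping versions per block and bits per subblock avoids copying whole blocks at every checkpoint: only subblocks actually written need an in-cell copy.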
-
Patent number: 9601189
Abstract: A memory device includes a group or block of k-level memory cells, where k>2, and where each of the k-level memory cells has k programmable states represented by respective resistance levels.
Type: Grant
Filed: April 24, 2013
Date of Patent: March 21, 2017
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Doe Hyun Yoon, Jichuan Chang, Naveen Muralimanohar, Robert Schreiber, Norman P. Jouppi