Patents by Inventor Junwhan AHN

Junwhan AHN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11442729
    Abstract: A method and system for processing a bit-packed array using one or more processors, including determining a data element size of the bit-packed array, determining a lane configuration of a single-instruction multiple-data (SIMD) unit for processing the bit-packed array based at least in part on the determined data element size, the lane configuration being determined from among a plurality of candidate lane configurations, each candidate lane configuration having a different number of vector register lanes and a corresponding bit capacity per vector register lane, configuring the SIMD unit according to the determined lane configuration, and loading one or more data elements into each vector register lane of the SIMD unit. SIMD instructions may be executed on the loaded one or more data elements of each vector register lane in parallel, and a result of the SIMD instruction may be stored in memory. (A minimal C++ sketch of this lane-selection flow appears after this listing.)
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: September 13, 2022
    Assignee: Google LLC
    Inventors: Junwhan Ahn, Jichuan Chang, Andrew McCormick, Yuanwei Fang, Yixin Luo
  • Publication number: 20220129269
    Abstract: A method and system for processing a bit-packed array using one or more processors, including determining a data element size of the bit-packed array, determining a lane configuration of a single-instruction multiple-data (SIMD) unit for processing the bit-packed array based at least in part on the determined data element size, the lane configuration being determined from among a plurality of candidate lane configurations, each candidate lane configuration having a different number of vector register lanes and a corresponding bit capacity per vector register lane, configuring the SIMD unit according to the determined lane configuration, and loading one or more data elements into each vector register lane of the SIMD unit. SIMD instructions may be executed on the loaded one or more data elements of each vector register lane in parallel, and a result of the SIMD instruction may be stored in memory.
    Type: Application
    Filed: October 26, 2020
    Publication date: April 28, 2022
    Applicant: Google LLC
    Inventors: Junwhan Ahn, Jichuan Chang, Andrew McCormick, Yuanwei Fang, Yixin Luo
  • Patent number: 10990589
    Abstract: A computing apparatus may process an operation. The computing apparatus may output information regarding an aggregation operation and an operand corresponding to a variable stored in a memory, store information regarding an operator and the aggregation operands regarding the aggregation operation, perform a first partial operation with respect to the aggregation operands and store a result value of the first partial operation, and process the aggregation operation based on storing the variable, performing a second partial operation with respect to the result value of the first partial operation stored in the cache and the operand corresponding to the variable, and storing a result value of the second partial operation. (A minimal sketch of the two partial operations appears after this listing.)
    Type: Grant
    Filed: August 9, 2017
    Date of Patent: April 27, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Junwhan Ahn, Sungjoo Yoo, Kiyoung Choi
  • Patent number: 10255182
    Abstract: A method of managing a cache includes storing first data of an upper level cache in a lower level cache, predicting a reuse distance level of second data having a same signature as the first data based on access information about the first data, and storing the second data in one of the lower level cache and a main memory based on the predicted reuse distance level of the second data. (A minimal sketch of this signature-based placement decision appears after this listing.)
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: April 9, 2019
    Assignees: SAMSUNG ELECTRONICS CO., LTD., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Namhyung Kim, Junwhan Ahn, Kiyoung Choi, Woong Seo
  • Publication number: 20180046666
    Abstract: A computing apparatus may process an operation. The computing apparatus may output information regarding an aggregation operation and an operand corresponding to a variable stored in a memory, store information regarding an operator and the aggregation operands regarding the aggregation operation, perform a first partial operation with respect to the aggregation operands and store a result value of the first partial operation, and process the aggregation operation based on storing the variable, performing a second partial operation with respect to the result value of the first partial operation stored in the cache and the operand corresponding to the variable, and storing a result value of the second partial operation.
    Type: Application
    Filed: August 9, 2017
    Publication date: February 15, 2018
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Junwhan AHN, Sungjoo YOO, Kiyoung CHOI
  • Publication number: 20160232093
    Abstract: A method of managing a cache includes storing first data of an upper level cache in a lower level cache, predicting a reuse distance level of second data having a same signature as the first data based on access information about the first data, and storing the second data in one of the lower level cache and a main memory based on the predicted reuse distance level of the second data.
    Type: Application
    Filed: February 9, 2016
    Publication date: August 11, 2016
    Applicants: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Namhyung KIM, Junwhan AHN, Kiyoung CHOI, Woong SEO
  • Publication number: 20140149669
    Abstract: In one example embodiment of the inventive concepts, a cache memory system includes a main cache memory including a nonvolatile random access memory, the main cache memory configured to exchange data with an external device and store the exchange data, each exchanged data includes less significant bit (LSB) data and more significant bit (MSB) data. The cache memory system further includes a sub-cache memory including a random access memory, the sub-cache memory configured to store LSB data of at least a portion of data stored at the main cache memory, wherein the main cache memory and the sub-cache memory are formed of a single-level cache memory. (A minimal sketch of this main-cache/sub-cache split appears after this listing.)
    Type: Application
    Filed: November 21, 2013
    Publication date: May 29, 2014
    Inventors: Sungyeum KIM, Hyeokman KWON, Youngjun KWON, Kiyoung CHOI, Junwhan AHN
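
The sketches below illustrate, in plain C++, the techniques summarized in the abstracts above. They are simplified readings of those abstracts, not the patented implementations; every type, constant, and function name in them is invented for illustration.

For patent 11442729, the central step is matching a SIMD lane configuration to the element size of a bit-packed array. The sketch assumes a 128-bit vector register, a fixed candidate table, and a selection rule that maximizes the number of lanes that can each hold at least one element; none of these specifics come from the patent text.

```cpp
#include <cstdio>
#include <vector>

struct LaneConfig {
    int lanes;          // number of vector register lanes
    int bits_per_lane;  // bit capacity of each lane
};

// Assumed candidate splits of a 128-bit vector register.
static const std::vector<LaneConfig> kCandidates = {
    {16, 8}, {8, 16}, {4, 32}, {2, 64}
};

// Assumed selection rule: prefer the candidate with the most lanes whose
// per-lane capacity still fits one element of the given size.
LaneConfig ChooseLaneConfig(int element_bits) {
    LaneConfig best = {0, 0};
    for (const LaneConfig& c : kCandidates) {
        if (c.bits_per_lane >= element_bits && c.lanes > best.lanes) best = c;
    }
    return best;
}

int main() {
    for (int element_bits : {3, 7, 12, 24}) {
        LaneConfig c = ChooseLaneConfig(element_bits);
        int per_lane = c.bits_per_lane / element_bits;  // one or more elements per lane
        std::printf("%2d-bit elements -> %2d lanes x %2d bits, %d element(s) per lane\n",
                    element_bits, c.lanes, c.bits_per_lane, per_lane);
    }
    return 0;
}
```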
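For patent 10990589, the abstract describes splitting an aggregation into a first partial operation over the already-available operands and a second partial operation that folds in the variable read from memory. The buffer class, operator representation, and identity value below are assumptions made for this sketch.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

struct AggregationBuffer {
    std::function<long(long, long)> op;  // stored aggregation operator
    long partial = 0;
    bool has_partial = false;

    // First partial operation: reduce the operands that are already known.
    void ReduceOperands(const std::vector<long>& operands, long identity) {
        partial = identity;
        for (long v : operands) partial = op(partial, v);
        has_partial = true;
    }

    // Second partial operation: combine the cached partial result with the
    // value of the variable once it has been read from memory.
    long Finish(long variable_value) const {
        return op(variable_value, partial);
    }
};

int main() {
    AggregationBuffer buf;
    buf.op = [](long x, long y) { return x + y; };
    buf.ReduceOperands({3, 5, 7}, /*identity=*/0);  // first partial operation
    long variable_in_memory = 100;                  // variable stored in memory
    std::printf("aggregated value = %ld\n", buf.Finish(variable_in_memory));  // prints 115
    return 0;
}
```

As the abstract frames it, the value of the split is that the operand-only reduction can be computed and cached before the variable's value is available, leaving only one final combining step once it is.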
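For patent 10255182, the abstract describes predicting a reuse distance level for incoming data from the access history of earlier data with the same signature, then using that prediction to either install the data in the lower-level cache or leave it in main memory. The two-level near/distant classification, the raw integer signature, and the default-to-cache policy below are assumptions of this sketch.

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

enum class ReuseLevel { kNear, kDistant };

class ReusePredictor {
public:
    // Training: record how quickly data with this signature was reused after
    // it was placed in the lower-level cache.
    void Train(uint32_t signature, bool reused_soon) {
        table_[signature] = reused_soon ? ReuseLevel::kNear : ReuseLevel::kDistant;
    }

    // Prediction: decide where new data with this signature should be stored.
    ReuseLevel Predict(uint32_t signature) const {
        auto it = table_.find(signature);
        return it == table_.end() ? ReuseLevel::kNear : it->second;  // default: cache it
    }

private:
    std::unordered_map<uint32_t, ReuseLevel> table_;
};

int main() {
    ReusePredictor predictor;
    predictor.Train(/*signature=*/0xBEEF, /*reused_soon=*/false);  // first data: distant reuse

    uint32_t sig = 0xBEEF;  // second data with the same signature
    if (predictor.Predict(sig) == ReuseLevel::kNear)
        std::printf("install block in the lower-level cache\n");
    else
        std::printf("bypass the lower-level cache; keep block in main memory\n");
    return 0;
}
```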
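For publication 20140149669, the abstract describes a single-level cache built from a nonvolatile main cache holding full lines (LSB and MSB halves) and a sub-cache holding only the LSB half of some of those lines. The line layout, lookup order, and map-based storage below are assumptions made for this sketch.

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct Line {
    uint32_t lsb;  // less significant half of the cached data
    uint32_t msb;  // more significant half of the cached data
};

class SplitCache {
public:
    void Install(uint64_t addr, Line line) {
        main_cache_[addr] = line;       // full line in the nonvolatile main cache
        sub_cache_[addr] = line.lsb;    // mirror only the LSB half in the sub-cache
    }

    // Read the LSB half: served from the sub-cache when it holds the line,
    // falling back to the main cache otherwise.
    uint32_t ReadLsb(uint64_t addr) {
        auto it = sub_cache_.find(addr);
        if (it != sub_cache_.end()) return it->second;  // fast sub-cache path
        return main_cache_.at(addr).lsb;                // slower main-cache path
    }

private:
    std::unordered_map<uint64_t, Line> main_cache_;
    std::unordered_map<uint64_t, uint32_t> sub_cache_;
};

int main() {
    SplitCache cache;
    cache.Install(0x1000, {0x1234, 0xABCD});
    std::printf("LSB at 0x1000 = 0x%X\n", cache.ReadLsb(0x1000));
    return 0;
}
```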