Patents by Inventor Kwangwoong Park

Kwangwoong Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11562213
Abstract: Logic may reduce the size of runtime memory for deep neural network inference computations. Logic may determine, for two or more stages of a cascaded neural network, a count of shared memory block allocations that concurrently exist during execution of those stages, and may compare the per-stage counts to determine a maximum count. Logic may reduce inference computation time for deep neural network inference computations. Logic may determine a size for each shared memory block allocation in that maximum count to accommodate the data stored in shared memory during execution of the stages. Logic may determine a batch size per stage of the cascaded neural network based on a lack of interdependencies between input data. (A minimal illustrative sketch of the shared-block sizing scheme appears after this listing.)
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: January 24, 2023
    Assignee: INTEL CORPORATION
    Inventors: Byoungwon Choe, Kwangwoong Park, Seokyong Byun
  • Publication number: 20190042925
Abstract: Logic may reduce the size of runtime memory for deep neural network inference computations. Logic may determine, for two or more stages of a cascaded neural network, a count of shared memory block allocations that concurrently exist during execution of those stages, and may compare the per-stage counts to determine a maximum count. Logic may reduce inference computation time for deep neural network inference computations. Logic may determine a size for each shared memory block allocation in that maximum count to accommodate the data stored in shared memory during execution of the stages. Logic may determine a batch size per stage of the cascaded neural network based on a lack of interdependencies between input data.
    Type: Application
    Filed: April 17, 2018
    Publication date: February 7, 2019
    Applicant: INTEL CORPORATION
    Inventors: Byoungwon Choe, Kwangwoong Park, Seokyong Byun
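The abstracts above describe sizing a pool of shared memory blocks so that the same blocks can be reused across the stages of a cascaded network: find the maximum number of blocks that are ever live at once, then size each block to the largest request that will occupy it. The sketch below is a minimal Python illustration of that idea only; the StagePlan structure, the per-stage block sizes, and the largest-first slot-packing heuristic are assumptions for illustration and are not taken from the patent.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class StagePlan:
    """Hypothetical per-stage description (not from the patent)."""
    name: str
    # Sizes, in bytes, of shared blocks that must be live at the same
    # time while this stage executes.
    live_block_sizes: List[int]


def plan_shared_pool(stages: List[StagePlan]) -> List[int]:
    """Return one size per shared block slot.

    The slot count is the maximum number of concurrently live blocks
    over all stages; each slot is sized to the largest request that is
    ever mapped to it, so the pool can be reused by every stage.
    """
    max_count = max(len(s.live_block_sizes) for s in stages)
    slot_sizes = [0] * max_count
    for stage in stages:
        # Map the largest requests of each stage to the same slots
        # (largest-first), which keeps the total pool size small.
        for slot, size in enumerate(sorted(stage.live_block_sizes, reverse=True)):
            slot_sizes[slot] = max(slot_sizes[slot], size)
    return slot_sizes


if __name__ == "__main__":
    stages = [
        StagePlan("detector", [4_000_000, 1_000_000]),
        StagePlan("classifier", [2_500_000, 2_500_000, 500_000]),
    ]
    slots = plan_shared_pool(stages)
    print(f"{len(slots)} shared blocks, sizes {slots}, "
          f"total pool {sum(slots)} bytes")
```

In this toy run the pool has three slots sized to the worst case across both stages, so both stages execute out of the same shared memory rather than each allocating its own buffers.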