Patents by Inventor Jung Ko

Jung Ko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11481612
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. Each of a set of the computation nodes includes a dot product of input values and weight values. The NNIC includes dot product cores, each of which includes (i) partial dot product computation circuits to compute dot products between input values and weight values and (ii) memories to store the weight values and input values for a layer of the neural network. The input values for a particular layer of the neural network are stored in the memories of multiple cores. The starting memory location in a first core for the input values of the layer stored in the first core is the same as the starting memory location for the input values in each of the other cores that store the input values for the layer.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: October 25, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
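The aligned-layout scheme in this abstract (every core's slice of a layer begins at the same memory address) can be sketched in software. This is a toy model, not the patented circuit; the function and parameter names are illustrative assumptions.

```python
def assign_layer_offsets(layers, num_cores):
    """Assign one shared starting offset per layer across all cores.

    `layers` maps layer name -> list of per-core slice sizes (in words).
    Because each layer must begin at the same address in every core, the
    shared offset is the maximum of the cores' next-free pointers, even
    if that leaves small gaps in cores holding smaller slices.
    Returns (offsets, per-core next-free pointers).
    """
    next_free = [0] * num_cores
    offsets = {}
    for name, slice_sizes in layers.items():
        base = max(next_free)          # common start address for this layer
        offsets[name] = base
        for core, size in enumerate(slice_sizes):
            next_free[core] = base + size
    return offsets, next_free
```

With a shared offset, a single base address (plus a per-core enable) suffices to address a layer's inputs in every core.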
  • Patent number: 11468145
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. Each of a set of the computation nodes includes a dot product of input values and weight values. The NNIC includes a set of dot product cores, each of which includes (i) partial dot product computation circuits to compute dot products between input values and weight values and (ii) memories to store the sets of weight values and sets of input values for a layer of the neural network. The input values for a particular layer are arranged in a plurality of two-dimensional grids. A particular core stores all of the input values of a subset of the two-dimensional grids. Input values having the same set of coordinates in each respective grid of the subset are stored sequentially within the memories of the particular core.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: October 11, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
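The storage order this abstract describes (values sharing the same coordinates across a core's subset of 2D grids stored sequentially) amounts to a channel-interleaved flattening. A minimal sketch, with illustrative names:

```python
def interleave_grids(grids):
    """Flatten a core's subset of 2D grids into its memory order.

    `grids` is a list of equal-shaped 2D lists (one per grid/channel held
    by this core). For each (row, col) coordinate, the values from every
    grid are emitted consecutively, matching the sequential-storage rule
    in the abstract.
    """
    rows, cols = len(grids[0]), len(grids[0][0])
    memory = []
    for r in range(rows):
        for c in range(cols):
            for g in grids:            # same coordinate, each grid in turn
                memory.append(g[r][c])
    return memory
```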
  • Publication number: 20220291739
    Abstract: For a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers for which data is stored in a plurality of memory banks, some embodiments provide a method for dynamically putting memory banks into a sleep mode of operation to conserve power. The method tracks the accesses to individual memory banks and, if a certain number of clock cycles elapse with no access to a particular memory bank, sends a signal to the memory bank indicating that it should operate in a sleep mode. Circuit components involved in dynamic memory sleep, in some embodiments, include a core RAM pipeline, a core RAM sleep controller, a set of core RAM bank select decoders, and a set of core RAM memory bank wrappers.
    Type: Application
    Filed: May 27, 2022
    Publication date: September 15, 2022
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
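The idle-counting logic described here (send a sleep signal once a set number of cycles pass with no access) can be modeled per bank. A software sketch, assuming a single bank and an immediate wake on access; the real sleep controller's interface is not specified here:

```python
class BankSleepController:
    """Toy model of per-bank dynamic memory sleep: after `idle_limit`
    clock cycles with no access, signal the bank to enter sleep mode."""

    def __init__(self, idle_limit):
        self.idle_limit = idle_limit
        self.idle_cycles = 0
        self.asleep = False

    def tick(self, accessed):
        """Advance one clock cycle; `accessed` is True if the bank was
        read or written this cycle. Returns True while the bank sleeps."""
        if accessed:
            self.idle_cycles = 0
            self.asleep = False        # an access wakes the bank
        else:
            self.idle_cycles += 1
            if self.idle_cycles >= self.idle_limit:
                self.asleep = True
        return self.asleep
```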
  • Publication number: 20220251725
    Abstract: A method of growing an on-axis silicon carbide single crystal includes the steps of: (A) sieving a silicon carbide source material by size, retaining only the portion larger than 1 cm as the sieved silicon carbide source material; (B) loading the sieved silicon carbide source material into the bottom of a graphite crucible; (C) positioning an on-axis silicon carbide on top of the graphite crucible to serve as a seed crystal; (D) placing the graphite crucible, with the sieved silicon carbide source material and the seed crystal therein, in an induction furnace for the physical vapor transport process; (E) starting a silicon carbide crystal growth process; and (F) obtaining a silicon carbide single crystal.
    Type: Application
    Filed: February 9, 2021
    Publication date: August 11, 2022
    Inventors: CHIH-WEI KUO, CHENG-JUNG KO, HSUEH-I CHEN, JUN-BIN HUANG, YING-TSUNG CHAO, CHIA-HUNG TAI
  • Patent number: 11403530
    Abstract: Some embodiments provide a method for compiling a neural network program for a neural network inference circuit. The method receives a neural network definition including multiple weight values arranged as multiple filters. For each filter, each of the weight values is one of a set of weight values associated with the filter. At least one of the filters has more than three different associated weight values. The method generates program instructions for instructing the neural network inference circuit to execute the neural network. The neural network inference circuit includes circuitry for executing neural networks with a maximum of three different weight values per filter.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: August 2, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
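The compiler transformation this abstract implies (a filter with more than three distinct weight values executed on hardware limited to three values per filter) can be illustrated with a decomposition into ternary component filters. The signed binary-digit scheme below is an illustrative stand-in; the patent does not fix the decomposition here.

```python
def split_into_ternary_filters(weights, num_bits=4):
    """Decompose a filter of integer weights into component filters that
    each use at most three distinct values {-s, 0, +s}, whose elementwise
    sum reproduces the original filter."""
    components = []
    for k in range(num_bits):
        scale = 1 << k
        comp = [((abs(w) >> k) & 1) * scale * (1 if w >= 0 else -1)
                for w in weights]      # k-th signed binary digit, scaled
        components.append(comp)
    return components
```

Each component filter is then executable on circuitry that supports only three weight values, and the partial results are summed.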
  • Publication number: 20220234511
    Abstract: The present invention relates to a carrier device installed on a vehicle roof and, more specifically, to a vehicle rear roof carrier device comprising: roof supports fixed lengthwise to both sides of a vehicle roof; hollow roof fixing rails connected and fixed to side portions of the roof supports; operation rails disposed on the roof fixing rails and connected to a trunk; a transfer cart that slides along the roof fixing rails; and a carrier box that can be attached to and detached from the transfer cart and stores an object therein. The operation rails are connected to the rear portion of the trunk, and the transfer cart is lowered along the roof fixing rails and the operation rails so that an object can be placed in the carrier box; items can thus be conveniently loaded into and unloaded from the carrier box on the vehicle roof through the rear side of the vehicle.
    Type: Application
    Filed: May 20, 2019
    Publication date: July 28, 2022
    Inventor: Soon Jung KO
  • Patent number: 11361213
    Abstract: Some embodiments provide a neural network inference circuit for implementing a neural network that includes multiple computation nodes at multiple layers. Each of a set of the computation nodes includes (i) a linear function that includes a dot product of input values and weight values and (ii) a non-linear activation function. The neural network inference circuit includes (i) a set of dot product circuits to compute dot products for the plurality of computation nodes and (ii) at least one computation node post-processing circuit to (i) receive a dot product for a computation node computed by the set of dot product circuits, (ii) compute a result of the linear function for the computation node based on the dot product, and (iii) use a lookup table to compute the non-linear activation function of the computation node from the result of the linear function to determine an output of the computation node.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: June 14, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
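The lookup-table activation scheme in this abstract is easy to model: compute the linear function, then index a precomputed table instead of evaluating the non-linearity. A sketch with illustrative bit widths and a ReLU stand-in; the circuit's actual table format is not specified here.

```python
def build_activation_lut(fn, in_bits=5):
    """Precompute the activation over every signed `in_bits`-bit value
    that the linear function can produce (a software stand-in for the
    post-processing circuit's lookup table)."""
    lo, hi = -(1 << (in_bits - 1)), (1 << (in_bits - 1))
    return {x: fn(x) for x in range(lo, hi)}

def node_output(dot_product, bias, lut):
    """Apply the linear function (dot product plus bias), then look up
    the activation rather than computing it."""
    linear = dot_product + bias
    return lut[linear]
```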
  • Patent number: 11347297
    Abstract: For a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers for which data is stored in a plurality of memory banks, some embodiments provide a method for dynamically putting memory banks into a sleep mode of operation to conserve power. The method tracks the accesses to individual memory banks and, if a certain number of clock cycles elapse with no access to a particular memory bank, sends a signal to the memory bank indicating that it should operate in a sleep mode. Circuit components involved in dynamic memory sleep, in some embodiments, include a core RAM pipeline, a core RAM sleep controller, a set of core RAM bank select decoders, and a set of core RAM memory bank wrappers.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: May 31, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
  • Patent number: 11341397
    Abstract: Some embodiments provide a method for a neural network inference circuit (NNIC) that implements a neural network including multiple computation nodes at multiple layers. Each computation node includes a dot product of input values and weight values and a set of post-processing operations. The method retrieves a set of weight values and a set of input values for a computation node from a set of memories of the NNIC. The method computes a dot product of the retrieved sets of weight values and input values. The method performs the post-processing operations for the computation node on a result of the dot product computation to compute an output value for the computation node. The method stores the output value in the set of memories. No intermediate results of the dot product or the set of post-processing operations are stored in any RAM of the NNIC during the computation.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: May 24, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
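The dataflow constraint here (no intermediate dot product or post-processing results ever written to RAM) corresponds to a fused pipeline where everything stays in registers until the final output. A toy sketch; the particular post-processing ops (scale, bias, ReLU) are illustrative assumptions.

```python
def execute_node(weights, inputs, bias, scale, memory, out_addr):
    """Model of the fused computation: the dot product and all
    post-processing happen in local variables ("registers"); the single
    write to `memory` is the node's final output value."""
    acc = 0
    for w, x in zip(weights, inputs):     # dot product, kept in a register
        acc += w * x
    result = max(0, acc * scale + bias)   # post-processing, still in registers
    memory[out_addr] = result             # the only memory write
    return result
```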
  • Patent number: 11295200
    Abstract: Some embodiments provide a method for a neural network inference circuit that executes a neural network including multiple nodes. The method loads a first set of weight values into a first set of weight value buffers, a second set of weight values into a second set of weight value buffers, a first set of input values into a first set of input value buffers, and a second set of input values into a second set of input value buffers. In a first clock cycle, the method computes a first dot product of the first set of weight values and the first set of input values. In a second clock cycle, the method computes a second dot product of the second set of weight values and the second set of input values. The method adds the first and second dot products to compute a dot product for the node.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: April 5, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
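The two-cycle scheme this abstract describes (a dot product wider than one set of buffers, computed in halves and then summed) can be sketched directly. Buffer width and names are illustrative.

```python
def wide_dot_product(weights, inputs, buffer_width):
    """Compute a dot product wider than the hardware buffers by loading
    two halves into two buffer sets and accumulating over two "cycles"."""
    w1, w2 = weights[:buffer_width], weights[buffer_width:]
    x1, x2 = inputs[:buffer_width], inputs[buffer_width:]
    cycle1 = sum(w * x for w, x in zip(w1, x1))   # first clock cycle
    cycle2 = sum(w * x for w, x in zip(w2, x2))   # second clock cycle
    return cycle1 + cycle2                        # combined node dot product
```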
  • Publication number: 20220103509
    Abstract: A method of providing information on a social networking service (SNS) activity to a chatroom, performed by a server, includes: providing an SNS for each of a plurality of anonymous profiles created to be interlinked with an account for an instant messaging service (IMS); receiving information on an SNS activity performed through a first anonymous profile selected corresponding to a chatroom of the IMS, among the plurality of anonymous profiles of a user participating in the chatroom; providing the information on the SNS activity performed through the first anonymous profile to the chatroom; receiving a request to change the profile of the user selected corresponding to the chatroom from the first anonymous profile to a second anonymous profile; receiving information on an SNS activity performed through the second anonymous profile; and providing the information on the SNS activity performed through the second anonymous profile to the chatroom.
    Type: Application
    Filed: December 9, 2021
    Publication date: March 31, 2022
    Inventors: Ji Sun LEE, Hyun Young PARK, Seong Mi LIM, Young Min PARK, Doo Won LEE, Eun Jung KO, Jae Lin LEE, Kwang Hui LIM, Ki Yong SHIM, Sun Ho CHOI, Kwang Hoon CHOI, Hwa Young LEE, Jae Gil LEE, Kyong Rim KIM, Soo Min CHO
  • Publication number: 20220089952
    Abstract: Provided are a silicon nitride film etching composition, a method of etching a silicon nitride film using the same, and a method of manufacturing a semiconductor device. Specifically, a silicon nitride film can be stably etched with high selectivity relative to a silicon oxide film. When the composition is applied to a high-temperature etching process in semiconductor manufacturing, no precipitate forms and no anomalous growth (an unintended increase in the thickness of the silicon oxide film) occurs, thereby minimizing defects and loss of reliability.
    Type: Application
    Filed: August 30, 2021
    Publication date: March 24, 2022
    Inventors: Dong Hyun KIM, Hyeon Woo PARK, Sung Jun HONG, Myung Ho LEE, Myung Geun SONG, Hoon Sik KIM, Jae Jung KO, Myong Euy LEE, Jun Hyeok HWANG
  • Publication number: 20220089951
    Abstract: Provided are a silicon nitride film etching composition, a method of etching a silicon nitride film using the same, and a method of manufacturing a semiconductor device. Specifically, a silicon nitride film can be etched with high selectivity relative to a silicon oxide film. When the composition is applied to a high-temperature etching process in semiconductor manufacturing, no precipitate forms and no anomalous growth (an unintended increase in the thickness of the silicon oxide film) occurs, thereby minimizing defects and loss of reliability.
    Type: Application
    Filed: August 30, 2021
    Publication date: March 24, 2022
    Inventors: Dong Hyun KIM, Hyeon Woo PARK, Sung Jun HONG, Myung Ho LEE, Myung Geun SONG, Hoon Sik KIM, Jae Jung KO, Myong Euy LEE, Jun Hyeok HWANG
  • Publication number: 20220089953
    Abstract: Provided are a silicon nitride film etching composition, a method of etching a silicon nitride film using the same, and a method of manufacturing a semiconductor device. Specifically, a silicon nitride film can be etched with high selectivity relative to a silicon oxide film. When the composition is applied to a high-temperature etching process in semiconductor manufacturing, no precipitate forms and no anomalous growth (an unintended increase in the thickness of the silicon oxide film) occurs, thereby minimizing defects and loss of reliability.
    Type: Application
    Filed: August 30, 2021
    Publication date: March 24, 2022
    Inventors: Dong Hyun KIM, Hyeon Woo PARK, Sung Jun HONG, Myung Ho LEE, Myung Geun SONG, Hoon Sik KIM, Jae Jung KO, Myong Euy LEE
  • Patent number: 11250326
    Abstract: Some embodiments provide a method for compiling a neural network (NN) program for an NN inference circuit (NNIC) that includes multiple partial dot product computation circuits (PDPCCs) for computing dot products between weight values and input values. The method receives an NN definition with multiple nodes. The method assigns a group of filters to specific PDPCCs. Each filter is assigned to a different set of the PDPCCs. When a filter does not have enough weight values equal to zero for a first set of PDPCCs to which the filter is assigned to compute dot products for nodes that use the filter, the method divides the filter between the first set and a second set of PDPCCs. The method generates program instructions for instructing the NNIC to execute the NN by using the first and second PDPCCs to compute dot products for the nodes that use the filter.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: February 15, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
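The scheduling decision in this abstract (split a filter across a second set of partial dot product computation circuits when it lacks enough zero weights for the first set) can be sketched as a capacity check. The per-set capacity and the even split are assumptions for illustration.

```python
def assign_filter(weights, capacity):
    """Decide whether a filter fits in one set of partial dot product
    computation circuits or must be divided between two sets, based on
    how many nonzero weights a set can handle (`capacity`).
    Returns lists of nonzero-weight indices, one list per set used."""
    nonzero = [i for i, w in enumerate(weights) if w != 0]
    if len(nonzero) <= capacity:
        return [nonzero]                   # one set suffices
    # Not enough zeros: divide the filter between a first and second set.
    half = (len(nonzero) + 1) // 2
    return [nonzero[:half], nonzero[half:]]
```

The generated program would then instruct both sets to compute partial dot products for nodes that use the split filter, with their results summed downstream.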
  • Patent number: 11222257
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network. The NNIC includes a first circuit that outputs dot products for computation nodes of a first set of neural network layers, that include dot product computations of sets of weight values with sets of input values. The NNIC also includes a second circuit that outputs values for computation nodes of a second set of neural network layers, that apply a set of calculations that do not include dot products to sets of input values. The NNIC also includes a selection circuit that selects a dot product output from the first circuit when a current layer being processed by the NNIC belongs to the first set of layers, and selects a non-dot product output from the second circuit when the current layer belongs to the second set of layers.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: January 11, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
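The selection circuit here routes each layer's output from either the dot-product path or the non-dot-product path. A minimal sketch, using max-pooling as an illustrative stand-in for the second circuit's calculations:

```python
def layer_output(layer_kind, weights, inputs):
    """Model of the selection circuit: both paths produce a candidate
    output, and the selector picks the one matching the current layer."""
    dot_out = sum(w * x for w, x in zip(weights, inputs))  # first circuit
    alu_out = max(inputs)                                  # second circuit
    # Selection: dot product output for dot-product layers, else the other.
    return dot_out if layer_kind == "dot_product" else alu_out
```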
  • Patent number: 11218436
    Abstract: A method of displaying an interface for providing a social networking service (SNS) through an anonymous profile, performed by a user terminal, includes displaying a first list of at least one anonymous chatroom created by a user account for an instant messaging service (IMS) using a first region on a first page in an interface for the IMS, displaying a second list of at least one anonymous profile created to be interlinked with the user account using a first region on a second page in the interface for the IMS, and displaying, in response to an input of selecting any one anonymous profile in the second list, an interface for providing the SNS through the selected anonymous profile.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: January 4, 2022
    Assignee: KAKAO CORP.
    Inventors: Ji Sun Lee, Hyun Young Park, Seong Mi Lim, Young Min Park, Doo Won Lee, Eun Jung Ko, Jae Lin Lee, Kwang Hui Lim, Ki Yong Shim, Sun Ho Choi, Kwang Hoon Choi, Hwa Young Lee, Jae Gil Lee, Kyong Rim Kim, Soo Min Cho
  • Patent number: 11210586
    Abstract: Some embodiments provide a method for a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers. Each computation node of a set of the computation nodes includes a dot product of input values and weight values. The method reads a set of encoded weight data for a set of weight values from a memory of the neural network inference circuit. The method decodes the encoded weight data to generate decoded weight data for the set of weight values. The method stores the decoded weight data in a buffer. The method uses the decoded weight data to execute a set of computation nodes. Each computation node of the set of computation nodes includes a dot product between the set of weight values and a different set of input values.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: December 28, 2021
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
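The decode-and-buffer flow in this abstract (decode encoded weight data once, then reuse the buffered weights across many input sets) can be sketched with a codebook encoding. The codebook scheme is one plausible encoding, assumed for illustration; the patent does not fix it here.

```python
def decode_weights(encoded, codebook):
    """Decode per-weight codes into actual weight values."""
    return [codebook[c] for c in encoded]

def run_filter(encoded, codebook, input_sets):
    """Decode once into a buffer, then compute one dot product per set
    of input values using the same buffered weights."""
    buffer = decode_weights(encoded, codebook)        # decode + store in buffer
    return [sum(w * x for w, x in zip(buffer, xs))    # one node per input set
            for xs in input_sets]
```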
  • Patent number: 11205115
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for implementing a neural network that includes multiple computation nodes at multiple layers. Each of a set of the computation nodes includes a dot product of input values and weight values. The NNIC includes multiple dot product core circuits for computing multiple partial dot products and a set of channel circuits connecting the core circuits. The set of channel circuits includes (i) a dot product bus for aggregating the partial dot products to compute dot products for computation nodes of the neural network, (ii) one or more post-processing circuits for performing additional computation operations on the dot products to compute outputs for the computation nodes, and (iii) an output bus for providing the computed outputs of the computation nodes to the core circuits for the core circuits to use as inputs for subsequent computation nodes.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: December 21, 2021
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 11170289
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes, that include dot products, at multiple layers. The NNIC includes multiple dot product core circuits and a bus, including one or more aggregation circuits, that connects the core circuits. Each core circuit includes (i) a set of memories for storing multiple input values and multiple weight values and (ii) a set of adder tree circuits for computing dot products of sets of input values and sets of weight values stored in the set of memories. For a particular computation node, at least two of the core circuits compute partial dot products using input values and weight values stored in the memories of the respective core circuits and at least one of the aggregation circuits of the bus combines the partial dot products to compute the dot product for the computation node.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: November 9, 2021
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
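The multi-core computation this abstract describes (each core's adder tree reduces its local values to a partial dot product, and a bus aggregation circuit combines them) reduces to a two-level sum. A pure-software sketch with illustrative names:

```python
def node_dot_product(core_slices):
    """Model of the split dot product: `core_slices` holds, per core, the
    (input, weight) pairs stored in that core's memories. Each core's
    adder tree produces a partial dot product; the aggregation circuit
    on the bus sums the partials into the node's full dot product."""
    partials = [sum(x * w for x, w in pairs)   # per-core adder tree
                for pairs in core_slices]
    return sum(partials)                       # bus aggregation circuit
```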