Patents by Inventor Jung Ko

Jung Ko has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Because the listing matches on the inventor name, it also contains entries from other inventors whose names include "Jung Ko" (e.g., Eun Jung Ko, Cheng-Jung Ko, Hyun Jung Ko, Jae Jung Ko).

  • Patent number: 12265905
    Abstract: Some embodiments provide a method for a circuit that executes a neural network including multiple nodes. The method loads a set of weight values for a node into a set of weight value buffers, a first set of bits of each input value of a set of input values for the node into a first set of input value buffers, and a second set of bits of each of the input values into a second set of input value buffers. The method computes a first dot product of the weight values and the first set of bits of each input value and a second dot product of the weight values and the second set of bits of each input value. The method shifts the second dot product by a particular number of bits and adds the first dot product with the bit-shifted second dot product to compute a dot product for the node.
    Type: Grant
    Filed: November 9, 2022
    Date of Patent: April 1, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
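The bit-split dot product described in the abstract above can be illustrated with a minimal Python sketch (this is an illustration of the arithmetic, not the patented circuit; the split width `split_bits` is an assumed parameter):

```python
def split_dot_product(weights, inputs, split_bits=4):
    """Compute dot(weights, inputs) by splitting each input value into a
    first (low) and second (high) set of bits, computing two partial dot
    products, bit-shifting the second, and adding the results."""
    mask = (1 << split_bits) - 1
    low = [x & mask for x in inputs]          # first set of bits per input
    high = [x >> split_bits for x in inputs]  # second set of bits per input
    dot_low = sum(w * x for w, x in zip(weights, low))
    dot_high = sum(w * x for w, x in zip(weights, high))
    # shift the second dot product and add to recover the full dot product
    return dot_low + (dot_high << split_bits)
```

Because the shift-and-add exactly reverses the bit split, the result equals the ordinary dot product of the unsplit values.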
  • Patent number: 12267285
    Abstract: A method for displaying a message in a messenger service by a user terminal is proposed. The method may include receiving the message from a server. The method may also include receiving a mask command for the message from the server when text information extracted from the message satisfies a preset condition. The method may further include displaying a mask message corresponding to the message in a chat room of the messenger service based on the mask command.
    Type: Grant
    Filed: September 8, 2023
    Date of Patent: April 1, 2025
    Assignee: Kakao Corp.
    Inventors: Dae Won Yoon, Ki Yong Shim, Eun Jung Ko, Doo Won Lee, Ji Sun Lee
  • Publication number: 20250103341
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes multiple core circuits including memories for storing input values for the computation nodes. The NNIC includes a set of post-processing circuits for computing output values of the computation nodes. The output values for a first layer are for storage in the core circuits as input values for a second layer. The NNIC includes an output bus that connects the post-processing circuits to the core circuits. The output bus is for (i) receiving a set of output values from the post-processing circuits, (ii) transporting the output values of the set to the core circuits based on configuration data specifying a core circuit at which each of the output values is to be stored, and (iii) aligning the output values for storage in the core circuits.
    Type: Application
    Filed: September 4, 2024
    Publication date: March 27, 2025
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 12217160
    Abstract: Some embodiments provide a method that receives a specification of a neural network for execution by an integrated circuit. The integrated circuit includes a neural network inference circuit for executing the neural network to generate an output based on an input, an input processing circuit for providing the input to the neural network inference circuit, a microprocessor circuit for controlling the neural network inference circuit and the input processing circuit, and a unified memory accessible by the microprocessor circuit, the neural network inference circuit, and the input processing circuit. The method determines usage of the unified memory by the neural network inference circuit while executing the neural network. Based on the determined usage by the neural network inference circuit, the method allocates portions of the unified memory to the microprocessor circuit and input processing circuit.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: February 4, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig, Won Rhee
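The allocation step in the abstract above can be sketched as follows (a simplified illustration under assumptions: the patent does not specify the split policy, so `cpu_fraction` is a hypothetical knob, not part of the claimed method):

```python
def allocate_unified_memory(total_bytes, nnic_usage_bytes, cpu_fraction=0.5):
    """Reserve the unified memory the neural network inference circuit
    (NNIC) was determined to use, then divide the remainder between the
    microprocessor circuit and the input processing circuit."""
    if nnic_usage_bytes > total_bytes:
        raise ValueError("network does not fit in unified memory")
    remaining = total_bytes - nnic_usage_bytes
    cpu_bytes = int(remaining * cpu_fraction)
    input_bytes = remaining - cpu_bytes
    return {"nnic": nnic_usage_bytes, "cpu": cpu_bytes, "input": input_bytes}
```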
  • Patent number: 12190230
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes a set of clusters of core computation circuits and a channel, connecting the core computation circuits, that includes separate segments corresponding to each of the clusters. The NNIC includes a fabric controller circuit, a cluster controller circuit for each of the clusters, and a core controller circuit for each of the core computation circuits. The fabric controller circuit receives high-level neural network instructions from a microprocessor and parses the high-level neural network instructions.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: January 7, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 12165043
    Abstract: Some embodiments provide a neural network inference circuit for executing a neural network that includes multiple layers of computation nodes. At least a subset of the layers include non-convolutional layers. The neural network inference circuit includes multiple cores with memories that store input values for the layers. The cores are grouped into multiple clusters. For each cluster, the neural network inference circuit includes a set of processing circuits for receiving input values from the cores of the cluster and executing the computation nodes of the non-convolutional layers.
    Type: Grant
    Filed: October 8, 2023
    Date of Patent: December 10, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
  • Patent number: 12159214
    Abstract: Some embodiments provide a method for executing a neural network. The method writes a first input to a first set of physical memory banks in a unified memory shared by an input processing circuit and a neural network inference circuit that executes the neural network. While the neural network inference circuit is executing the network a first time to generate a first output for the first input, the method writes a second input to a second set of physical memory banks in the unified memory. The neural network inference circuit executes a same set of instructions to read the first input from the first set of memory banks in order to execute the network the first time and to read the second input from the second set of memory banks in order to execute the network a second time to generate a second output for the second input.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: December 3, 2024
    Assignee: Perceive Corporation
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig, Won Rhee
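The double-buffered input scheme in the abstract above can be sketched in Python (an assumed ping-pong model of the described behavior, not the patented circuit):

```python
class PingPongInput:
    """Two sets of physical memory banks in a unified memory: while the
    inference circuit reads one set, the input processing circuit writes
    the next input to the other set, and the roles swap each execution."""

    def __init__(self):
        self.banks = [None, None]  # two sets of physical memory banks
        self.write_idx = 0         # bank set currently being written

    def write_input(self, data):
        self.banks[self.write_idx] = data

    def execute(self):
        # The inference circuit reads the bank set written most recently;
        # the same instruction stream works because only the bank index swaps.
        data = self.banks[self.write_idx]
        self.write_idx = 1 - self.write_idx  # next input goes to the other set
        return f"output for {data}"
```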
  • Publication number: 20240371502
Abstract: A method of providing information on medical personnel may be provided. The method according to an embodiment may include: receiving surgical information including information on the medical treatment department performing a surgery, information on the operating surgeon, surgical category information, and surgical code information; determining a score for each item constituting the surgical information, based on on-duty records of the medical personnel; determining a medical personnel score corresponding to the surgical information, based on a weighted value corresponding to each item score; and providing information on the medical personnel that includes the medical personnel score.
    Type: Application
    Filed: December 27, 2023
    Publication date: November 7, 2024
    Inventors: Jung Hwan MOON, Ihn Seon Lee, Jun Young Choi, Ji Hye Woo, Chae Yeon Park, Ho Jun Seol, Do Hoon Lim, Sei Yeul Oh, Hyun Jung Ko, Mi Ja Ju
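The weighted scoring step in the abstract above can be sketched as a simple weighted average (the item names and the weighted-average formula are assumptions for illustration; the patent does not specify them):

```python
def personnel_score(item_scores, weights):
    """Combine per-item scores (e.g. department, surgeon, category, code)
    into a single medical-personnel score using per-item weighted values."""
    total_weight = sum(weights[item] for item in item_scores)
    weighted_sum = sum(item_scores[item] * weights[item] for item in item_scores)
    return weighted_sum / total_weight
```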
  • Publication number: 20240361824
    Abstract: For a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers for which data is stored in a plurality of memory banks, some embodiments provide a method for dynamically putting memory banks into a sleep mode of operation to conserve power. The method tracks the accesses to individual memory banks and, if a certain number of clock cycles elapse with no access to a particular memory bank, sends a signal to the memory bank indicating that it should operate in a sleep mode. Circuit components involved in dynamic memory sleep, in some embodiments, include a core RAM pipeline, a core RAM sleep controller, a set of core RAM bank select decoders, and a set of core RAM memory bank wrappers.
    Type: Application
    Filed: March 4, 2024
    Publication date: October 31, 2024
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
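The idle-cycle tracking described in the abstract above can be sketched as a per-bank counter (a behavioral model under assumptions, not the claimed RTL; the wake-on-access rule is an assumption):

```python
class BankSleepController:
    """Track accesses to individual memory banks; after idle_threshold
    clock cycles with no access to a bank, assert its sleep signal."""

    def __init__(self, num_banks, idle_threshold):
        self.idle_threshold = idle_threshold
        self.idle_cycles = [0] * num_banks
        self.asleep = [False] * num_banks

    def clock(self, accessed_banks):
        """Advance one clock cycle; accessed_banks holds the indices of
        the banks accessed during this cycle."""
        for bank in range(len(self.idle_cycles)):
            if bank in accessed_banks:
                self.idle_cycles[bank] = 0
                self.asleep[bank] = False  # access wakes the bank
            else:
                self.idle_cycles[bank] += 1
                if self.idle_cycles[bank] >= self.idle_threshold:
                    self.asleep[bank] = True  # send the sleep signal
```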
  • Patent number: 12118463
    Abstract: Some embodiments provide a method for a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers. Each computation node of a set of the computation nodes includes a dot product of input values and weight values. The method reads a set of encoded weight data for a set of weight values from a memory of the neural network inference circuit. The method decodes the encoded weight data to generate decoded weight data for the set of weight values. The method stores the decoded weight data in a buffer. The method uses the decoded weight data to execute a set of computation nodes. Each computation node of the set of computation nodes includes a dot product between the set of weight values and a different set of input values.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: October 15, 2024
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
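The decode-buffer-reuse flow in the abstract above can be sketched in Python. The abstract does not specify the encoding, so the codec here is purely hypothetical (two 4-bit signed weights packed per byte); only the overall flow of decoding once and reusing the buffered weights across computation nodes follows the abstract:

```python
def decode_weights(encoded, count):
    """Hypothetical decoder: unpack two 4-bit two's-complement weights
    from each encoded byte, returning the first `count` weights."""
    decoded = []
    for byte in encoded:
        for nibble in (byte & 0xF, byte >> 4):
            decoded.append(nibble - 16 if nibble >= 8 else nibble)
    return decoded[:count]

def run_nodes(encoded, count, input_sets):
    """Decode the weight data once, store it in a buffer, and reuse it
    for every dot product that shares the same set of weight values."""
    weights = decode_weights(encoded, count)  # decoded weight buffer
    return [sum(w * x for w, x in zip(weights, xs)) for xs in input_sets]
```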
  • Patent number: 12093696
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes multiple core circuits including memories for storing input values for the computation nodes. The NNIC includes a set of post-processing circuits for computing output values of the computation nodes. The output values for a first layer are for storage in the core circuits as input values for a second layer. The NNIC includes an output bus that connects the post-processing circuits to the core circuits. The output bus is for (i) receiving a set of output values from the post-processing circuits, (ii) transporting the output values of the set to the core circuits based on configuration data specifying a core circuit at which each of the output values is to be stored, and (iii) aligning the output values for storage in the core circuits.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: September 17, 2024
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 12084388
Abstract: A method for preparing a carbide protective layer comprises: (A) mixing a carbide powder, an organic binder, an organic solvent and a sintering aid to form a slurry; (B) spraying the slurry on a surface of a graphite component to form a composite component; (C) subjecting the composite component to a cold isostatic pressing densification process; (D) subjecting the composite component to a constant temperature heat treatment; (E) repeating steps (B)-(D) until a coating is formed on a surface of the composite component; (F) subjecting the coating to a segmented sintering process; (G) obtaining a carbide protective layer used for the surface of the composite component. Accordingly, the carbide protective layer can be completed using the wet cold isostatic pressing densification process and the cyclic multiple-superimposition method, improving corrosion resistance in the silicon carbide crystal growth process environment.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: September 10, 2024
    Assignee: NATIONAL CHUNG SHAN INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Chih-Hsing Wang, Cheng-Jung Ko, Chuen-Ming Gee, Chih-Wei Kuo, Hsueh-I Chen, Jun-Bin Huang, Ying-Tsung Chao
  • Publication number: 20240284934
    Abstract: The present invention relates to a method for manufacturing plant-based meat with artificial muscle fiber inserted therein. The method for manufacturing plant-based meat with artificial muscle fiber inserted therein, according to an embodiment of the present invention, comprises: a step of preparing an artificial muscle fiber composition by mixing alginic acid, carrageenan, and glucomannan in distilled water (S100); and a step of injecting the artificial muscle fiber composition into an inner nozzle of a 3D printer having double nozzles in which the inner nozzle is inserted inside an outer nozzle, injecting a vegetable protein composition into the outer nozzle, and then conducting 3D printing (S200).
    Type: Application
    Filed: August 26, 2021
    Publication date: August 29, 2024
    Applicant: BIPPECO
    Inventors: Hyun Jin PARK, Hyun Jung KO
  • Publication number: 20240257770
    Abstract: A display device includes: a timing controlling circuit configured to generate an image data, a data control signal and a gate control signal; a data driving circuit configured to generate a data signal, a stress signal and an anode reset signal using the image data and the data control signal; a gate driving circuit configured to generate a gate1 signal, a gate2 signal, an emission1 signal and an emission2 signal using the gate control signal; and a display panel configured to display an image using the data signal, the gate1 signal, the gate2 signal, the emission1 signal and the emission2 signal, wherein a width of a stress period between a rising timing of the gate2 signal and a rising timing of the emission1 signal is changed according to a luminance band of the image.
    Type: Application
    Filed: January 24, 2024
    Publication date: August 1, 2024
Inventors: Seung-Ho Jeong, Eun-Jung KO
  • Publication number: 20240239712
Abstract: A method for preparing a carbide protective layer comprises: (A) mixing a carbide powder, an organic binder, an organic solvent and a sintering aid to form a slurry; (B) spraying the slurry on a surface of a graphite component to form a composite component; (C) subjecting the composite component to a cold isostatic pressing densification process; (D) subjecting the composite component to a constant temperature heat treatment; (E) repeating steps (B)-(D) until a coating is formed on a surface of the composite component; (F) subjecting the coating to a segmented sintering process; (G) obtaining a carbide protective layer used for the surface of the composite component. Accordingly, the carbide protective layer can be completed using the wet cold isostatic pressing densification process and the cyclic multiple-superimposition method, improving corrosion resistance in the silicon carbide crystal growth process environment.
    Type: Application
    Filed: January 13, 2023
    Publication date: July 18, 2024
    Inventors: CHIH-HSING WANG, CHENG-JUNG KO, CHUEN-MING GEE, CHIH-WEI KUO, HSUEH-I CHEN, JUN-BIN HUANG, YING-TSUNG CHAO
  • Publication number: 20240234424
    Abstract: The present disclosure provides chip architectures for FPGAs and other routing implementations that provide for increased memory with high bandwidth, in a reduced size, accessible with reduced latency. Such architectures include a first layer in advanced node and a second layer in legacy node. The first layer includes an active die, active circuitry, and a configurable memory, and the second layer includes a passive die with wiring. The second layer is bonded to the first layer such that the wiring of the second layer interconnects with the active circuitry of the first layer and extends an amount of wiring possible in the first layer.
    Type: Application
    Filed: January 19, 2024
    Publication date: July 11, 2024
    Inventors: Javier A. DeLaCruz, Don Draper, Jung Ko, Steven L. Teig
  • Publication number: 20240187359
    Abstract: Proposed is a method in which a server presents content through a chat room. The method may include acquiring information about a keyword or theme of the content, information about a condition for presenting the content, and information about the chat room, wherein the chat room is related to the keyword or theme and a plurality of users are able to participate in the chat room. The method may also include providing the chat room to a user terminal in association with the information about the keyword or theme of the content. The method may further include searching for the content in a content pool based on the keyword or theme, and when the condition for presenting the content is satisfied, presenting at least some items of the searched content to the user terminal through the chat room.
    Type: Application
    Filed: November 28, 2023
    Publication date: June 6, 2024
    Inventors: Jae Seon LEE, Ga Hee JEONG, Ji Sun LEE, Eun Jung KO, Hye Ryeon LEE, Yeo Seong YOON, Hyun Soo PARK, Sung Jin PARK
  • Patent number: 11987740
Abstract: Provided are a silicon nitride film etching composition, a method of etching a silicon nitride film using the same, and a manufacturing method of a semiconductor device. Specifically, a silicon nitride film may be etched with high selectivity relative to a silicon oxide film, and when the composition is applied to a high-temperature etching process in semiconductor manufacturing, no precipitate forms and no anomalous growth that would increase the thickness of the silicon oxide film occurs, thereby minimizing defects and loss of reliability.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: May 21, 2024
    Assignee: ENF Technology Co., Ltd.
    Inventors: Dong Hyun Kim, Hyeon Woo Park, Sung Jun Hong, Myung Ho Lee, Myung Geun Song, Hoon Sik Kim, Jae Jung Ko, Myong Euy Lee, Jun Hyeok Hwang
  • Publication number: 20240162178
    Abstract: A bonded structure is disclosed. The bonded structure can include a first element that has a first plurality of contact pads. The first plurality of contact pads includes a first contact pad and a second redundant contact pad. The bonded structure can also include a second element directly bonded to the first element without an intervening adhesive. The second element has a second plurality of contact pads. The second plurality of contact pads includes a third contact pad and a fourth redundant contact pad. The first contact pad is configured to connect to the third contact pad. The second contact pad is configured to connect to the fourth contact pad. The bonded structure can include circuitry that has a first state in which an electrical signal is transferred to the first contact pad and a second state in which the electrical signal is transferred to the second contact pad.
    Type: Application
    Filed: January 25, 2024
    Publication date: May 16, 2024
    Inventors: Javier A. DeLaCruz, Belgacem Haba, Jung Ko
  • Patent number: 11939505
Abstract: Provided are a silicon nitride film etching composition, a method of etching a silicon nitride film using the same, and a manufacturing method of a semiconductor device. Specifically, a silicon nitride film may be stably etched with a high selectivity ratio relative to a silicon oxide film, and when the composition is applied to a high-temperature etching process in semiconductor manufacturing, no precipitate forms and no anomalous growth that would increase the thickness of the silicon oxide film occurs, thereby minimizing defects and loss of reliability.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: March 26, 2024
    Assignee: ENF Technology Co., Ltd.
    Inventors: Dong Hyun Kim, Hyeon Woo Park, Sung Jun Hong, Myung Ho Lee, Myung Geun Song, Hoon Sik Kim, Jae Jung Ko, Myong Euy Lee, Jun Hyeok Hwang