Patents by Inventor Jung Ko
Jung Ko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12265905
Abstract: Some embodiments provide a method for a circuit that executes a neural network including multiple nodes. The method loads a set of weight values for a node into a set of weight value buffers, a first set of bits of each input value of a set of input values for the node into a first set of input value buffers, and a second set of bits of each of the input values into a second set of input value buffers. The method computes a first dot product of the weight values and the first set of bits of each input value and a second dot product of the weight values and the second set of bits of each input value. The method shifts the second dot product by a particular number of bits and adds the first dot product with the bit-shifted second dot product to compute a dot product for the node.
Type: Grant
Filed: November 9, 2022
Date of Patent: April 1, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
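The split dot product described in this abstract can be illustrated with a small software sketch (a hypothetical analogue, not the patent's circuit; the function name and the 4-bit split are assumptions): compute one dot product over the low bits of each input, one over the high bits, shift the second result, and add.

```python
def split_dot_product(weights, inputs, low_bits=4):
    """Compute dot(weights, inputs) by splitting each input value into a
    low-bit group and a high-bit group, then recombining (software sketch)."""
    mask = (1 << low_bits) - 1
    low = [x & mask for x in inputs]        # first set of bits of each input
    high = [x >> low_bits for x in inputs]  # second set of bits of each input
    dot_low = sum(w * x for w, x in zip(weights, low))
    dot_high = sum(w * x for w, x in zip(weights, high))
    # shift the second dot product and add it to the first
    return dot_low + (dot_high << low_bits)

weights = [1, 2, 3]
inputs = [17, 33, 5]
# the recombined result equals the ordinary dot product
assert split_dot_product(weights, inputs) == sum(
    w * x for w, x in zip(weights, inputs)
)
```

The recombination works because shifting the high-bit dot product left by `low_bits` restores the place value the high bits lost when they were separated from the inputs.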
-
Patent number: 12267285
Abstract: A method for displaying a message in a messenger service by a user terminal is proposed. The method may include receiving the message from a server. The method may also include receiving a mask command for the message from the server when text information extracted from the message satisfies a preset condition. The method may further include displaying a mask message corresponding to the message in a chat room of the messenger service based on the mask command.
Type: Grant
Filed: September 8, 2023
Date of Patent: April 1, 2025
Assignee: Kakao Corp.
Inventors: Dae Won Yoon, Ki Yong Shim, Eun Jung Ko, Doo Won Lee, Ji Sun Lee
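A minimal sketch of the masking step, assuming (since the abstract does not specify) that the "preset condition" is a pattern match, here a phone number, and that the mask replaces the matched text:

```python
import re

# Illustrative condition: the message contains a phone-number-like pattern.
# Both the pattern and the mask format are assumptions, not from the patent.
PHONE = re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b")

def mask_message(text):
    """Return the mask message to display when the condition is satisfied,
    or the original message unchanged otherwise."""
    if PHONE.search(text):
        return PHONE.sub("***-****-****", text)
    return text

assert mask_message("call me at 010-1234-5678") == "call me at ***-****-****"
assert mask_message("hello") == "hello"
```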
-
Publication number: 20250103341
Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes multiple core circuits including memories for storing input values for the computation nodes. The NNIC includes a set of post-processing circuits for computing output values of the computation nodes. The output values for a first layer are for storage in the core circuits as input values for a second layer. The NNIC includes an output bus that connects the post-processing circuits to the core circuits. The output bus is for (i) receiving a set of output values from the post-processing circuits, (ii) transporting the output values of the set to the core circuits based on configuration data specifying a core circuit at which each of the output values is to be stored, and (iii) aligning the output values for storage in the core circuits.
Type: Application
Filed: September 4, 2024
Publication date: March 27, 2025
Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
-
Patent number: 12217160
Abstract: Some embodiments provide a method that receives a specification of a neural network for execution by an integrated circuit. The integrated circuit includes a neural network inference circuit for executing the neural network to generate an output based on an input, an input processing circuit for providing the input to the neural network inference circuit, a microprocessor circuit for controlling the neural network inference circuit and the input processing circuit, and a unified memory accessible by the microprocessor circuit, the neural network inference circuit, and the input processing circuit. The method determines usage of the unified memory by the neural network inference circuit while executing the neural network. Based on the determined usage by the neural network inference circuit, the method allocates portions of the unified memory to the microprocessor circuit and input processing circuit.
Type: Grant
Filed: May 3, 2021
Date of Patent: February 4, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Jung Ko, Kenneth Duong, Steven L. Teig, Won Rhee
-
Patent number: 12190230
Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes a set of clusters of core computation circuits and a channel, connecting the core computation circuits, that includes separate segments corresponding to each of the clusters. The NNIC includes a fabric controller circuit, a cluster controller circuit for each of the clusters, and a core controller circuit for each of the core computation circuits. The fabric controller circuit receives high-level neural network instructions from a microprocessor and parses them.
Type: Grant
Filed: November 7, 2022
Date of Patent: January 7, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
-
Patent number: 12165043
Abstract: Some embodiments provide a neural network inference circuit for executing a neural network that includes multiple layers of computation nodes. At least a subset of the layers include non-convolutional layers. The neural network inference circuit includes multiple cores with memories that store input values for the layers. The cores are grouped into multiple clusters. For each cluster, the neural network inference circuit includes a set of processing circuits for receiving input values from the cores of the cluster and executing the computation nodes of the non-convolutional layers.
Type: Grant
Filed: October 8, 2023
Date of Patent: December 10, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
-
Patent number: 12159214
Abstract: Some embodiments provide a method for executing a neural network. The method writes a first input to a first set of physical memory banks in a unified memory shared by an input processing circuit and a neural network inference circuit that executes the neural network. While the neural network inference circuit is executing the network a first time to generate a first output for the first input, the method writes a second input to a second set of physical memory banks in the unified memory. The neural network inference circuit executes a same set of instructions to read the first input from the first set of memory banks in order to execute the network the first time and to read the second input from the second set of memory banks in order to execute the network a second time to generate a second output for the second input.
Type: Grant
Filed: May 3, 2021
Date of Patent: December 3, 2024
Assignee: Perceive Corporation
Inventors: Jung Ko, Kenneth Duong, Steven L. Teig, Won Rhee
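The alternation between the two bank sets is a classic double-buffering (ping-pong) pattern. A software analogue, with illustrative names that are not from the patent, looks like this: writes alternate between two bank sets, and the executing side always reads the set that was written most recently.

```python
class PingPongBanks:
    """Two bank sets: one is written with the next input while the other
    is read by the (conceptual) inference circuit. Sketch only."""

    def __init__(self):
        self.banks = [None, None]  # two sets of "physical memory banks"
        self.write_idx = 0

    def write_input(self, data):
        self.banks[self.write_idx] = data
        self.write_idx ^= 1  # the next input goes to the other bank set

    def read_for_execution(self):
        # read the bank set that was most recently written
        return self.banks[self.write_idx ^ 1]

buf = PingPongBanks()
buf.write_input("frame0")
assert buf.read_for_execution() == "frame0"
buf.write_input("frame1")  # written while "frame0" is being processed
assert buf.read_for_execution() == "frame1"
```

The abstract's point about executing "a same set of instructions" for both runs corresponds here to `read_for_execution` being identical regardless of which physical bank set currently holds the input.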
-
Publication number: 20240371502
Abstract: A method of providing information on medical personnel may be provided. The method according to an embodiment may include: receiving surgical information including information on a medical treatment department performing a surgery, information on an operating surgeon, surgical category information, and surgical code information; determining a score for each item that constitutes the surgical information on medical personnel on the basis of information on on-duty records of the medical personnel; determining a medical personnel score corresponding to the surgical information on the basis of a weighted value corresponding to the score for the each item; and providing information on the medical personnel that includes the medical personnel score.
Type: Application
Filed: December 27, 2023
Publication date: November 7, 2024
Inventors: Jung Hwan MOON, Ihn Seon Lee, Jun Young Choi, Ji Hye Woo, Chae Yeon Park, Ho Jun Seol, Do Hoon Lim, Sei Yeul Oh, Hyun Jung Ko, Mi Ja Ju
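The scoring step reduces to a weighted sum over the per-item scores. A minimal sketch, assuming hypothetical item names and weights (the publication does not specify either):

```python
def personnel_score(item_scores, weights):
    """Weighted sum of per-item scores, normalized by the total weight.
    Item names and weight values below are illustrative assumptions."""
    total_w = sum(weights.values())
    return sum(item_scores[k] * weights[k] for k in item_scores) / total_w

scores = {"department": 80, "surgeon": 90, "category": 70, "code": 85}
weights = {"department": 0.2, "surgeon": 0.4, "category": 0.2, "code": 0.2}
assert abs(personnel_score(scores, weights) - 83.0) < 1e-9
```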
-
Publication number: 20240361824
Abstract: For a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers for which data is stored in a plurality of memory banks, some embodiments provide a method for dynamically putting memory banks into a sleep mode of operation to conserve power. The method tracks the accesses to individual memory banks and, if a certain number of clock cycles elapse with no access to a particular memory bank, sends a signal to the memory bank indicating that it should operate in a sleep mode. Circuit components involved in dynamic memory sleep, in some embodiments, include a core RAM pipeline, a core RAM sleep controller, a set of core RAM bank select decoders, and a set of core RAM memory bank wrappers.Type: Application
Filed: March 4, 2024
Publication date: October 31, 2024
Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
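The idle-cycle policy in the abstract can be sketched in software (the class name, interface, and threshold value are illustrative assumptions, not the patent's circuit): each clock tick, an access resets a bank's idle counter and wakes it; once the counter reaches a threshold, the bank is put to sleep.

```python
class SleepController:
    """Track per-bank idle cycles and flag banks for sleep mode (sketch)."""

    def __init__(self, num_banks, idle_threshold):
        self.idle = [0] * num_banks
        self.asleep = [False] * num_banks
        self.threshold = idle_threshold

    def clock_tick(self, accessed_banks):
        for bank in range(len(self.idle)):
            if bank in accessed_banks:
                self.idle[bank] = 0
                self.asleep[bank] = False  # an access wakes the bank
            else:
                self.idle[bank] += 1
                if self.idle[bank] >= self.threshold:
                    self.asleep[bank] = True  # signal sleep mode

ctrl = SleepController(num_banks=2, idle_threshold=3)
for _ in range(3):
    ctrl.clock_tick(accessed_banks={0})  # bank 1 is never accessed
assert ctrl.asleep == [False, True]
```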
-
Patent number: 12118463
Abstract: Some embodiments provide a method for a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers. Each computation node of a set of the computation nodes includes a dot product of input values and weight values. The method reads a set of encoded weight data for a set of weight values from a memory of the neural network inference circuit. The method decodes the encoded weight data to generate decoded weight data for the set of weight values. The method stores the decoded weight data in a buffer. The method uses the decoded weight data to execute a set of computation nodes. Each computation node of the set of computation nodes includes a dot product between the set of weight values and a different set of input values.
Type: Grant
Filed: December 14, 2021
Date of Patent: October 15, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
-
Patent number: 12093696
Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes multiple core circuits including memories for storing input values for the computation nodes. The NNIC includes a set of post-processing circuits for computing output values of the computation nodes. The output values for a first layer are for storage in the core circuits as input values for a second layer. The NNIC includes an output bus that connects the post-processing circuits to the core circuits. The output bus is for (i) receiving a set of output values from the post-processing circuits, (ii) transporting the output values of the set to the core circuits based on configuration data specifying a core circuit at which each of the output values is to be stored, and (iii) aligning the output values for storage in the core circuits.
Type: Grant
Filed: August 9, 2019
Date of Patent: September 17, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
-
Patent number: 12084388
Abstract: A method for preparing a carbide protective layer comprises: (A) mixing a carbide powder, an organic binder, an organic solvent and a sintering aid to form a slurry; (B) spraying the slurry on a surface of a graphite component to form a composite component; (C) subjecting the composite component to a cold isostatic pressing densification process; (D) subjecting the composite component to a constant temperature heat treatment; (E) repeating steps (B)-(D) until a coating is formed on a surface of the composite component; (F) subjecting the coating to a segmented sintering process; (G) obtaining a carbide protective layer used for the surface of the composite component. Accordingly, the carbide protective layer can be completed using the wet cold isostatic pressing densification process and the cyclic multiple superimposition method, improving corrosion resistance in the silicon carbide crystal growth process environment.
Type: Grant
Filed: January 13, 2023
Date of Patent: September 10, 2024
Assignee: NATIONAL CHUNG SHAN INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Chih-Hsing Wang, Cheng-Jung Ko, Chuen-Ming Gee, Chih-Wei Kuo, Hsueh-I Chen, Jun-Bin Huang, Ying-Tsung Chao
-
Publication number: 20240284934
Abstract: The present invention relates to a method for manufacturing plant-based meat with artificial muscle fiber inserted therein. The method for manufacturing plant-based meat with artificial muscle fiber inserted therein, according to an embodiment of the present invention, comprises: a step of preparing an artificial muscle fiber composition by mixing alginic acid, carrageenan, and glucomannan in distilled water (S100); and a step of injecting the artificial muscle fiber composition into an inner nozzle of a 3D printer having double nozzles in which the inner nozzle is inserted inside an outer nozzle, injecting a vegetable protein composition into the outer nozzle, and then conducting 3D printing (S200).
Type: Application
Filed: August 26, 2021
Publication date: August 29, 2024
Applicant: BIPPECO
Inventors: Hyun Jin PARK, Hyun Jung KO
-
Publication number: 20240257770
Abstract: A display device includes: a timing controlling circuit configured to generate an image data, a data control signal and a gate control signal; a data driving circuit configured to generate a data signal, a stress signal and an anode reset signal using the image data and the data control signal; a gate driving circuit configured to generate a gate1 signal, a gate2 signal, an emission1 signal and an emission2 signal using the gate control signal; and a display panel configured to display an image using the data signal, the gate1 signal, the gate2 signal, the emission1 signal and the emission2 signal, wherein a width of a stress period between a rising timing of the gate2 signal and a rising timing of the emission1 signal is changed according to a luminance band of the image.
Type: Application
Filed: January 24, 2024
Publication date: August 1, 2024
Inventors: Seung-Ho Jeong, Eun Jung KO
-
Publication number: 20240239712
Abstract: A method for preparing a carbide protective layer comprises: (A) mixing a carbide powder, an organic binder, an organic solvent and a sintering aid to form a slurry; (B) spraying the slurry on a surface of a graphite component to form a composite component; (C) subjecting the composite component to a cold isostatic pressing densification process; (D) subjecting the composite component to a constant temperature heat treatment; (E) repeating steps (B)-(D) until a coating is formed on a surface of the composite component; (F) subjecting the coating to a segmented sintering process; (G) obtaining a carbide protective layer used for the surface of the composite component. Accordingly, the carbide protective layer can be completed using the wet cold isostatic pressing densification process and the cyclic multiple superimposition method, improving corrosion resistance in the silicon carbide crystal growth process environment.
Type: Application
Filed: January 13, 2023
Publication date: July 18, 2024
Inventors: CHIH-HSING WANG, CHENG-JUNG KO, CHUEN-MING GEE, CHIH-WEI KUO, HSUEH-I CHEN, JUN-BIN HUANG, YING-TSUNG CHAO
-
Publication number: 20240234424
Abstract: The present disclosure provides chip architectures for FPGAs and other routing implementations that provide for increased memory with high bandwidth, in a reduced size, accessible with reduced latency. Such architectures include a first layer in advanced node and a second layer in legacy node. The first layer includes an active die, active circuitry, and a configurable memory, and the second layer includes a passive die with wiring. The second layer is bonded to the first layer such that the wiring of the second layer interconnects with the active circuitry of the first layer and extends an amount of wiring possible in the first layer.
Type: Application
Filed: January 19, 2024
Publication date: July 11, 2024
Inventors: Javier A. DeLaCruz, Don Draper, Jung Ko, Steven L. Teig
-
Publication number: 20240187359
Abstract: Proposed is a method in which a server presents content through a chat room. The method may include acquiring information about a keyword or theme of the content, information about a condition for presenting the content, and information about the chat room, wherein the chat room is related to the keyword or theme and a plurality of users are able to participate in the chat room. The method may also include providing the chat room to a user terminal in association with the information about the keyword or theme of the content. The method may further include searching for the content in a content pool based on the keyword or theme, and when the condition for presenting the content is satisfied, presenting at least some items of the searched content to the user terminal through the chat room.
Type: Application
Filed: November 28, 2023
Publication date: June 6, 2024
Inventors: Jae Seon LEE, Ga Hee JEONG, Ji Sun LEE, Eun Jung KO, Hye Ryeon LEE, Yeo Seong YOON, Hyun Soo PARK, Sung Jin PARK
-
Patent number: 11987740
Abstract: Provided are a silicon nitride film etching composition, a method of etching a silicon nitride film using the same, and a manufacturing method of a semiconductor device. Specifically, a silicon nitride film may be highly selectively etched as compared with a silicon oxide film, and when the composition is applied to a high-temperature etching process and a semiconductor manufacturing process, no precipitate forms and no anomalous growth occurs in which the thickness of the silicon oxide film instead increases, thereby minimizing defects and reliability degradation.
Type: Grant
Filed: August 30, 2021
Date of Patent: May 21, 2024
Assignee: ENF Technology Co., Ltd.
Inventors: Dong Hyun Kim, Hyeon Woo Park, Sung Jun Hong, Myung Ho Lee, Myung Geun Song, Hoon Sik Kim, Jae Jung Ko, Myong Euy Lee, Jun Hyeok Hwang
-
Publication number: 20240162178
Abstract: A bonded structure is disclosed. The bonded structure can include a first element that has a first plurality of contact pads. The first plurality of contact pads includes a first contact pad and a second redundant contact pad. The bonded structure can also include a second element directly bonded to the first element without an intervening adhesive. The second element has a second plurality of contact pads. The second plurality of contact pads includes a third contact pad and a fourth redundant contact pad. The first contact pad is configured to connect to the third contact pad. The second contact pad is configured to connect to the fourth contact pad. The bonded structure can include circuitry that has a first state in which an electrical signal is transferred to the first contact pad and a second state in which the electrical signal is transferred to the second contact pad.
Type: Application
Filed: January 25, 2024
Publication date: May 16, 2024
Inventors: Javier A. DeLaCruz, Belgacem Haba, Jung Ko
-
Patent number: 11939505
Abstract: Provided are a silicon nitride film etching composition, a method of etching a silicon nitride film using the same, and a manufacturing method of a semiconductor device. Specifically, a silicon nitride film may be stably etched with a high selection ratio relative to a silicon oxide film, and when the composition is applied to a high-temperature etching process and a semiconductor manufacturing process, no precipitate forms and no anomalous growth occurs in which the thickness of the silicon oxide film instead increases, thereby minimizing defects and reliability degradation.
Type: Grant
Filed: August 30, 2021
Date of Patent: March 26, 2024
Assignee: ENF Technology Co., Ltd.
Inventors: Dong Hyun Kim, Hyeon Woo Park, Sung Jun Hong, Myung Ho Lee, Myung Geun Song, Hoon Sik Kim, Jae Jung Ko, Myong Euy Lee, Jun Hyeok Hwang