Patents by Inventor Yoonho Park
Yoonho Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12142182
Abstract: A display device includes: a display panel; and a driving unit configured to receive image data, analyze the image data, and determine shapes of a plurality of pixel units making up the image, wherein the plurality of pixel units include a first pixel unit including a plurality of first sub-pixels or a second pixel unit including a plurality of second sub-pixels and having a shape different from a shape of the first pixel unit, and wherein the first sub-pixels and the second sub-pixels include a 1-1st color sub-pixel configured to emit a first color, a 1-2nd color sub-pixel configured to emit the first color, a second color sub-pixel configured to emit a second color, the second color being different from the first color, and a third color sub-pixel configured to emit a third color, the third color being different from the first color and the second color.
Type: Grant
Filed: December 20, 2022
Date of Patent: November 12, 2024
Assignee: Samsung Display Co., Ltd.
Inventors: Tae Young Kim, Jongwoo Park, Yoonho Kim, Ja Eun Lee, Daeyoun Cho, Yoonsuk Choi
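A purely illustrative sketch of the shape-determination step described in this abstract; the contrast rule, sub-pixel labels, and threshold are invented stand-ins for whatever analysis the driving unit actually performs:

```python
# Purely illustrative: the contrast rule and sub-pixel labels are invented
# stand-ins for the driving unit's image analysis.
FIRST_UNIT = ("sub1-1", "sub1-2", "sub2", "sub3")    # first pixel-unit shape
SECOND_UNIT = ("sub1-1", "sub2", "sub3", "sub1-2")   # a differently shaped unit

def choose_pixel_units(image_rows):
    # Toy rule: drive high-contrast regions with the second pixel-unit shape.
    layout = []
    for row in image_rows:
        contrast = max(row) - min(row)
        layout.append(SECOND_UNIT if contrast > 128 else FIRST_UNIT)
    return layout

image = [[12, 15, 20, 18], [5, 200, 30, 240]]
for row, unit in zip(image, choose_pixel_units(image)):
    print(row, "->", unit)
```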
-
Patent number: 12073017
Abstract: A wearable electronic device may include a display, a camera, at least one first sensor, and at least one processor operatively coupled to the display, the camera, and the at least one first sensor.
Type: Grant
Filed: November 2, 2022
Date of Patent: August 27, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Eunbin Lee, Eunyoung Park, Shinjae Jung, Yoonho Lee, Eunkyung Lee
-
Publication number: 20240273004
Abstract: Provided are a computer program product, system, and method for using symbolic execution to validate a hardware configuration with a reference software implementation of processing rules. Symbolic execution is performed on a software model comprising executable code defining logic of a hardware pipeline to produce first symbolic output. Symbolic execution is performed on a reference software implementation of processing rules implemented in the hardware pipeline to produce second symbolic output. The first symbolic output and the second symbolic output are compared to determine a discrepancy between the first and second symbolic outputs. The discrepancy is reported, including information on a cause of the discrepancy.
Type: Application
Filed: February 9, 2023
Publication date: August 15, 2024
Inventors: Yoonho Park, Nikolas Ioannou, Guy Laden, Liran Schour, Radu Ioan Stoica, Ian Glen Neal
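The comparison step lends itself to a small sketch. The fragment below uses the Z3 solver (an assumption; the publication does not name an engine) and an invented packet-drop rule to build symbolic outputs for a pipeline model and a reference implementation, then asks for any input that makes them differ:

```python
# Hedged sketch: Z3 and the toy packet-drop rule are assumptions used only
# to illustrate comparing two symbolic outputs for a discrepancy.
from z3 import BitVec, BitVecVal, Solver, If, sat

def pipeline_model(pkt, mask):
    # Stand-in for the executable model of the hardware pipeline logic:
    # zero out any packet whose masked bits are set.
    return If(pkt & mask != 0, BitVecVal(0, 8), pkt)

def reference_rules(pkt, mask):
    # Independently written reference implementation of the same processing
    # rule, deliberately buggy here (== instead of !=) to force a mismatch.
    return If(pkt & mask == 0, BitVecVal(0, 8), pkt)

def find_discrepancy():
    pkt, mask = BitVec("pkt", 8), BitVec("mask", 8)
    solver = Solver()
    # Ask for any input on which the two symbolic outputs differ.
    solver.add(pipeline_model(pkt, mask) != reference_rules(pkt, mask))
    if solver.check() == sat:
        model = solver.model()
        return {"pkt": model[pkt], "mask": model[mask]}  # discrepancy cause
    return None

print(find_discrepancy())
```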
-
Patent number: 12052898
Abstract: A display device may include a driving transistor disposed in a display area, a test transistor disposed in a peripheral area adjacent to the display area, and a resistance line disposed in the peripheral area, electrically connected to the test transistor and including a metal oxide.
Type: Grant
Filed: August 9, 2021
Date of Patent: July 30, 2024
Assignee: Samsung Display Co., Ltd.
Inventors: June Hwan Kim, Yoonho Kim, Tae Young Kim, Jongwoo Park
-
Patent number: 11960986
Abstract: A neural network accelerator includes an operator that calculates a first operation result based on a first tiled input feature map and first tiled filter data, a quantizer that generates a quantization result by quantizing the first operation result based on a second bit width extended compared with a first bit width of the first tiled input feature map, a compressor that generates a partial sum by compressing the quantization result, and a decompressor that generates a second operation result by decompressing the partial sum. The operator calculates a third operation result based on a second tiled input feature map, second tiled filter data, and the second operation result, and an output feature map is generated based on the third operation result.
Type: Grant
Filed: September 14, 2022
Date of Patent: April 16, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Seokhyeong Kang, Yesung Kang, Sunghoon Kim, Yoonho Park
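A minimal NumPy sketch of the tiled flow this abstract describes; the bit widths, tile shapes, and the toy zero-column "compression" are illustrative assumptions, not the claimed hardware:

```python
# Minimal sketch of tiled accumulation with extended-width quantization and
# compressed partial sums; all parameters are assumptions.
import numpy as np

IN_BITS, EXT_BITS = 8, 16  # first bit width vs. extended second bit width

def quantize(x, bits):
    # Clip an accumulated result into a signed integer range of this width.
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(x, lo, hi)

def compress(x):
    # Toy partial-sum compression: drop all-zero trailing columns.
    nonzero_cols = np.flatnonzero(x.any(axis=0))
    width = int(nonzero_cols.max()) + 1 if nonzero_cols.size else 0
    return x[:, :width], x.shape

def decompress(data, shape):
    out = np.zeros(shape, dtype=data.dtype)
    out[:, :data.shape[1]] = data
    return out

def accelerator(ifmap_tiles, filter_tiles):
    partial = None
    for ifm, flt in zip(ifmap_tiles, filter_tiles):
        acc = ifm.astype(np.int32) @ flt.astype(np.int32)  # operator
        if partial is not None:
            acc = acc + decompress(*partial)  # add the prior decompressed result
        partial = compress(quantize(acc, EXT_BITS))  # quantizer, then compressor
    return decompress(*partial)  # output feature map from the final result

rng = np.random.default_rng(0)
ifmaps = [rng.integers(-2 ** (IN_BITS - 1), 2 ** (IN_BITS - 1), (4, 8)) for _ in range(2)]
filters = [rng.integers(-2 ** (IN_BITS - 1), 2 ** (IN_BITS - 1), (8, 4)) for _ in range(2)]
print(accelerator(ifmaps, filters))
```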
-
Patent number: 11676013
Abstract: Based on historic job data, a computer processor can predict a configuration of a computer node for running a future computer job. The computer processor can pre-configure the computer node based on the predicted configuration. Responsive to receiving a submission of a job, the computer processor can launch the job on the pre-configured computer node.
Type: Grant
Filed: December 30, 2019
Date of Patent: June 13, 2023
Assignee: International Business Machines Corporation
Inventors: Eun Kyung Lee, Giacomo Domeniconi, Alessandro Morari, Yoonho Park
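A rough sketch of the predict, pre-configure, and launch flow; the job history, configuration fields, and most-frequent-configuration predictor are hypothetical stand-ins for whatever model the patent contemplates:

```python
# Illustrative sketch: names, fields, and the predictor are hypothetical.
from collections import Counter

HISTORY = [
    {"app": "cfd", "config": {"cpus": 32, "gpus": 2, "huge_pages": True}},
    {"app": "cfd", "config": {"cpus": 32, "gpus": 2, "huge_pages": True}},
    {"app": "cfd", "config": {"cpus": 16, "gpus": 1, "huge_pages": False}},
]

def predict_config(app, history):
    # Pick the configuration most often used by past jobs of the same app.
    counts = Counter(tuple(sorted(h["config"].items()))
                     for h in history if h["app"] == app)
    best, _ = counts.most_common(1)[0]
    return dict(best)

def preconfigure(node, config):
    # Stand-in for applying the predicted settings to the node ahead of time.
    node.update(config)

node = {"name": "node042"}
preconfigure(node, predict_config("cfd", HISTORY))
print("launching submitted job on pre-configured node:", node)
```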
-
Publication number: 20230004790
Abstract: A neural network accelerator includes an operator that calculates a first operation result based on a first tiled input feature map and first tiled filter data, a quantizer that generates a quantization result by quantizing the first operation result based on a second bit width extended compared with a first bit width of the first tiled input feature map, a compressor that generates a partial sum by compressing the quantization result, and a decompressor that generates a second operation result by decompressing the partial sum. The operator calculates a third operation result based on a second tiled input feature map, second tiled filter data, and the second operation result, and an output feature map is generated based on the third operation result.
Type: Application
Filed: September 14, 2022
Publication date: January 5, 2023
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Seokhyeong Kang, Yesung Kang, Sunghoon Kim, Yoonho Park
-
Patent number: 11475285
Abstract: A neural network accelerator includes an operator that calculates a first operation result based on a first tiled input feature map and first tiled filter data, a quantizer that generates a quantization result by quantizing the first operation result based on a second bit width extended compared with a first bit width of the first tiled input feature map, a compressor that generates a partial sum by compressing the quantization result, and a decompressor that generates a second operation result by decompressing the partial sum. The operator calculates a third operation result based on a second tiled input feature map, second tiled filter data, and the second operation result, and an output feature map is generated based on the third operation result.
Type: Grant
Filed: January 24, 2020
Date of Patent: October 18, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Seokhyeong Kang, Yesung Kang, Sunghoon Kim, Yoonho Park
-
Patent number: 11334398
Abstract: An application to run on a hardware processor comprising a plurality of cores may be received. Hardware resource utilization data associated with the application may be obtained. A trained neural network is run with the hardware resource utilization data associated with the application, the trained neural network predicting core temperature associated with running the application on a core of the hardware processor. Based on the core temperature predicted by the trained neural network, the plurality of cores may be controlled to run selective tasks associated with the application.
Type: Grant
Filed: August 29, 2018
Date of Patent: May 17, 2022
Assignee: International Business Machines Corporation
Inventors: Eun Kyung Lee, Bilge Acun, Yoonho Park, Paul W. Coteus
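A hedged sketch of the control loop: a placeholder linear rule stands in for the trained neural network, and tasks are placed only on cores whose predicted temperature stays under an assumed thermal cap (all numbers are invented):

```python
# Sketch only: the "model" and thermal numbers are placeholders, not the
# trained network or real telemetry.
def predict_core_temp(utilization, core_base_temp):
    # Placeholder for the trained neural network: temperature rises with the
    # application's hardware resource utilization (0.0 to 1.0).
    return core_base_temp + 40.0 * utilization

def schedule(tasks, cores, app_utilization, temp_cap=85.0):
    plan = {c: [] for c in cores}
    for task in tasks:
        # Prefer the coolest core that is predicted to stay below the cap.
        candidates = [c for c in cores
                      if predict_core_temp(app_utilization, cores[c]) < temp_cap]
        if not candidates:
            break  # throttle: no core can safely take more work
        coolest = min(candidates, key=cores.get)
        plan[coolest].append(task)
        cores[coolest] += 2.0  # crude heat-up from the newly placed task
    return plan

cores = {"core0": 55.0, "core1": 62.0, "core2": 78.0}
print(schedule(["t0", "t1", "t2", "t3"], cores, app_utilization=0.6))
```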
-
Patent number: 11121951
Abstract: A method for managing a network queue memory includes receiving sensor information about the network queue memory, predicting a memory failure in the network queue memory based on the sensor information, and outputting a notification through a plurality of nodes forming a network and using the network queue memory, the notification configuring communications between the nodes.
Type: Grant
Filed: November 19, 2017
Date of Patent: September 14, 2021
Assignee: International Business Machines Corporation
Inventors: Carlos H. Andrade Costa, Chen-Yong Cher, Yoonho Park, Bryan S. Rosenburg, Kyung D. Ryu
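A toy sketch of the idea: sensor readings feed a simple failure predictor for the shared queue memory, and a predicted failure triggers a notification that reconfigures how nodes communicate. The thresholds and message fields are invented:

```python
# Invented thresholds and message shapes; only the prediction/notification
# pattern from the abstract is illustrated.
def predict_failure(sensors):
    # Toy rule standing in for the prediction step: flag the queue memory if
    # it runs hot or its correctable-error rate climbs.
    return sensors["temperature_c"] > 90 or sensors["corrected_errors_per_s"] > 50

def notify(nodes, queue_id):
    # The notification tells every node to stop using the failing queue and
    # fall back to an alternate communication path.
    return [{"node": n, "action": "reroute", "avoid_queue": queue_id} for n in nodes]

sensors = {"temperature_c": 93, "corrected_errors_per_s": 12}
if predict_failure(sensors):
    for msg in notify(["node0", "node1", "node2"], queue_id="q7"):
        print(msg)
```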
-
Publication number: 20210201130
Abstract: Based on historic job data, a computer processor can predict a configuration of a computer node for running a future computer job. The computer processor can pre-configure the computer node based on the predicted configuration. Responsive to receiving a submission of a job, the computer processor can launch the job on the pre-configured computer node.
Type: Application
Filed: December 30, 2019
Publication date: July 1, 2021
Inventors: Eun Kyung Lee, Giacomo Domeniconi, Alessandro Morari, Yoonho Park
-
Patent number: 10956125
Abstract: Methods and systems for shuffling data are described. A processor may generate pair data from source data. The processor may insert the pair data into local tuple spaces. In response to a request for a particular key, the processor may determine a presence of the requested key in a global tuple space. The processor may, in response to a presence of the requested key in the global tuple space, update the global tuple space. The update may be based on the pair data among the local tuple spaces including the existing key. The processor may, in response to an absence of the requested key in the global tuple space, insert pair data including the missing key from the local tuple spaces into the global tuple space. The processor may fetch the requested pair data, and may shuffle the fetched data to generate a dataset.
Type: Grant
Filed: December 21, 2017
Date of Patent: March 23, 2021
Assignee: International Business Machines Corporation
Inventors: Carlos Henrique Andrade Costa, Abdullah Kayi, Yoonho Park, Charles Johns
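A rough sketch of the tuple-space shuffle, with plain Python dicts standing in for the local and global tuple spaces and the distributed coordination abstracted away; the key scheme and data are invented:

```python
# Dicts stand in for tuple spaces; the key scheme and data are invented.
import random
from collections import defaultdict

def map_phase(source, n_locals):
    # Generate (key, value) pair data and spread it across local tuple spaces.
    locals_ = [defaultdict(list) for _ in range(n_locals)]
    for i, record in enumerate(source):
        key = record[0]  # toy key: first character of the record
        locals_[i % n_locals][key].append(record)
    return locals_

def shuffle_request(key, locals_, global_space):
    if key in global_space:
        # Key already present globally: update it with matching local pairs.
        for ls in locals_:
            global_space[key].extend(ls.pop(key, []))
    else:
        # Key missing globally: insert the local pairs for that key.
        global_space[key] = [v for ls in locals_ for v in ls.pop(key, [])]
    fetched = list(global_space[key])
    random.shuffle(fetched)  # shuffle the fetched pair data into a dataset
    return fetched

locals_ = map_phase(["apple", "avocado", "banana", "blueberry"], n_locals=2)
global_space = {}
print(shuffle_request("a", locals_, global_space))
print(shuffle_request("b", locals_, global_space))
```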
-
Patent number: 10891274
Abstract: Methods and systems for shuffling data to generate a dataset are described. A first map module may generate first pair data, and a second map module may generate second pair data, from source data. The first map module may insert the first pair data into a first local tuple space accessible to the first map module. The second map module may insert the second pair data into a second local tuple space accessible to the second map module. A shuffle module may request pair data that includes a particular key. The first and second pair data may be inserted into a global tuple space accessible by the first and second map modules. The shuffle module may identify the requested pair data in the global tuple space, and may fetch the identified pair data from a memory. The shuffle module may shuffle the fetched pair data to generate the dataset.
Type: Grant
Filed: December 21, 2017
Date of Patent: January 12, 2021
Assignee: International Business Machines Corporation
Inventors: Abdullah Kayi, Carlos Henrique Andrade Costa, Yoonho Park, Charles Johns
-
Patent number: 10831252
Abstract: Sub-components assembled into a computer are selected based on sub-component power efficiency levels (for example, low, medium, high) and/or anticipated usage of the computer. Multiple units of each type of sub-component (for example, a CPU) are tested to determine a power efficiency level of each unit. Computers in which sub-component efficiency levels are desired to match an overall computer efficiency level receive sub-component units of the corresponding efficiency level. Computers anticipated to run applications that make intensive use of a given type of sub-component receive units of that type having a higher efficiency level. Computers anticipated to run applications that make little use of a given type of sub-component receive a physical unit having a lower efficiency level. Computers anticipated to run a wide variety of applications of no particular usage intensity for a given type of sub-component receive a unit having an average efficiency level.
Type: Grant
Filed: July 25, 2017
Date of Patent: November 10, 2020
Assignee: International Business Machines Corporation
Inventors: Eun Kyung Lee, Bilge Acun, Yoonho Park
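An illustrative sketch of the selection policy: tested units are binned by measured efficiency, then assigned to builds according to anticipated usage. The bin boundaries, usage categories, and perf-per-watt numbers are assumptions:

```python
# Invented bins and usage categories; only the match-by-efficiency idea is shown.
def bin_units(units):
    # Sort tested units of one sub-component type into low/medium/high bins.
    bins = {"low": [], "medium": [], "high": []}
    for unit in units:
        if unit["perf_per_watt"] >= 9.0:
            bins["high"].append(unit)
        elif unit["perf_per_watt"] >= 7.0:
            bins["medium"].append(unit)
        else:
            bins["low"].append(unit)
    return bins

def pick_unit(bins, anticipated_usage):
    # Intensive users of this sub-component get high-efficiency units, light
    # users get low-efficiency units, and general-purpose builds get medium.
    level = {"intensive": "high", "light": "low", "mixed": "medium"}[anticipated_usage]
    return bins[level].pop() if bins[level] else None

cpus = [{"serial": i, "perf_per_watt": p} for i, p in enumerate([6.8, 7.4, 9.3, 8.1, 9.8])]
bins = bin_units(cpus)
print("HPC build gets:", pick_unit(bins, "intensive"))
print("kiosk build gets:", pick_unit(bins, "light"))
```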
-
Patent number: 10761583
Abstract: An application to run on a computer node comprising a plurality of hardware components is received. Expected performance of the hardware components is received. A power shifting ratio associated with each of the plurality of hardware components for each phase of the application is determined. Power between the hardware components is dynamically shifted based on the power shifting ratio at different phases of the application.
Type: Grant
Filed: September 11, 2018
Date of Patent: September 1, 2020
Assignee: International Business Machines Corporation
Inventors: Eun Kyung Lee, Bilge Acun, Yoonho Park, Alessandro Morari, Alper Buyuktosunoglu
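A minimal sketch of phase-based power shifting; the phase names, ratios, and the fixed node power budget are invented for illustration and are not the patented method of deriving the ratios:

```python
# Invented phases, ratios, and budget; shows only how a per-phase ratio
# would shift a fixed node budget between components.
NODE_BUDGET_W = 300.0

# Per-phase shifting ratios across hardware components (each sums to 1.0).
PHASE_RATIOS = {
    "io_phase":      {"cpu": 0.30, "gpu": 0.20, "memory": 0.50},
    "compute_phase": {"cpu": 0.25, "gpu": 0.60, "memory": 0.15},
}

def apply_power_caps(phase):
    ratios = PHASE_RATIOS[phase]
    assert abs(sum(ratios.values()) - 1.0) < 1e-9
    # Shift the node budget between components according to the phase ratio.
    return {component: NODE_BUDGET_W * r for component, r in ratios.items()}

for phase in ("io_phase", "compute_phase"):
    print(phase, apply_power_caps(phase))
```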
-
Publication number: 20200242456
Abstract: A neural network accelerator includes an operator that calculates a first operation result based on a first tiled input feature map and first tiled filter data, a quantizer that generates a quantization result by quantizing the first operation result based on a second bit width extended compared with a first bit width of the first tiled input feature map, a compressor that generates a partial sum by compressing the quantization result, and a decompressor that generates a second operation result by decompressing the partial sum. The operator calculates a third operation result based on a second tiled input feature map, second tiled filter data, and the second operation result, and an output feature map is generated based on the third operation result.
Type: Application
Filed: January 24, 2020
Publication date: July 30, 2020
Applicant: POSTECH Research and Business Development Foundation
Inventors: Seokhyeong Kang, Yesung Kang, Sunghoon Kim, Yoonho Park
-
Patent number: 10725834
Abstract: Aspects of the present invention disclose a method, computer program product, and system for scheduling an application. The method includes one or more processors receiving a task that includes instructions indicating desired nodes to perform the task through programs. The method further includes one or more processors identifying application characteristic information and node characteristic information associated with nodes within a data center composed of nodes. The application characteristic information includes resource utilization information for applications on nodes within the data center. The method further includes one or more processors determining that the nodes reach a threshold level of power consumption. The threshold level is a pre-set maximum amount of power utilized by a node within the data center. The method further includes one or more processors determining a node consuming an amount of power that is below a threshold level of power consumption in the data center.
Type: Grant
Filed: November 30, 2017
Date of Patent: July 28, 2020
Assignee: International Business Machines Corporation
Inventors: Eun Kyung Lee, Bilge Acun, Yoonho Park
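A small sketch of the node-selection step; the power threshold, node fields, and resource-fit check are placeholders for the patent's application and node characteristic information:

```python
# Placeholder threshold, node fields, and fit check; only the
# "pick a node below the power threshold" step is illustrated.
POWER_CAP_W = 400.0  # pre-set maximum power per node in the data center

NODES = [
    {"name": "n1", "power_w": 410.0, "free_cores": 12},
    {"name": "n2", "power_w": 250.0, "free_cores": 4},
    {"name": "n3", "power_w": 180.0, "free_cores": 16},
]

def select_node(task, nodes):
    # Keep only nodes below the power threshold that still fit the task,
    # then prefer the one currently drawing the least power.
    eligible = [n for n in nodes
                if n["power_w"] < POWER_CAP_W and n["free_cores"] >= task["cores"]]
    return min(eligible, key=lambda n: n["power_w"]) if eligible else None

print(select_node({"app": "genomics", "cores": 8}, NODES))
```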
-
Patent number: 10713257
Abstract: A data-centric reduction method, system, and computer program product include configuring a similarity threshold and a correlation threshold for an entire data set from at least two back-end nodes, reducing the entire data set to a reduced data set from the at least two back-end nodes sent to a front-end node by removing data based on the similarity threshold and the correlation threshold, and after the front-end receives the reduced data set, reconstructing the entire data set from the reduced data set using the similarity threshold and correlation threshold.
Type: Grant
Filed: September 29, 2017
Date of Patent: July 14, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Eun Kyung Lee, I-Hsin Chung, Yoonho Park
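A toy sketch of the similarity-threshold part of the reduction (correlation-threshold handling across back-end nodes is omitted); the threshold and data values are invented:

```python
# A back-end node drops samples within the similarity threshold of the last
# value it sent; the front-end node carries kept values forward to
# reconstruct the series. Values and threshold are invented.
SIMILARITY = 0.5

def reduce_series(series, threshold):
    kept = [(0, series[0])]                  # always keep the first sample
    for i, value in enumerate(series[1:], start=1):
        if abs(value - kept[-1][1]) > threshold:
            kept.append((i, value))          # only send samples that changed enough
    return kept, len(series)

def reconstruct(kept, length):
    kept = list(kept)                        # work on a copy
    out, last = [], None
    for i in range(length):
        if kept and kept[0][0] == i:
            last = kept.pop(0)[1]
        out.append(last)                     # approximation error <= threshold
    return out

series = [10.0, 10.2, 10.3, 12.0, 12.1, 15.5]
kept, n = reduce_series(series, SIMILARITY)
print("sent to front-end:", kept)
print("reconstructed:", reconstruct(kept, n))
```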
-
Publication number: 20200081513
Abstract: An application to run on a computer node comprising a plurality of hardware components is received. Expected performance of the hardware components is received. A power shifting ratio associated with each of the plurality of hardware components for each phase of the application is determined. Power between the hardware components is dynamically shifted based on the power shifting ratio at different phases of the application.
Type: Application
Filed: September 11, 2018
Publication date: March 12, 2020
Inventors: Eun Kyung Lee, Bilge Acun, Yoonho Park, Alessandro Morari, Alper Buyuktosunoglu
-
Publication number: 20200073726
Abstract: An application to run on a hardware processor comprising a plurality of cores may be received. Hardware resource utilization data associated with the application may be obtained. A trained neural network is run with the hardware resource utilization data associated with the application, the trained neural network predicting core temperature associated with running the application on a core of the hardware processor. Based on the core temperature predicted by the trained neural network, the plurality of cores may be controlled to run selective tasks associated with the application.
Type: Application
Filed: August 29, 2018
Publication date: March 5, 2020
Inventors: Eun Kyung Lee, Bilge Acun, Yoonho Park, Paul W. Coteus