Patents by Inventor Jun Woo Jang

Jun Woo Jang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11971823
    Abstract: A computing method and device with data sharing are provided. The method includes loading, by a loader, input data of an input feature map stored in a memory in loading units according to a loading order, storing, by a buffer controller, the loaded input data in a reuse buffer at an address rotationally allocated according to the loading order, and transmitting, by each of a plurality of senders, to an executer respective input data corresponding to each output data of respective convolution operations among the input data stored in the reuse buffer, wherein portions of the transmitted respective input data overlap each other.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: April 30, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yoojin Kim, Channoh Kim, Hyun Sun Park, Sehwan Lee, Jun-Woo Jang
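A minimal Python sketch of the data-sharing idea in the abstract of patent 11971823 above: rows of an input feature map are loaded once into a small reuse buffer whose slots are assigned rotationally (modulo the buffer size) in loading order, and overlapping windows are then read from that buffer rather than re-fetched from memory. The names (ReuseBuffer, convolve_rows), sizes, and the stand-in "convolution" are illustrative assumptions, not the patented design.

```python
# Illustrative sketch only; names and sizes are assumptions, not the patented design.

class ReuseBuffer:
    """Small buffer whose slots are allocated rotationally in loading order."""
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size
        self.next_addr = 0                       # rotationally allocated address

    def store(self, row):
        addr = self.next_addr
        self.slots[addr] = row
        self.next_addr = (self.next_addr + 1) % self.size   # rotate the address
        return addr

def convolve_rows(feature_map, kernel_height, num_senders=3):
    """Each 'sender' forwards a window of buffered rows to the 'executer';
    consecutive windows overlap, so each row is loaded from memory only once."""
    buf = ReuseBuffer(size=kernel_height + num_senders - 1)
    outputs = []
    for i, row in enumerate(feature_map):        # loader: one load per row
        buf.store(row)
        if i + 1 >= kernel_height:               # enough rows buffered for one output
            window = [buf.slots[(i - k) % buf.size] for k in range(kernel_height)]
            outputs.append(sum(sum(r) for r in window))   # stand-in for a real convolution
    return outputs

print(convolve_rows([[1, 2], [3, 4], [5, 6], [7, 8]], kernel_height=3))
```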
  • Patent number: 11954574
    Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: April 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ilia Ovsiannikov, Ali Shafiee Ardestani, Joseph H. Hassoun, Lei Wang, Sehwan Lee, JoonHo Song, Jun-Woo Jang, Yibing Michelle Wang, Yuecheng Li
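A rough Python model, under my own assumptions, of the two-state behavior the neural-processor abstract above describes: in one state the multiplier consumes the activation in the queue's output register; in the other it reaches past it to the adjacent (second) register, for example to keep the multiplier busy when the head activation is zero. This is an interpretive software sketch, not the actual tile microarchitecture.

```python
from collections import deque

# Interpretive sketch; the real tile is hardware, this only mimics the two states.

def tile_multiplies(weight, activations):
    """Multiply `weight` by activations drawn from a queue.
    State 1: use the output register (queue head).
    State 2: if the head is zero, use the adjacent second register instead."""
    queue = deque(activations)
    products = []
    while queue:
        head = queue[0]
        if head != 0 or len(queue) == 1:         # state 1: take the output register
            products.append(weight * queue.popleft())
        else:                                    # state 2: skip the zero, take the second register
            products.append(weight * queue[1])
            del queue[1]                         # second register consumed
            queue.popleft()                      # zero head discarded
    return products

print(tile_multiplies(3, [2, 0, 5, 7]))          # -> [6, 15, 21]
```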
  • Patent number: 11942635
    Abstract: The present invention relates to a positive electrode active material and a lithium secondary battery using a positive electrode containing the positive electrode active material. More particularly, the present invention relates to a positive electrode active material that is able to solve a problem of increased resistance caused by an increase in Ni content by forming a charge transport channel in a lithium composite oxide, and a lithium secondary battery using a positive electrode containing the positive electrode active material.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: March 26, 2024
    Assignee: ECOPRO BM CO., LTD.
    Inventors: Moon Ho Choi, Jun Won Suh, Jin Kyeong Yun, Jung Han Lee, Mi Hye Yun, Seung Woo Choi, Gwang Seok Choe, Ye Ri Jang, Joong Ho Bae
  • Patent number: 11939338
    Abstract: The present disclosure relates to a spiropyran composite having improved mechano-sensitivity, a method for manufacturing the same, and a chromic article including the same. Particularly, the spiropyran composite is obtained by bonding spiropyran covalently to a polymer, an inorganic material, or a mixture thereof to form a spiropyran composite, and impregnating the spiropyran composite with a sensitivity-enhancing agent for a suitable time through a wet infiltration process to form a non-polar environment at the inner part of the spiropyran composite, to cause pre-stretch, and to increase the change in color or fluorescence in response to force, stress, or strain, thereby providing significantly improved mechano-sensitivity. In addition, the wet infiltration process requires no expensive equipment, which simplifies the process. Further, the process can be performed rapidly, within several minutes, to reduce the processing time.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: March 26, 2024
    Assignee: Korea Institute of Science and Technology
    Inventors: Jaewoo Kim, Jun Young Jo, Yong Chae Jung, Yong Seok Choi, Han Gyeol Jang, Sungmin Jung, Dong Woo Kim
  • Publication number: 20240013029
    Abstract: A method and apparatus for multi-task processing are disclosed. The method includes obtaining a base output map corresponding to a first layer, restoring an input map corresponding to a second layer, obtaining an output map corresponding to the second layer, obtaining a delta output map corresponding to the second layer, and storing the base output map and the delta output map.
    Type: Application
    Filed: June 19, 2023
    Publication date: January 11, 2024
    Inventors: JUN-WOO JANG, Jaekang Shin, Lee-Sup Kim
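A small, assumption-laden sketch of the storage idea in publication 20240013029 above: for a second task's layer, keep only a delta relative to a base output map already computed for the first task, and restore the task-specific map by adding the two. The function names and the NumPy representation are hypothetical.

```python
import numpy as np

# Hypothetical illustration of storing a base output map plus a per-task delta map.

def store_as_delta(base_output_map, task_output_map):
    """Keep the base map once and only the (typically sparse) per-task difference."""
    delta_output_map = task_output_map - base_output_map
    return base_output_map, delta_output_map

def restore(base_output_map, delta_output_map):
    """Recover the task-specific output map from base + delta."""
    return base_output_map + delta_output_map

base = np.array([[1.0, 2.0], [3.0, 4.0]])
task = np.array([[1.0, 2.5], [3.0, 4.0]])        # differs from the base in one element
b, d = store_as_delta(base, task)
assert np.allclose(restore(b, d), task)
print(d)                                          # mostly zeros, so it is cheap to store
```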
  • Publication number: 20230351151
    Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
    Type: Application
    Filed: July 10, 2023
    Publication date: November 2, 2023
    Inventors: Ilia Ovsiannikov, Ali Shafiee Ardestani, Joseph H. Hassoun, Lei Wang, Sehwan Lee, JoonHo Song, Jun-Woo Jang, Yibing Michelle Wang, Yuecheng Li
  • Patent number: 11797461
    Abstract: A data transmission method for a convolution operation, and a convolution operation apparatus including a fetcher that includes a loader, at least one sender, a buffer controller, and a reuse buffer. The method includes loading, by the loader, input data of an input feature map according to a loading order, based on input data stored in the reuse buffer, a shape of a kernel to be used for a convolution operation, and two-dimensional (2D) zero-value information of weights of the kernel; storing, by the buffer controller, the loaded input data in the reuse buffer at an address cyclically assigned according to the loading order; and selecting, by each of the at least one sender, input data corresponding to each output data of a convolution operation among the input data stored in the reuse buffer, based on one-dimensional (1D) zero-value information of the weights, and outputting the selected input data.
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: October 24, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyunsun Park, Jun-Woo Jang, Yoojin Kim, Channoh Kim
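A hedged Python sketch of how 2D and 1D zero-value information about kernel weights might gate loading and sending, as the abstract of patent 11797461 describes at a high level: kernel rows that are entirely zero need no loads at all, and individual zero weights need no sends. The function and variable names are mine, and the patented fetcher is hardware; this only illustrates the gating idea.

```python
import numpy as np

# Hedged sketch of zero-value gating; not the patented fetcher itself.

def fetch_and_send(input_tile, kernel):
    """Skip loads for all-zero kernel rows (2D zero info) and skip sends for
    individual zero weights (1D zero info) while computing one output value."""
    row_nonzero = np.any(kernel != 0, axis=1)     # 2D zero-value information
    acc = 0.0
    loads = sends = 0
    for r in range(kernel.shape[0]):
        if not row_nonzero[r]:
            continue                              # loader skips this row entirely
        row = input_tile[r]                       # one load per needed row
        loads += 1
        for c in range(kernel.shape[1]):
            if kernel[r, c] == 0:
                continue                          # sender skips zero weights (1D zero info)
            acc += row[c] * kernel[r, c]
            sends += 1
    return acc, loads, sends

tile = np.arange(9, dtype=float).reshape(3, 3)
k = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 2.0], [0.0, 3.0, 0.0]])
print(fetch_and_send(tile, k))                    # fewer loads/sends than the dense 3x3 case
```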
  • Publication number: 20230325462
    Abstract: A processor-implemented apparatus includes a forward transform module configured to transform input feature maps (IFMs) by performing a forward transform operation in a Winograd convolution (WinConv) domain, multiply and accumulate array (MAA) units configured to multiply the transformed IFMs by transformed kernels and perform a first inverse transform operation based on results of the multiplying, and an inverse transform module configured to generate output feature maps (OFMs) based on a result of the first inverse transform operation.
    Type: Application
    Filed: April 5, 2023
    Publication date: October 12, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Gopinath Vasanth MAHALE, Pramod Parameshwara UDUPA, Jun-Woo JANG, Kiran Kolar CHANDRASEKHARAN, Sehwan LEE
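A compact numerical sketch of the Winograd (WinConv) flow named in publication 20230325462 above, using the standard F(2,3) one-dimensional transform matrices: the input and kernel are forward-transformed, multiplied elementwise, then inverse-transformed to yield two convolution outputs. This is textbook Winograd shown only to illustrate the forward-transform / multiply-accumulate / inverse-transform structure; it is not the patented module design.

```python
import numpy as np

# Standard Winograd F(2,3) matrices (Lavin & Gray); shown to illustrate the
# forward-transform -> elementwise multiply -> inverse-transform structure.
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of a 1D convolution of a 4-element input d with a 3-tap kernel g."""
    U = G @ g            # transformed kernel
    V = B_T @ d          # forward-transformed input (IFM)
    M = U * V            # elementwise multiply (the MAA stage)
    return A_T @ M       # inverse transform -> output feature map values

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 0.0, -1.0])
print(winograd_f23(d, g))                 # Winograd result
print([d[0] - d[2], d[1] - d[3]])         # direct sliding-window result for this kernel
```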
  • Patent number: 11783161
    Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: October 10, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ilia Ovsiannikov, Ali Shafiee Ardestani, Joseph H. Hassoun, Lei Wang, Sehwan Lee, JoonHo Song, Jun-Woo Jang, Yibing Michelle Wang, Yuecheng Li
  • Patent number: 11783162
    Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: October 10, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ilia Ovsiannikov, Ali Shafiee Ardestani, Joseph H. Hassoun, Lei Wang, Sehwan Lee, JoonHo Song, Jun-Woo Jang, Yibing Michelle Wang, Yuecheng Li
  • Patent number: 11775801
    Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: October 3, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ilia Ovsiannikov, Ali Shafiee Ardestani, Joseph H. Hassoun, Lei Wang, Sehwan Lee, JoonHo Song, Jun-Woo Jang, Yibing Michelle Wang, Yuecheng Li
  • Patent number: 11775802
    Abstract: A neural processor. In some embodiments, the processor includes a first tile, a second tile, a memory, and a bus. The bus may be connected to the memory, the first tile, and the second tile. The first tile may include: a first weight register, a second weight register, an activations buffer, a first multiplier, and a second multiplier. The activations buffer may be configured to include: a first queue connected to the first multiplier and a second queue connected to the second multiplier. The first queue may include a first register and a second register adjacent to the first register, the first register being an output register of the first queue. The first tile may be configured: in a first state: to multiply, in the first multiplier, a first weight by an activation from the output register of the first queue, and in a second state: to multiply, in the first multiplier, the first weight by an activation from the second register of the first queue.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: October 3, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ilia Ovsiannikov, Ali Shafiee Ardestani, Joseph H. Hassoun, Lei Wang, Sehwan Lee, JoonHo Song, Jun-Woo Jang, Yibing Michelle Wang, Yuecheng Li
  • Patent number: 11736803
    Abstract: Disclosed herein is a full-screen display device capable of sufficiently securing light transmittance of a sensor area overlapping a sensor unit in a pixel array and minimizing deterioration in perceived image quality of the sensor area. The pixels are arranged in the sensor area overlapping the sensor unit in the pixel array of the full-screen display device such that the number of pixels gradually decreases from the outer periphery toward the center of the sensor area in units of masks, and the area of a transmission portion gradually increases from the outer periphery toward the center of the sensor area in units of masks.
    Type: Grant
    Filed: June 22, 2022
    Date of Patent: August 22, 2023
    Assignee: LG Display Co., Ltd.
    Inventors: Young-Tae Kim, Jun-Woo Jang, Tae-Yong Park, Woong-Jin Seo
  • Publication number: 20230153571
    Abstract: A quantization method of a neural network, and an apparatus for performing the quantization method, are provided. The quantization method includes obtaining parameters of the neural network, quantizing the parameters using a quantization scheme in which at least one positive quantization level and at least one negative quantization level are symmetric to each other, with zero excluded from the quantization levels, and outputting the quantized parameters.
    Type: Application
    Filed: August 12, 2022
    Publication date: May 18, 2023
    Applicants: Samsung Electronics Co., Ltd., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
    Inventors: Jun-Woo JANG, Jaewoo PARK, Faaiz ASIM, Jongeun LEE
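A rough sketch, under my own assumptions, of a quantizer whose levels are symmetric positive/negative values with zero excluded, which is the scheme the abstract of publication 20230153571 describes at a high level. Placing the levels at odd multiples of half the step size is my choice for illustration, not necessarily the patent's.

```python
import numpy as np

# Illustrative only: levels are +/- odd multiples of delta/2, so zero is never representable.
def quantize_no_zero(params, num_levels_per_sign=4):
    delta = 2 * np.max(np.abs(params)) / (2 * num_levels_per_sign)
    # levels: +/- delta/2, +/- 3*delta/2, ... (symmetric about zero, zero excluded)
    magnitudes = (np.floor(np.abs(params) / delta) + 0.5) * delta
    magnitudes = np.minimum(magnitudes, (num_levels_per_sign - 0.5) * delta)
    signs = np.where(params >= 0, 1.0, -1.0)      # zero maps to the smallest positive level
    return signs * magnitudes

w = np.array([-0.9, -0.1, 0.0, 0.05, 0.4, 0.9])
print(quantize_no_zero(w))                        # no output value is exactly zero
```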
  • Publication number: 20230130779
    Abstract: A method with neural network compression includes: generating a second neural network by fine-tuning a first neural network, which is pre-trained based on training data, for a predetermined purpose; determining delta weights corresponding to differences between weights of the first neural network and weights of the second neural network; compressing the delta weights; retraining the second neural network updated based on the compressed delta weights and the weights of the first neural network; and encoding and storing the delta weights updated by the retraining of the second neural network.
    Type: Application
    Filed: August 22, 2022
    Publication date: April 27, 2023
    Applicants: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Jun-Woo JANG, Jaekang SHIN, Lee-Sup KIM, Seungkyu CHOI
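A hedged Python outline of the flow in the abstract of publication 20230130779: take the difference between the pre-trained and fine-tuned weights, compress that difference (simple magnitude pruning stands in here for whatever compression the method actually uses), rebuild the fine-tuned weights from base plus compressed delta, and continue training from there. The keep ratio and all names are assumptions.

```python
import numpy as np

# High-level outline only; magnitude pruning is a stand-in compression scheme.
def compress_delta(base_weights, finetuned_weights, keep_ratio=0.3):
    delta = finetuned_weights - base_weights
    k = max(1, int(keep_ratio * delta.size))
    threshold = np.sort(np.abs(delta).ravel())[-k]        # keep the k largest deltas
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

def rebuild(base_weights, compressed_delta):
    """Weights used for retraining: base weights plus the compressed delta."""
    return base_weights + compressed_delta

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
finetuned = base + rng.normal(scale=0.05, size=(4, 4))    # pretend fine-tuning result
d = compress_delta(base, finetuned)
print("nonzero deltas kept:", int(np.count_nonzero(d)), "of", d.size)
print("max rebuild error:", float(np.max(np.abs(rebuild(base, d) - finetuned))))
```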
  • Publication number: 20230131543
    Abstract: A processor-implemented method with multi-task processing includes: obtaining weights of a first neural network; obtaining first delta weights of a second neural network that is fine-tuned from the first neural network, based on a target task; performing an operation of the second neural network on first input data, based on sums of the weights of the first neural network and the first delta weights; obtaining second delta weights of a third neural network that is fine-tuned from the first neural network, based on a change of the target task; replacing the first delta weights with the second delta weights; and performing an operation of the third neural network on second input data, based on sums of the weights of the first neural network and the second delta weights, wherein the first delta weights comprise difference values between the weights of the first neural network and weights of the second neural network, and the second delta weights comprise difference values between the weights of the first neural network and weights of the third neural network.
    Type: Application
    Filed: September 6, 2022
    Publication date: April 27, 2023
    Applicants: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Jun-Woo JANG, Jaekang SHIN, Lee-Sup KIM, Seungkyu CHOI
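A brief sketch of the task-switching idea in publication 20230131543 above, with assumed names and a single weight matrix for simplicity: the base weights stay resident, and changing the target task only replaces the small delta-weight set before summing base and delta for inference.

```python
import numpy as np

# Assumed, simplified model: one base weight matrix and one delta set per task.
class MultiTaskModel:
    def __init__(self, base_weights):
        self.base_weights = base_weights
        self.delta_weights = np.zeros_like(base_weights)   # no task loaded yet

    def switch_task(self, new_delta_weights):
        """Changing the target task only replaces the delta weights."""
        self.delta_weights = new_delta_weights

    def run(self, x):
        effective = self.base_weights + self.delta_weights  # sum of base and delta weights
        return x @ effective

model = MultiTaskModel(np.eye(2))
model.switch_task(np.array([[0.1, 0.0], [0.0, -0.1]]))     # second network's deltas
print(model.run(np.array([1.0, 1.0])))
model.switch_task(np.array([[0.0, 0.2], [0.2, 0.0]]))      # third network's deltas
print(model.run(np.array([1.0, 1.0])))
```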
  • Publication number: 20220383103
    Abstract: A processor-implemented hardware accelerator method includes: receiving input data; loading a lookup table (LUT); determining an address of the LUT by inputting the input data to a comparator; obtaining a value of the LUT corresponding to the input data based on the address; and determining a value of a nonlinear function corresponding to the input data based on the value of the LUT, wherein the LUT is determined based on a weight of a neural network that outputs the value of the nonlinear function.
    Type: Application
    Filed: October 12, 2021
    Publication date: December 1, 2022
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Junki PARK, Joonsang YU, Jun-Woo JANG
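A simple sketch of the LUT mechanism outlined in publication 20220383103 above: comparisons against breakpoints locate the segment (address) an input falls into, and the stored LUT value for that segment approximates the nonlinear function. Here the LUT is filled by sampling sigmoid directly, whereas the abstract derives it from a small neural network's weights; the breakpoints, entry count, and choice of sigmoid are my assumptions.

```python
import numpy as np

# Illustration under my own assumptions: a piecewise-constant LUT for sigmoid.
def build_lut(fn, lo=-6.0, hi=6.0, entries=64):
    breakpoints = np.linspace(lo, hi, entries + 1)        # comparator thresholds
    centers = 0.5 * (breakpoints[:-1] + breakpoints[1:])
    return breakpoints, fn(centers)                       # stored LUT values

def lut_eval(x, breakpoints, lut):
    address = np.searchsorted(breakpoints, x) - 1         # comparator -> LUT address
    address = np.clip(address, 0, len(lut) - 1)
    return lut[address]                                   # approximate nonlinear-function value

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
bp, lut = build_lut(sigmoid)
xs = np.array([-3.0, -0.2, 0.0, 1.5, 4.0])
print(lut_eval(xs, bp, lut))
print(sigmoid(xs))                                        # reference values for comparison
```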
  • Publication number: 20220342833
    Abstract: A data transmission method for a convolution operation, and a convolution operation apparatus including a fetcher that includes a loader, at least one sender, a buffer controller, and a reuse buffer. The method includes loading, by the loader, input data of an input feature map according to a loading order, based on input data stored in the reuse buffer, a shape of a kernel to be used for a convolution operation, and two-dimensional (2D) zero-value information of weights of the kernel; storing, by the buffer controller, the loaded input data in the reuse buffer at an address cyclically assigned according to the loading order; and selecting, by each of the at least one sender, input data corresponding to each output data of a convolution operation among the input data stored in the reuse buffer, based on one-dimensional (1D) zero-value information of the weights, and outputting the selected input data.
    Type: Application
    Filed: July 6, 2022
    Publication date: October 27, 2022
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyunsun PARK, Jun-Woo JANG, Yoojin KIM, Channoh KIM
  • Publication number: 20220337748
    Abstract: Disclosed herein is a full-screen display device capable of sufficiently securing light transmittance of a sensor area overlapping a sensor unit in a pixel array and minimizing deterioration in perceived image quality of the sensor area. The pixels are arranged in the sensor area overlapping the sensor unit in the pixel array of the full-screen display device such that the number of pixels gradually decreases from the outer periphery toward the center of the sensor area in units of masks, and the area of a transmission portion gradually increases from the outer periphery toward the center of the sensor area in units of masks.
    Type: Application
    Filed: June 22, 2022
    Publication date: October 20, 2022
    Inventors: Young-Tae KIM, Jun-Woo JANG, Tae-Yong PARK, Woong-Jin SEO
  • Publication number: 20220284274
    Abstract: A neural processing device includes a first memory configured to store universal data, a second memory distinguished from the first memory and having a capacity less than that of the first memory, a bandwidth control path configured to reconfigure a memory bandwidth for memory clients to use one of the first memory and the second memory based on a control signal, and a control logic configured to calculate a target capacity for data of a target client of the memory clients determined based on a layer configuration of an artificial neural network, and generate the control signal to store the data of the target client in the second memory based on a result of comparing the target capacity and the capacity of the second memory.
    Type: Application
    Filed: July 15, 2021
    Publication date: September 8, 2022
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Jun-Woo JANG, Jinook SONG, Sehwan LEE
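A hedged sketch of the control decision in publication 20220284274 above, with assumed names and capacities: the control logic estimates the target client's required capacity from the layer configuration and routes its data to the smaller second memory only if it fits; otherwise the client keeps using the larger first memory. The footprint formula and KiB figures are illustrative, not taken from the patent.

```python
# Assumed, simplified control logic; capacities in KiB are illustrative.
FIRST_MEMORY_KIB = 4096      # larger, shared first memory
SECOND_MEMORY_KIB = 256      # smaller second memory

def target_capacity_kib(layer):
    """Rough feature-map footprint for one layer of the network (an assumption)."""
    return layer["h"] * layer["w"] * layer["c"] * layer["bytes"] / 1024

def route_target_client(layer):
    """Generate the 'control signal': use the second memory only if the data fits."""
    needed = target_capacity_kib(layer)
    use_second = needed <= SECOND_MEMORY_KIB
    return {"needed_kib": needed, "memory": "second" if use_second else "first"}

print(route_target_client({"h": 56, "w": 56, "c": 64, "bytes": 1}))     # fits -> second memory
print(route_target_client({"h": 112, "w": 112, "c": 128, "bytes": 2}))  # too big -> first memory
```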