Patents by Inventor Jinin SO
Jinin SO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250252074
Abstract: A memory module includes a substrate, an allocator, memory cluster packages and serial communication lanes. The allocator is mounted on the substrate, and communicates with an external device through a CXL interface. The memory cluster packages are mounted on the substrate, and are controlled by the allocator. Each of the memory cluster packages includes memories and a memory controller. The serial communication lanes are disposed on the substrate for communications between the allocator and the memory cluster packages. A first memory cluster package communicates with the allocator through a first serial communication lane and the serial interface. A second memory cluster package communicates with the first memory cluster package through a second serial communication lane and the serial interface.
Type: Application
Filed: January 10, 2025
Publication date: August 7, 2025
Applicant: SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors: Yongsuk Kwon, Jungho Ahn, Sungmin Yun, Kyungsoo Kim, Jinin So, Jonggeon Lee
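The daisy-chained topology in this abstract (the allocator reaches the first cluster package directly, and the second package only through the first) can be sketched as follows. This is an illustrative assumption about the routing, not the patented implementation; all class and attribute names are hypothetical.

```python
# Hypothetical sketch of the daisy-chained serial-lane topology in
# publication 20250252074: the allocator has one direct serial link to the
# first memory cluster package, and farther packages are reached by
# hopping down the chain. The hop model is an illustrative assumption.

class MemoryClusterPackage:
    def __init__(self, name, downstream=None):
        self.name = name
        self.downstream = downstream  # next package on the serial chain

class Allocator:
    def __init__(self, first_package):
        self.first = first_package  # only direct serial link

    def route(self, target_name):
        """Return the chain of serial hops needed to reach a package."""
        hops, pkg = [], self.first
        while pkg is not None:
            hops.append(pkg.name)
            if pkg.name == target_name:
                return hops
            pkg = pkg.downstream
        raise KeyError(target_name)

pkg2 = MemoryClusterPackage("cluster2")
pkg1 = MemoryClusterPackage("cluster1", downstream=pkg2)
alloc = Allocator(pkg1)
```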
-
Patent number: 12360900
Abstract: A processor includes a processing core configured to process each of a plurality of requests by accessing a corresponding one of a first memory and a second memory, a latency monitor configured to generate first latency information and second latency information, the first latency information comprising a first access latency to the first memory, and the second latency information comprising a second access latency to the second memory, a plurality of cache ways divided into a first partition and a second partition, and a decision engine configured to allocate each of the plurality of cache ways to one of the first partition and the second partition, based on the first latency information and the second latency information.
Type: Grant
Filed: February 28, 2024
Date of Patent: July 15, 2025
Assignees: SAMSUNG ELECTRONICS CO., LTD., Daegu Gyeongbuk Institute Of Science And Technology
Inventors: Jin Jung, Daehoon Kim, Hwanjun Lee, Jonggeon Lee, Jinin So
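One plausible policy for the decision engine described above is to split the cache ways between the two partitions in proportion to each memory's observed access latency, so the slower memory is given more ways. This is a sketch under that assumption; the patent does not disclose the exact allocation rule here.

```python
# Illustrative sketch (not the patented implementation) of the decision
# engine in patent 12360900: cache ways are divided between two partitions
# in proportion to the monitored access latency of each backing memory.

def partition_ways(num_ways, latency1, latency2):
    """Allocate cache ways to (partition1, partition2) proportionally
    to memory latency; each partition keeps at least one way."""
    total = latency1 + latency2
    ways1 = round(num_ways * latency1 / total)
    ways1 = max(1, min(num_ways - 1, ways1))  # keep both partitions non-empty
    return ways1, num_ways - ways1
```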
-
Publication number: 20250165153
Abstract: A memory includes: a request register configured to receive a first signal including a requester identifier using a first protocol from a host and configured to output a first priority corresponding to the requester identifier; a checker module configured to receive a second signal including a command and a request type from the host and using a second protocol that is different than the first protocol, where the checker module is configured to receive the first priority from the request register, and where the checker module is configured to determine a second priority of the command based on the first priority and the request type; a command generator configured to generate an internal command for memory operation based on the command; and a memory controller configured to schedule the internal command in a command queue based on the second priority.
Type: Application
Filed: January 17, 2025
Publication date: May 22, 2025
Inventors: Nayeon Kim, Kyungsoo Kim, Yongsuk Kwon, Jinin So, Kyoungwan Woo
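The two-step priority scheme above (a per-requester base priority combined with the request type to order a command queue) can be sketched roughly as below. The numeric weighting, requester table, and type bonuses are all assumptions for illustration; the abstract does not specify how the two inputs are combined.

```python
# Hedged sketch of publication 20250165153's scheduling idea: a first
# priority keyed by requester identifier is combined with the request
# type to produce a second priority that orders the command queue.
import heapq

REQUESTER_PRIORITY = {"hostA": 0, "hostB": 1}   # lower = more urgent (assumed)
TYPE_BONUS = {"read": 0, "write": 1}            # assumed type weighting

class CommandQueue:
    def __init__(self):
        self._heap, self._seq = [], 0

    def schedule(self, requester, req_type, command):
        first = REQUESTER_PRIORITY[requester]           # from request register
        second = first * 2 + TYPE_BONUS[req_type]       # combined second priority
        heapq.heappush(self._heap, (second, self._seq, command))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = CommandQueue()
q.schedule("hostB", "write", "WR B")
q.schedule("hostA", "read", "RD A")
```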
-
Publication number: 20250156078
Abstract: An accelerator module includes a plurality of memories and a controller. The controller includes a plurality of memory controllers, a plurality of processing units, and a managing circuit. The plurality of memory controllers and the plurality of memories form a plurality of memory sub-channels. The plurality of processing units perform computational operations on a plurality of data stored in or read from the plurality of memories. The managing circuit redistributes tasks performed by the plurality of processing units or changes connections between the plurality of memory controllers and the plurality of processing units in response to a first memory sub-channel and a first processing unit being in a heavy-workload state.
Type: Application
Filed: January 16, 2025
Publication date: May 15, 2025
Inventors: Kyoungwan Woo, Kyungsoo Kim, Yongsuk Kwon, Nayeon Kim, Jinin So
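The managing circuit's task redistribution might look like the following sketch, where tasks migrate from a heavy-workload unit to the lightest-loaded one. The threshold and the queued-task load metric are assumptions; the abstract only states that redistribution is triggered by a heavy-workload state.

```python
# Illustrative sketch of the managing circuit in publication 20250156078:
# when a processing unit is in a heavy-workload state, pending tasks are
# shifted toward the lightest-loaded unit. Threshold and metric assumed.

HEAVY_THRESHOLD = 8  # queued tasks that count as "heavy-workload" (assumed)

def rebalance(loads):
    """loads: dict of unit -> queued task count. Move tasks from the
    heaviest unit to the lightest until no unit is heavy, if possible."""
    loads = dict(loads)
    while True:
        heavy = max(loads, key=loads.get)
        light = min(loads, key=loads.get)
        if loads[heavy] < HEAVY_THRESHOLD or loads[heavy] - loads[light] < 2:
            return loads
        loads[heavy] -= 1
        loads[light] += 1
```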
-
Patent number: 12272423
Abstract: A method of operating a Near Memory Processing-Dual In-line Memory (NMP-DIMM) system, the method including: determining, by an adaptive latency module of the NMP-DIMM system, a synchronized read latency value for performing a read operation upon receiving a Multi-Purpose Register (MPR) read instruction from a host device communicatively connected with the NMP-DIMM system, wherein the MPR read instruction is received from the host device for training the NMP-DIMM system, wherein the synchronized read latency value is determined based on one or more read latency values associated with one or more memory units of the NMP-DIMM system; and synchronizing, by the adaptive latency module, one or more first type data paths and a second type data path in the NMP-DIMM system based on the determined synchronized read latency value.
Type: Grant
Filed: October 27, 2022
Date of Patent: April 8, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sachin Suresh Upadhya, Eldho Pathiyakkara Thombra Mathew, Mayuresh Jyotindra Salelkar, Jinin So, Jonggeon Lee, Kyungsoo Kim
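A natural reading of "synchronized read latency value determined based on one or more read latency values" is a worst-case choice: pick the maximum per-unit latency so every data path can meet it, and pad the faster paths. The max-based rule below is an assumption for illustration, not a claim about the patented method.

```python
# Hedged sketch of the adaptive-latency idea in patent 12272423: one
# synchronized read latency is derived from the per-memory-unit read
# latencies so that all NMP-DIMM data paths return data consistently.
# Taking the maximum is an assumed policy.

def synchronized_read_latency(unit_latencies):
    """Pick one read-latency value that every data path can meet."""
    return max(unit_latencies)

def path_delays(unit_latencies):
    """Extra delay each path must insert to match the synchronized value."""
    sync = synchronized_read_latency(unit_latencies)
    return [sync - lat for lat in unit_latencies]
```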
-
Patent number: 12236099
Abstract: An accelerator module includes a plurality of memories and a controller. The controller includes a plurality of memory controllers, a plurality of processing units, and a managing circuit. The plurality of memory controllers and the plurality of memories form a plurality of memory sub-channels. The plurality of processing units perform computational operations on a plurality of data stored in or read from the plurality of memories. The managing circuit redistributes tasks performed by the plurality of processing units or changes connections between the plurality of memory controllers and the plurality of processing units in response to a first memory sub-channel and a first processing unit being in a heavy-workload state.
Type: Grant
Filed: August 25, 2023
Date of Patent: February 25, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kyoungwan Woo, Kyungsoo Kim, Yongsuk Kwon, Nayeon Kim, Jinin So
-
Patent number: 12236098
Abstract: A memory includes: a request register configured to receive a first signal including a requester identifier using a first protocol from a host and configured to output a first priority corresponding to the requester identifier; a checker module configured to receive a second signal including a command and a request type from the host and using a second protocol that is different than the first protocol, where the checker module is configured to receive the first priority from the request register, and where the checker module is configured to determine a second priority of the command based on the first priority and the request type; a command generator configured to generate an internal command for memory operation based on the command; and a memory controller configured to schedule the internal command in a command queue based on the second priority.
Type: Grant
Filed: May 24, 2023
Date of Patent: February 25, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Nayeon Kim, Kyungsoo Kim, Yongsuk Kwon, Jinin So, Kyoungwan Woo
-
Publication number: 20250060967
Abstract: Various example embodiments may include methods of operating a network device, non-transitory computer readable media including computer readable instructions for operating a network device, systems including a network device, and/or a compute express link (CXL) switching device for synchronizing data. A CXL-based system includes a plurality of CXL processing devices configured to perform matrix multiplication calculation based on input vector data and a partial matrix, and output at least one interrupt signal and at least one packet based on results of the matrix multiplication calculation, the at least one packet including output vector data and characteristic data associated with the output vector data, and a CXL switching device configured to synchronize the output vector data, the synchronizing including performing a calculation operation on the output vector data based on the interrupt signal and the packet, and provide the synchronized vector data to the plurality of CXL processing devices.
Type: Application
Filed: April 23, 2024
Publication date: February 20, 2025
Applicant: Samsung Electronics Co., Ltd.
Inventors: Younghyun Lee, Jinin So, Kyungsoo Kim, Sangsu Park, Jin Jung, Jeonghyeon Cho
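The synchronization step above resembles an all-reduce: each CXL processing device multiplies the input vector by its partial matrix, and the switching device combines the partial output vectors and returns the result to every device. The element-wise summation below is an assumed "calculation operation"; the abstract does not name the specific reduction.

```python
# Illustrative sketch (assumed semantics) of publication 20250060967:
# devices compute partial matrix-vector products and the CXL switching
# device synchronizes the outputs by an element-wise reduction.

def device_compute(partial_matrix, x):
    """One device's matrix-vector product over its partial matrix."""
    return [sum(row[i] * x[i] for i in range(len(x))) for row in partial_matrix]

def switch_synchronize(partial_outputs):
    """Element-wise reduction of the devices' output vectors (assumed: sum)."""
    return [sum(col) for col in zip(*partial_outputs)]

x = [1, 2]
dev0 = device_compute([[1, 0], [0, 1]], x)
dev1 = device_compute([[2, 2], [1, 1]], x)
synced = switch_synchronize([dev0, dev1])
```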
-
Patent number: 12223188
Abstract: A memory interface for interfacing with a memory device includes a control circuit configured to determine whether a trigger event has occurred for initializing one or more memory locations in the memory device, and initialize the one or more memory locations in the memory device with pre-defined data upon determining the trigger event has occurred.
Type: Grant
Filed: May 19, 2022
Date of Patent: February 11, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Raghu Vamsi Krishna Talanki, Archita Khare, Rahul Tarikere Ravikumar, Jinin So, Jonggeon Lee
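The control-circuit behavior above reduces to: on a recognized trigger event, fill the affected locations with a pre-defined pattern. In this sketch the trigger names and the zero-fill pattern are assumptions; the abstract leaves both unspecified.

```python
# Minimal sketch of patent 12223188's control circuit: on a trigger
# event, the memory interface initializes the affected locations with
# pre-defined data. Trigger set and fill value are assumed.

PREDEFINED_FILL = 0x00  # assumed fill pattern

class MemoryInterface:
    TRIGGERS = {"power_on", "deallocate", "secure_erase"}  # assumed events

    def __init__(self, size):
        self.cells = [None] * size  # None marks uninitialized locations

    def on_event(self, event, start, length):
        if event in self.TRIGGERS:  # has a trigger event occurred?
            for addr in range(start, start + length):
                self.cells[addr] = PREDEFINED_FILL

mem = MemoryInterface(8)
mem.on_event("deallocate", 2, 3)
```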
-
Publication number: 20240394331
Abstract: A compute express link (CXL) memory device includes a memory device storing data, and a controller configured to read the data from the memory device based on a first command received through a first protocol, select a calculation engine based on a second command received through a second protocol different from the first protocol, and control the calculation engine to perform a calculation on the read data.
Type: Application
Filed: March 18, 2024
Publication date: November 28, 2024
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sangsu Park, Kyungsoo Kim, Nayeon Kim, Jinin So, Kyoungwan Woo, Younghyun Lee, Jong-Geon Lee, Jin Jung, Jeonghyeon Cho
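The split described above, where data is read via one protocol and a second-protocol command selects the calculation engine applied to it, can be mocked as below. Mapping the two protocols to `mem_read` and `io_select_engine` (suggestive of CXL.mem and CXL.io) and the engine names are assumptions for illustration.

```python
# Hedged sketch of publication 20240394331: a first-protocol command
# reads data, a second-protocol command selects the calculation engine,
# and the controller runs that engine on the read data.

CALC_ENGINES = {  # assumed engine set
    "sum": sum,
    "max": max,
}

class CxlMemoryDevice:
    def __init__(self, data):
        self.data = data
        self.engine = None

    def mem_read(self, addr, length):        # first-protocol command (assumed)
        return self.data[addr:addr + length]

    def io_select_engine(self, name):        # second-protocol command (assumed)
        self.engine = CALC_ENGINES[name]

    def compute(self, addr, length):
        return self.engine(self.mem_read(addr, length))

dev = CxlMemoryDevice([3, 1, 4, 1, 5])
dev.io_select_engine("sum")
```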
-
Publication number: 20240311302
Abstract: A processor includes a processing core configured to process each of a plurality of requests by accessing a corresponding one of a first memory and a second memory, a latency monitor configured to generate first latency information and second latency information, the first latency information comprising a first access latency to the first memory, and the second latency information comprising a second access latency to the second memory, a plurality of cache ways divided into a first partition and a second partition, and a decision engine configured to allocate each of the plurality of cache ways to one of the first partition and the second partition, based on the first latency information and the second latency information.
Type: Application
Filed: February 28, 2024
Publication date: September 19, 2024
Applicants: Samsung Electronics Co., Ltd., Daegu Gyeongbuk Institute Of Science And Technology
Inventors: Jin Jung, Daehoon Kim, Hwanjun Lee, Jonggeon Lee, Jinin So
-
Publication number: 20240281402
Abstract: A computing system includes an interconnect device, a plurality of memory devices electrically coupled to communicate with the interconnect device, a plurality of host devices electrically coupled to communicate with the interconnect device and configured to generate requests for access to the plurality of memory devices via the interconnect device, and a plurality of congestion monitors. These congestion monitors are configured to generate congestion information by monitoring a congestion degree of signal transfers with respect to at least one of the plurality of memory devices and the interconnect device in real time. The computing system is also configured to control at least one of: a memory region allocation of the plurality of host devices to the plurality of memory devices, and a signal transfer path inside the interconnect device, based on the congestion information.
Type: Application
Filed: September 5, 2023
Publication date: August 22, 2024
Inventors: Jin Jung, Younghyun Lee, Yongsuk Kwon, Kyungsoo Kim, Jinin So
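One concrete use of the congestion information described above is steering new memory-region allocations toward the least congested memory device. The sketch below assumes outstanding-request count as the congestion degree; the abstract does not fix a metric.

```python
# Illustrative sketch of publication 20240281402: per-device congestion
# monitors report a congestion degree, and allocation is steered to the
# least congested memory device. Metric (outstanding requests) assumed.

class CongestionMonitor:
    def __init__(self):
        self.outstanding = 0  # in-flight signal transfers (assumed metric)

    def degree(self):
        return self.outstanding  # real-time congestion degree

def allocate_region(monitors):
    """Pick the memory device with the lowest congestion degree."""
    return min(monitors, key=lambda dev: monitors[dev].degree())

mons = {"mem0": CongestionMonitor(), "mem1": CongestionMonitor()}
mons["mem0"].outstanding = 12
mons["mem1"].outstanding = 3
```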
-
Publication number: 20240248609
Abstract: An accelerator module includes a plurality of memories and a controller. The controller includes a plurality of memory controllers, a plurality of processing units, and a managing circuit. The plurality of memory controllers and the plurality of memories form a plurality of memory sub-channels. The plurality of processing units perform computational operations on a plurality of data stored in or read from the plurality of memories. The managing circuit redistributes tasks performed by the plurality of processing units or changes connections between the plurality of memory controllers and the plurality of processing units in response to a first memory sub-channel and a first processing unit being in a heavy-workload state.
Type: Application
Filed: August 25, 2023
Publication date: July 25, 2024
Inventors: Kyoungwan Woo, Kyungsoo Kim, Yongsuk Kwon, Nayeon Kim, Jinin So
-
Publication number: 20240211424
Abstract: A memory expander includes memory sub-modules, power management integrated circuits, a controller, and a power controller. The memory sub-modules store data, and each of the memory sub-modules includes one or more memories. The power management integrated circuits independently supply powers to the memory sub-modules, respectively. The controller communicates with an external device through an interface (e.g., compute express link (CXL)), controls operations of the memory sub-modules, and checks whether the memory sub-modules are abnormal. The power controller controls operations of the power management integrated circuits. In response to a first memory sub-module becoming abnormal, the power controller controls a first power management integrated circuit to block a first power supplied to the first memory sub-module.
Type: Application
Filed: August 11, 2023
Publication date: June 27, 2024
Inventors: Kyungsoo Kim, Jinin So, Yongsuk Kwon, Jin Jung
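The fault-isolation idea above, one PMIC per sub-module so that only the abnormal sub-module's power is blocked, can be sketched as follows. The health-report shape is a mock; the abstract does not say how abnormality is detected.

```python
# Minimal sketch of publication 20240211424: each memory sub-module has
# its own PMIC, and the power controller blocks power only to sub-modules
# the controller has flagged as abnormal. Health check is mocked.

class Pmic:
    def __init__(self):
        self.power_on = True

class PowerController:
    def __init__(self, pmics):
        self.pmics = pmics  # one PMIC per memory sub-module

    def handle_health(self, health):
        """health: dict of sub-module name -> True if normal."""
        for sub, ok in health.items():
            if not ok:
                self.pmics[sub].power_on = False  # block that power only

pmics = {"sub0": Pmic(), "sub1": Pmic()}
pc = PowerController(pmics)
pc.handle_health({"sub0": True, "sub1": False})
```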
-
Publication number: 20240201858
Abstract: A memory includes: a request register configured to receive a first signal including a requester identifier using a first protocol from a host and configured to output a first priority corresponding to the requester identifier; a checker module configured to receive a second signal including a command and a request type from the host and using a second protocol that is different than the first protocol, where the checker module is configured to receive the first priority from the request register, and where the checker module is configured to determine a second priority of the command based on the first priority and the request type; a command generator configured to generate an internal command for memory operation based on the command; and a memory controller configured to schedule the internal command in a command queue based on the second priority.
Type: Application
Filed: May 24, 2023
Publication date: June 20, 2024
Inventors: Nayeon Kim, Kyungsoo Kim, Yongsuk Kwon, Jinin So, Kyoungwan Woo
-
Patent number: 11922068
Abstract: A Near Memory Processing (NMP) Dual In-line Memory Module (DIMM) is provided that includes random access memory (RAM), a Near-Memory-Processing (NMP) circuit and a first control port. The NMP circuit is for receiving a command from a host system, determining an operation to be performed on the RAM in response to the command, and a location of data within the RAM with respect to the determined operation. The first control port interacts with a second control port of the host system to enable the NMP circuit to exchange control information with the host system in response to the received command.
Type: Grant
Filed: February 7, 2022
Date of Patent: March 5, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Eldho Mathew Pathiyakkara Thombra, Ravi Shankar Venkata Jonnalagadda, Prashant Vishwanath Mahendrakar, Jinin So, Jong-Geon Lee, Vishnu Charan Thummala
-
Publication number: 20230386534
Abstract: A method of operating a Near Memory Processing-Dual In-line Memory (NMP-DIMM) system, the method including: determining, by an adaptive latency module of the NMP-DIMM system, a synchronized read latency value for performing a read operation upon receiving a Multi-Purpose Register (MPR) read instruction from a host device communicatively connected with the NMP-DIMM system, wherein the MPR read instruction is received from the host device for training the NMP-DIMM system, wherein the synchronized read latency value is determined based on one or more read latency values associated with one or more memory units of the NMP-DIMM system; and synchronizing, by the adaptive latency module, one or more first type data paths and a second type data path in the NMP-DIMM system based on the determined synchronized read latency value.
Type: Application
Filed: October 27, 2022
Publication date: November 30, 2023
Inventors: Sachin Suresh Upadhya, Eldho Pathiyakkara Thombra Mathew, Mayuresh Jyotindra Salelkar, Jinin So, Jonggeon Lee, Kyungsoo Kim
-
Publication number: 20230214138
Abstract: A memory interface for interfacing with a memory device includes a control circuit configured to determine whether a trigger event has occurred for initializing one or more memory locations in the memory device, and initialize the one or more memory locations in the memory device with pre-defined data upon determining the trigger event has occurred.
Type: Application
Filed: May 19, 2022
Publication date: July 6, 2023
Inventors: Raghu Vamsi Krishna Talanki, Archita Khare, Rahul Tarikere Ravikumar, Jinin So, Jonggeon Lee
-
Publication number: 20230185487
Abstract: A Near Memory Processing (NMP) Dual In-line Memory Module (DIMM) is provided that includes random access memory (RAM), a Near-Memory-Processing (NMP) circuit and a first control port. The NMP circuit is for receiving a command from a host system, determining an operation to be performed on the RAM in response to the command, and a location of data within the RAM with respect to the determined operation. The first control port interacts with a second control port of the host system to enable the NMP circuit to exchange control information with the host system in response to the received command.
Type: Application
Filed: February 7, 2022
Publication date: June 15, 2023
Inventors: Eldho Mathew Pathiyakkara Thombra, Ravi Shankar Venkata Jonnalagadda, Prashant Vishwanath Mahendrakar, Jinin So, Jong-Geon Lee, Vishnu Charan Thummala
-
Patent number: 11620135
Abstract: A booting method of a computing system, which includes a memory module including a processing device connected to a plurality of memory devices, including: powering up the computing system; after powering up the computing system, performing first memory training on the plurality of memory devices by the processing device in the memory module, and generating a module ready signal indicating completion of the first memory training; after powering up the computing system, performing a first booting sequence by a host device, the host device executing basic input/output system (BIOS) code of a BIOS memory included in the computing system; waiting for the module ready signal to be received from the memory module in the host device after performing the first booting sequence; and receiving the module ready signal in the host device, and performing a second booting sequence based on the module ready signal.
Type: Grant
Filed: December 9, 2020
Date of Patent: April 4, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jonggeon Lee, Kyungsoo Kim, Jinin So, Yongsuk Kwon, Jin Jung, Jeonghyeon Cho
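The two-phase boot flow above (host runs the first BIOS sequence, then blocks until the module ready signal before running the second sequence) can be modeled with an event, as in this sketch. Threads and the event object stand in for the hardware signal; the log strings are illustrative.

```python
# Hedged sketch of the boot flow in patent 11620135: the memory module
# raises a "module ready" signal when its own memory training completes,
# and the host performs the second booting sequence only after both the
# first booting sequence and that signal. Threading mocks the hardware.
import threading

module_ready = threading.Event()
boot_log = []

def memory_module():
    boot_log.append("memory training done")    # first memory training
    module_ready.set()                         # module ready signal

def host():
    boot_log.append("first booting sequence")  # host executes BIOS code
    module_ready.wait()                        # wait for module ready
    boot_log.append("second booting sequence") # proceed once signaled

t = threading.Thread(target=host)
t.start()
memory_module()
t.join()
```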