Patents by Inventor Zvi GUZ
Zvi GUZ has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12197388
Abstract: A system and method for leveraging a native operating system page cache when using non-block system storage devices is disclosed. A computer may include a processor, memory, and a non-block system storage device. A file system may be stored in memory and running on the processor, which may include a page cache. A key-value file system (KVFS) may reside between the file system and the storage device and may map received file system commands to key-value system commands that may be executed by the storage device. Results of the key-value system commands may be returned to the file system, permitting the operating system to cache data in the page cache.
Type: Grant
Filed: April 8, 2022
Date of Patent: January 14, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Vikas Sinha, Zvi Guz, Ming Lin
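The abstract describes a translation layer that lets the OS page cache keep working even though the underlying device speaks a key-value interface rather than block I/O. The following is a minimal Python sketch of that command-mapping idea; the class and method names (KVStore, PageCache, KVFS) are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of a key-value file system (KVFS) shim, assuming a
# dictionary-backed object stands in for the key-value storage device.
# Names (KVStore, PageCache, KVFS) are illustrative, not from the patent.

class KVStore:
    """Stand-in for a non-block, key-value storage device."""
    def __init__(self):
        self._data = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> bytes:
        return self._data.get(key, b"")


class PageCache:
    """Stand-in for the operating system page cache."""
    def __init__(self):
        self._pages = {}

    def lookup(self, key):
        return self._pages.get(key)

    def fill(self, key, value):
        self._pages[key] = value


class KVFS:
    """Maps file-system style read/write calls onto key-value commands."""
    def __init__(self, device: KVStore, cache: PageCache):
        self.device = device
        self.cache = cache

    def write(self, path: str, data: bytes) -> None:
        self.device.put(path, data)        # file write becomes a key-value PUT
        self.cache.fill(path, data)        # result is cached in the page cache

    def read(self, path: str) -> bytes:
        cached = self.cache.lookup(path)   # page cache hit avoids device I/O
        if cached is not None:
            return cached
        data = self.device.get(path)       # file read becomes a key-value GET
        self.cache.fill(path, data)
        return data


if __name__ == "__main__":
    fs = KVFS(KVStore(), PageCache())
    fs.write("/tmp/example.txt", b"hello")
    print(fs.read("/tmp/example.txt"))     # second access is served from the cache
```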
-
Patent number: 12147358
Abstract: According to one general aspect, a device may include a host interface circuit configured to communicate with a host device via a data protocol that employs data messages. The device may include a storage element configured to store data in response to a data message. The host interface circuit may be configured to detect when a tunneling command is embedded within the data message; extract a tunneled message address information from the data message; retrieve, via the tunneled message address information, a tunneled message stored in a memory of the host device; and route the tunneled message to an on-board processor and/or data processing logic. The on-board processor and/or data processing logic may be configured to execute one or more instructions in response to the tunneled message.
Type: Grant
Filed: August 14, 2023
Date of Patent: November 19, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Ramdas P. Kachare, Zvi Guz, Son T. Pham, Anahita Shayesteh, Xuebin Yao, Oscar Prem Pinto
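The mechanism here is command tunneling: an ordinary-looking data message carries a flag and an address, and the device fetches the real command from host memory and runs it on its own processing logic. Below is a hedged Python sketch of that flow; the message layout, field names, and the dictionary standing in for host DRAM are assumptions for illustration only.

```python
# Hedged sketch of command tunneling: the device inspects incoming data
# messages, and when a tunnel flag is present it fetches the actual message
# from host memory and hands it to on-board processing logic.

HOST_MEMORY = {0x1000: b"COMPUTE checksum object=42"}  # simulated host DRAM


def is_tunneled(message: dict) -> bool:
    # Detect whether a tunneling command is embedded in the data message.
    return message.get("opcode") == "WRITE" and message.get("tunnel_flag", False)


def fetch_from_host(address: int) -> bytes:
    # Stand-in for a DMA read of the tunneled message from host memory.
    return HOST_MEMORY[address]


def on_board_processor(tunneled: bytes) -> str:
    # Stand-in for the device's embedded processor / data processing logic.
    return f"executed: {tunneled.decode()}"


def handle_message(message: dict) -> str:
    if is_tunneled(message):
        addr = message["tunnel_addr"]        # extract the tunneled message address
        tunneled = fetch_from_host(addr)     # retrieve the message from host memory
        return on_board_processor(tunneled)  # route it to on-board processing
    return "stored payload normally"         # ordinary storage path


if __name__ == "__main__":
    print(handle_message({"opcode": "WRITE", "tunnel_flag": True, "tunnel_addr": 0x1000}))
    print(handle_message({"opcode": "WRITE", "payload": b"plain data"}))
```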
-
Publication number: 20240020009
Abstract: A system includes a plurality of storage processing accelerators (SPAs), at least one SPA of the plurality of SPAs including a plurality of programmable processors or storage processing engines (SPEs), the plurality of SPEs including n SPEs (n is a natural number greater than zero), where 1st to (n-1) SPEs of the n SPEs are configured to provide an output of the SPE to a next SPE of the n SPEs in a pipeline to be used as an input of the next SPE; and an acceleration platform manager (APM) connected to the plurality of the SPAs and the plurality of SPEs, and configured to control data processing in the plurality of SPAs and the plurality of SPEs.
Type: Application
Filed: September 20, 2023
Publication date: January 18, 2024
Inventors: Ramdas P. Kachare, Vijay Balakrishnan, Stephen G. Fischer, Fred Worley, Anahita Shayesteh, Zvi Guz
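The claimed arrangement chains n storage processing engines so that each engine's output is the next engine's input, with a platform manager driving the accelerators. The toy Python sketch below mirrors that pipelining structure; the class names follow the abstract's terminology, but the processing steps are placeholder functions, not anything from the patent.

```python
# Illustrative sketch of the pipelined accelerator arrangement: each storage
# processing engine (SPE) feeds its output to the next SPE in the chain, and
# an acceleration platform manager (APM) drives processing across SPAs.

from typing import Callable, List


class SPE:
    """A single storage processing engine wrapping one processing step."""
    def __init__(self, func: Callable[[bytes], bytes]):
        self.func = func

    def process(self, data: bytes) -> bytes:
        return self.func(data)


class SPA:
    """A storage processing accelerator holding n SPEs in a pipeline."""
    def __init__(self, spes: List[SPE]):
        self.spes = spes

    def run(self, data: bytes) -> bytes:
        # Output of SPE i becomes the input of SPE i+1, for i = 1 .. n-1.
        for spe in self.spes:
            data = spe.process(data)
        return data


class APM:
    """Acceleration platform manager controlling data processing in the SPAs."""
    def __init__(self, spas: List[SPA]):
        self.spas = spas

    def dispatch(self, spa_index: int, data: bytes) -> bytes:
        return self.spas[spa_index].run(data)


if __name__ == "__main__":
    spa = SPA([SPE(lambda d: d.upper()), SPE(lambda d: d[::-1])])
    apm = APM([spa])
    print(apm.dispatch(0, b"stored record"))  # b'DROCER DEROTS'
```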
-
Publication number: 20230393996
Abstract: According to one general aspect, a device may include a host interface circuit configured to communicate with a host device via a data protocol that employs data messages. The device may include a storage element configured to store data in response to a data message. The host interface circuit may be configured to detect when a tunneling command is embedded within the data message; extract a tunneled message address information from the data message; retrieve, via the tunneled message address information, a tunneled message stored in a memory of the host device; and route the tunneled message to an on-board processor and/or data processing logic. The on-board processor and/or data processing logic may be configured to execute one or more instructions in response to the tunneled message.
Type: Application
Filed: August 14, 2023
Publication date: December 7, 2023
Inventors: Ramdas P. KACHARE, Zvi GUZ, Son T. PHAM, Anahita SHAYESTEH, Xuebin YAO, Oscar Prem PINTO
-
Patent number: 11768601
Abstract: A system includes a plurality of storage processing accelerators (SPAs), at least one SPA of the plurality of SPAs including a plurality of programmable processors or storage processing engines (SPEs), the plurality of SPEs including n SPEs (n is a natural number greater than zero), where 1st to (n-1) SPEs of the n SPEs are configured to provide an output of the SPE to a next SPE of the n SPEs in a pipeline to be used as an input of the next SPE; and an acceleration platform manager (APM) connected to the plurality of the SPAs and the plurality of SPEs, and configured to control data processing in the plurality of SPAs and the plurality of SPEs.
Type: Grant
Filed: June 9, 2021
Date of Patent: September 26, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Ramdas P. Kachare, Vijay Balakrishnan, Stephen G. Fischer, Fred Worley, Anahita Shayesteh, Zvi Guz
-
Patent number: 11726930
Abstract: According to one general aspect, a device may include a host interface circuit configured to communicate with a host device via a data protocol that employs data messages. The device may include a storage element configured to store data in response to a data message. The host interface circuit may be configured to detect when a tunneling command is embedded within the data message; extract a tunneled message address information from the data message; retrieve, via the tunneled message address information, a tunneled message stored in a memory of the host device; and route the tunneled message to an on-board processor and/or data processing logic. The on-board processor and/or data processing logic may be configured to execute one or more instructions in response to the tunneled message.
Type: Grant
Filed: June 3, 2021
Date of Patent: August 15, 2023
Inventors: Ramdas P. Kachare, Zvi Guz, Son T. Pham, Anahita Shayesteh, Xuebin Yao, Oscar Prem Pinto
-
Publication number: 20220300456
Abstract: A system and method for leveraging a native operating system page cache when using non-block system storage devices is disclosed. A computer may include a processor, memory, and a non-block system storage device. A file system may be stored in memory and running on the processor, which may include a page cache. A key-value file system (KVFS) may reside between the file system and the storage device and may map received file system commands to key-value system commands that may be executed by the storage device. Results of the key-value system commands may be returned to the file system, permitting the operating system to cache data in the page cache.
Type: Application
Filed: April 8, 2022
Publication date: September 22, 2022
Inventors: Vikas SINHA, Zvi GUZ, Ming LIN
-
Patent number: 11301422
Abstract: A system and method for leveraging a native operating system (130) page cache (315) when using non-block system storage devices (120) is disclosed. A computer (105) may include a processor (110), memory (115), and a non-block system storage device (120). A file system (135) may be stored in memory (115) and running on the processor (110), which may include a page cache (315). A key-value file system (KVFS) (145) may reside between the file system (135) and the storage device (120) and may map received file system commands (310) to key-value system commands (330) that may be executed by the storage device (120). Results of the key-value system commands (330) may be returned to the file system (135), permitting the operating system (130) to cache data in the page cache (315).
Type: Grant
Filed: April 29, 2016
Date of Patent: April 12, 2022
Inventors: Vikas Sinha, Zvi Guz, Ming Lin
-
Publication number: 20210294761
Abstract: According to one general aspect, a device may include a host interface circuit configured to communicate with a host device via a data protocol that employs data messages. The device may include a storage element configured to store data in response to a data message. The host interface circuit may be configured to detect when a tunneling command is embedded within the data message; extract a tunneled message address information from the data message; retrieve, via the tunneled message address information, a tunneled message stored in a memory of the host device; and route the tunneled message to an on-board processor and/or data processing logic. The on-board processor and/or data processing logic may be configured to execute one or more instructions in response to the tunneled message.
Type: Application
Filed: June 3, 2021
Publication date: September 23, 2021
Inventors: Ramdas P. KACHARE, Zvi GUZ, Son T. PHAM, Anahita SHAYESTEH, Xuebin YAO, Oscar Prem PINTO
-
Publication number: 20210294494
Abstract: A system includes a plurality of storage processing accelerators (SPAs), at least one SPA of the plurality of SPAs including a plurality of programmable processors or storage processing engines (SPEs), the plurality of SPEs including n SPEs (n is a natural number greater than zero), where 1st to (n-1) SPEs of the n SPEs are configured to provide an output of the SPE to a next SPE of the n SPEs in a pipeline to be used as an input of the next SPE; and an acceleration platform manager (APM) connected to the plurality of the SPAs and the plurality of SPEs, and configured to control data processing in the plurality of SPAs and the plurality of SPEs.
Type: Application
Filed: June 9, 2021
Publication date: September 23, 2021
Inventors: Ramdas P. Kachare, Vijay Balakrishnan, Stephen G. Fischer, Fred Worley, Anahita Shayesteh, Zvi Guz
-
Patent number: 11112972
Abstract: A method includes: receiving, at an acceleration platform manager (APM) from an application service manager (ASM), application function processing information; allocating, by the APM, a first storage processing accelerator (SPA) from a plurality of SPAs, wherein at least one SPA of the plurality of SPAs comprises a plurality of programmable processors or storage processing engines (SPEs), the plurality of SPEs comprising n SPEs, enabling the plurality of SPEs in the first SPA, wherein once enabled, the at least one SPE of the plurality of SPEs in the first SPA is configured to process data based on the application function processing information; determining, by the APM, if data processing is completed by the at least one SPE of the plurality of SPEs in the first SPA; and sending, by the APM, a result of the data processing by the SPEs of the first SPA, to the ASM.
Type: Grant
Filed: February 6, 2019
Date of Patent: September 7, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Ramdas P. Kachare, Vijay Balakrishnan, Stephen G. Fischer, Fred Worley, Anahita Shayesteh, Zvi Guz
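This entry claims the control flow rather than the hardware: the APM receives processing information from the ASM, allocates an accelerator, enables its engines, waits for completion, and returns the result. The Python sketch below walks through that sequence under simplifying assumptions; the SPA and APM classes and their methods are hypothetical stand-ins, not the patented mechanism.

```python
# Hedged sketch of the claimed control flow: allocate an SPA, enable its
# SPEs with the application function information, poll for completion,
# and return the result to the application service manager (ASM).

class SPA:
    def __init__(self, name: str):
        self.name = name
        self.enabled = False
        self.result = None

    def enable(self, processing_info: str) -> None:
        self.enabled = True
        # Toy "processing" driven by the application function information.
        self.result = f"{self.name} applied '{processing_info}'"

    def is_done(self) -> bool:
        return self.result is not None


class APM:
    def __init__(self, spas):
        self.free_spas = list(spas)

    def handle_request(self, processing_info: str) -> str:
        spa = self.free_spas.pop(0)       # allocate a first SPA from the pool
        spa.enable(processing_info)       # enable its SPEs with the function info
        while not spa.is_done():          # determine whether processing completed
            pass
        return spa.result                 # send the result back to the ASM


if __name__ == "__main__":
    apm = APM([SPA("spa0"), SPA("spa1")])
    print(apm.handle_request("decompress+filter"))
```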
-
Patent number: 11061574
Abstract: A system includes a plurality of storage processing accelerators (SPAs), at least one SPA of the plurality of SPAs including a plurality of programmable processors or storage processing engines (SPEs), the plurality of SPEs including n SPEs (n is a natural number greater than zero), where 1st to (n-1) SPEs of the n SPEs are configured to provide an output of the SPE to a next SPE of the n SPEs in a pipeline to be used as an input of the next SPE; and an acceleration platform manager (APM) connected to the plurality of the SPAs and the plurality of SPEs, and configured to control data processing in the plurality of SPAs and the plurality of SPEs.
Type: Grant
Filed: February 7, 2019
Date of Patent: July 13, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Ramdas P. Kachare, Vijay Balakrishnan, Stephen G. Fischer, Fred Worley, Anahita Shayesteh, Zvi Guz
-
Patent number: 11030129
Abstract: According to one general aspect, a device may include a host interface circuit configured to communicate with a host device via a data protocol that employs data messages. The device may include a storage element configured to store data in response to a data message. The host interface circuit may be configured to detect when a tunneling command is embedded within the data message; extract a tunneled message address information from the data message; retrieve, via the tunneled message address information, a tunneled message stored in a memory of the host device; and route the tunneled message to an on-board processor and/or data processing logic. The on-board processor and/or data processing logic may be configured to execute one or more instructions in response to the tunneled message.
Type: Grant
Filed: February 18, 2020
Date of Patent: June 8, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Ramdas P. Kachare, Zvi Guz, Son T. Pham, Anahita Shayesteh, Xuebin Yao, Oscar Prem Pinto
-
Publication number: 20210089477
Abstract: According to one general aspect, a device may include a host interface circuit configured to communicate with a host device via a data protocol that employs data messages. The device may include a storage element configured to store data in response to a data message. The host interface circuit may be configured to detect when a tunneling command is embedded within the data message; extract a tunneled message address information from the data message; retrieve, via the tunneled message address information, a tunneled message stored in a memory of the host device; and route the tunneled message to an on-board processor and/or data processing logic. The on-board processor and/or data processing logic may be configured to execute one or more instructions in response to the tunneled message.
Type: Application
Filed: February 18, 2020
Publication date: March 25, 2021
Inventors: Ramdas P. KACHARE, Zvi GUZ, Son T. PHAM, Anahita SHAYESTEH, Xuebin YAO, Oscar Prem PINTO
-
Patent number: 10846155
Abstract: A host machine is disclosed. The host machine may include a host processor, a memory, an operating system running on the host processor, and an application running under the operating system on the host processor. The host machine may also include a Peripheral Component Interconnect Express (PCIe) tunnel to a Non-Volatile Memory Express (NVMe) Solid State Drive (SSD) and an RPC capture module which may capture the RPC from the application and deliver a result of the RPC to the application as though from the host processor, where the NVMe SSD may execute the RPC to generate the result.
Type: Grant
Filed: March 22, 2019
Date of Patent: November 24, 2020
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Ramdas P. Kachare, Zvi Guz, Son T. Pham, Anahita Shayesteh, Xuebin Yao, Oscar Prem Pinto
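The idea in this entry is RPC capture: an application's remote procedure call is intercepted, executed on the NVMe SSD, and the result is returned as if the host processor had computed it. The Python sketch below imitates that capture-and-offload pattern with a decorator and a simulated drive; it is a conceptual analogy only and does not model the patented PCIe/NVMe tunneling path.

```python
# Illustrative sketch of RPC capture: calls made by the application are
# intercepted, forwarded to a (simulated) NVMe SSD for execution, and the
# result is handed back as if the host processor had produced it.

import functools


class SimulatedNvmeSsd:
    """Stand-in for an NVMe SSD capable of executing offloaded RPCs."""
    def execute_rpc(self, name: str, args: tuple, kwargs: dict):
        # Pretend the drive runs the procedure near the data it stores.
        if name == "count_matches":
            data, needle = args
            return data.count(needle)
        raise NotImplementedError(name)


_device = SimulatedNvmeSsd()


def rpc_capture(func):
    """Capture the RPC and route it to the device instead of the host CPU."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return _device.execute_rpc(func.__name__, args, kwargs)
    return wrapper


@rpc_capture
def count_matches(data: bytes, needle: bytes) -> int:
    # Host-side definition; the captured call is executed on the SSD instead.
    return data.count(needle)


if __name__ == "__main__":
    print(count_matches(b"abcabcabc", b"abc"))  # result appears to come from the host
```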
-
Publication number: 20200183583
Abstract: A system includes a plurality of storage processing accelerators (SPAs), at least one SPA of the plurality of SPAs including a plurality of programmable processors or storage processing engines (SPEs), the plurality of SPEs including n SPEs (n is a natural number greater than zero), where 1st to (n-1) SPEs of the n SPEs are configured to provide an output of the SPE to a next SPE of the n SPEs in a pipeline to be used as an input of the next SPE; and an acceleration platform manager (APM) connected to the plurality of the SPAs and the plurality of SPEs, and configured to control data processing in the plurality of SPAs and the plurality of SPEs.
Type: Application
Filed: February 7, 2019
Publication date: June 11, 2020
Inventors: Ramdas P. Kachare, Vijay Balakrishnan, Stephen G. Fischer, Fred Worley, Anahita Shayesteh, Zvi Guz
-
Publication number: 20200183582
Abstract: A method includes: receiving, at an acceleration platform manager (APM) from an application service manager (ASM), application function processing information; allocating, by the APM, a first storage processing accelerator (SPA) from a plurality of SPAs, wherein at least one SPA of the plurality of SPAs comprises a plurality of programmable processors or storage processing engines (SPEs), the plurality of SPEs comprising n SPEs, enabling the plurality of SPEs in the first SPA, wherein once enabled, the at least one SPE of the plurality of SPEs in the first SPA is configured to process data based on the application function processing information; determining, by the APM, if data processing is completed by the at least one SPE of the plurality of SPEs in the first SPA; and sending, by the APM, a result of the data processing by the SPEs of the first SPA, to the ASM.
Type: Application
Filed: February 6, 2019
Publication date: June 11, 2020
Inventors: Ramdas P. Kachare, Vijay Balakrishnan, Stephen G. Fischer, Fred Worley, Anahita Shayesteh, Zvi Guz
-
Publication number: 20200117525
Abstract: A host machine is disclosed. The host machine may include a host processor, a memory, an operating system running on the host processor, and an application running under the operating system on the host processor. The host machine may also include a Peripheral Component Interconnect Express (PCIe) tunnel to a Non-Volatile Memory Express (NVMe) Solid State Drive (SSD) and an RPC capture module which may capture the RPC from the application and deliver a result of the RPC to the application as though from the host processor, where the NVMe SSD may execute the RPC to generate the result.
Type: Application
Filed: March 22, 2019
Publication date: April 16, 2020
Inventors: Ramdas P. KACHARE, Zvi GUZ, Son T. PHAM, Anahita SHAYESTEH, Xuebin YAO, Oscar Prem PINTO
-
Patent number: 10254998
Abstract: A distributed storage system can include a storage node (125, 130, 135). The storage node (125, 130, 135) can include a Solid State Drive (SSD) or other storage device that employs garbage collection (140, 145, 150, 155, 160, 165, 225, 230), a device garbage collection monitor (205), a garbage collection coordinator (210), an Input/Output (I/O) redirector (215), and an I/O resynchronizer (220). The device garbage collection monitor (205) can determine whether any storage devices (140, 145, 150, 155, 160, 165, 225, 230) need to perform garbage collection. The garbage collection coordinator (210) can schedule when the storage device (140, 145, 150, 155, 160, 165, 225, 230) can perform garbage collection. The I/O redirector (215) can redirect read requests (905) and write requests (1005) away from the storage device (140, 145, 150, 155, 160, 165, 225, 230) when it is performing garbage collection.
Type: Grant
Filed: February 17, 2016
Date of Patent: April 9, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Vikas Sinha, Zvi Guz, Gunneswara Rao Marripudi
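The abstract names three cooperating roles: a monitor that detects when a device needs garbage collection, a coordinator that schedules it, and an I/O redirector that steers requests away from a device while it collects. The Python sketch below illustrates that division of responsibility; the pressure metric, threshold value, and one-device-at-a-time policy are simplifying assumptions, and the resynchronizer step is omitted.

```python
# Toy sketch of coordinated garbage collection across storage devices:
# a monitor flags devices that need GC, a coordinator lets one device
# collect at a time, and an I/O redirector routes requests to the others.

class StorageDevice:
    def __init__(self, name: str, gc_pressure: float):
        self.name = name
        self.gc_pressure = gc_pressure   # fraction of blocks awaiting cleanup
        self.in_gc = False


def needs_gc(device: StorageDevice, threshold: float = 0.7) -> bool:
    # Device garbage collection monitor: decide whether GC is needed.
    return device.gc_pressure >= threshold


def schedule_gc(devices):
    # Garbage collection coordinator: allow one device at a time to collect.
    for dev in devices:
        if needs_gc(dev) and not any(d.in_gc for d in devices):
            dev.in_gc = True
            return dev
    return None


def route_io(devices, request: str):
    # I/O redirector: send requests to any device not performing GC.
    for dev in devices:
        if not dev.in_gc:
            return f"{request} served by {dev.name}"
    return "all devices busy"


if __name__ == "__main__":
    cluster = [StorageDevice("ssd0", 0.9), StorageDevice("ssd1", 0.2)]
    busy = schedule_gc(cluster)
    print(f"{busy.name} performing garbage collection")
    print(route_io(cluster, "read block 42"))   # redirected to ssd1
```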
-
Publication number: 20180032580
Abstract: A method of managing a database, the method including determining whether a deterministic threshold has occurred, determining whether a random threshold has occurred, and initiating a maintenance process on the database when either the deterministic threshold or the random threshold has occurred.
Type: Application
Filed: September 2, 2016
Publication date: February 1, 2018
Inventor: Zvi Guz
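The claimed method triggers database maintenance when either of two conditions holds: a deterministic threshold (something countable crossing a limit) or a random threshold. The short Python sketch below shows one way such a dual trigger could look; the operation counter, limit, and probability used here are illustrative assumptions, not values from the application.

```python
# Minimal sketch of the dual-threshold trigger: maintenance starts when either
# a deterministic threshold (operations since last maintenance) or a random
# threshold occurs.

import random


class MaintenanceTrigger:
    def __init__(self, op_limit: int = 1000, random_probability: float = 0.001):
        self.op_limit = op_limit                       # deterministic threshold
        self.random_probability = random_probability   # random threshold
        self.ops_since_maintenance = 0

    def record_operation(self) -> bool:
        """Return True when a maintenance process should be initiated."""
        self.ops_since_maintenance += 1
        deterministic = self.ops_since_maintenance >= self.op_limit
        randomized = random.random() < self.random_probability
        if deterministic or randomized:
            self.ops_since_maintenance = 0             # maintenance resets the counter
            return True
        return False


if __name__ == "__main__":
    trigger = MaintenanceTrigger(op_limit=5, random_probability=0.0)
    hits = [trigger.record_operation() for _ in range(12)]
    print(hits)  # deterministic threshold fires on every 5th operation
```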