Patents by Inventor Zvika GUZ

Zvika GUZ has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10963394
    Abstract: A controller of a data storage device includes: a host interface providing an interface to a host computer; a flash translation layer (FTL) translating a logical block address (LBA) to a physical block address (PBA) associated with an input/output (I/O) request; a flash interface providing an interface to flash media to access data stored on the flash media; and one or more deep neural network (DNN) modules for predicting an I/O access pattern of the host computer. The one or more DNN modules provide one or more prediction outputs to the FTL that are associated with one or more past I/O requests and a current I/O request received from the host computer, and the one or more prediction outputs include at least one predicted I/O request following the current I/O request. The FTL prefetches data stored in the flash media that is associated with the at least one predicted I/O request.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: March 30, 2021
    Inventors: Ramdas P. Kachare, Sompong Paul Olarig, Vikas Sinha, Zvika Guz
  • Publication number: 20190317901
    Abstract: A controller of a data storage device includes: a host interface providing an interface to a host computer; a flash translation layer (FTL) translating a logical block address (LBA) to a physical block address (PBA) associated with an input/output (I/O) request; a flash interface providing an interface to flash media to access data stored on the flash media; and one or more deep neural network (DNN) modules for predicting an I/O access pattern of the host computer. The one or more DNN modules provide one or more prediction outputs to the FTL that are associated with one or more past I/O requests and a current I/O request received from the host computer, and the one or more prediction outputs include at least one predicted I/O request following the current I/O request. The FTL prefetches data stored in the flash media that is associated with the at least one predicted I/O request.
    Type: Application
    Filed: June 19, 2018
    Publication date: October 17, 2019
    Inventors: Ramdas P. KACHARE, Sompong Paul OLARIG, Vikas SINHA, Zvika GUZ
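    Illustrative sketch (not part of the patent record): the two entries above (patent 10963394 and its published application 20190317901) cover the same DNN-driven prefetching controller. The Python sketch below shows, under stated assumptions, how prediction-driven prefetching could be wired into an FTL; the class names and the simple stride rule standing in for the DNN modules are hypothetical and introduced only for illustration.

    from collections import deque

    class StridePredictor:
        """Stand-in for the DNN modules: predicts upcoming LBAs from the
        recent request history using a simple stride rule."""
        def __init__(self, history_len=4, lookahead=2):
            self.history = deque(maxlen=history_len)
            self.lookahead = lookahead

        def observe(self, lba):
            self.history.append(lba)

        def predict(self):
            if len(self.history) < 2:
                return []
            stride = self.history[-1] - self.history[-2]
            if stride == 0:
                return []
            return [self.history[-1] + stride * i
                    for i in range(1, self.lookahead + 1)]

    class FlashTranslationLayer:
        """Translates LBAs to PBAs and prefetches the data the predictor
        expects the host to request next."""
        def __init__(self, flash, lba_to_pba, predictor):
            self.flash = flash              # dict PBA -> data, stands in for flash media
            self.lba_to_pba = lba_to_pba    # LBA -> PBA mapping
            self.prefetch_cache = {}        # data fetched ahead of the host request
            self.predictor = predictor

        def read(self, lba):
            self.predictor.observe(lba)
            # Serve from the prefetch cache when an earlier prediction was right.
            data = self.prefetch_cache.pop(lba, None)
            if data is None:
                data = self.flash[self.lba_to_pba[lba]]
            # Prefetch the blocks the predictor expects to be requested next.
            for next_lba in self.predictor.predict():
                pba = self.lba_to_pba.get(next_lba)
                if pba is not None and next_lba not in self.prefetch_cache:
                    self.prefetch_cache[next_lba] = self.flash[pba]
            return data

    # Sequential reads quickly turn into prefetch-cache hits.
    flash = {pba: f"block-{pba}" for pba in range(16)}
    ftl = FlashTranslationLayer(flash, {lba: lba for lba in range(16)}, StridePredictor())
    print([ftl.read(lba) for lba in range(4)])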
  • Patent number: 9836277
    Abstract: A Processing-In-Memory (PIM) model in which computations related to the POPCOUNT and logical bitwise operations are implemented within a memory module and not within a host Central Processing Unit (CPU). The in-memory executions thus eliminate the need to shift data from large bit vectors throughout the entire system. By off-loading the processing of these operations to the memory, the redundant data transfers over the memory-CPU interface are greatly reduced, thereby improving system performance and energy efficiency. A controller and a dedicated register in the logic die of the memory module operate to interface with the host and provide in-memory executions of popcounting and logical bitwise operations requested by the host. The PIM model of the present disclosure thus frees up the CPU for other tasks because many real-time analytics tasks can now be executed within a PIM-enabled memory itself. The memory module may be a Three Dimensional Stack (3DS) memory or any other semiconductor memory.
    Type: Grant
    Filed: April 15, 2015
    Date of Patent: December 5, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Zvika Guz, Liang Yin
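    Illustrative sketch (not part of the patent record): the abstract's point is that only a small scalar result, not the wide bit vectors, crosses the memory-CPU interface. The Python sketch below shows that offload pattern under stated assumptions; the MemoryModule class and its command names are hypothetical and do not reflect the patented controller or register design.

    class MemoryModule:
        """Stand-in for a PIM-enabled memory: bit vectors stay in the module,
        and the host asks only for popcount-style results."""
        def __init__(self):
            self.vectors = {}    # name -> Python int used as a large bit vector

        def store(self, name, bits):
            self.vectors[name] = bits

        def execute(self, op, a, b=None):
            # The operation runs next to the data; only the count is returned.
            x = self.vectors[a]
            if op == "POPCOUNT":
                return bin(x).count("1")
            y = self.vectors[b]
            if op == "AND_POPCOUNT":
                return bin(x & y).count("1")
            if op == "OR_POPCOUNT":
                return bin(x | y).count("1")
            if op == "XOR_POPCOUNT":
                return bin(x ^ y).count("1")
            raise ValueError(f"unsupported operation: {op}")

    # Host side: ship the vectors once, then issue compact in-memory queries.
    mem = MemoryModule()
    mem.store("a", 0b1011_0110_1110)
    mem.store("b", 0b0011_1100_0101)
    print(mem.execute("AND_POPCOUNT", "a", "b"))   # only a small integer returns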
  • Publication number: 20150163324
    Abstract: A request management subsystem is configured to establish service classes for clients that issue requests for a shared resource on a computer system. The subsystem also is configured to determine the state of the system with respect to bandwidth, current latency, frequency and voltage levels, among other characteristics. Further, the subsystem is configured to evaluate the requirements of each client with respect to latency sensitivity and required bandwidth, among other characteristics. Finally, the subsystem is configured to schedule access to shared resources, based on the priority class of each client, the demands of the application, and the state of the system. With this approach, the subsystem may enable all clients to perform optimally or, alternatively, may cause all clients to experience an equal reduction in performance.
    Type: Application
    Filed: December 9, 2013
    Publication date: June 11, 2015
    Applicant: NVIDIA CORPORATION
    Inventors: Evgeny BOLOTIN, Zvika GUZ, Adwait JOG, Stephen William KECKLER, Michael Allen PARKER
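    Illustrative sketch (not part of the application record): the abstract describes granting access to a shared resource by client service class while accounting for system state. The Python sketch below shows one way such class-based arbitration could look; the class names, the congested flag, and the heap-based policy are assumptions, and the application's handling of bandwidth, latency targets, and frequency/voltage state is omitted.

    import heapq
    import itertools

    LATENCY_SENSITIVE = 0   # highest priority class
    BANDWIDTH_HEAVY = 1
    BEST_EFFORT = 2

    class RequestArbiter:
        """Grants access to a shared resource by service class, throttling
        best-effort clients when the system is congested."""
        def __init__(self):
            self._queue = []
            self._tie = itertools.count()   # keeps FIFO order within a class

        def submit(self, client_class, request):
            heapq.heappush(self._queue, (client_class, next(self._tie), request))

        def grant(self, congested=False):
            if not self._queue:
                return None
            cls, _, request = heapq.heappop(self._queue)
            if congested and cls == BEST_EFFORT:
                # Re-queue best-effort work for a later, less loaded cycle.
                self.submit(cls, request)
                return None
            return request

    arb = RequestArbiter()
    arb.submit(BANDWIDTH_HEAVY, "copy-engine DMA burst")
    arb.submit(LATENCY_SENSITIVE, "display scan-out fetch")
    print(arb.grant())   # the latency-sensitive client is granted access first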