Patents by Inventor Amit Golander

Amit Golander has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190196735
    Abstract: Method, system and product for direct access to de-duplicated data units in memory-based file systems. The method comprising: updating a page entry in a page table of a process to include a direct access pointer to a de-duplicated data unit retained by the memory-based file system, wherein the page entry is set to be write protected; detecting a page fault occurring due to the process performing a store instruction to the de-duplicated data unit; and in response to said detecting: allocating a new data unit; copying content of the de-duplicated data unit to the new data unit; and replacing the direct access pointer to the de-duplicated data unit with a direct access pointer to the new data unit.
    Type: Application
    Filed: February 27, 2019
    Publication date: June 27, 2019
    Applicant: NETAPP, INC.
    Inventors: Amit Golander, Yigal Korman, Boaz Harrosh
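    A minimal user-space sketch of the copy-on-write flow described in the abstract above (illustrative class and method names; the patented method manipulates kernel page-table entries and reacts to hardware page faults):

      # Minimal user-space sketch of copy-on-write for de-duplicated data units.
      # Hypothetical illustration only; the real mechanism updates page-table
      # entries and handles page faults inside the kernel.

      class DedupStore:
          def __init__(self):
              self.units = {}                    # content hash -> shared bytearray

          def get_unit(self, data: bytes) -> bytearray:
              return self.units.setdefault(hash(data), bytearray(data))

      class Mapping:
          """A process 'page entry': a pointer plus a write-protect flag."""
          def __init__(self, unit: bytearray):
              self.unit = unit
              self.write_protected = True        # shared, de-duplicated unit

          def store(self, offset: int, value: int):
              if self.write_protected:           # emulated page fault on first store
                  private = bytearray(self.unit) # allocate a new unit and copy content
                  self.unit = private            # repoint to the new data unit
                  self.write_protected = False
              self.unit[offset] = value

      store = DedupStore()
      shared = store.get_unit(b"hello")
      m1, m2 = Mapping(shared), Mapping(shared)
      m1.store(0, ord("H"))                      # m1 gets a private copy
      assert bytes(m1.unit) == b"Hello" and bytes(m2.unit) == b"hello"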
  • Patent number: 10254990
    Abstract: Method, system and product for direct access to de-duplicated data units in memory-based file systems. The method comprising: updating a page entry in a page table of a process to include a direct access pointer to a de-duplicated data unit retained by the memory-based file system, wherein the page entry is set to be write protected; detecting a page fault occurring due to the process performing a store instruction to the de-duplicated data unit; and in response to said detecting: allocating a new data unit; copying content of the de-duplicated data unit to the new data unit; and replacing the direct access pointer to the de-duplicated data unit with a direct access pointer to the new data unit.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: April 9, 2019
    Assignee: NETAPP, INC.
    Inventors: Amit Golander, Yigal Korman, Boaz Harrosh
  • Patent number: 10140029
    Abstract: Managing pages in a memory-based file system by maintaining the memory as two lists, an Lr list and an Lf list, moving pages from the Lr list to the Lf list based on a repeated access pattern, and arbitrarily moving a page out of either the Lr list or the Lf list, thereby enabling the two lists to re-grow according to the current workload.
    Type: Grant
    Filed: December 10, 2014
    Date of Patent: November 27, 2018
    Assignee: NETAPP, INC.
    Inventors: Amit Golander, Boaz Harrosh, Sagi Manole, Omer Caspi
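    The two-list idea can be illustrated with a small sketch; the promotion rule follows the abstract, while the arbitrary-victim choice below (a random pick from either non-empty list) is an assumption for illustration:

      # Toy sketch of two-list page management: pages enter Lr, are promoted to
      # Lf on a repeated access, and a victim may be taken from either list so
      # both can re-grow with the current workload.
      import random

      class TwoListCache:
          def __init__(self, capacity: int):
              self.capacity = capacity
              self.lr, self.lf = [], []          # page ids, most recent at the end

          def access(self, page: int):
              if page in self.lf:
                  self.lf.remove(page); self.lf.append(page)
              elif page in self.lr:              # repeated access -> promote to Lf
                  self.lr.remove(page); self.lf.append(page)
              else:
                  if len(self.lr) + len(self.lf) >= self.capacity:
                      self._evict()
                  self.lr.append(page)

          def _evict(self):
              victim_list = random.choice([l for l in (self.lr, self.lf) if l])
              victim_list.pop(0)                 # arbitrary victim; lists re-balance

      cache = TwoListCache(capacity=3)
      for p in [1, 2, 1, 3, 4]:                  # page 1 is promoted to Lf
          cache.access(p)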
  • Publication number: 20180322152
    Abstract: A method of negotiating memory record ownership between network nodes, comprising: storing in a memory of a first network node a subset of a plurality of memory records and one of a plurality of file system segments of a file system mapping the memory records; receiving a request from a second network node to access a memory record of the memory records subset; identifying the memory record by using the file system segment; deciding, by a placement algorithm, whether to relocate the memory record, from the memory records subset to a second subset of the plurality of memory records stored in a memory of the second network node; when a relocation is not decided, providing remote access of the memory record via a network to the second network node; and when a relocation is decided, relocating the memory record via the network for management by the second network node.
    Type: Application
    Filed: July 19, 2018
    Publication date: November 8, 2018
    Applicant: NETAPP, INC.
    Inventor: Amit Golander
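    A hedged sketch of the ownership negotiation described above; the placement rule used here (relocate after three remote accesses) is an invented stand-in for the patent's placement algorithm:

      # Illustrative sketch of memory-record ownership negotiation between nodes.
      # The relocation policy below is an assumption; the abstract only requires
      # *a* placement algorithm to decide between remote access and relocation.

      class Node:
          def __init__(self, name: str):
              self.name = name
              self.records = {}                  # record id -> value (owned subset)
              self.remote_hits = {}              # record id -> remote access count

          def request(self, other: "Node", rec_id: str):
              return other.handle_request(self, rec_id)

          def handle_request(self, requester: "Node", rec_id: str):
              value = self.records[rec_id]       # identify record via local segment
              hits = self.remote_hits.get(rec_id, 0) + 1
              self.remote_hits[rec_id] = hits
              if hits >= 3:                      # placement decision: relocate
                  requester.records[rec_id] = self.records.pop(rec_id)
              return value                       # otherwise: plain remote access

      a, b = Node("a"), Node("b")
      a.records["r1"] = 42
      for _ in range(3):
          b.request(a, "r1")
      assert "r1" in b.records and "r1" not in a.records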
  • Patent number: 10031933
    Abstract: A method of negotiating memory record ownership between network nodes, comprising: storing in a memory of a first network node a subset of a plurality of memory records and one of a plurality of file system segments of a file system mapping the memory records; receiving a request from a second network node to access a memory record of the memory records subset; identifying the memory record by using the file system segment; deciding, by a placement algorithm, whether to relocate the memory record, from the memory records subset to a second subset of the plurality of memory records stored in a memory of the second network node; when a relocation is not decided, providing remote access of the memory record via a network to the second network node; and when a relocation is decided, relocating the memory record via the network for management by the second network node.
    Type: Grant
    Filed: March 2, 2015
    Date of Patent: July 24, 2018
    Assignee: NETAPP, INC.
    Inventor: Amit Golander
  • Patent number: 10003645
    Abstract: Logical mirroring of an initiator server running a memory-aware file system to a multi-tiered target server by receiving, at a first tier of the target server, data that was modified at the initiator server; retaining a first subset of the data at the first tier of the target server; and moving a second subset of the data to a second tier of the target server, to efficiently utilize the multi-tiered target server.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: June 19, 2018
    Assignee: NETAPP, INC.
    Inventors: Amit Golander, Yigal Korman, Sagi Manole, Boaz Harrosh
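    A toy sketch of mirroring into a two-tier target; the demotion policy (keep the most recently received items in the first tier) is an illustrative assumption, not the patented placement logic:

      # Sketch of logical mirroring into a two-tier target: modified data arrives
      # in the fast first tier, and colder items are demoted to the second tier.
      from collections import OrderedDict

      class TieredTarget:
          def __init__(self, tier1_capacity: int):
              self.tier1 = OrderedDict()         # fast tier (e.g. persistent memory)
              self.tier2 = {}                    # slower, larger tier
              self.tier1_capacity = tier1_capacity

          def receive(self, key: str, data: bytes):
              self.tier1[key] = data
              self.tier1.move_to_end(key)
              while len(self.tier1) > self.tier1_capacity:
                  cold_key, cold_data = self.tier1.popitem(last=False)
                  self.tier2[cold_key] = cold_data   # demote a subset

      target = TieredTarget(tier1_capacity=2)
      for i in range(4):
          target.receive("file%d" % i, b"modified")
      assert len(target.tier1) == 2 and len(target.tier2) == 2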
  • Patent number: 9936017
    Abstract: A method and system for logical mirroring between nodes includes maintaining a log of a state modifying operation received at a memory-based file system of an initiator node; writing attributes of the state modifying operation from the memory-based file system to a target node memory, and using the written attributes to process the state modifying operation at the target node according to the order represented by the log, to obtain logical mirroring between the initiator node and the target node.
    Type: Grant
    Filed: October 12, 2015
    Date of Patent: April 3, 2018
    Assignee: NETAPP, INC.
    Inventors: Amit Golander, Yigal Korman
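    A small sketch of log-ordered mirroring, with invented operation names and a dictionary standing in for the attributes written to the target node's memory:

      # Sketch of log-ordered logical mirroring: the initiator records the
      # attributes of each state-modifying operation, and the target applies
      # them in the order given by the log.

      class Initiator:
          def __init__(self):
              self.log = []                      # ordered log of state-modifying ops

          def modify(self, op: str, path: str, payload: bytes = b""):
              entry = {"seq": len(self.log), "op": op,
                       "path": path, "payload": payload}
              self.log.append(entry)             # maintain the log
              return entry                       # attributes written to target memory

      class Target:
          def __init__(self):
              self.files = {}
              self.next_seq = 0

          def apply(self, entry: dict):
              assert entry["seq"] == self.next_seq   # respect the log order
              if entry["op"] == "create":
                  self.files[entry["path"]] = b""
              elif entry["op"] == "write":
                  self.files[entry["path"]] = entry["payload"]
              self.next_seq += 1

      ini, tgt = Initiator(), Target()
      for e in (ini.modify("create", "/a"), ini.modify("write", "/a", b"x")):
          tgt.apply(e)
      assert tgt.files["/a"] == b"x"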
  • Patent number: 9851919
    Abstract: Methods for data placement in a memory-based file system are described, including copying a user data unit from a second storage type device to a first storage type device based on an access request to the file system, the first storage type device being a faster access device than the second storage type device; referencing the user data unit in the first storage type device by a byte-addressable memory pointer; and using the byte-addressable memory pointer to copy the user data unit from the first storage type device to the second storage type device based on a data access pattern.
    Type: Grant
    Filed: December 31, 2014
    Date of Patent: December 26, 2017
    Assignee: NETAPP, INC.
    Inventors: Amit Golander, Boaz Harrosh
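    A sketch of the placement idea, modeling the faster device as a single byte-addressable buffer and the "memory pointer" as a byte offset; the victim-selection policy is a simplification:

      # Sketch of promotion/demotion between a fast, byte-addressable device and
      # a slower device. A data unit in the fast tier is referenced by a byte
      # offset standing in for a memory pointer.

      UNIT = 8

      class Placement:
          def __init__(self, units_in_fast: int):
              self.fast = bytearray(units_in_fast * UNIT)   # byte-addressable tier
              self.free = list(range(units_in_fast))        # free unit slots
              self.ptr = {}                                 # key -> byte offset
              self.slow = {}                                # key -> bytes (slow tier)

          def read(self, key: str) -> bytes:
              if key not in self.ptr:                       # promote on access
                  if not self.free:
                      self._demote_one()
                  off = self.free.pop() * UNIT
                  self.fast[off:off + UNIT] = self.slow.pop(key)
                  self.ptr[key] = off
              off = self.ptr[key]
              return bytes(self.fast[off:off + UNIT])

          def _demote_one(self):
              key, off = next(iter(self.ptr.items()))       # simplistic victim pick
              self.slow[key] = bytes(self.fast[off:off + UNIT])
              del self.ptr[key]
              self.free.append(off // UNIT)

      p = Placement(units_in_fast=1)
      p.slow["a"], p.slow["b"] = b"A" * UNIT, b"B" * UNIT
      assert p.read("a") == b"A" * UNIT
      assert p.read("b") == b"B" * UNIT                     # "a" demoted back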
  • Patent number: 9678670
    Abstract: A method and system for compute element state replication is provided. The method includes transforming at least a subset of metadata of a source compute element from a memory tier of the source compute element to a block representation; within a destination compute element, mounting the block representation; reverse transforming the metadata to a memory tier of the destination compute element; and using the reverse transformed metadata to operate the destination compute element.
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: June 13, 2017
    Assignee: PLEXISTOR LTD.
    Inventors: Amit Golander, Sagi Manole
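    A compact sketch of the replication flow, with pickle standing in for the block representation of the metadata (an assumption; the abstract does not specify a serialization format):

      # Sketch of compute-element state replication: metadata living in a memory
      # tier is transformed to a flat block representation, the image is
      # "mounted" by the destination, and the metadata is reverse-transformed
      # into the destination's memory tier.
      import io, pickle

      def transform_to_blocks(metadata: dict) -> bytes:
          return pickle.dumps(metadata)              # memory tier -> block form

      def mount_and_reverse_transform(image: bytes) -> dict:
          with io.BytesIO(image) as block_device:    # "mount" the block image
              return pickle.load(block_device)       # block form -> memory tier

      source_metadata = {"/a.txt": {"size": 3, "mode": 0o644}}
      image = transform_to_blocks(source_metadata)
      destination_metadata = mount_and_reverse_transform(image)
      assert destination_metadata == source_metadata # destination can now operate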
  • Publication number: 20170160980
    Abstract: A method, apparatus and product for accelerating concurrent access to a file in a memory-based file system. The method comprising: receiving a request, issued by a program, for accessing a file stored in a memory-based file system; and subject to the request being associated with data modification of data within the file, and subject to the modification not necessitating a change in a structure of a data structure used for content lookup for the file, acquiring for the program a lock on the file, wherein the lock is acquired in a shared mode.
    Type: Application
    Filed: March 30, 2016
    Publication date: June 8, 2017
    Inventors: Amit Golander, Sagi Manole, Boaz Harrosh
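    A sketch of the lock-mode decision, using a simple reader-writer lock and assuming that only writes which extend the file change its lookup structure (an illustrative criterion):

      # Sketch of lock-mode selection for concurrent file access: writes that
      # only modify existing data take the file lock in shared mode, while
      # operations that change the lookup structure take it exclusively.
      import threading
      from contextlib import contextmanager

      class SharedExclusiveLock:
          def __init__(self):
              self._cond = threading.Condition()
              self._shared = 0
              self._exclusive = False

          @contextmanager
          def acquire(self, shared: bool):
              with self._cond:
                  while self._exclusive or (not shared and self._shared):
                      self._cond.wait()
                  if shared:
                      self._shared += 1
                  else:
                      self._exclusive = True
              try:
                  yield
              finally:
                  with self._cond:
                      if shared:
                          self._shared -= 1
                      else:
                          self._exclusive = False
                      self._cond.notify_all()

      def write(f, lock, offset, data):
          structural = offset + len(data) > f["size"]    # would change the layout
          with lock.acquire(shared=not structural):      # shared mode when possible
              f["data"][offset:offset + len(data)] = data
              f["size"] = max(f["size"], offset + len(data))

      lock = SharedExclusiveLock()
      f = {"data": bytearray(8), "size": 8}
      write(f, lock, 0, b"abc")                          # shared-mode write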
  • Publication number: 20170160979
    Abstract: Method, system and product for direct access to de-duplicated data units in memory-based file systems. The method comprising: updating a page entry in a page table of a process to include a direct access pointer to a de-duplicated data unit retained by the memory-based file system, wherein the page entry is set to be write protected; detecting a page fault occurring due to the process performing a store instruction to the de-duplicated data unit; and in response to said detecting: allocating a new data unit; copying content of the de-duplicated data unit to the new data unit; and replacing the direct access pointer to the de-duplicated data unit with a direct access pointer to the new data unit.
    Type: Application
    Filed: May 13, 2016
    Publication date: June 8, 2017
    Inventors: Amit Golander, Yigal Korman, Boaz Harrosh
  • Patent number: 9298460
    Abstract: Systems and methods are disclosed for enhancing the throughput of a processor by minimizing the number of data transfers between a register file and a memory stack. The register file used by a processor running an application is partitioned into a number of blocks. A subset of the blocks of the register file is defined in an application binary interface, enabling the subset to be pre-allocated and exposed to the application binary interface. Optionally, blocks other than the subset are not exposed to the application binary interface, so that data relating to an application function switch or a context switch is not transferred between the unexposed blocks and a memory stack.
    Type: Grant
    Filed: November 29, 2011
    Date of Patent: March 29, 2016
    Assignee: International Business Machines Corporation
    Inventors: Revital Eres, Amit Golander, Nadav Levison, Sagi Manole, Ayal Zaks
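    A toy model of the idea in the abstract above; block names and sizes are invented, and the "registers" are plain lists, but it shows that only the ABI-exposed subset is spilled on a switch:

      # Toy model of register-file partitioning: only register blocks exposed to
      # the application binary interface are spilled to the stack on a switch,
      # so traffic for the unexposed blocks is avoided.

      REGISTER_BLOCKS = {"abi_args": [0, 0, 0, 0],
                         "abi_temps": [0, 0, 0, 0],
                         "private":   [0, 0, 0, 0]}   # not exposed to the ABI
      ABI_EXPOSED = {"abi_args", "abi_temps"}

      def context_switch(register_blocks: dict, stack: list) -> int:
          """Spill only ABI-exposed blocks; return number of values moved."""
          spilled = 0
          for name in ABI_EXPOSED:
              stack.append((name, list(register_blocks[name])))
              spilled += len(register_blocks[name])
          return spilled                         # 'private' never touches memory

      stack = []
      moved = context_switch(REGISTER_BLOCKS, stack)
      assert moved == 8 and all(name in ABI_EXPOSED for name, _ in stack)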
  • Patent number: 8918588
    Abstract: Techniques for replacing one or more blocks in a cache, the one or more blocks being associated with a plurality of data streams, are provided. The one or more blocks in the cache are grouped into one or more groups, each corresponding to one of the plurality of data streams. One or more incoming blocks are received. To free space, the one or more blocks of the one or more groups in the cache are invalidated in accordance with at least one of an inactivity of a given data stream corresponding to the one or more groups and a length of the one or more groups. The one or more incoming blocks are stored in the cache. A number of data streams maintained within the cache is maximized.
    Type: Grant
    Filed: April 7, 2009
    Date of Patent: December 23, 2014
    Assignee: International Business Machines Corporation
    Inventors: Brian Bass, Giora Biran, Hubertus Franke, Amit Golander, Hao Yu
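    A sketch of stream-grouped replacement; the victim score (longest-inactive stream, then longest group) is one possible reading of the abstract's criteria, not the patented policy:

      # Sketch of stream-aware cache replacement: blocks are grouped by data
      # stream, and to make room a whole group is invalidated, preferring
      # streams that have been inactive longest and, on a tie, longer groups.

      class StreamCache:
          def __init__(self, capacity: int):
              self.capacity = capacity
              self.groups = {}                   # stream id -> list of blocks
              self.last_access = {}              # stream id -> logical time
              self.clock = 0

          def insert(self, stream: str, block: bytes):
              self.clock += 1
              while sum(len(g) for g in self.groups.values()) >= self.capacity:
                  self._invalidate_one_group(exclude=stream)
              self.groups.setdefault(stream, []).append(block)
              self.last_access[stream] = self.clock

          def _invalidate_one_group(self, exclude: str):
              candidates = [s for s in self.groups if s != exclude] or list(self.groups)
              victim = min(candidates,
                           key=lambda s: (self.last_access[s], -len(self.groups[s])))
              del self.groups[victim]
              del self.last_access[victim]

      cache = StreamCache(capacity=4)
      for stream, blk in [("s1", b"a"), ("s1", b"b"), ("s2", b"c"),
                          ("s2", b"d"), ("s3", b"e")]:
          cache.insert(stream, blk)              # s1 (least recently active) evicted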
  • Patent number: 8850095
    Abstract: A novel and useful cost-effective mechanism for detecting the livelock/starvation of transactions in a ring-shaped interconnect that utilizes minimal logic resources. Rather than monitor all transactions in the ring concurrently, the mechanism monitors only a single transaction at a time. A sampling point is located at a point in the ring which contains a set of N latches. If the monitored transaction is not being starved, it is released and the detection logic moves on to the next candidate transaction in round-robin fashion. If the monitored transaction passes the sampling point a threshold number of times, it is deemed to be starved and a starvation prevention handling procedure is activated. By traversing the entire ring a single transaction at a time, all starving transactions will eventually be detected, with an upper limit on the detection time of O(N²).
    Type: Grant
    Filed: February 8, 2011
    Date of Patent: September 30, 2014
    Assignee: International Business Machines Corporation
    Inventors: Amit Golander, Omer Heymann, Nadav Levison, Eric F. Robinson
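    A sketch of the round-robin, single-transaction monitor; the threshold value and the event interface are assumptions:

      # Sketch of single-transaction starvation detection on a ring: one
      # candidate is watched at a time; if it passes the sampling point a
      # threshold number of times without completing it is declared starved,
      # otherwise the monitor moves on to the next candidate in round-robin order.

      THRESHOLD = 4

      class StarvationMonitor:
          def __init__(self, transaction_ids):
              self.candidates = list(transaction_ids)
              self.index = 0                     # round-robin position
              self.passes = 0

          @property
          def watched(self):
              return self.candidates[self.index]

          def on_pass_sampling_point(self, txn_id) -> bool:
              """Returns True if the watched transaction is declared starved."""
              if txn_id != self.watched:
                  return False
              self.passes += 1
              return self.passes >= THRESHOLD    # trigger starvation handling

          def on_complete(self, txn_id):
              if txn_id == self.watched:         # not starved: release, move on
                  self.index = (self.index + 1) % len(self.candidates)
                  self.passes = 0

      monitor = StarvationMonitor(["t0", "t1", "t2"])
      starved = any(monitor.on_pass_sampling_point("t0") for _ in range(THRESHOLD))
      assert starved                             # t0 kept circulating the ring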
  • Patent number: 8839256
    Abstract: A novel and useful system and method of improving the utilization of a special purpose accelerator in a system incorporating a general purpose processor. In some embodiments, the current queue status of the special purpose accelerator is periodically monitored using a background monitoring process/thread and the current queue status is stored in a shared memory. A shim redirection layer added a priori to a library function task determines at runtime and in user space whether to execute the library function task on the special purpose accelerator or the general purpose processor. At runtime, using the shim redirection layer and based on the current queue status, it is determined whether to execute the library function task on the special purpose accelerator or on the general purpose processor.
    Type: Grant
    Filed: June 9, 2010
    Date of Patent: September 16, 2014
    Assignee: International Business Machines Corporation
    Inventors: Heather D. Achilles, Giora Biran, Amit Golander, Nancy A. Greco
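    A sketch of the shim idea: a background thread publishes the accelerator queue depth into shared state, and the wrapper decides per call whether to offload. The simulated queue source, the threshold, and the use of zlib as the "library function" are assumptions:

      # Sketch of a shim redirection layer: a background thread periodically
      # samples the accelerator queue depth into shared state, and the shim
      # wrapper decides at runtime, in user space, whether to send the task to
      # the accelerator or to run the software path.
      import threading, time, zlib

      shared_status = {"queue_depth": 0}         # stands in for shared memory
      QUEUE_LIMIT = 8

      def monitor(sample_queue_depth, stop_event, interval=0.01):
          while not stop_event.is_set():         # background monitoring thread
              shared_status["queue_depth"] = sample_queue_depth()
              time.sleep(interval)

      def accelerated_compress(data: bytes) -> bytes:
          return zlib.compress(data)             # placeholder for the accelerator

      def compress_shim(data: bytes) -> bytes:
          """Shim added in front of the library function."""
          if shared_status["queue_depth"] < QUEUE_LIMIT:
              return accelerated_compress(data)  # offload to the accelerator
          return zlib.compress(data)             # general-purpose CPU fallback

      stop = threading.Event()
      t = threading.Thread(target=monitor, args=(lambda: 3, stop), daemon=True)
      t.start()
      out = compress_shim(b"example payload" * 100)
      stop.set()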
  • Patent number: 8838544
    Abstract: A novel and useful system and method of fast history compression in a pipelined architecture with both speculation and low-penalty misprediction recovery. The method of the present invention speculates that a current input byte does not continue an earlier string, but either starts a new string or represents a literal (no match). As previous bytes are checked to determine whether they start a string, the method of the present invention detects whether the speculation for the current byte is correct. If the speculation is not correct, various methods of recovery are employed, depending on the repeating string length.
    Type: Grant
    Filed: September 23, 2009
    Date of Patent: September 16, 2014
    Assignee: International Business Machines Corporation
    Inventors: Giora Biran, Amit Golander
  • Patent number: 8806292
    Abstract: A hybrid mechanism whereby hardware acceleration is combined with software such that the compression rate achieved is significantly increased while maintaining the original compression ratio (e.g., using full DHT and not SHT or an approximation). The compression acceleration mechanism is applicable to a hardware accelerator tightly coupled with the general purpose processor. The compression task is divided and parallelized between hardware and software wherein each compression task is split into two acceleration requests: a first request that performs SHT encoding using hardware acceleration and provides post-LZ frequency statistics; and a second request that performs SHT decoding and DHT encoding using the DHT generated in software.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: August 12, 2014
    Assignee: International Business Machines Corporation
    Inventors: Giora Biran, Amit Golander, Kiyoshi Nishino, Nobuyoshi Tanaka
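    A sketch of the software half of the split: given post-LZ symbol frequencies (here simply gathered with a Counter), build dynamic-Huffman code lengths in software. The LZ stage, bit-level encoding, and the two hardware acceleration requests are omitted:

      # Sketch: the first request would produce an SHT-encoded stream plus
      # post-LZ frequency statistics; software then builds the dynamic Huffman
      # table (DHT) from those frequencies, and a second request would re-encode
      # with it. Only the frequency-to-code-length step is shown.
      import heapq
      from collections import Counter

      def dht_code_lengths(frequencies: Counter) -> dict:
          """Build Huffman code lengths from symbol frequencies."""
          heap = [(freq, [sym]) for sym, freq in frequencies.items()]
          lengths = {sym: 0 for sym in frequencies}
          heapq.heapify(heap)
          while len(heap) > 1:
              f1, syms1 = heapq.heappop(heap)
              f2, syms2 = heapq.heappop(heap)
              for s in syms1 + syms2:
                  lengths[s] += 1                # one level deeper in the tree
              heapq.heappush(heap, (f1 + f2, syms1 + syms2))
          return lengths

      data = b"abracadabra"
      frequencies = Counter(data)                # the "first request" statistics
      lengths = dht_code_lengths(frequencies)    # software builds the DHT
      assert lengths[ord("a")] <= lengths[ord("d")]   # frequent symbols get shorter codes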
  • Patent number: 8610604
    Abstract: A system and method of selecting a predefined Huffman dictionary from a bank of dictionaries. The dictionary selection mechanism of the present invention effectively breaks the built-in tradeoff between compression ratio and compression rate for both hardware and software compression implementations. A mechanism is provided for automatically creating a predefined Huffman dictionary for a set of input files. The dictionary selection mechanism achieves a high compression rate and ratio by leveraging predefined Huffman dictionaries, and provides a mechanism for dynamically speculating which predefined dictionary to select per input data block, thereby achieving close to a dynamic Huffman ratio at a static Huffman rate. In addition, a feedback loop is used to monitor the ongoing performance of the preset currently selected for use by the hardware accelerator. If the current preset is not optimal, it is replaced with an optimal preset.
    Type: Grant
    Filed: November 24, 2011
    Date of Patent: December 17, 2013
    Assignee: International Business Machines Corporation
    Inventors: Lior Glass, Giora Biran, Amit Golander
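    A loose sketch of preset selection with a feedback check, using zlib preset (zdict) dictionaries as a stand-in for a bank of predefined Huffman dictionaries; the sample size and the re-selection rule are assumptions:

      # Sketch of selecting a preset from a bank by speculating on a small
      # sample of the input block, plus a feedback check on the full result.
      import zlib

      DICTIONARY_BANK = {
          "json": b'{"name": "", "value": 0, "items": []}',
          "logs": b"INFO WARN ERROR timestamp=",
      }

      def compressed_size(data: bytes, preset: bytes) -> int:
          c = zlib.compressobj(zdict=preset)
          return len(c.compress(data) + c.flush())

      def select_preset(block: bytes, sample_size: int = 256) -> str:
          sample = block[:sample_size]           # speculate on a prefix
          return min(DICTIONARY_BANK,
                     key=lambda k: compressed_size(sample, DICTIONARY_BANK[k]))

      def compress_block(block: bytes, current: str):
          out = zlib.compressobj(zdict=DICTIONARY_BANK[current])
          payload = out.compress(block) + out.flush()
          if len(payload) > 0.9 * len(block):    # feedback loop: poor preset
              current = select_preset(block)     # replace with a better one
          return current, payload

      block = b'{"name": "abc", "value": 7, "items": [1, 2]}' * 10
      preset = select_preset(block)
      preset, payload = compress_block(block, preset)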
  • Patent number: 8610606
    Abstract: A system and method of selecting a predefined Huffman dictionary from a bank of dictionaries. The dictionary selection mechanism of the present invention effectively breaks the built-in tradeoff between compression ratio and compression rate for both hardware and software compression implementations. A mechanism is provided for automatically creating a predefined Huffman dictionary for a set of input files. The dictionary selection mechanism achieves a high compression rate and ratio by leveraging predefined Huffman dictionaries, and provides a mechanism for dynamically speculating which predefined dictionary to select per input data block, thereby achieving close to a dynamic Huffman ratio at a static Huffman rate. In addition, a feedback loop is used to monitor the ongoing performance of the preset currently selected for use by the hardware accelerator. If the current preset is not optimal, it is replaced with an optimal preset.
    Type: Grant
    Filed: November 24, 2011
    Date of Patent: December 17, 2013
    Assignee: International Business Machines Corporation
    Inventors: Lior Glass, Giora Biran, Amit Golander
  • Publication number: 20130321180
    Abstract: A system and method of accelerating dynamic Huffman decompaction within the inflate algorithm. To improve the performance of a decompression engine during the inflate/decompression process, Huffman trees decompacted a priori are used, thus eliminating the requirement of decompacting the DHT for each input stream. The Huffman tree in the input stream is matched prior to decompaction. If a match is found, the stored decompacted Huffman tree is used, which reduces the required decompression time.
    Type: Application
    Filed: May 31, 2012
    Publication date: December 5, 2013
    Applicant: International Business Machines Corporation
    Inventors: Giora Biran, Amit Golander, Shai Ishaya Tahar
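    A sketch of caching decompacted Huffman trees keyed by their compacted description; the canonical code construction is standard, while the stream format and the caching policy are illustrative assumptions:

      # Sketch of reusing decompacted Huffman trees: the compacted tree
      # description found in the input stream is used as a cache key, and on a
      # match the stored decompacted table is reused instead of being rebuilt
      # for every stream.

      decompacted_cache = {}                     # compacted DHT bytes -> code table

      def decompact(code_lengths: bytes) -> dict:
          """Expand compacted code lengths into canonical Huffman codes."""
          symbols = sorted((l, s) for s, l in enumerate(code_lengths) if l > 0)
          codes, code, prev_len = {}, 0, symbols[0][0]
          for length, sym in symbols:
              code <<= (length - prev_len)       # canonical code assignment
              codes[sym] = format(code, "0{}b".format(length))
              code += 1
              prev_len = length
          return codes

      def lookup_or_decompact(compacted: bytes) -> dict:
          if compacted not in decompacted_cache:         # miss: pay the cost once
              decompacted_cache[compacted] = decompact(compacted)
          return decompacted_cache[compacted]            # hit: reuse stored tree

      table = lookup_or_decompact(bytes([2, 1, 3, 3]))   # lengths for symbols 0..3
      assert table == {1: "0", 0: "10", 2: "110", 3: "111"}
      table_again = lookup_or_decompact(bytes([2, 1, 3, 3]))  # served from the cache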