Patents Examined by Marwan Ayash
  • Patent number: 10223291
    Abstract: A computing device comprises: a memory; a processor; an interpreter; and a Memory Management Unit. The interpreter is for controlling the processor to execute a program comprising at least one first instruction in a format that is not native to the processor and at least one second instruction in machine code that is native to the processor. The Memory Management Unit is adapted to control access by the processor to the memory and possibly also to peripherals when the at least one second instruction is executed.
    Type: Grant
    Filed: May 15, 2010
    Date of Patent: March 5, 2019
    Assignee: NXP B.V.
    Inventors: Ernst Haselsteiner, Christian Kirchstaetter
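    Illustrative sketch (not from the patent): a minimal Python model of the idea in this abstract, in which an interpreter runs non-native instructions directly and consults an MMU-style checker before letting a native instruction touch memory. All class, method, and range names below are assumptions made for illustration.
      # Hypothetical sketch: interpreter gating "native" instructions behind an MMU-style check.
      class AccessViolation(Exception):
          pass

      class Mmu:
          def __init__(self, allowed_ranges):
              self.allowed_ranges = allowed_ranges      # (start, end) ranges native code may access

          def check(self, address):
              if not any(start <= address < end for start, end in self.allowed_ranges):
                  raise AccessViolation(f"native access to 0x{address:x} denied")

      class Interpreter:
          def __init__(self, memory_size, mmu):
              self.memory = bytearray(memory_size)
              self.mmu = mmu

          def run(self, program):
              for kind, op in program:
                  if kind == "bytecode":
                      op(self.memory)                   # interpreted instruction, fully mediated
                  else:                                 # "native" machine-code write
                      addr, value = op
                      self.mmu.check(addr)              # MMU limits what native code may touch
                      self.memory[addr] = value

      vm = Interpreter(memory_size=0x400, mmu=Mmu(allowed_ranges=[(0x100, 0x200)]))
      vm.run([("bytecode", lambda mem: mem.__setitem__(0, 1)), ("native", (0x180, 0x7F))])
      # vm.run([("native", (0x300, 0x00))]) would raise AccessViolation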
  • Patent number: 10210100
    Abstract: A system and method are disclosed for an event lock storage device. The storage device includes a user partition and an event partition (which may be associated with an event). The storage device receives data from a host device, and stores the data in the user partition. In response to receiving an indication of an event, the storage device may designate the data as part of the event partition. The event partition may include a set of access rules that is different from the user partition, such as more restrictive rules for modification or deletion of a file containing the data.
    Type: Grant
    Filed: April 14, 2015
    Date of Patent: February 19, 2019
    Assignee: SANDISK TECHNOLOGIES LLC
    Inventors: Filip Verhaeghe, Bsa Chung, Samuel Yu, Michael Lavrentiev
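    Illustrative sketch (not from the patent): a small Python model of the event-lock behavior summarized above, where data first lands in a user partition and, once an event is indicated, is re-designated to an event partition whose rules forbid deletion. The class name, rule, and file names are assumptions for illustration.
      # Hypothetical sketch: user vs. event partition with different access rules.
      class EventLockStore:
          def __init__(self):
              self.user_partition = {}     # filename -> data
              self.event_partition = {}    # filename -> data, under more restrictive rules

          def write(self, name, data):
              self.user_partition[name] = data

          def mark_event(self, names):
              # On an event indication, designate the named files as part of the event partition.
              for name in names:
                  self.event_partition[name] = self.user_partition.pop(name)

          def delete(self, name):
              if name in self.event_partition:
                  raise PermissionError(f"{name} is event-locked and cannot be deleted")
              self.user_partition.pop(name, None)

      store = EventLockStore()
      store.write("clip_0001.bin", b"...")
      store.mark_event(["clip_0001.bin"])   # e.g. the host signals that an event occurred
      # store.delete("clip_0001.bin") would now raise PermissionError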
  • Patent number: 10200472
    Abstract: Generally, this disclosure provides systems, devices, methods and computer readable media for improved coordination between sender and receiver nodes in a one-sided memory access to a partitioned global address space (PGAS) in a distributed computing environment. The system may include a transceiver module configured to receive a message over a network, the message comprising a data portion and a data size indicator, and an offset handler module configured to calculate a destination address from a base address of a memory buffer and an offset counter. The transceiver module may further be configured to write the data portion to the memory buffer at the destination address; and the offset handler module may further be configured to update the offset counter based on the data size indicator.
    Type: Grant
    Filed: December 24, 2014
    Date of Patent: February 5, 2019
    Assignee: Intel Corporation
    Inventors: Mario Flajslik, James Dinan
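    Illustrative sketch (not from the patent): a compact Python model of the receiver-side flow described above: each message carries a data portion and a data size indicator, the destination is the buffer base plus a running offset counter, and the counter is advanced by the indicated size after the write. All names are assumptions for illustration.
      # Hypothetical sketch of receiver-side offset handling for one-sided puts.
      class OffsetHandler:
          def __init__(self, base_address):
              self.base_address = base_address
              self.offset = 0                          # running offset counter

          def destination(self):
              return self.base_address + self.offset

          def advance(self, data_size):
              self.offset += data_size                 # update counter by the size indicator

      class Receiver:
          def __init__(self, buffer_size):
              self.buffer = bytearray(buffer_size)     # memory buffer for incoming data
              self.offsets = OffsetHandler(base_address=0)

          def on_message(self, data, data_size):
              dest = self.offsets.destination()        # base address + offset counter
              self.buffer[dest:dest + data_size] = data
              self.offsets.advance(data_size)

      rx = Receiver(buffer_size=64)
      rx.on_message(b"hello", 5)
      rx.on_message(b"world", 5)
      assert bytes(rx.buffer[:10]) == b"helloworld"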
  • Patent number: 10191775
    Abstract: The present invention discloses a method for optimizing the throughput of hardware accelerators (HWAs) in a computerized abstraction system by utilizing the maximal data input bandwidth of the said HWAs. The method comprises the following steps: dynamically obtaining the quantities and properties of HWAs and storage units within the computerized abstraction system; dynamically allocating cache memory space for each of the HWAs, according to the said obtained quantities and properties, to minimize the time required for reading data from storage instances to the said HWA; and dynamically allocating spoolers for each of the HWAs, according to the said obtained quantities and properties, to buffer the input data and ensure a continuous flow of input data at the target HWA's maximal input bandwidth.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: January 29, 2019
    Assignee: SQREAM TECHNOLOGIES LTD.
    Inventors: Ori Brostovsky, Omid Vahdaty, Eli Klatis, Tal Zelig, Jake Wheat, Razi Shoshani
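    Illustrative sketch (not from the patent): a toy Python version of the allocation step described above, splitting a cache budget across the detected accelerators in proportion to their maximal input bandwidth and assigning one spooler per accelerator. The proportional sizing rule and all names are assumptions, not the patented method.
      # Hypothetical sketch: divide a cache budget across HWAs by input bandwidth.
      def allocate_cache(hwas, total_cache_bytes):
          """hwas: list of dicts with 'name' and 'max_input_bw' in bytes/s."""
          total_bw = sum(h["max_input_bw"] for h in hwas)
          plan = {}
          for h in hwas:
              share = h["max_input_bw"] / total_bw
              plan[h["name"]] = {
                  "cache_bytes": int(total_cache_bytes * share),
                  "spooler": f"spooler-{h['name']}",   # one spooler buffers each HWA's input
              }
          return plan

      hwas = [{"name": "hwa0", "max_input_bw": 12_000_000_000},
              {"name": "hwa1", "max_input_bw": 6_000_000_000}]
      print(allocate_cache(hwas, total_cache_bytes=3 * 2**30))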
  • Patent number: 10157002
    Abstract: A method begins by a processing module determining a priority access level of an encoded data slice stored on a memory device. The method continues with the processing module determining an end-of-life memory level for the memory device. The method continues with the processing module determining whether to migrate the encoded data slice from the memory device based on the priority access level and the end-of-life memory level. The method continues with the processing module identifying another memory device. The method continues with the processing module facilitating migration of the encoded data slice to the other memory device.
    Type: Grant
    Filed: August 5, 2011
    Date of Patent: December 18, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gary W. Grube, Jason K. Resch, Timothy W. Markison, Ilya Volvovski, Manish Motwani
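    Illustrative sketch (not from the patent): a minimal Python version of the migration decision described above, moving a slice off a memory device when the device's end-of-life level is high relative to the slice's priority access level. The scales, threshold formula, and device names are assumptions for illustration.
      # Hypothetical sketch. Assumed scales: priority_access 1 (low)..5 (high),
      # end_of_life 0.0 (new)..1.0 (worn out).
      def should_migrate(priority_access, end_of_life):
          threshold = 1.0 - 0.15 * priority_access     # evacuate high-priority slices earlier
          return end_of_life >= threshold

      def migrate_if_needed(slice_id, source, candidates, priority_access):
          if not should_migrate(priority_access, source["end_of_life"]):
              return source
          target = min(candidates, key=lambda d: d["end_of_life"])   # pick the healthiest device
          print(f"migrating {slice_id}: {source['name']} -> {target['name']}")
          return target

      candidates = [{"name": "mem1", "end_of_life": 0.4}, {"name": "mem2", "end_of_life": 0.1}]
      migrate_if_needed("slice-42", {"name": "mem0", "end_of_life": 0.9}, candidates, priority_access=4)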
  • Patent number: 10114562
    Abstract: In a multi-plane non-volatile memory, good blocks of different planes are linked for parallel operation for storing long host writes. Where bad blocks in one or more planes result in unlinked blocks, the unlinked blocks are configured for individual operation to store short host writes and/or memory system management data. Unlinked blocks may be configured as Single Level Cell (SLC) blocks while linked blocks may be configured as SLC blocks or Multi Level Cell (MLC) blocks.
    Type: Grant
    Filed: September 16, 2014
    Date of Patent: October 30, 2018
    Assignee: SanDisk Technologies LLC
    Inventors: Narendhiran Chinnaanangur Ravimohan, Muralitharan Jayaraman, Abhijeet Manohar, Alan Bennett
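    Illustrative sketch (not from the patent): a short Python version of the linking scheme summarized above, pairing good blocks at matching positions across two planes into linked metablocks for long writes and leaving the rest as individually operated (e.g. SLC) blocks. The same-position pairing rule is an assumed simplification.
      # Hypothetical sketch: link good blocks across two planes; leftovers stay unlinked.
      def link_blocks(plane0_good, plane1_good):
          """Each argument is the set of good block numbers in that plane."""
          linked, unlinked = [], []
          for blk in sorted(plane0_good | plane1_good):
              if blk in plane0_good and blk in plane1_good:
                  linked.append((blk, blk))            # metablock for parallel long host writes
              elif blk in plane0_good:
                  unlinked.append(("plane0", blk))     # individually operated, e.g. as SLC
              else:
                  unlinked.append(("plane1", blk))
          return linked, unlinked

      linked, unlinked = link_blocks(plane0_good={0, 1, 2, 4}, plane1_good={0, 2, 3, 4})
      print("linked:", linked)       # [(0, 0), (2, 2), (4, 4)]
      print("unlinked:", unlinked)   # [('plane0', 1), ('plane1', 3)]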
  • Patent number: 10108507
    Abstract: A method, system, and computer program product for receiving a request to roll an image to a point in time by reading data from a journal, applying data from the journal to create an asynchronous copy-on-write image at the requested point in time, creating a virtual image data structure, and allowing writes to be cached in a journal-based replication appliance.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: October 23, 2018
    Assignee: EMC IP Holding Company
    Inventor: Assaf Natanzon
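    Illustrative sketch (not from the patent): a simplified Python model of rolling an image to a point in time from a journal, applying journal entries up to the requested timestamp as a copy-on-write overlay over a base image. The dict-based data structures are assumptions made to keep the example self-contained.
      # Hypothetical sketch: build a point-in-time view by replaying a write journal.
      def roll_to_point_in_time(base_image, journal, target_time):
          """base_image: {block: data}; journal: list of (time, block, data) writes."""
          overlay = {}                          # copy-on-write layer over the base image
          for time, block, data in journal:
              if time <= target_time:
                  overlay[block] = data         # apply journaled writes at or before the target
          def read(block):
              return overlay.get(block, base_image.get(block))
          return read

      base = {0: b"AAAA", 1: b"BBBB"}
      journal = [(10, 0, b"aaaa"), (20, 1, b"bbbb"), (30, 0, b"zzzz")]
      view = roll_to_point_in_time(base, journal, target_time=20)
      assert view(0) == b"aaaa" and view(1) == b"bbbb"   # the write at t=30 is not visible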
  • Patent number: 10102127
    Abstract: Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: October 16, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Jun Dai, Subhendu Das, Zhi Gan, Zhang Yue
  • Patent number: 10102128
    Abstract: Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: October 16, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Jun Dai, Subhendu Das, Zhi Gan, Zhang Yue
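    Illustrative sketch (not from the patent): a minimal Python model of the scheme shared by this patent and the preceding one: the cache is divided into areas, each guarded by its own lock, so only the thread holding an area's lock may update entries in that area. Hashing the key to choose an area is an assumption for illustration.
      # Hypothetical sketch: a cache split into areas, each with its own lock.
      import threading

      class AreaLockedCache:
          def __init__(self, num_areas=8):
              self.areas = [{} for _ in range(num_areas)]                 # entries per cache area
              self.locks = [threading.Lock() for _ in range(num_areas)]   # one lock attribute per area

          def _area(self, key):
              return hash(key) % len(self.areas)

          def put(self, key, value):
              i = self._area(key)
              with self.locks[i]:          # only the thread holding this lock may update the area
                  self.areas[i][key] = value

          def get(self, key):
              i = self._area(key)
              with self.locks[i]:
                  return self.areas[i].get(key)

      cache = AreaLockedCache()
      cache.put("block-17", b"payload")
      assert cache.get("block-17") == b"payload"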
  • Patent number: 10089277
    Abstract: A method and system are provided for configurable computation and data processing. A logical processor includes an array of logic elements. The processor may be a combinatorial circuit that can be applied to modify computational aspects of an array of reconfigurable circuits. A memory stores a plurality of instructions, each instruction including an instruction-fetch data portion and an output data transfer data portion. One or more memory controllers are coupled to the memory and receive instructions and/or output data from the memory. A back buffer is coupled with the memory controller and receives instructions from the memory controller. The back buffer sequentially asserts each received instruction upon one or more memory controllers. The memory controllers transfer data received from the memory to a target, such as an array of reconfigurable logic circuits that are optionally coupled to the memory, the back buffer, and one or more additional memory controllers.
    Type: Grant
    Filed: November 21, 2011
    Date of Patent: October 2, 2018
    Inventor: Robert Keith Mykland
  • Patent number: 10089041
    Abstract: A method for data storage includes, in a storage device that communicates with a host over a storage interface for executing a storage command in a memory of the storage device, estimating an expected data under-run between fetching data for the storage command from the memory and sending the data over the storage interface. A data size to be prefetched from the memory, in order to complete uninterrupted execution of the storage command, is calculated in the storage device based on the estimated data under-run. The storage command is executed in the memory while prefetching from the memory data of at least the calculated data size.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: October 2, 2018
    Assignee: Apple Inc.
    Inventor: Arie Peled
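    Illustrative sketch (not from the patent): a small Python version of the sizing rule described above, estimating how far sending could outpace fetching over the transfer and prefetching at least that much before execution starts. The bandwidth-based estimate and the margin are assumptions for illustration.
      # Hypothetical sketch: size the prefetch from an estimated data under-run.
      def estimate_underrun(command_bytes, fetch_bw, link_bw):
          """Bytes by which sending could outrun fetching over the whole transfer."""
          if link_bw <= fetch_bw:
              return 0
          transfer_time = command_bytes / link_bw
          return int((link_bw - fetch_bw) * transfer_time)

      def prefetch_size(command_bytes, fetch_bw, link_bw, margin=4096):
          # Prefetch at least the expected under-run plus a small margin,
          # capped at the size of the command itself.
          return min(command_bytes, estimate_underrun(command_bytes, fetch_bw, link_bw) + margin)

      print(prefetch_size(command_bytes=1 << 20, fetch_bw=400e6, link_bw=600e6))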
  • Patent number: 10091172
    Abstract: A network memory system is disclosed. The network memory system comprises a first appliance configured to encrypt first data, and store the encrypted first data in a first memory device. The first appliance also determines whether the first data is available in a second appliance and transmits a store instruction comprising the first data based on the determination that the first data does not exist in the second appliance. The second appliance is configured to receive the store instruction from the first appliance comprising the first data, encrypt the first data, and store the encrypted first data in a second memory device. The second appliance is further configured to receive a retrieve instruction comprising a location indicator indicating where the encrypted first data is stored, process the retrieve instruction to obtain encrypted response data, and decrypt the encrypted response data.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: October 2, 2018
    Assignee: Silver Peak Systems, Inc.
    Inventor: David Anthony Hughes
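    Illustrative sketch (not from the patent): a Python model of the store/retrieve exchange described above, in which the first appliance sends a store instruction only when the peer lacks the data and both sides keep their copies encrypted at rest. The XOR "cipher" is a placeholder to keep the example self-contained, not a real cipher and not what the patent specifies; all names are assumptions.
      # Hypothetical sketch of the network-memory store/retrieve exchange.
      import hashlib

      def xor_cipher(data, key=0x5A):          # placeholder for real at-rest encryption
          return bytes(b ^ key for b in data)

      class Appliance:
          def __init__(self):
              self.store = {}                  # location indicator -> encrypted data

          def has(self, fingerprint):
              return fingerprint in self.store

          def handle_store(self, fingerprint, data):
              self.store[fingerprint] = xor_cipher(data)    # encrypt, then store

          def handle_retrieve(self, location):
              return xor_cipher(self.store[location])       # decrypt the response data

      def send(first, second, data):
          fingerprint = hashlib.sha256(data).hexdigest()
          first.handle_store(fingerprint, data)             # first appliance keeps an encrypted copy
          if not second.has(fingerprint):                   # transmit only if the peer lacks the data
              second.handle_store(fingerprint, data)
          return fingerprint                                # location indicator for later retrieval

      a, b = Appliance(), Appliance()
      loc = send(a, b, b"wan payload")
      assert b.handle_retrieve(loc) == b"wan payload"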
  • Patent number: 10037334
    Abstract: A system for memory management for Virtual Machines (VMs), including a host computer system running a host operating system (OS); at least two Virtual Machines (VMs) running on the host computer system, wherein each of the VMs has a Guest OS supporting a guest file system with execution-in-place that allows code execution without an intermediate buffer cache; a hypervisor configured to control the VMs; and a thin provisioning block device configured to store shared pages and formed of at least one delta file. The hypervisor is configured to receive a page fault, and to read the shared pages from the thin provisioning block device. The Guest OS executes the file that is stored on the thin provisioning block device.
    Type: Grant
    Filed: December 26, 2016
    Date of Patent: July 31, 2018
    Assignee: Parallels International GmbH
    Inventors: Denis Lunev, Alexey Kobets
  • Patent number: 9959209
    Abstract: A data storage device is disclosed comprising a non-volatile memory. A command rate profile is initialized, wherein the command rate profile defines a limit on a number of access commands received from a host as a function of an internal parameter of the data storage device. The command rate profile is adjusted in response to a change in operating mode.
    Type: Grant
    Filed: March 23, 2010
    Date of Patent: May 1, 2018
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Scott E. Burton, Kenny T. Coker, Robert M. Fallone
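    Illustrative sketch (not from the patent): a brief Python version of a command rate profile as described above, looking the allowed command rate up from an internal parameter and swapping the whole profile when the operating mode changes. The parameter, thresholds, and rates are invented for illustration.
      # Hypothetical sketch: limit host command rate as a function of an internal parameter.
      PROFILES = {
          # operating mode -> list of (parameter_threshold, max_commands_per_second)
          "normal":    [(50, 4000), (70, 2000), (85, 500)],
          "low_power": [(50, 1000), (70, 500),  (85, 100)],
      }

      class CommandRateLimiter:
          def __init__(self, mode="normal"):
              self.profile = PROFILES[mode]

          def set_mode(self, mode):
              self.profile = PROFILES[mode]     # adjust the profile on an operating-mode change

          def limit(self, internal_param):
              for threshold, max_rate in self.profile:
                  if internal_param <= threshold:
                      return max_rate
              return 0                          # throttle fully beyond the last threshold

      limiter = CommandRateLimiter()
      print(limiter.limit(internal_param=62))   # 2000
      limiter.set_mode("low_power")
      print(limiter.limit(internal_param=62))   # 500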
  • Patent number: 9940262
    Abstract: A system and method for efficiently indicating branch target addresses. A semiconductor chip predecodes instructions of a computer program prior to installing the instructions in an instruction cache. In response to determining a particular instruction is a control flow instruction with a displacement relative to a program counter address (PC), the chip replaces a portion of the PC relative displacement in the particular instruction with a subset of a target address. The subset of the target address is an untranslated physical subset of the full target address. When the recoded particular instruction is fetched and decoded, the remaining portion of the PC relative displacement is added to a virtual portion of the PC used to fetch the particular instruction. The result is concatenated with the portion of the target address embedded in the fetched particular instruction to form a full target address.
    Type: Grant
    Filed: September 19, 2014
    Date of Patent: April 10, 2018
    Assignee: Apple Inc.
    Inventors: Shyam Sundar, Richard F. Russo, Ronald P. Hall, Conrado Blasco
  • Patent number: 9923967
    Abstract: A storage control system adapted to operate as a remote copy pair by communicating between a primary and a secondary of the remote copy pair comprises a selector for selecting writes to be placed in a batch based on one or more criteria, a sequence number requester for requesting a sequence number for the batch, and a sequence number granter for granting a sequence number for the batch. The storage control system also comprises a batch transmitter for transmitting the batch to the secondary, a permission receiver for receiving a permission to write the batch from the secondary, and a write component responsive to the permission receiver to write the batch to completion, wherein the secondary is responsive to the completion to grant a further permission to write for a further batch.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: March 20, 2018
    Assignee: International Business Machines Corporation
    Inventors: Dale Burr, Henry Esmond Butterworth
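    Illustrative sketch (not from the patent): a condensed, in-process Python model of the handshake summarized above: the primary groups writes into a batch, takes a sequence number for it, transmits it, and writes it to completion only once permitted, after which the secondary grants permission for the next batch. The selection criteria and all names are assumptions for illustration.
      # Hypothetical sketch of the primary/secondary batch handshake.
      class Secondary:
          def __init__(self):
              self.applied = []
              self.next_permitted = 1                     # sequence number allowed to write next

          def receive_batch(self, seq, writes):
              self.pending = (seq, writes)
              return seq == self.next_permitted           # permission to write this batch

          def complete(self, seq):
              self.applied.extend(self.pending[1])
              self.next_permitted = seq + 1               # further permission for the next batch

      class Primary:
          def __init__(self, secondary):
              self.secondary = secondary
              self.next_seq = 1

          def replicate(self, writes):
              seq, self.next_seq = self.next_seq, self.next_seq + 1   # sequence number granted
              if self.secondary.receive_batch(seq, writes):           # transmit, await permission
                  self.secondary.complete(seq)                        # write the batch to completion

      secondary = Secondary()
      primary = Primary(secondary)
      primary.replicate([("lba-10", b"A"), ("lba-11", b"B")])
      primary.replicate([("lba-12", b"C")])
      print(secondary.applied)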
  • Patent number: 9898406
    Abstract: A disk drive is disclosed that varies its caching policy for caching data in non-volatile solid-state memory as the memory degrades. As the non-volatile memory degrades, the caching policy can be varied such that the non-volatile memory is used more as a read cache and less as a write cache. Performance improvements and slower degradation of the non-volatile memory can thereby be attained.
    Type: Grant
    Filed: February 10, 2016
    Date of Patent: February 20, 2018
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventor: Robert L. Horn
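    Illustrative sketch (not from the patent): a tiny Python version of the policy shift described above, shrinking the share of the solid-state memory used as a write cache as wear increases. The wear thresholds and split ratios are assumptions for illustration.
      # Hypothetical sketch: choose the read/write cache split from the wear level.
      def cache_policy(wear_fraction):
          """wear_fraction: 0.0 (fresh) .. 1.0 (end of rated life)."""
          if wear_fraction < 0.3:
              return {"write_cache_share": 0.6, "read_cache_share": 0.4}
          if wear_fraction < 0.7:
              return {"write_cache_share": 0.3, "read_cache_share": 0.7}
          # Heavily worn: stop absorbing writes and use the remaining life as a read cache.
          return {"write_cache_share": 0.0, "read_cache_share": 1.0}

      for wear in (0.1, 0.5, 0.9):
          print(wear, cache_policy(wear))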
  • Patent number: 9892047
    Abstract: A cache memory including: a plurality of parallel input ports configured to receive, in parallel, memory access requests wherein each parallel input port is operable to receive a memory access request for any one of a plurality of processing units; and a plurality of cache blocks wherein each cache block is configured to receive memory access requests from a unique one of the plurality of input ports such that there is a one-to-one mapping between the plurality of parallel input ports and the plurality of cache blocks and wherein each of the plurality of cache blocks is configured to serve a unique portion of an address space of the memory.
    Type: Grant
    Filed: September 17, 2009
    Date of Patent: February 13, 2018
    Assignee: Provenance Asset Group LLC
    Inventors: Jari Nikara, Eero Aho, Kimmo Kuusilinna
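    Illustrative sketch (not from the patent): a small Python model of the structure described above, with a one-to-one mapping between input ports and cache blocks and each block serving a disjoint, interleaved slice of the address space, so requests arriving on different ports never contend for the same block. The interleaving rule and line size are assumptions for illustration.
      # Hypothetical sketch: parallel ports mapped one-to-one onto address-partitioned cache blocks.
      class CacheBlock:
          def __init__(self, index, num_blocks, line_size=64):
              self.index, self.num_blocks, self.line_size = index, num_blocks, line_size
              self.lines = {}

          def owns(self, address):
              return (address // self.line_size) % self.num_blocks == self.index

          def access(self, address):
              assert self.owns(address), "request arrived on the wrong port for this address"
              return self.lines.setdefault(address // self.line_size, b"\0" * self.line_size)

      class ParallelCache:
          def __init__(self, num_ports=4):
              self.blocks = [CacheBlock(i, num_ports) for i in range(num_ports)]   # one block per port

          def port_for(self, address):
              # A processing unit issues its request on the port whose block owns the address.
              return (address // self.blocks[0].line_size) % len(self.blocks)

          def access(self, address):
              return self.blocks[self.port_for(address)].access(address)

      cache = ParallelCache()
      cache.access(0x0000)    # served by port/block 0
      cache.access(0x0040)    # served by port/block 1, independently of the access above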
  • Patent number: 9824007
    Abstract: Systems, methods and/or devices are used to enable enhancing data integrity to protect against returning old versions of data. In one aspect, the method includes (1) receiving a write request from a host that specifies write data for a set of logical block addresses in a logical address space of the host, (2) mapping the set of logical block addresses to a set of physical addresses corresponding to physical pages of the storage device, and (3) performing one or more operations for each logical block specified by the set of logical block addresses, including: (a) generating metadata for the logical block, the metadata including a version number for the logical block, (b) storing the metadata, including the version number, in a header of a physical page in which the logical block is stored, and (c) storing the version number in a version data structure.
    Type: Grant
    Filed: February 24, 2015
    Date of Patent: November 21, 2017
    Assignee: SanDisk Technologies LLC
    Inventors: Girish B. Desai, William L. Guthrie
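    Illustrative sketch (not from the patent): a compact Python model of the write path described above, storing a per-logical-block version number both in the header of the physical page holding the block and in a separate version data structure, so a read can detect a stale page. The structures and field names are assumptions for illustration.
      # Hypothetical sketch: per-logical-block version numbers in page headers and a version table.
      class Flash:
          def __init__(self):
              self.pages = {}        # physical address -> {"header": ..., "data": ...}
              self.l2p = {}          # logical block address -> physical address
              self.versions = {}     # version data structure: lba -> current version number
              self.next_phys = 0

          def write(self, lba, data):
              version = self.versions.get(lba, 0) + 1
              phys, self.next_phys = self.next_phys, self.next_phys + 1
              self.pages[phys] = {"header": {"lba": lba, "version": version}, "data": data}
              self.l2p[lba] = phys
              self.versions[lba] = version                   # kept alongside the page-header copy

          def read(self, lba):
              page = self.pages[self.l2p[lba]]
              if page["header"]["version"] != self.versions[lba]:
                  raise IOError(f"stale (old-version) data for LBA {lba}")
              return page["data"]

      flash = Flash()
      flash.write(7, b"v1")
      flash.write(7, b"v2")
      assert flash.read(7) == b"v2"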
  • Patent number: 9817752
    Abstract: Systems, methods and/or devices are used to enhance data integrity to protect against returning old versions of data. In one aspect, a method includes (1) receiving a write request from a host that specifies write data for a set of logical block addresses, (2) mapping, using a mapping table, the set of logical block addresses to a set of physical addresses, where the mapping table includes a plurality of subsets, and (3) performing operations for each subset of the mapping table that includes at least one entry corresponding to a logical block specified by the set of logical block addresses, including: (a) generating metadata for the subset, the metadata including a version number for the subset, (b) calculating a Cyclic Redundancy Check (CRC) checksum for the subset, and (c) storing the version number for the subset and the CRC checksum for the subset in a version data structure.
    Type: Grant
    Filed: February 24, 2015
    Date of Patent: November 14, 2017
    Assignee: SanDisk Technologies LLC
    Inventors: Girish B. Desai, William L. Guthrie
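    Illustrative sketch (not from the patent): a short Python model of the bookkeeping described above: for each mapping-table subset touched by a write, a version number is bumped and a CRC checksum of the subset is computed, and both are recorded in a version data structure. The subset size, layout, and use of CRC-32 are assumptions for illustration.
      # Hypothetical sketch: per-subset version numbers and CRC checksums for a mapping table.
      import zlib

      SUBSET_SIZE = 4                                   # logical blocks per mapping-table subset

      class MappingTable:
          def __init__(self, num_blocks):
              self.l2p = [0] * num_blocks               # logical-to-physical mapping entries
              self.version_data = {}                    # subset index -> (version, crc)

          def _subset_bytes(self, s):
              entries = self.l2p[s * SUBSET_SIZE:(s + 1) * SUBSET_SIZE]
              return b"".join(e.to_bytes(4, "little") for e in entries)

          def map_write(self, lba, phys):
              self.l2p[lba] = phys
              s = lba // SUBSET_SIZE
              version = self.version_data.get(s, (0, None))[0] + 1
              crc = zlib.crc32(self._subset_bytes(s))   # checksum of the updated subset
              self.version_data[s] = (version, crc)

          def verify(self, s):
              return zlib.crc32(self._subset_bytes(s)) == self.version_data[s][1]

      table = MappingTable(num_blocks=16)
      table.map_write(lba=5, phys=1234)
      assert table.verify(5 // SUBSET_SIZE)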