Patents Examined by Ryan Dare
  • Patent number: 10672492
    Abstract: A data sampling circuit module, a data sampling method and a memory storage device are provided. The method includes: receiving a differential signal and generating an input data stream according to the differential signal; sampling a clock signal according to a plurality of turning points of the input data stream and outputting a sampling signal; and outputting a bit data stream corresponding to the input data stream according to the sampling signal.
    Type: Grant
    Filed: March 6, 2015
    Date of Patent: June 2, 2020
    Assignee: PHISON ELECTRONICS CORP.
    Inventor: Chih-Ming Chen
  • Patent number: 10649691
    Abstract: An example storage system obtains a reference request for a reference request data block that is included in the content and stored in the medium area. The storage system determines a number of gaps among addresses, in the medium area, of a plurality of data blocks continuous in the content including the reference request data block. The storage system determines, based on the number of gaps, whether or not defrag based on the plurality of data blocks is valid. When the defrag is determined to be valid, the storage system reads the plurality of data blocks from the medium area into the memory area and writes them into continuous address areas of the medium area.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: May 12, 2020
    Assignee: HITACHI, LTD.
    Inventors: Mitsuo Hayasaka, Ken Nomura, Keiichi Matsuzawa, Hitoshi Kamei
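The gap-based defrag decision in the abstract above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, the gap definition (media addresses of logically consecutive blocks differing by more than one slot), and the threshold value are all assumptions for the sketch.

```python
def count_gaps(block_addresses):
    """Count gaps between media addresses of logically consecutive blocks.

    A 'gap' is any neighbouring pair whose media addresses are not
    contiguous (here: the second address is not the first plus one).
    """
    return sum(
        1
        for prev, curr in zip(block_addresses, block_addresses[1:])
        if curr != prev + 1
    )

def defrag_is_valid(block_addresses, max_gaps=2):
    """Deem defrag worthwhile only if fragmentation exceeds a threshold."""
    return count_gaps(block_addresses) > max_gaps

def defrag(block_addresses, free_base):
    """If valid, rewrite the blocks to contiguous media addresses
    starting at free_base; otherwise leave the layout unchanged."""
    if not defrag_is_valid(block_addresses):
        return block_addresses
    return list(range(free_base, free_base + len(block_addresses)))
```

Reading blocks into memory before the rewrite is elided; the sketch only shows the validity decision and the resulting contiguous layout.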
  • Patent number: 10649902
    Abstract: Reducing translation latency within a memory management unit (MMU) using external caching structures including requesting, by the MMU on a node, page table entry (PTE) data and coherent ownership of the PTE data from a page table in memory; receiving, by the MMU, the PTE data, a source flag, and an indication that the MMU has coherent ownership of the PTE data, wherein the source flag identifies a source location of the PTE data; performing a lateral cast out to a local high-level cache on the node in response to determining that the source flag indicates that the source location of the PTE data is external to the node; and directing at least one subsequent request for the PTE data to the local high-level cache.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Jody B. Joyner, Ronald N. Kalla, Michael S. Siegel, Jeffrey A. Stuecheli, Charles D. Wait, Frederick J. Ziegler
  • Patent number: 10645032
    Abstract: A packet processing block. The block comprises an input for receiving data in a packet header vector, where the vector comprises data values representing information for a packet. The block also comprises circuitry for performing packet match operations in response to at least a portion of the packet header vector and data stored in a match table, and circuitry for performing one or more actions in response to a match detected by the circuitry for performing packet match operations. The one or more actions comprise modifying the data values representing information for a packet. The block also comprises at least one stateful memory comprising stateful memory data values. The one or more actions include various stateful actions: reading stateful memory; modifying data values representing information for a packet as a function of the stateful memory data values; and storing modified stateful memory data values back into the stateful memory.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: May 5, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Patrick W. Bosshart, Hun-Seok Kim
  • Patent number: 10642709
    Abstract: A method for refining multithread software executed on a processor chip of a computer system. The envisaged processor chip has at least one processor core and a memory cache coupled to the processor core and configured to cache at least some data read from memory. The method includes, in logic distinct from the processor core and coupled to the memory cache, observing a sequence of operations of the memory cache and encoding a sequenced data stream that traces the sequence of operations observed.
    Type: Grant
    Filed: April 19, 2011
    Date of Patent: May 5, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Susan Carrie, Vijay Balakrishnan
  • Patent number: 10635330
    Abstract: A method performed by a mapping driver executing on a DSS includes (a) receiving a data storage command that identifies a portion of storage of the DSS having a given size to which the data storage command is directed, (b) generating a plurality of derived data storage (DDS) instructions from the received data storage command, each DDS instruction of the plurality of DDS instructions identifying a respective sub-portion of the portion to which that DDS instruction is directed, each sub-portion having a respective sub-portion size smaller than the given size, and (c) issuing each DDS instruction separately to a data storage coordination driver also executing on the DSS, the data storage coordination driver being configured to cause each DDS instruction to be performed with respect to storage of the DSS. An apparatus, system, and computer program product for performing a similar method are also provided.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: April 28, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Milind M. Koli, Timothy C. Ng, Xiangqing Yang
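The command-splitting step in the abstract above is easy to picture in code. A minimal sketch, assuming the command addresses a byte range and each derived instruction is capped at a hypothetical `max_sub_size`; the function name and tuple representation are invented for illustration.

```python
def derive_sub_instructions(offset, size, max_sub_size):
    """Split one data storage command covering [offset, offset + size)
    into derived data storage (DDS) instructions, each covering a
    sub-portion no larger than max_sub_size."""
    subs = []
    pos = offset
    end = offset + size
    while pos < end:
        sub_size = min(max_sub_size, end - pos)
        subs.append((pos, sub_size))  # each tuple stands for one DDS instruction
        pos += sub_size
    return subs
```

Each tuple would then be issued separately to the coordination driver, which the sketch does not model.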
  • Patent number: 10628326
    Abstract: The present disclosure includes apparatuses and methods for logical to physical mapping. A number of embodiments include a logical to physical (L2P) update table, a L2P table cache, and a controller. The controller may be configured to cause a list of updates to be applied to an L2P table to be stored in the L2P update table.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: April 21, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Jonathan M. Haswell
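The L2P update-table idea above (stage updates in a small table rather than rewriting the large mapping table immediately) can be sketched with plain dictionaries. The class and method names are hypothetical, and a real controller would bound the update table and manage the cache in flash-friendly units.

```python
class L2PMapper:
    """Minimal sketch: logical-to-physical updates are staged in an
    update table; lookups check the update table first, then fall back
    to the cached L2P table."""

    def __init__(self, l2p_table):
        self.l2p_table = dict(l2p_table)  # stands in for the L2P table cache
        self.update_table = {}            # staged logical -> physical updates

    def record_update(self, logical, physical):
        self.update_table[logical] = physical

    def lookup(self, logical):
        if logical in self.update_table:
            return self.update_table[logical]
        return self.l2p_table[logical]

    def flush_updates(self):
        """Apply the staged list of updates to the L2P table in one pass."""
        self.l2p_table.update(self.update_table)
        self.update_table.clear()
```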
  • Patent number: 10613984
    Abstract: Various embodiments provide for a system that prefetches data from a main memory to a cache and then evicts unused data to a lower level cache. The prefetching system will prefetch data from a main memory to a cache, and data that is not immediately usable or is part of a data set which is too large to fit in the cache can be tagged for eviction to a lower level cache, which keeps the data available with a shorter latency than if the data had to be loaded from main memory again. This lowers the cost of prefetching usable data too far ahead and prevents cache thrashing.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: April 7, 2020
    Assignee: AMPERE COMPUTING LLC
    Inventor: Kjeld Svendsen
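The demote-instead-of-drop behaviour described above can be sketched with a toy two-level cache. This is a simplification of what would be hardware logic: the class name, the `evict_hint` tag, and dictionary-based levels are all invented for the illustration.

```python
class TwoLevelCache:
    """Sketch: prefetched lines that do not fit, or are tagged as not
    immediately usable, are demoted to a lower-level cache instead of
    being dropped, so a later miss is served from L2 rather than from
    main memory."""

    def __init__(self, l1_capacity):
        self.l1_capacity = l1_capacity
        self.l1 = {}
        self.l2 = {}

    def prefetch(self, addr, data, evict_hint=False):
        if evict_hint or len(self.l1) >= self.l1_capacity:
            self.l2[addr] = data  # demote straight to the lower level
        else:
            self.l1[addr] = data

    def load(self, addr):
        """Return (data, level) naming where the hit occurred."""
        if addr in self.l1:
            return self.l1[addr], "L1"
        if addr in self.l2:
            data = self.l2.pop(addr)
            self.l1[addr] = data  # promote on use (capacity handling simplified)
            return data, "L2"
        return None, "memory"
```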
  • Patent number: 10613777
    Abstract: Aspects of the disclosure relate to ensuring information security in data transfers by utilizing decoy data. A computing platform may receive, from a data source computing device, a source data collection for a secure physical-storage-media data transfer and may identify one or more transmission parameters associated with the secure physical-storage-media data transfer. Subsequently, the computing platform may generate decoy data and may produce a secure dataset for the secure physical-storage-media data transfer by combining the decoy data with the source data collection received from the data source computing device. Then, the computing platform may encrypt the secure dataset based on the one or more transmission parameters to produce an encrypted dataset for the secure physical-storage-media data transfer.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: April 7, 2020
    Assignee: Bank of America Corporation
    Inventors: Manu Kurian, Sorin N. Cismas
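The decoy-combination step above (mix generated decoy records into the source collection before transfer) can be sketched as follows. Everything here is an assumption for illustration: the record representation, the deterministic placement, and the idea of shipping the real-record indices separately; the encryption step the abstract describes is out of scope.

```python
import random
import secrets

def produce_secure_dataset(source_records, decoy_count, seed):
    """Interleave generated decoy records with the source collection.
    Returns (dataset, real_indices); the indices would travel separately
    so the receiver can discard the decoys."""
    rng = random.Random(seed)  # deterministic placement, for the sketch only
    decoys = [secrets.token_hex(8) for _ in range(decoy_count)]
    dataset = list(source_records) + decoys
    order = list(range(len(dataset)))
    rng.shuffle(order)
    shuffled = [dataset[i] for i in order]
    real_indices = [order.index(i) for i in range(len(source_records))]
    return shuffled, real_indices
```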
  • Patent number: 10606513
    Abstract: A Memory Device (MD) includes a configurable Non-Volatile Memory (NVM) including a first memory array and a second memory array. The configurable NVM stores temporary data designated for volatile storage by a Central Processing Unit (CPU) and persistent data designated for non-volatile storage by the CPU. An address is associated with a first location in the first memory array and with a second location in the second memory array. In performing a command to write data for the address, it is determined whether to write the data in the second location based on a volatility mode set for the MD. According to another aspect, a CPU designates a memory page in a virtual memory space as volatile or non-volatile based on data allocated to the memory page, and defines the volatility mode for the MD based on whether the memory page is designated as volatile or non-volatile.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: March 31, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Viacheslav Dubeyko, Luis Cargnini
  • Patent number: 10579303
    Abstract: Aspects of the present disclosure involve an apparatus including a port interface coupled with a data bus to receive memory transaction commands, and a command queue coupled with the port interface. Additional aspects include methods of operating such an apparatus, and electronic design automation (EDA) devices to generate design files associated with such an apparatus. The command queue includes a plurality of memory entries to store memory transaction commands, a placement logic module to combine a received memory transaction command with a memory transaction command previously stored in one of the plurality of memory entries of the command queue, and a selection logic module to determine an order to transmit memory transaction commands stored in the plurality of memory entries and transmit the stored memory transaction commands according to the determined order to a memory interface.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: March 3, 2020
    Assignee: Cadence Design Systems, Inc.
    Inventors: Xiaofei Li, Ying Li, Zhehong Qian, Buying Du
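The placement logic above (merge a received command into an existing queue entry rather than consuming a new one) can be sketched for write ranges. The class name, the overlap-or-adjacent merge rule, and address-order selection are assumptions for the sketch, not the patented logic.

```python
class CommandQueue:
    """Sketch of placement logic: a newly received write that overlaps
    or abuts a queued write is combined into that entry instead of
    occupying a new one."""

    def __init__(self):
        self.entries = []  # list of (start, end) write ranges

    def place(self, start, length):
        end = start + length
        for i, (s, e) in enumerate(self.entries):
            if start <= e and end >= s:  # overlapping or adjacent
                self.entries[i] = (min(s, start), max(e, end))
                return "combined"
        self.entries.append((start, end))
        return "new entry"

    def drain(self):
        """Selection logic here is simply address order."""
        ordered = sorted(self.entries)
        self.entries = []
        return ordered
```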
  • Patent number: 10579556
    Abstract: A method, computer program product, and system includes a processing circuit(s) allocating a page of system memory address space to a device. The allocating includes the processing circuit(s) obtaining base address registers of the device in a bus and determining a portion of the page of the system memory address space to allocate to the base address registers. The processing circuit(s) sorts the base address registers, in a descending order, according to their alignments and adds sizes of the sorted base address registers to determine the portion of the page. The processing circuit(s) determines a remainder of the page: a difference between a size of the page and the portion of the page. The processing circuit(s) requests a virtual resource of a size equal to the remainder and allocates the page to the sorted base address registers and to the virtual resource.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: March 3, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bo Qun Bq Feng, Zhong Li, Xian Dong Meng, Yong Ji Jx Xie
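The sort-then-pack allocation above can be sketched numerically. The sketch assumes each BAR's alignment equals its (power-of-two) size, which is why sorting by size descending packs the registers without padding; the function name and the `"virtual-resource"` label are invented for the illustration.

```python
def allocate_bar_page(page_size, bars):
    """bars: list of (name, size). Sort descending (alignment == size
    assumed), pack the BARs from offset 0, then fill the remainder of
    the page with a virtual resource."""
    ordered = sorted(bars, key=lambda b: b[1], reverse=True)
    layout, offset = [], 0
    for name, size in ordered:
        layout.append((name, offset, size))
        offset += size
    remainder = page_size - offset
    if remainder < 0:
        raise ValueError("BARs do not fit in one page")
    if remainder > 0:
        layout.append(("virtual-resource", offset, remainder))
    return layout
```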
  • Patent number: 10558364
    Abstract: A module manages memory in a computer. The module monitors usage of a primary memory associated with the computer. The primary memory stores memory blocks in a ready state. In response to primary memory usage by the memory blocks in the ready state exceeding a ready state threshold, the module compresses at least some of the memory blocks in the ready state to form memory blocks in a ready and compressed state. In response to primary memory usage by the memory blocks in the ready and compressed state exceeding a release threshold, the module releases at least some of the memory blocks in the ready and compressed state. In response to primary memory usage by the memory blocks in the compressed state exceeding a compressed threshold, the module transfers at least some memory blocks in the compressed state to a secondary memory associated with the computer.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: February 11, 2020
    Assignee: Alteryx, Inc.
    Inventors: Edward P. Harding, Jr., Adam David Riley, Christopher H. Kingsley
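The three threshold rules above (compress, release, spill to secondary memory) can be sketched as one management pass over byte counts. The parameter names, the pass ordering, and the fixed compression ratio are assumptions; a real module would operate on individual memory blocks and states rather than aggregate counts.

```python
def manage_pass(ready, compressed, *, ready_limit, release_limit, spill_limit,
                ratio=0.5):
    """One pass of the three threshold rules, in bytes:
    1. ready usage above ready_limit: compress the excess (at `ratio`),
    2. compressed usage above release_limit: release (drop) the excess,
    3. compressed usage above spill_limit: move the excess to secondary
       memory.
    Returns (ready, compressed, spilled)."""
    if ready > ready_limit:
        excess = ready - ready_limit
        ready = ready_limit
        compressed += int(excess * ratio)
    if compressed > release_limit:
        compressed = release_limit
    spilled = 0
    if compressed > spill_limit:
        spilled = compressed - spill_limit
        compressed = spill_limit
    return ready, compressed, spilled
```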
  • Patent number: 10545695
    Abstract: A memory device may comprise circuitry to adjust between latency and throughput in transferring information through a memory port, wherein the circuitry may be capable of configuring individual partitions or individual sectors as high-throughput storage or low-latency storage.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: January 28, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Samuel D. Post, Eric Anderson
  • Patent number: 10521121
    Abstract: Provided are an apparatus, system and method for throttling an acceptance rate for adding host Input/Output (I/O) commands to a buffer in a non-volatile memory storage device. Information is maintained on an input rate at which I/O commands are being added to the buffer and on an output rate at which I/O commands are processed from the buffer and executed against the non-volatile memory. A determination is made of a current level of available space in the buffer and an acceptance rate at which I/O commands are added to the buffer from the host system, based on the input rate, the output rate, the current level of available space, and an available space threshold for the buffer, so as to maintain the buffer at the available space threshold. I/O commands are added to the buffer based on the acceptance rate, then accessed from the buffer and executed against the non-volatile memory.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: December 31, 2019
    Assignee: INTEL CORPORATION
    Inventors: David B. Carlton, Xin Guo, Yu Du
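The throttling rule above amounts to a feedback controller that steers buffer free space toward a target. A minimal sketch, assuming a simple proportional correction with unit gain per interval and using only the output rate and fill level; the function name and parameters are invented for the illustration.

```python
def acceptance_rate(output_rate, available, target_available, min_rate=0.0):
    """Pick the rate at which new host I/O commands are accepted so the
    buffer's free space converges toward target_available: with more
    free space than the target, accept faster than commands drain; with
    less, accept slower."""
    correction = available - target_available  # positive means room to spare
    return max(min_rate, output_rate + correction)
```

With the buffer above target free space, acceptance runs ahead of the drain rate; below target, it is cut back, bottoming out at `min_rate`.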
  • Patent number: 10521155
    Abstract: Examples disclosed herein relate, in one aspect, to a non-transitory machine-readable storage medium encoded with instructions executable by a processor of a computing device to cause the computing device to start a virtual machine process. The virtual machine process may obtain access to a shared memory segment accessible by a monitoring process, execute an application, and store in the shared memory segment management data associated with the application's execution.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: December 31, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventor: Thirusenthilanda Arasu
  • Patent number: 10521339
    Abstract: A method for writing data to a memory module, the method may include determining to write a representation of a data unit to a retired group of memory cells; searching for a selected retired group of memory cells that can store a representation of the data unit without being erased; and writing the representation of the data unit to the selected retired group of memory cells.
    Type: Grant
    Filed: February 27, 2014
    Date of Patent: December 31, 2019
    Assignee: Technion Research and Development Foundation LTD.
    Inventors: Yitzhak Birk, Amit Berman
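The search step above hinges on a property of flash cells: programming can only clear bits (1 to 0), and only an erase sets them back to 1. A minimal sketch of the compatibility test, treating each retired group as an integer bit pattern; the function names are hypothetical.

```python
def can_overwrite(current, target):
    """A retired group holding `current` can take `target` without an
    erase iff `target` sets no bit that is already 0: programming can
    only clear bits (1 -> 0)."""
    return (current & target) == target

def select_retired_group(retired_groups, target):
    """Return the index of the first retired group able to store
    `target` without being erased, or None if no group qualifies."""
    for i, current in enumerate(retired_groups):
        if can_overwrite(current, target):
            return i
    return None
```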
  • Patent number: 10522212
    Abstract: The present disclosure includes apparatuses and methods for shift decisions. An example apparatus includes a memory device. The memory device includes an array of memory cells and sensing circuitry coupled to the array via a plurality of sense lines. The sensing circuitry includes a sense amplifier and a compute component coupled to a sense line and configured to implement logical operations and a decision component configured to implement a shift of data based on a determined functionality of a memory cell in the array.
    Type: Grant
    Filed: March 4, 2016
    Date of Patent: December 31, 2019
    Assignee: Micron Technology, Inc.
    Inventor: Glen E. Hush
  • Patent number: 10503651
    Abstract: A data storage device includes a media cache and a main data store optimized for sequential reads and organized into bands. When the data storage device receives a read request from a host computing system, the requested data may be fragmented across the media cache and the main data store, causing constrained read throughput. Band rewrite operations to improve read throughput are selected based on a hit tracking list including a hit counter associated with each band on the main data store. The hit counter tracks the number of times a host computing system has requested data in logical block addresses corresponding to the various bands. The data storage device may select bands for band rewrite operations based on the number of hits in the associated hit tracking counters.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: December 10, 2019
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: CheeHou Peng, ThanZaw Thein, WenXiang Xie, PohSeng Lim
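The hit-tracking selection above can be sketched with per-band counters keyed by logical block address. The class name, the fixed LBA-to-band mapping, and picking the top N hottest bands are assumptions for the sketch.

```python
class BandHitTracker:
    """Sketch: per-band hit counters drive which bands are selected for
    band rewrite operations first."""

    def __init__(self, band_count, lbas_per_band):
        self.lbas_per_band = lbas_per_band
        self.hits = [0] * band_count

    def record_read(self, lba):
        """Count a host read against the band covering this LBA."""
        self.hits[lba // self.lbas_per_band] += 1

    def rewrite_candidates(self, top_n):
        """Bands with the most hits benefit most from a rewrite that
        defragments their data; return top_n band indices, hottest
        first."""
        order = sorted(range(len(self.hits)),
                       key=lambda band: self.hits[band], reverse=True)
        return order[:top_n]
```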
  • Patent number: 10503423
    Abstract: In response to a request for accessing a file stored in a storage system, data objects associated with the file are retrieved from a storage device of the storage system. The data objects of the file are cached in a cache memory. An access sequence of the cached data objects within the file is determined based on metadata of the file, where the access sequence represents a sequential order in time of accessing the cached data objects within the file. In response to a request for cache space reclamation, one or more cached data objects are identified whose next access is farthest in time from the data object currently being accessed, based on the access sequence. The identified data objects are then evicted from the cache memory.
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: December 10, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Frederick Douglis, Windsor W. Hsu, Hangwei Qian
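The eviction rule above is a farthest-in-future (Belady-style) policy made practical by knowing the file's access sequence in advance. A minimal sketch; the function name, the list-based sequence, and treating never-again-accessed objects as ideal victims are assumptions for the illustration.

```python
def evict_candidates(cached, access_sequence, current_pos, count=1):
    """Among the cached objects, pick the `count` whose next access
    after current_pos is farthest in time; objects never accessed again
    sort first as ideal victims."""
    def next_access(obj):
        for i in range(current_pos + 1, len(access_sequence)):
            if access_sequence[i] == obj:
                return i
        return float("inf")  # never accessed again
    return sorted(cached, key=next_access, reverse=True)[:count]
```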