Patents Examined by Ryan Dare
- Patent number: 10672492
  Abstract: A data sampling circuit module, a data sampling method and a memory storage device are provided. The method includes: receiving a differential signal and generating an input data stream according to the differential signal; sampling a clock signal according to a plurality of turning points of the input data stream and outputting a sampling signal; and outputting a bit data stream corresponding to the input data stream according to the sampling signal.
  Type: Grant
  Filed: March 6, 2015
  Date of Patent: June 2, 2020
  Assignee: PHISON ELECTRONICS CORP.
  Inventor: Chih-Ming Chen
- Patent number: 10649691
  Abstract: An example storage system obtains a reference request for a data block that is included in the content and stored in the medium area. The storage system determines a number of gaps among the addresses, in the medium area, of a plurality of data blocks that are continuous in the content and include the requested data block. The storage system determines, based on the number of gaps, whether or not defrag based on the plurality of data blocks is valid. When the defrag is determined to be valid, the storage system reads the plurality of data blocks from the medium area into the memory area and writes them into continuous address areas of the medium area.
  Type: Grant
  Filed: March 27, 2014
  Date of Patent: May 12, 2020
  Assignee: HITACHI, LTD.
  Inventors: Mitsuo Hayasaka, Ken Nomura, Keiichi Matsuzawa, Hitoshi Kamei
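The gap-counting decision above can be sketched in a few lines. This is an illustrative reading of the abstract, not the patented method itself: `count_gaps`, `defrag_if_worthwhile`, and the `gap_threshold` policy are all hypothetical names and choices.

```python
def count_gaps(addresses):
    """Count gaps among the media addresses of blocks that are
    logically continuous in the content: a gap is any pair of
    neighbours that are not physically adjacent on the medium."""
    return sum(1 for a, b in zip(addresses, addresses[1:]) if b != a + 1)

def defrag_if_worthwhile(addresses, gap_threshold=2):
    """Rewrite the blocks to continuous media addresses only when the
    number of gaps exceeds a threshold (hypothetical validity test)."""
    if count_gaps(addresses) <= gap_threshold:
        return addresses  # defrag judged not valid; leave layout as-is
    base = min(addresses)
    return list(range(base, base + len(addresses)))
```

The point of counting gaps first is that rewriting a run of blocks that is already nearly contiguous costs I/O without improving sequential-read performance.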
- Patent number: 10649902
  Abstract: Reducing translation latency within a memory management unit (MMU) using external caching structures, including: requesting, by the MMU on a node, page table entry (PTE) data and coherent ownership of the PTE data from a page table in memory; receiving, by the MMU, the PTE data, a source flag, and an indication that the MMU has coherent ownership of the PTE data, wherein the source flag identifies a source location of the PTE data; performing a lateral cast out to a local high-level cache on the node in response to determining that the source flag indicates that the source location of the PTE data is external to the node; and directing at least one subsequent request for the PTE data to the local high-level cache.
  Type: Grant
  Filed: November 21, 2017
  Date of Patent: May 12, 2020
  Assignee: International Business Machines Corporation
  Inventors: Guy L. Guthrie, Jody B. Joyner, Ronald N. Kalla, Michael S. Siegel, Jeffrey A. Stuecheli, Charles D. Wait, Frederick J. Ziegler
- Patent number: 10645032
  Abstract: A packet processing block. The block comprises an input for receiving data in a packet header vector, where the vector comprises data values representing information for a packet. The block also comprises circuitry for performing packet match operations in response to at least a portion of the packet header vector and data stored in a match table, and circuitry for performing one or more actions in response to a match detected by the circuitry for performing packet match operations. The one or more actions comprise modifying the data values representing information for a packet. The block also comprises at least one stateful memory comprising stateful memory data values. The one or more actions include various stateful actions for reading the stateful memory, modifying data values representing information for a packet as a function of the stateful memory data values, and storing modified stateful memory data values back into the stateful memory.
  Type: Grant
  Filed: February 28, 2014
  Date of Patent: May 5, 2020
  Assignee: TEXAS INSTRUMENTS INCORPORATED
  Inventors: Patrick W. Bosshart, Hun-Seok Kim
- Patent number: 10642709
  Abstract: A method for refining multithreaded software executed on a processor chip of a computer system. The envisaged processor chip has at least one processor core and a memory cache coupled to the processor core and configured to cache at least some data read from memory. The method includes, in logic distinct from the processor core and coupled to the memory cache, observing a sequence of operations of the memory cache and encoding a sequenced data stream that traces the sequence of operations observed.
  Type: Grant
  Filed: April 19, 2011
  Date of Patent: May 5, 2020
  Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
  Inventors: Susan Carrie, Vijay Balakrishnan
- Patent number: 10635330
  Abstract: A method performed by a mapping driver executing on a data storage system (DSS) includes (a) receiving a data storage command that identifies a portion of storage of the DSS having a given size to which the data storage command is directed, (b) generating a plurality of derived data storage (DDS) instructions from the received data storage command, each DDS instruction of the plurality of DDS instructions identifying a respective sub-portion of the portion to which that DDS instruction is directed, each sub-portion having a respective sub-portion size smaller than the given size, and (c) issuing each DDS instruction separately to a data storage coordination driver also executing on the DSS, the data storage coordination driver being configured to cause each DDS instruction to be performed with respect to storage of the DSS. An apparatus, system, and computer program product for performing a similar method are also provided.
  Type: Grant
  Filed: December 29, 2016
  Date of Patent: April 28, 2020
  Assignee: EMC IP Holding Company LLC
  Inventors: Milind M. Koli, Timothy C. Ng, Xiangqing Yang
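The command-splitting step (b) above is essentially range chunking, and can be sketched as follows. The function name, the tuple representation, and the fixed `max_sub_size` policy are illustrative assumptions, not details from the patent.

```python
def derive_sub_instructions(offset, size, max_sub_size):
    """Split one data storage command covering [offset, offset + size)
    into derived instructions, each directed at a sub-portion whose
    size is no larger than max_sub_size (and so smaller than the
    original, whenever size > max_sub_size)."""
    subs = []
    pos = offset
    end = offset + size
    while pos < end:
        sub_size = min(max_sub_size, end - pos)
        subs.append((pos, sub_size))  # (sub-portion offset, sub-portion size)
        pos += sub_size
    return subs
```

Each tuple would then be issued separately to the coordination driver, which executes them against the underlying storage.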
- Patent number: 10628326
  Abstract: The present disclosure includes apparatuses and methods for logical to physical mapping. A number of embodiments include a logical to physical (L2P) update table, an L2P table cache, and a controller. The controller may be configured to cause a list of updates, to be applied to an L2P table, to be stored in the L2P update table.
  Type: Grant
  Filed: August 21, 2017
  Date of Patent: April 21, 2020
  Assignee: Micron Technology, Inc.
  Inventor: Jonathan M. Haswell
- Patent number: 10613984
  Abstract: Various embodiments provide for a system that prefetches data from a main memory to a cache and then evicts unused data to a lower level cache. The prefetching system will prefetch data from a main memory to a cache, and data that is not immediately usable, or is part of a data set too large to fit in the cache, can be tagged for eviction to a lower level cache, which keeps the data available with a shorter latency than if the data had to be loaded from main memory again. This lowers the cost of prefetching usable data too far ahead and prevents cache thrashing.
  Type: Grant
  Filed: April 19, 2018
  Date of Patent: April 7, 2020
  Assignee: AMPERE COMPUTING LLC
  Inventor: Kjeld Svendsen
- Patent number: 10613777
  Abstract: Aspects of the disclosure relate to ensuring information security in data transfers by utilizing decoy data. A computing platform may receive, from a data source computing device, a source data collection for a secure physical-storage-media data transfer and may identify one or more transmission parameters associated with the secure physical-storage-media data transfer. Subsequently, the computing platform may generate decoy data and may produce a secure dataset for the secure physical-storage-media data transfer by combining the decoy data with the source data collection received from the data source computing device. Then, the computing platform may encrypt the secure dataset based on the one or more transmission parameters to produce an encrypted dataset for the secure physical-storage-media data transfer.
  Type: Grant
  Filed: July 17, 2017
  Date of Patent: April 7, 2020
  Assignee: Bank of America Corporation
  Inventors: Manu Kurian, Sorin N. Cismas
- Patent number: 10606513
  Abstract: A Memory Device (MD) includes a configurable Non-Volatile Memory (NVM) including a first memory array and a second memory array. The configurable NVM stores temporary data designated for volatile storage by a Central Processing Unit (CPU) and persistent data designated for non-volatile storage by the CPU. An address is associated with a first location in the first memory array and with a second location in the second memory array. In performing a command to write data for the address, it is determined whether to write the data in the second location based on a volatility mode set for the MD. According to another aspect, a CPU designates a memory page in a virtual memory space as volatile or non-volatile based on data allocated to the memory page, and defines the volatility mode for the MD based on whether the memory page is designated as volatile or non-volatile.
  Type: Grant
  Filed: December 6, 2017
  Date of Patent: March 31, 2020
  Assignee: Western Digital Technologies, Inc.
  Inventors: Viacheslav Dubeyko, Luis Cargnini
- Patent number: 10579303
  Abstract: Aspects of the present disclosure involve an apparatus including a port interface coupled with a data bus to receive memory transaction commands, and a command queue coupled with the port interface. Additional aspects include methods of operating such an apparatus, and electronic design automation (EDA) devices to generate design files associated with such an apparatus. The command queue includes a plurality of memory entries to store memory transaction commands, a placement logic module to combine a received memory transaction command with a memory transaction command previously stored in one of the plurality of memory entries of the command queue, and a selection logic module to determine an order to transmit memory transaction commands stored in the plurality of memory entries and transmit the stored memory transaction commands according to the determined order to a memory interface.
  Type: Grant
  Filed: August 26, 2016
  Date of Patent: March 3, 2020
  Assignee: Cadence Design Systems, Inc.
  Inventors: Xiaofei Li, Ying Li, Zhehong Qian, Buying Du
- Patent number: 10579556
  Abstract: A method, computer program product, and system includes a processing circuit(s) allocating a page of system memory address space to a device. The allocating includes the processing circuit(s) obtaining base address registers of the device on a bus and determining a portion of the page of the system memory address space to allocate to the base address registers. The processing circuit(s) sorts the base address registers, in descending order, according to their alignments and adds the sizes of the sorted base address registers to determine the portion of the page. The processing circuit(s) determines a remainder of the page: the difference between the size of the page and the portion of the page. The processing circuit(s) requests a virtual resource of a size equal to the remainder and allocates the page to the sorted base address registers and to the virtual resource.
  Type: Grant
  Filed: August 21, 2017
  Date of Patent: March 3, 2020
  Assignee: International Business Machines Corporation
  Inventors: Bo Qun Bq Feng, Zhong Li, Xian Dong Meng, Yong Ji Jx Xie
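The sort-then-pack allocation above can be sketched as follows. The sketch assumes the common PCI convention that a BAR's required alignment equals its power-of-two size, so packing in descending size order keeps every BAR naturally aligned; the function name, the tuple layout, and the `"virtual-resource"` label are illustrative, not from the patent.

```python
def allocate_page(page_size, bars):
    """bars: list of (name, size) with power-of-two sizes, where a
    BAR's alignment requirement equals its size (assumed convention).
    Sort descending, pack the BARs from the start of the page, and
    assign the remainder of the page to a virtual resource."""
    ordered = sorted(bars, key=lambda b: b[1], reverse=True)
    layout, cursor = [], 0
    for name, size in ordered:
        # Sizes only decrease, so cursor is always a multiple of size.
        assert cursor % size == 0, "alignment violated"
        layout.append((name, cursor, size))
        cursor += size
    remainder = page_size - cursor  # page size minus the BAR portion
    layout.append(("virtual-resource", cursor, remainder))
    return layout, remainder
```

Requesting a virtual resource for the remainder keeps the whole page accounted for, so nothing else gets mapped into the device's page.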
- Patent number: 10558364
  Abstract: A module manages memory in a computer. The module monitors usage of a primary memory associated with the computer. The primary memory stores memory blocks in a ready state. In response to primary memory usage by the memory blocks in the ready state exceeding a ready state threshold, the module compresses at least some of the memory blocks in the ready state to form memory blocks in a ready and compressed state. In response to primary memory usage by the memory blocks in the ready and compressed state exceeding a release threshold, the module releases at least some of the memory blocks in the ready and compressed state. In response to primary memory usage by the memory blocks in the compressed state exceeding a compressed threshold, the module transfers at least some memory blocks in the compressed state to a secondary memory associated with the computer.
  Type: Grant
  Filed: October 16, 2017
  Date of Patent: February 11, 2020
  Assignee: Alteryx, Inc.
  Inventors: Edward P. Harding, Jr., Adam David Riley, Christopher H. Kingsley
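The three-threshold policy above maps naturally onto a small decision function. This is a sketch of the control logic as described in the abstract; the dictionary keys, action names, and the idea of returning a list of pending actions are all illustrative assumptions.

```python
def memory_actions(usage, thresholds):
    """Given primary-memory usage (bytes) per block state and the
    three thresholds, return the actions the module should take.
    usage keys: 'ready', 'ready_compressed', 'compressed';
    thresholds keys: 'ready', 'release', 'compressed'."""
    actions = []
    if usage["ready"] > thresholds["ready"]:
        actions.append("compress-ready-blocks")
    if usage["ready_compressed"] > thresholds["release"]:
        actions.append("release-ready-compressed-blocks")
    if usage["compressed"] > thresholds["compressed"]:
        actions.append("transfer-compressed-to-secondary")
    return actions
```

Compression, release, and spill-to-secondary thus form an escalation ladder: each action only triggers once the cheaper remedies above it have failed to bring usage under its threshold.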
- Patent number: 10545695
  Abstract: A memory device may comprise circuitry to adjust between latency and throughput in transferring information through a memory port, wherein the circuitry may be capable of configuring individual partitions or individual sectors as high-throughput storage or low-latency storage.
  Type: Grant
  Filed: February 16, 2017
  Date of Patent: January 28, 2020
  Assignee: Micron Technology, Inc.
  Inventors: Samuel D. Post, Eric Anderson
- Patent number: 10521121
  Abstract: Provided are an apparatus, system, and method for throttling an acceptance rate for adding host Input/Output (I/O) commands to a buffer in a non-volatile memory storage device. Information is maintained on an input rate at which I/O commands are being added to the buffer and on an output rate at which I/O commands are processed from the buffer to execute against the non-volatile memory. A determination is made of a current level of available space in the buffer and of an acceptance rate at which I/O commands are added to the buffer from the host system, based on the input rate, the output rate, the current level of available space, and an available space threshold for the buffer, so as to maintain the buffer at the available space threshold. I/O commands are added to the buffer based on the acceptance rate, and are accessed from the buffer to execute against the non-volatile memory.
  Type: Grant
  Filed: December 29, 2016
  Date of Patent: December 31, 2019
  Assignee: INTEL CORPORATION
  Inventors: David B. Carlton, Xin Guo, Yu Du
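One simple way to realize "maintain the buffer at the available space threshold" is a proportional controller on the fill level. The patent's actual formula is not given in the abstract; the sketch below is an assumed policy (it folds the measured input rate into the current available-space reading and corrects the drain rate by the surplus or deficit), with `gain` as a hypothetical tuning parameter.

```python
def acceptance_rate(output_rate, available, target_available, gain=0.1):
    """Return the rate (commands/sec) at which to accept host I/O.
    Accept faster than the buffer drains only while there is surplus
    space above the threshold; accept slower when below it, so the
    buffer settles at target_available. Never return a negative rate."""
    surplus = available - target_available  # + means room to spare
    return max(output_rate + gain * surplus, 0.0)
```

At equilibrium (`available == target_available`) the device accepts exactly as fast as it drains, which is what holds the buffer at the threshold.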
- Patent number: 10521155
  Abstract: Examples disclosed herein relate, in one aspect, to a non-transitory machine-readable storage medium encoded with instructions executable by a processor of a computing device to cause the computing device to start a virtual machine process. The virtual machine process may obtain access to a shared memory segment accessible by a monitoring process, execute an application, and store, in the shared memory segment, management data associated with the application's execution.
  Type: Grant
  Filed: September 29, 2015
  Date of Patent: December 31, 2019
  Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
  Inventor: Thirusenthilanda Arasu
- Patent number: 10521339
  Abstract: A method for writing data to a memory module. The method may include determining to write a representation of a data unit to a retired group of memory cells; searching for a selected retired group of memory cells that can store a representation of the data unit without being erased; and writing the representation of the data unit to the selected retired group of memory cells.
  Type: Grant
  Filed: February 27, 2014
  Date of Patent: December 31, 2019
  Assignee: Technion Research and Development Foundation LTD.
  Inventors: Yitzhak Birk, Amit Berman
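The key constraint behind "store without being erased" is that flash programming can only clear bits (1 to 0); restoring a 1 requires an erase. Under that standard assumption, the search can be sketched as below; the function names and the first-fit selection policy are illustrative, not from the patent.

```python
def can_store_without_erase(current, target):
    """A group of cells can absorb the target data without an erase
    iff every 1-bit in the target is already 1 in the current
    contents (programming can only turn 1s into 0s)."""
    return all(c & t == t for c, t in zip(current, target))

def select_retired_group(groups, target):
    """Search the retired groups and return the index of the first
    one that can store the representation without an erase, or None
    if no retired group qualifies (first-fit, for illustration)."""
    for i, current in enumerate(groups):
        if can_store_without_erase(current, target):
            return i
    return None
```

Reusing retired groups this way extracts extra writes from cells that have been withdrawn from normal service, without paying an erase cycle.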
- Patent number: 10522212
  Abstract: The present disclosure includes apparatuses and methods for shift decisions. An example apparatus includes a memory device. The memory device includes an array of memory cells and sensing circuitry coupled to the array via a plurality of sense lines. The sensing circuitry includes a sense amplifier and a compute component, coupled to a sense line and configured to implement logical operations, and a decision component configured to implement a shift of data based on a determined functionality of a memory cell in the array.
  Type: Grant
  Filed: March 4, 2016
  Date of Patent: December 31, 2019
  Assignee: Micron Technology, Inc.
  Inventor: Glen E. Hush
- Patent number: 10503651
  Abstract: A data storage device includes a media cache and a main data store optimized for sequential reads and organized into bands. When the data storage device receives a read request from a host computing system, the requested data may be fragmented across the media cache and the main data store, causing constrained read throughput. Band rewrite operations to improve read throughput are selected based on a hit tracking list including a hit counter associated with each band on the main data store. The hit counter tracks the number of times a host computing system has requested data in logical block addresses corresponding to the various bands. The data storage device may select bands for band rewrite operations based on the number of hits in the associated hit tracking counters.
  Type: Grant
  Filed: December 29, 2016
  Date of Patent: December 10, 2019
  Assignee: SEAGATE TECHNOLOGY LLC
  Inventors: CheeHou Peng, ThanZaw Thein, WenXiang Xie, PohSeng Lim
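The hit-tracking list above amounts to a per-band counter keyed by logical block address. A minimal sketch, assuming a fixed number of LBAs per band and a hottest-first selection policy (both illustrative choices, not details from the patent):

```python
from collections import Counter

class BandHitTracker:
    """Track host read hits per band and pick the most-requested
    bands as candidates for a band rewrite operation."""

    def __init__(self, lbas_per_band):
        self.lbas_per_band = lbas_per_band
        self.hits = Counter()  # band index -> hit count

    def record_read(self, lba):
        """Credit the band containing this logical block address."""
        self.hits[lba // self.lbas_per_band] += 1

    def bands_to_rewrite(self, n):
        """Return the n bands with the most hits, hottest first."""
        return [band for band, _ in self.hits.most_common(n)]
```

Rewriting the hottest bands first concentrates the defragmentation effort where the host is actually reading, so each rewrite buys the largest throughput improvement.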
- Patent number: 10503423
  Abstract: In response to a request for accessing a file stored in a storage system, data objects associated with the file are retrieved from a storage device of the storage system. The data objects of the file are cached in a cache memory. An access sequence of the cached data objects within the file is determined based on metadata of the file, where the access sequence represents a sequential order in time of accessing the cached data objects within the file. In response to a request for cache space reclamation, one or more cached data objects are identified whose next access is farthest in time from the data object currently being accessed, based on the access sequence of the data objects. The identified data objects, whose next access is farthest amongst the cached data objects, are evicted from the cache memory.
  Type: Grant
  Filed: May 17, 2017
  Date of Patent: December 10, 2019
  Assignee: EMC IP Holding Company LLC
  Inventors: Frederick Douglis, Windsor W. Hsu, Hangwei Qian
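Evicting the object whose next access is farthest away is the classic Belady (farthest-in-future) policy, made practical here because the file's metadata reveals the access sequence in advance. A sketch under that reading, with illustrative names and a linear scan for the next use:

```python
def evict_candidates(cached, current_index, access_sequence, n=1):
    """Pick n cached objects to evict: those whose next appearance in
    access_sequence, after current_index (the position of the object
    being accessed now), lies farthest in the future."""
    def next_use(obj):
        for i in range(current_index + 1, len(access_sequence)):
            if access_sequence[i] == obj:
                return i
        return float("inf")  # never accessed again: ideal victim
    return sorted(cached, key=next_use, reverse=True)[:n]
```

An object that never reappears in the sequence gets `inf`, so it is always preferred for eviction over objects that will be needed again.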