Memory Access Pipelining Patents (Class 711/169)
  • Patent number: 9158677
    Abstract: A storage controller is provided that contains multiple processors. In some embodiments, the storage controller is coupled to a flash memory module having multiple flash memory groups, each flash memory group corresponding to a distinct flash port in the storage controller, each flash port comprising an associated processor. Each processor handles a portion of one or more host commands, including reads and writes, allowing multiple parallel pipelines to handle one or more host commands simultaneously.
    Type: Grant
    Filed: May 3, 2013
    Date of Patent: October 13, 2015
    Assignee: SANDISK ENTERPRISE IP LLC
    Inventors: Aaron K. Olbrich, Douglas A. Prins
  • Patent number: 9152480
    Abstract: Embodiments of the present disclosure provide a method for storing application data and a terminal device, in the field of communications, so that an application installed at a default specified path and the data generated after the application is run are kept in the same storage space. The method includes: receiving a user-triggered instruction to run a local application; determining the actual path of the storage space in which the application is installed; running the application and acquiring the data that is generated after the application is run; and storing that data in the actual path of the storage space in which the application is installed. The embodiments of the present disclosure apply, for example, to mobile phones.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: October 6, 2015
    Assignee: Huawei Device Co., Ltd.
    Inventor: Lei Chen
  • Patent number: 9135008
    Abstract: A device and a method for performing bitwise manipulation are provided. Multiple bitwise logic circuits are coupled to an instruction decoder, a register array and a rotator. Each bitwise logic circuit includes input multiplexers connected to an output multiplexer. The instruction decoder receives a bit manipulation instruction and sends to each corresponding input multiplexer a control signal based on the type of the instruction. Each input multiplexer of each bitwise logic circuit receives a control signal, a constant signal whose value is independent of the value of the mask, and a mask-affected signal whose value is responsive to the value of an associated mask bit. Each input multiplexer selects between the constant signal and the mask-affected signal based on the control signal, and outputs the selected signal.
    Type: Grant
    Filed: September 24, 2009
    Date of Patent: September 15, 2015
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Evgeni Ginzburg, Keren Guy, Adi Katz
  • Patent number: 9092346
    Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
    Type: Grant
    Filed: December 22, 2011
    Date of Patent: July 28, 2015
    Assignee: Intel Corporation
    Inventor: James E. McCormick, Jr.
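    Illustrative sketch: the speculative cache write and later validation described in the abstract above can be pictured with a small Python model; the class and method names (SpeculativeCache, speculative_store, validate) are assumptions, not the patent's terminology.

      # Toy model: a cache line is written speculatively before the store
      # retires; validation later either allows retirement or invalidates it.
      class SpeculativeCache:
          def __init__(self):
              self.lines = {}            # address -> data written speculatively

          def speculative_store(self, addr, data):
              # Speculative cache write of the line before the store retires.
              self.lines[addr] = data

          def validate(self, addr, is_valid):
              # Cache line validation logic: invalidate when determined invalid;
              # otherwise the store instruction is allowed to retire.
              if not is_valid:
                  self.lines.pop(addr, None)
              return is_valid            # True -> store may retire

      cache = SpeculativeCache()
      cache.speculative_store(0x40, b"payload")
      print("retire allowed:", cache.validate(0x40, is_valid=True))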
  • Patent number: 9092340
    Abstract: A method and system for achieving die parallelism through block interleaving includes non-volatile memory having multiple non-volatile memory dies, where each die has a cache storage area and a main storage area. A controller is configured to receive data and write sequentially addressed data to the cache storage area of a first die. The controller, after writing sequentially addressed data to the cache storage area of the first die equal to a block of the main storage area of the first die, writes additional data to a cache storage area of a next die until sequentially addressed data equal to a block of the main storage area is written into the cache area of the next die. The cache storage area may be copied to the main storage area on the first die while the cache storage area is written to on the next die.
    Type: Grant
    Filed: December 18, 2009
    Date of Patent: July 28, 2015
    Assignee: SanDisk Technologies Inc.
    Inventors: Steven Sprouse, Chris Avila, Jianmin Huang
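    Illustrative sketch: a rough Python model of the block-interleaving idea above, where sequential data fills one die's cache area up to a block and the controller then moves to the next die while the first die folds its cache into main storage; the block size and names are assumptions.

      # Toy model: interleave sequentially addressed writes across dies at
      # block granularity so cache-to-main copies can overlap with new writes.
      BLOCK_PAGES = 4                                # assumed pages per block

      class Die:
          def __init__(self, idx):
              self.idx, self.cache, self.main = idx, [], []
          def fold(self):
              # Copy the cache storage area into the main storage area.
              self.main.extend(self.cache)
              self.cache.clear()

      def write_stream(dies, pages):
          d = 0
          for page in pages:
              dies[d].cache.append(page)
              if len(dies[d].cache) == BLOCK_PAGES:  # one block's worth written
                  dies[d].fold()                     # fold this die ...
                  d = (d + 1) % len(dies)            # ... and write the next one

      dies = [Die(i) for i in range(2)]
      write_stream(dies, list(range(10)))
      print([(die.idx, die.main, die.cache) for die in dies])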
  • Patent number: 9047092
    Abstract: A load store pipeline 18 includes an issue queue 20 and load store circuitry 24. The load store circuitry 24 includes a plurality of access slot circuits 26 to 40. Dependency tracking circuitry 42, 44, 46, 48 serves to track a freeable number of access slot circuits 26 to 40, corresponding to the sum of the access slot circuits that are empty and those processing data access instructions which have not bypassed any preceding data access instructions within the program execution order.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: June 2, 2015
    Assignee: ARM Limited
    Inventors: Mélanie Emanuelle Lucie Teyssier, Philippe Pierre Maurice Luc, Albin Pierick Tonnerre
  • Patent number: 9043518
    Abstract: Apparatuses and methods of calibrating a memory interface are described. Calibrating a memory interface can include loading and outputting units of a first data pattern into and from at least a portion of a register to generate a first read capture window. Units of a second data pattern can be loaded into and output from at least the portion of the register to generate a second read capture window. One of the first read capture window and the second read capture window can be selected and a data capture point for the memory interface can be calibrated according to the selected read capture window.
    Type: Grant
    Filed: January 27, 2014
    Date of Patent: May 26, 2015
    Assignee: Micron Technology, Inc.
    Inventor: Terry M. Grunzke
  • Patent number: 9037827
    Abstract: A system and method for scheduling read and write operations among a plurality of solid-state storage devices. A computer system comprises client computers and data storage arrays coupled to one another via a network. A data storage array utilizes solid-state drives and Flash memory cells for data storage. A storage controller within a data storage array comprises an I/O scheduler. The data storage controller is configured to receive requests targeted to the data storage medium, said requests including a first type of operation and a second type of operation. The controller is further configured to schedule requests of the first type for immediate processing by said plurality of storage devices, and queue requests of the second type for later processing by the plurality of storage devices. Operations of the first type may correspond to operations with an expected relatively low latency, and operations of the second type may correspond to operations with an expected relatively high latency.
    Type: Grant
    Filed: January 21, 2014
    Date of Patent: May 19, 2015
    Assignee: Pure Storage, Inc.
    Inventors: John Colgrove, John Hayes, Bo Hong, Feng Wang, Ethan Miller, Craig Harmer
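    Illustrative sketch: the two-class policy above (immediate dispatch of expected low-latency operations, deferred queuing of expected high-latency ones) might be approximated as follows; the classification field and names are assumptions.

      from collections import deque

      # Toy I/O scheduler: expected low-latency requests are dispatched
      # immediately; expected high-latency requests are queued for later.
      class IOScheduler:
          def __init__(self):
              self.deferred = deque()

          def submit(self, request):
              if request["expected_latency"] == "low":
                  self.dispatch(request)           # immediate processing
              else:
                  self.deferred.append(request)    # queued for later processing

          def drain_deferred(self):
              while self.deferred:
                  self.dispatch(self.deferred.popleft())

          def dispatch(self, request):
              print("dispatching", request["op"])

      sched = IOScheduler()
      sched.submit({"op": "read 4KiB", "expected_latency": "low"})
      sched.submit({"op": "erase block", "expected_latency": "high"})
      sched.drain_deferred()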
  • Patent number: 9036718
    Abstract: Embodiments provide access to a memory over a high speed serial link at slower speeds than the high speed serial link's regular operation. An embodiment may comprise a memory apparatus with a differential receiver coupled to a protocol recognition circuit, a low speed receiving circuit that has a first receiver coupled with a first input of the differential receiver and a second receiver coupled with a second input of the differential receiver, wherein the low speed receiving circuit is coupled with the protocol recognition circuit, allowing the first and second receivers to access the protocol recognition block at a different frequency than the differential receiver.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: May 19, 2015
    Assignee: Intel Corporation
    Inventors: David J. Zimmerman, Michael W. Williams
  • Patent number: 9026746
    Abstract: A signal control device includes: a dual port RAM from and to which data signals are read and written at predetermined operation timings by first and second CPUs connected to its two ports, respectively; an address collision detection unit detecting collisions between the addresses at which the first and second CPUs respectively read and write data signals from and to the dual port RAM; a first storage unit storing the data signal read by the first CPU; a second storage unit storing the data signal read from the address at which the second CPU writes a data signal to the dual port RAM when a collision between the addresses is detected; and a switching unit that switches the reading source outputting the data signal to the port to which the first CPU is connected, and outputs the read data signal to the first CPU when the first CPU enters a readable state.
    Type: Grant
    Filed: April 11, 2011
    Date of Patent: May 5, 2015
    Assignee: Sony Corporation
    Inventor: Shinjiro Tanaka
  • Publication number: 20150100749
    Abstract: Methods and systems may provide for receiving a request to perform an atomic operation and adding the atomic operation to an execution pipeline of an arithmetic logic unit (ALU) for one or more pending atomic operations if the one or more pending atomic operations are associated with a memory location identified in the request. Additionally, at least a portion of the execution pipeline may bypass the memory location. In one example, adding the atomic operation to the execution pipeline includes populating a linked list with a modification associated with the atomic operation, wherein the linked list is dedicated to the memory location.
    Type: Application
    Filed: October 3, 2013
    Publication date: April 9, 2015
    Inventors: Altug Koker, JAYAKRISHNA P. S, Pattabhiraman K
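    Illustrative sketch: one way to picture the per-location chain of pending atomic modifications described above; the dictionary-of-lists representation and function names are assumptions.

      from collections import defaultdict

      # Toy model: pending atomic modifications are chained per memory
      # location, so a later atomic to the same location joins the existing
      # chain and bypasses the memory location itself.
      pending = defaultdict(list)     # address -> pending modifications
      memory = defaultdict(int)       # backing store

      def request_atomic_add(addr, delta):
          # Add the operation to the pipeline dedicated to this location.
          pending[addr].append(delta)

      def retire(addr):
          # Apply the chained modifications to memory in order.
          for delta in pending.pop(addr, []):
              memory[addr] += delta

      request_atomic_add(0x100, 1)
      request_atomic_add(0x100, 5)    # joins the same chain, memory untouched
      retire(0x100)
      print(memory[0x100])            # 6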
  • Patent number: 8990588
    Abstract: A storage system in which a storage control apparatus writes data in each of divided areas defined by division of one or more storage areas in one or more storage devices, after encryption of the data with an encryption key unique to each divided area. When the storage control apparatus receives, from a management apparatus, designation of one or more of the divided areas allocated as one or more physical storage areas for a virtual storage area to be invalidated and an instruction to invalidate data stored in the one or more of the divided areas, the storage control apparatus invalidates one or more encryption keys associated with the designated one or more of the divided areas. In addition, the storage control apparatus may further overwrite at least part of the designated one or more of the divided areas with initialization data for data erasure.
    Type: Grant
    Filed: September 5, 2012
    Date of Patent: March 24, 2015
    Assignee: Fujitsu Limited
    Inventor: Masaru Shimmitsu
  • Patent number: 8977833
    Abstract: According to one embodiment, a memory system has a data transfer device which includes a first command generating unit, a second command generating unit, a first storage unit, a second storage unit, and a nonvolatile memory managing unit. The first command generating unit generates a first command for reading out data from a nonvolatile memory to a host apparatus. The second command generating unit generates a second command for internal processing of the memory system associated with a temporary memory and the nonvolatile memory. The first storage unit has a queue structure configured to store the first command. The second storage unit has a queue structure configured to store the second command. The nonvolatile memory managing unit is configured to read out the first command stored in the first storage unit in priority to the second command stored in the second storage unit and to transmit the read-out command to the nonvolatile memory.
    Type: Grant
    Filed: April 20, 2012
    Date of Patent: March 10, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Takashi Kazui, Norikazu Yoshida
  • Patent number: 8972650
    Abstract: Systems and methods are disclosed for increasing the efficiency of read operations by selectively adding pages from a pagelist to a batch, such that when the batch is executed as a read operation, each page in the batch can be accessed concurrently. The pagelist can include all the pages associated with a read command received, for example, from a file system. Although the pages associated with the read command may have an original read order sequence, embodiments according to this invention re-order this original read order sequence by selectively adding pages to a batch. A page is added to the batch if it does not collide with any other page already added to the batch; two pages collide if they cannot be accessed simultaneously. One or more batches can be constructed in this manner until the pagelist is empty.
    Type: Grant
    Filed: January 28, 2011
    Date of Patent: March 3, 2015
    Assignee: Apple Inc.
    Inventors: Daniel J. Post, Matthew Byom
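    Illustrative sketch: the batch-building loop above (add a page only if it collides with no page already in the batch, repeat until the pagelist is empty) might look roughly like this; the collision test used here, a shared die, is an assumed stand-in.

      # Toy batch builder: pages that can be accessed concurrently go into
      # one batch; a page that collides with a batch member waits for a
      # later batch. "Collides" here means "same die", which is an assumption.
      def collides(page_a, page_b):
          return page_a["die"] == page_b["die"]

      def build_batches(pagelist):
          batches, remaining = [], list(pagelist)
          while remaining:                          # until the pagelist is empty
              batch, leftover = [], []
              for page in remaining:
                  if any(collides(page, b) for b in batch):
                      leftover.append(page)         # defer to a later batch
                  else:
                      batch.append(page)
              batches.append(batch)
              remaining = leftover
          return batches

      pages = [{"lba": i, "die": i % 2} for i in range(5)]
      for i, batch in enumerate(build_batches(pages)):
          print("batch", i, [p["lba"] for p in batch])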
  • Patent number: 8959288
    Abstract: Cache lines are identified that provide incorrect data for read requests. The cache lines are invalidated before the incorrect data causes processing failure conditions. The cache lines providing incorrect data may be detected according to a number of the same read requests to the same cache lines. The cache lines may also be identified according to an amount of time between the same read requests to the same cache lines. The same read requests to the same cache lines may be identified according to associated start addresses and address lengths.
    Type: Grant
    Filed: August 3, 2010
    Date of Patent: February 17, 2015
    Assignee: Violin Memory, Inc.
    Inventors: Erik de la Iglesia, Som Sikdar, Sivaram Dommeti, Garry Knox
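    Illustrative sketch: the heuristic above, flag a cache line as suspect when the same read request (same start address and length) repeats within a short interval, and invalidate it, could be prototyped like this; the thresholds are assumptions.

      import time
      from collections import defaultdict

      # Toy detector: repeated identical reads to the same cache line within
      # a short window suggest the line is returning incorrect data, so the
      # line is invalidated before the failure condition escalates.
      REPEAT_THRESHOLD = 3              # assumed
      WINDOW_SECONDS = 0.5              # assumed

      history = defaultdict(list)       # (start_addr, length) -> timestamps
      cache = {(0x1000, 64): b"stale-data"}

      def on_read(start_addr, length):
          key = (start_addr, length)
          now = time.monotonic()
          history[key] = [t for t in history[key] if now - t < WINDOW_SECONDS]
          history[key].append(now)
          if len(history[key]) >= REPEAT_THRESHOLD and key in cache:
              del cache[key]            # invalidate the suspect cache line
              print("invalidated line", hex(start_addr))

      for _ in range(3):
          on_read(0x1000, 64)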
  • Patent number: 8954681
    Abstract: A command processing pipeline is coupled to a shared cache. The command processing pipeline comprises (i) a first command processing stage configured to sequentially receive and process first and second cache commands, and (ii) a second command processing stage coupled to the first command processing stage. The first and the second command processing stages are two consecutive command processing stages of the command processing pipeline. The first and second command processing stages may access different groups of cache resources, and the first and second cache commands may be processed during consecutive clock cycles of a clock signal. Processing of the second cache command may be performed independently of an outcome of processing the first cache command by the first command processing stage. A third command processing stage may write data associated with the first cache command to one of a valid memory and a data memory included in the cache.
    Type: Grant
    Filed: December 7, 2012
    Date of Patent: February 10, 2015
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Tarek Rohana, Gil Stoler
  • Patent number: 8947918
    Abstract: According to one embodiment, a semiconductor memory device includes a memory cell array, a buffer configured to hold data input to an input/output circuit and to hold data read from the memory cell array, and a controller configured to receive a first command and an address from the outside and to read data, in response to the first command, from a memory cell group coupled to a selected word line designated by the address to the buffer. The controller receives a second command which is input after the first command and indicates a last command of a group of commands including write commands and/or read commands, and starts a write operation from the buffer to the memory cell array in response to the second command.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: February 3, 2015
    Inventor: Katsuyuki Fujita
  • Patent number: 8943279
    Abstract: Systems and methods providing a versioning feature in a storage system may allow the versioning feature to be toggled on and/or off during operation. Access operations targeting data objects stored in the system (e.g., delete and store type operations) may behave differently depending on whether versioning is (or has ever been) enabled for the storage system or a storage bucket thereof, or is not (or has never been) enabled for the storage system or storage bucket. For example, if versioning is off or suspended, a store operation may overwrite existing data. However, if versioning is enabled, a store type operation may create and store a new, unique object. If versioning has never been enabled, a delete operation may delete a stored object. However, if versioning has ever been enabled, a delete operation may create a new, unique delete marker object and may or may not delete any objects or data.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: January 27, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Jason G. McHugh, Praveen Kumar Gattu, Michael A. Ten-Pow, Derek Ernest Denny-Brown, II
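    Illustrative sketch: the behavioral split above (overwrite vs. new unique version on store, real delete vs. delete marker on delete, depending on whether versioning is or has ever been enabled) can be summarized in a small model; the bucket class and version IDs are assumptions.

      import uuid

      # Toy bucket showing store/delete behaviour with versioning off,
      # enabled, or previously enabled (simplified, assumed semantics).
      class Bucket:
          def __init__(self):
              self.objects = {}          # key -> {version_id: value}
              self.versioning = False
              self.ever_enabled = False

          def enable_versioning(self):
              self.versioning = True
              self.ever_enabled = True

          def store(self, key, value):
              versions = self.objects.setdefault(key, {})
              if self.versioning:
                  versions[uuid.uuid4().hex] = value   # new unique object
              else:
                  versions.clear()
                  versions["null"] = value             # overwrite existing data

          def delete(self, key):
              if self.ever_enabled:
                  # Create a delete marker instead of removing any data.
                  self.objects.setdefault(key, {})[uuid.uuid4().hex] = "<delete-marker>"
              else:
                  self.objects.pop(key, None)

      b = Bucket()
      b.store("k", "v1"); b.enable_versioning(); b.store("k", "v2"); b.delete("k")
      print(len(b.objects["k"]), "versions retained")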
  • Publication number: 20150019832
    Abstract: A semiconductor device includes a pipeline latch unit including a plurality of write pipelines, and suitable for latching data, and a control unit suitable for controlling at least one write pipeline of the write pipelines based on an idle signal.
    Type: Application
    Filed: December 13, 2013
    Publication date: January 15, 2015
    Applicant: SK hynix Inc.
    Inventor: Sung-Hwa OK
  • Patent number: 8914580
    Abstract: In some embodiments, a cache may include a tag array and a data array, as well as circuitry that detects whether accesses to the cache are sequential (e.g., occupying the same cache line). For example, a cache may include a tag array and a data array that stores data, such as multiple bundles of instructions per cache line. During operation, it may be determined that successive cache requests are sequential and do not cross a cache line boundary. Responsively, various cache operations may be inhibited to conserve power. For example, access to the tag array and/or data array, or portions thereof, may be inhibited.
    Type: Grant
    Filed: August 23, 2010
    Date of Patent: December 16, 2014
    Assignee: Apple Inc.
    Inventors: Rajat Goel, Ian D. Kountanis
  • Patent number: 8914592
    Abstract: According to one embodiment, a data storage apparatus includes a write command module, a read command module, and a controller. The write command module is configured to process a write command for writing data to the nonvolatile memories for a plurality of channels, respectively. The read command module is configured to process normal read commands and read commands for read-modify-write (RMW) operations. The controller is configured to control the read command module, causing it to execute the read command for the RMW operation prior to the normal read command, thereby executing a flush command, and to control the write command module, causing it to execute a write flush process that includes processing a write command for the RMW operation after the read command for the RMW operation has been executed.
    Type: Grant
    Filed: November 17, 2011
    Date of Patent: December 16, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Akinori Harasawa, Tohru Fukuda
  • Patent number: 8909892
    Abstract: Embodiments of the invention enable fast context switching of application specific processors having functional units with an architecturally visible state. In example embodiments, a processor allocates memory space to store two process control blocks for two active tasks to be performed by the processor comprising one or more custom functional units having a respective processing state not accessible by the processor. A memory controller stores the processing state of the custom functional units currently running a first active task, in a first process control block, in response to a preemptive task switch requirement. The memory controller loads a second processing state of the custom functional units for a second active task, from a second process control block in the memory, in response to the preemptive task switch requirement. The processor may then perform the second active task, based on the second processing state loaded into the custom functional units.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: December 9, 2014
    Assignee: Nokia Corporation
    Inventors: Tommi Juhani Zetterman, Harri Hirvola
  • Patent number: 8904115
    Abstract: Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags.
    Type: Grant
    Filed: August 18, 2011
    Date of Patent: December 2, 2014
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Raguram Damodaran, Jonathan (Son) Hung Tran, Timothy David Anderson, Sanjive Agarwala
  • Patent number: 8898671
    Abstract: Provided is a processor that can maintain a dependency relationship between a plurality of instructions and one read instruction. The processor comprises: a setting unit configured to set, when an instruction that exists at a location ensuring that writing into a memory area has been completed is executed, usage information indicating whether writing into the memory area has been completed such that the usage information indicates that writing into a memory area during execution of one thread has been completed; and a control unit configured to (i) perform execution of a read instruction to read data stored in the memory area when the usage information indicates that writing into the memory area during execution of the one thread has been completed, and (ii) suppress execution of the read instruction when the usage information indicates that writing into the memory area during execution of the one thread has not been completed.
    Type: Grant
    Filed: July 6, 2011
    Date of Patent: November 25, 2014
    Assignee: Panasonic Corporation
    Inventor: Hiroyuki Morishita
  • Patent number: 8892841
    Abstract: In one embodiment, a processor may be configured to write ECC granular stores into the data cache, while non-ECC granular stores may be merged with cache data in a memory request buffer. In one embodiment, a processor may be configured to detect that a victim block writeback hits one or more stores in a memory request buffer (or vice versa) and may convert the victim block writeback to a fill. In one embodiment, a processor may speculatively issue stores that are subsequent to a load from a load/store queue, but prevent the update for the stores in response to a snoop hit on the load.
    Type: Grant
    Filed: July 9, 2012
    Date of Patent: November 18, 2014
    Assignee: Apple Inc.
    Inventors: Ramesh Gunna, Po-Yung Chang, Sudarshan Kadambi
  • Patent number: 8885203
    Abstract: An optical reading device has an optical reading unit having optical elements disposed in a line that reads a medium; a storage unit having a ring buffer formed in the storage space; and a control unit that writes scanned data read by the optical reading unit to the ring buffer, reads the scanned data written to the ring buffer, and transfers the scanned data that was read. The control unit also manages positions in the ring buffer for writing and reading the scanned data using a write pointer denoting the position for writing the scanned data to the ring buffer, and a read pointer denoting the position of scanned data that has not been read.
    Type: Grant
    Filed: March 9, 2012
    Date of Patent: November 11, 2014
    Assignee: Seiko Epson Corporation
    Inventor: Kenji Asada
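    Illustrative sketch: the write-pointer/read-pointer bookkeeping above is the classic ring buffer; a minimal model (capacity and names assumed).

      # Minimal ring buffer with separate write and read pointers, staging
      # scanned data between a reading unit and a transfer path.
      class RingBuffer:
          def __init__(self, capacity):
              self.buf = [None] * capacity
              self.capacity = capacity
              self.write_ptr = 0         # position for writing scanned data
              self.read_ptr = 0          # position of data not yet read
              self.count = 0

          def write(self, item):
              if self.count == self.capacity:
                  raise BufferError("ring buffer full")
              self.buf[self.write_ptr] = item
              self.write_ptr = (self.write_ptr + 1) % self.capacity
              self.count += 1

          def read(self):
              if self.count == 0:
                  return None            # nothing unread
              item = self.buf[self.read_ptr]
              self.read_ptr = (self.read_ptr + 1) % self.capacity
              self.count -= 1
              return item

      rb = RingBuffer(4)
      for line in ("scan0", "scan1", "scan2"):
          rb.write(line)
      print(rb.read(), rb.read())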
  • Patent number: 8874822
    Abstract: Described herein are a method and apparatus for scheduling access requests for a multi-bank low-latency random read memory (LLRRM) device within a storage system. The LLRRM device comprises a plurality of memory banks, each bank being simultaneously and independently accessible. A queuing layer residing in the storage system may allocate a plurality of request-queuing data structures (“queues”), each queue being assigned to a memory bank. The queuing layer may receive access requests for memory banks in the LLRRM device and store each received access request in the queue assigned to the requested memory bank. The queuing layer may then send, to the LLRRM device for processing, an access request from each request-queuing data structure in successive order. As such, requests sent to the LLRRM device will comprise requests that will be applied to each memory bank in successive order as well, thereby reducing access latencies of the LLRRM device.
    Type: Grant
    Filed: July 15, 2013
    Date of Patent: October 28, 2014
    Assignee: NetApp, Inc.
    Inventors: George Totolos, Jr., Nhiem T. Nguyen
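    Illustrative sketch: the queuing layer above (one request queue per memory bank, then one request sent from each queue in successive order) might be modelled as follows; names are assumptions.

      from collections import deque
      from itertools import cycle

      # Toy queuing layer: each bank gets its own queue, and dispatch takes
      # one request per bank in successive order so consecutive requests
      # land on different banks.
      NUM_BANKS = 4
      queues = [deque() for _ in range(NUM_BANKS)]

      def enqueue(request):
          queues[request["bank"]].append(request)

      def dispatch_successive(llrrm_send=print):
          for bank in cycle(range(NUM_BANKS)):
              if all(not q for q in queues):
                  break
              if queues[bank]:
                  llrrm_send(queues[bank].popleft())

      for i, bank in enumerate([0, 0, 2, 1, 2, 3]):
          enqueue({"id": i, "bank": bank})
      dispatch_successive()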
  • Patent number: 8856579
    Abstract: Methods and systems for calibrating parameters for communication between a controller and a memory device. A memory controller may be configured to calibrate one or more of the read latency and/or the latency window of a memory controller such that a data signal and a data strobe signal are received by the memory controller within the latency window of the memory controller.
    Type: Grant
    Filed: March 15, 2010
    Date of Patent: October 7, 2014
    Assignee: International Business Machines Corporation
    Inventors: Kevin C. Gower, Kyu-hyoun Kim
  • Patent number: 8850129
    Abstract: A system and computer-implemented method are provided for storing data in the memory of a computer system in order at a fast rate. The method includes launching a first store to memory. A wait counter is initiated. A second store to memory is speculatively launched when the wait counter expires. The second store to memory is cancelled when the second store achieves coherency prior to the first store to memory.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: September 30, 2014
    Assignee: International Business Machines Corporation
    Inventors: Norbert Hagspiel, Matthias Klein, Ulrich Mayer, Robert J. Sonnelitter, III, Gary E. Strait, Hanno Ulrich
  • Patent number: 8850150
    Abstract: A computing device and method for managing security of a memory or storage device without the need for administrator privileges. To access the secure memory, a host provides a data block containing a control command and authentication data to the memory device. The memory device includes a controller for controlling access to a secure memory in the memory device. The memory device identifies the control command in the data block, authenticates the control command based on the authentication data, and executes the control command to allow the host device to access the secure memory.
    Type: Grant
    Filed: July 20, 2012
    Date of Patent: September 30, 2014
    Assignee: STEC, Inc.
    Inventor: Mehran Ramezani
  • Publication number: 20140258667
    Abstract: A processor is configured to evaluate memory operation bonding criteria to selectively identify memory operation bonding opportunities within a memory access plan. Memory operations are combined in response to the memory operation bonding opportunities to form a revised memory access plan with accelerated memory access.
    Type: Application
    Filed: March 7, 2013
    Publication date: September 11, 2014
    Applicant: MIPS TECHNOLOGIES, INC.
    Inventor: Ranganathan Sudhakar
  • Patent number: 8825966
    Abstract: An arrangement of memory devices and a controller is based on an interface with a reduced pin count relative to a known memory device and controller arrangement. The reduced pin count interface is facilitated by operations performed by the controller: the controller determines a width for the Data bus while assigning a target device address to each of the memory devices.
    Type: Grant
    Filed: February 2, 2012
    Date of Patent: September 2, 2014
    Assignee: MOSAID Technologies Incorporated
    Inventor: Peter Gillingham
  • Publication number: 20140237175
    Abstract: A parallel processing computing system includes an ordered set of m memory banks and a processor core. The ordered set of m memory banks includes a first and a last memory bank, wherein m is an integer greater than 1. The processor core implements n virtual processors, a pipeline having p ordered stages, including a memory operation stage, and a virtual processor selector function.
    Type: Application
    Filed: April 25, 2014
    Publication date: August 21, 2014
    Applicant: COGNITIVE ELECTRONICS, INC.
    Inventors: Andrew C. FELCH, Richard H. GRANGER
  • Patent number: 8788781
    Abstract: Methods, systems and computer program products for providing a sequencer that schedules job descriptors are described. The sequencer can manage the scheduling of the job descriptors for execution based on the availability of their respective segments and channels. For example, the sequencer can check the status of the segments, and identify one or more segments that are in busy or full state, or one or more segments that are in non-busy or empty state. Based on the status check, the sequencer can execute job descriptors out of order, and in particular, give priorities to job descriptors whose associated segments are available over job descriptors whose associated segments are in busy or full state.
    Type: Grant
    Filed: December 20, 2011
    Date of Patent: July 22, 2014
    Assignee: Marvell World Trade Ltd.
    Inventors: Chi Kong Lee, Siu-Hung Fred Au, Jungil Park, Hyunsuk Shin
  • Patent number: 8775741
    Abstract: A storage control system includes a prefetch controller that identifies memory regions for prefetching according to temporal memory access patterns. The memory access patterns identify a number of sequential memory accesses within different time ranges and a highest number of memory accesses to the different memory regions within a predetermined time period.
    Type: Grant
    Filed: January 8, 2010
    Date of Patent: July 8, 2014
    Assignee: Violin Memory Inc.
    Inventor: Erik de la Iglesia
  • Patent number: 8775717
    Abstract: A controller designed for use with a flash memory storage module, including a crossbar switch designed to connect a plurality of internal processors with various internal resources, including a plurality of internal memories. The memories contain work lists for the processors. In one embodiment, the processors communicate by using the crossbar switch to place tasks on the work lists of other processors.
    Type: Grant
    Filed: April 8, 2008
    Date of Patent: July 8, 2014
    Assignee: Sandisk Enterprise IP LLC
    Inventors: Douglas A. Prins, Aaron K. Olbrich
  • Patent number: 8769232
    Abstract: A non-volatile semiconductor memory module is disclosed comprising a memory device and a memory controller operably coupled to the memory device, wherein the memory controller is operable to receive a host command, split the host command into one or more chunks comprising a first chunk comprising at least one logical block address (LBA), and check the first chunk against an active chunk coherency list comprising one or more active chunks to determine whether the first chunk is an independent chunk that is ready to be submitted for access to the memory device, or a dependent chunk whose access to the memory device is deferred until an associated dependency is cleared.
    Type: Grant
    Filed: April 6, 2011
    Date of Patent: July 1, 2014
    Assignee: Western Digital Technologies, Inc.
    Inventors: Dominic S. Suryabudi, Mei-Man L. Syu
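    Illustrative sketch: the independent/dependent chunk decision above, checked against an active chunk coherency list; the LBA-range overlap test is an assumed interpretation.

      # Toy coherency check: a chunk is independent (submit now) if its LBA
      # range overlaps no active chunk; otherwise it is dependent and is
      # deferred until the conflicting chunk completes.
      active_chunks = []                  # list of (start_lba, length)

      def overlaps(a, b):
          return a[0] < b[0] + b[1] and b[0] < a[0] + a[1]

      def submit_chunk(chunk):
          if any(overlaps(chunk, act) for act in active_chunks):
              print("deferred (dependent):", chunk)
              return False
          active_chunks.append(chunk)     # independent: submit to the memory
          print("submitted (independent):", chunk)
          return True

      def complete_chunk(chunk):
          active_chunks.remove(chunk)     # associated dependency is cleared

      submit_chunk((0, 8))
      submit_chunk((4, 4))                # overlaps LBAs 4-7 -> dependent
      complete_chunk((0, 8))
      submit_chunk((4, 4))                # now independent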
  • Patent number: 8762620
    Abstract: A storage controller containing multiple processors. The processors are divided into groups, each of which handles a different stage of a pipelined process of performing host reads and writes. In one embodiment, the storage controller operates with a flash memory module, and includes multiple parallel pipelines that allow plural host commands to be handled simultaneously.
    Type: Grant
    Filed: April 8, 2008
    Date of Patent: June 24, 2014
    Assignee: Sandisk Enterprise IP LLC
    Inventors: Douglas A. Prins, Aaron K. Olbrich
  • Patent number: 8751754
    Abstract: Embodiments of the present invention provide memory systems having a plurality of memory devices sharing an interface for the transmission of read data. A controller can identify consecutive read requests sent to different memory devices. To avoid data contention on the interface, for example, the controller can be configured to delay the time until read data corresponding to the second read request is placed on the interface.
    Type: Grant
    Filed: July 31, 2013
    Date of Patent: June 10, 2014
    Assignee: Micron Technology, Inc.
    Inventors: Paul A. LaBerge, James B. Johnson
  • Patent number: 8751755
    Abstract: A volatile memory associated with a mass storage controller and a flash memory module. The volatile memory includes a number of tables containing information related to the flash memory storage, including a table storing physical flash memory addresses and a plurality of tables containing metadata.
    Type: Grant
    Filed: April 8, 2008
    Date of Patent: June 10, 2014
    Assignee: Sandisk Enterprise IP LLC
    Inventors: Douglas A. Prins, Aaron K. Olbrich
  • Patent number: 8745352
    Abstract: Reducing contention between processes or tasks that are trying to access shared resources is described herein. According to embodiments of the invention, a method of writing a set of data associated with a task to a memory resource is provided. The method includes calculating the amount of memory required to write said data to the memory resource and updating an expected end marker to reflect that amount. A flag is then set to an incomplete state, and the data is written to the memory resource. Once the write finishes, the flag is set to a complete state and an end marker is updated. The end marker indicates the end of the data stored in the memory resource.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: June 3, 2014
    Assignee: Sybase, Inc.
    Inventors: Ameya Sakhalkar, Anunay Tiwari, Daniel Alan Wood, Kantikiran Krishna Pasupuleti
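    Illustrative sketch: the marker/flag protocol above (reserve space by advancing an expected end marker, flag the write incomplete, write, then flag it complete and publish the end marker); the structure names are assumptions.

      # Toy model of the expected-end-marker / flag protocol for writing
      # task data into a shared memory resource with reduced contention.
      resource = {
          "buffer": bytearray(64),
          "end": 0,               # end of data known to be fully written
          "expected_end": 0,      # end of space reserved so far
          "flags": [],            # per-write completion flags
      }

      def write_task_data(data):
          size = len(data)                               # memory required
          start = resource["expected_end"]
          resource["expected_end"] = start + size        # reserve space up front
          flag = {"complete": False}                     # flag: incomplete state
          resource["flags"].append(flag)
          resource["buffer"][start:start + size] = data  # perform the write
          flag["complete"] = True                        # flag: complete state
          resource["end"] = start + size                 # update the end marker

      write_task_data(b"task-A")
      write_task_data(b"task-B")
      print(resource["end"], bytes(resource["buffer"][:resource["end"]]))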
  • Patent number: 8745312
    Abstract: A non-volatile memory may include a plurality of map blocks for storing a plurality of map units, the map units representing mapping information between physical addresses and logical addresses. A storage device may include such a non-volatile memory. A method of mapping such a non-volatile memory may include writing historical information regarding locations of valid map units among the map units included in map blocks previously allocated among the map blocks when a new map block among the map blocks is allocated, the valid map units representing valid mapping information, and constructing a map table including all of the valid mapping information based on the historical information and a result of searching a map block recently allocated among the map blocks.
    Type: Grant
    Filed: February 21, 2008
    Date of Patent: June 3, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Eun-Jin Yun, Hye-Young Kim, Young-Joon Choi, Dong-Gi Lee, Jin-Hyuk Kim
  • Patent number: 8738837
    Abstract: The present techniques provide systems and methods of controlling access to more than one open page in a memory component, such as a memory bank. Several components may request access to the memory banks. A controller can receive the requests and open or close the pages in the memory bank in response to the requests. In some embodiments, the controller assigns priority to some components requesting access, and assigns a specific page in a memory bank to the priority component. Further, additional available pages in the same memory bank may also be opened by other priority components, or by components with lower priorities. The controller may conserve power, or may increase the efficiency of processing transactions between components and the memory bank by closing pages after time outs, after transactions are complete, or in response to a number of requests received by masters.
    Type: Grant
    Filed: January 25, 2013
    Date of Patent: May 27, 2014
    Assignee: Micron Technology, Inc.
    Inventor: Robert Walker
  • Patent number: 8738841
    Abstract: A storage controller connected to a flash memory storage module, the controller and module including multiple sets of buffers. The buffers are part of one or more pipelines through which data is moved between the storage module and one or more hosts.
    Type: Grant
    Filed: April 8, 2008
    Date of Patent: May 27, 2014
    Assignee: Sandisk Enterprise IP LLC
    Inventors: Aaron K. Olbrich, Douglas A. Prins
  • Patent number: 8713277
    Abstract: In an embodiment, a system includes a memory controller, processors and corresponding caches. The system may include sources of uncertainty that prevent the precise scheduling of data forwarding for a load operation that misses in the processor caches. The memory controller may provide an early response that indicates that data should be provided in a subsequent clock cycle. An interface unit between the memory controller and the caches/processors may predict a delay from a currently-received early response to the corresponding data, and may speculatively prepare to forward the data assuming that it will be available as predicted. The interface unit may monitor the delays between the early response and the forwarding of the data, or at least the portion of the delay that may vary. Based on the measured delays, the interface unit may modify the subsequently predicted delays.
    Type: Grant
    Filed: June 1, 2010
    Date of Patent: April 29, 2014
    Assignee: Apple Inc.
    Inventors: Brian P. Lilly, Jason M. Kassoff, Hao Chen
  • Patent number: 8707132
    Abstract: An information processing apparatus comprising: a reception unit adapted to receive a packet containing first data to be stored in a storage unit, a first address indicating an address of second data held in the storage unit, and a second address indicating an address at which the first data is to be written in the storage unit; an access unit adapted to read out the second data from the storage unit based on the first address, and write the first data in the storage unit based on the second address; and a transmission unit adapted to replace the first data of the packet received by the reception unit with the second data read out by the access unit, and transmit the packet.
    Type: Grant
    Filed: July 1, 2011
    Date of Patent: April 22, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventors: Akio Nakagawa, Hisashi Ishikawa
  • Patent number: 8700874
    Abstract: A method performed in a memory controller for maintaining segmented counters split into primary and secondary memories, the primary memory being faster. Events occur that require incrementing one of the segmented counters, and the memory controller responds by incrementing the corresponding primary part in the primary memory. Each time a primary part rolls over, the memory controller determines that the corresponding secondary part should be updated. Also, the memory controller periodically determines that the secondary part of a segmented counter should be opportunistically updated; the opportunistic update is based on a probability function and a random number. The secondary part includes at least all of the bits of the segmented counter not in the primary part and is stored in the secondary memory. Each time an update to the secondary part occurs, both the secondary part and the primary part of the segmented counter must be updated.
    Type: Grant
    Filed: September 24, 2010
    Date of Patent: April 15, 2014
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Edmund G. Chen, Brian Alleyne, Robert Hathaway, Ranjit J. Rozario, Todd D. Basso
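    Illustrative sketch: a segmented counter with a small primary part in fast memory and the remaining bits in slower memory, flushed on rollover and, occasionally, opportunistically; the width and probability are assumptions.

      import random

      # Toy segmented counter: low-order bits live in fast (primary) memory,
      # the rest in slower (secondary) memory. The secondary part is updated
      # when the primary part rolls over, and occasionally opportunistically
      # based on a probability function and a random number.
      PRIMARY_BITS = 4                   # assumed width of the primary part
      OPPORTUNISTIC_P = 0.05             # assumed update probability

      class SegmentedCounter:
          def __init__(self):
              self.primary = 0           # fast memory
              self.secondary = 0         # slow memory

          def increment(self):
              self.primary += 1
              if self.primary == (1 << PRIMARY_BITS):     # rollover
                  self._flush()
              elif random.random() < OPPORTUNISTIC_P:     # opportunistic update
                  self._flush()

          def _flush(self):
              # Both parts are updated whenever the secondary part is written.
              self.secondary += self.primary
              self.primary = 0

          def value(self):
              return self.secondary + self.primary

      c = SegmentedCounter()
      for _ in range(1000):
          c.increment()
      print(c.value())                   # 1000 regardless of flush timing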
  • Publication number: 20140095825
    Abstract: An operating method of a semiconductor device may comprise: determining whether a read request is pending; setting a delay interval in accordance with a density of requests if there is no read request pending; and processing a write request after the delay interval.
    Type: Application
    Filed: August 28, 2013
    Publication date: April 3, 2014
    Applicant: SK hynix Inc.
    Inventors: Young-Suk MOON, Yong-Kee KWON, Hong-Sik KIM
  • Publication number: 20140095824
    Abstract: A semiconductor device comprises: a read queue configured to store one or more read requests to a semiconductor memory device; a write queue configured to store one or more write requests to the semiconductor memory device; and a dispatch block configured to determine a scheduling order of the one or more read requests and the one or more write requests and switch to the read queue or to the write queue if a request exists in a Row Hit state in the read queue or in the write queue.
    Type: Application
    Filed: June 7, 2013
    Publication date: April 3, 2014
    Inventors: Young-Suk MOON, Yong-Kee KWON, Hong-Sik KIM
  • Patent number: 8688926
    Abstract: A solid state storage device includes an interface system configured to communicate with an external host system over an aggregated multi-channel interface to receive data for storage by the solid state storage device. The solid state storage device also includes a storage processing system configured to communicate with the interface system to receive the data, process the data against storage allocation information to parallelize the data among a plurality of solid state memory subsystems, and transfer the parallelized data. The interface system is configured to receive the parallelized data, apportion the parallelized data among the plurality of solid state memory subsystems, and transfer the parallelized data for storage in the plurality of solid state memory subsystems, where each of the plurality of solid state memory subsystems is configured to receive the associated portion of the parallelized data and store the associated portion on a solid state storage medium.
    Type: Grant
    Filed: October 10, 2011
    Date of Patent: April 1, 2014
    Assignee: Liqid Inc.
    Inventors: Jason Breakstone, Alok Gupta, Himanshu Desai, Angelo Campos