Cache Bypassing Patents (Class 711/138)
  • Patent number: 10318445
    Abstract: A processing system in a dispersed storage network is configured to access write sequence information corresponding to a write sequence; determine whether to elevate a priority level of the write sequence; when the processing system determines to elevate the priority level of the write sequence, elevate the priority level of the write sequence; determine whether to lower the priority level of the write sequence; and when the processing system determines to lower the priority level of the write sequence, lower the priority level of the write sequence.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: June 11, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Greg R. Dhuse
  • Patent number: 10261793
    Abstract: A particular method includes receiving, at a processor, an instruction and an address of the instruction. The method also includes preventing execution of the instruction based at least in part on determining that the address is within a range of addresses.
    Type: Grant
    Filed: December 16, 2011
    Date of Patent: April 16, 2019
    Assignee: International Business Machines Corporation
    Inventors: Mark J. Hickey, Adam J. Muff, Matthew R. Tubbs, Charles D. Wait
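    Illustrative sketch (not the patented implementation): the core check described in the abstract, suppressing execution when an instruction's address falls inside a configured range, can be modeled roughly as follows in Python; the range bounds and the instruction-stream shape are hypothetical.
      # Hypothetical model: suppress execution of instructions whose address
      # falls inside any configured "no-execute" range.
      from typing import List, Tuple

      class RangeGate:
          def __init__(self, ranges: List[Tuple[int, int]]):
              self.ranges = ranges            # each range is (start, end), end exclusive

          def allows(self, address: int) -> bool:
              return not any(start <= address < end for start, end in self.ranges)

      gate = RangeGate([(0x4000, 0x5000)])
      for addr in (0x3FF0, 0x4008, 0x5000):
          print(hex(addr), "execute" if gate.allows(addr) else "suppress")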
  • Patent number: 10248567
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to maintain cache coherency. Examples disclosed herein involve, in response to receiving, from a direct memory access controller, an interrupt associated with a direct memory access operation, handling the interrupt based on a parameter of the direct memory access operation, wherein the direct memory access controller is to execute the direct memory access operation.
    Type: Grant
    Filed: June 16, 2014
    Date of Patent: April 2, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Senthil Kumar Ramakrishnan, Eugene Cohen
  • Patent number: 10120872
    Abstract: Several embodiments include a data cache system that implements a data cache and processes content requests for data items that may be in the data cache. The data cache system can receive a content request for at least one data item. The data cache system can update a karma score associated with an originator entity of the data item. The originator entity can be a user account that uploaded the data item. When wiping the data cache for more storage space, the data cache system can determine whether to discard the data items based on a cache priority that is computed based, at least partially, on the karma score.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: November 6, 2018
    Assignee: Facebook, Inc.
    Inventors: Neeraj Choubey, Fraidun Akhi, Georgiy Yakovlev, Ray Joseph Tong
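    Illustrative sketch (assumptions, not Facebook's implementation): one way to realize the eviction decision the abstract describes is to compute each item's cache priority partly from its originator's karma score and discard the lowest-priority items when the cache needs space; the scoring formula and field names below are invented for illustration.
      # Hypothetical model: karma-weighted cache priority used when wiping the
      # data cache for more storage space.
      class KarmaCache:
          def __init__(self, capacity_bytes: int):
              self.capacity = capacity_bytes
              self.items = {}                  # key -> (size, originator)
              self.karma = {}                  # originator -> karma score
              self.hits = {}                   # key -> request count

          def record_request(self, key: str):
              self.hits[key] = self.hits.get(key, 0) + 1
              originator = self.items[key][1]
              self.karma[originator] = self.karma.get(originator, 0) + 1

          def priority(self, key: str) -> float:
              size, originator = self.items[key]
              # Assumed formula: recent demand plus a karma bonus for the uploader.
              return self.hits.get(key, 0) + 0.5 * self.karma.get(originator, 0)

          def put(self, key: str, size: int, originator: str):
              while self.items and sum(s for s, _ in self.items.values()) + size > self.capacity:
                  victim = min(self.items, key=self.priority)   # lowest cache priority goes first
                  del self.items[victim]
              self.items[key] = (size, originator)

      cache = KarmaCache(capacity_bytes=100)
      cache.put("photo-1", 60, originator="alice")
      cache.record_request("photo-1")
      cache.put("photo-2", 60, originator="bob")     # evicts the lowest-priority item
      print(sorted(cache.items))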
  • Patent number: 10090028
    Abstract: Provided is a memory control technique for avoiding issuing a refresh command and a calibration command in immediate succession. The memory control circuit issues a refresh command to request a refresh operation based on a set refresh cycle and issues a calibration command to request a calibration operation based on a set calibration cycle, and it adopts a control function that suppresses issue of the calibration command for a given time after issue of the refresh command and suppresses issue of the refresh command for a given time after issue of the calibration command.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: October 2, 2018
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Junkei Sato, Nobuhiko Honda
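    Illustrative sketch (timing values are assumptions): the mutual hold-off the abstract describes can be modeled as a small scheduler that refuses to issue a calibration command within a window after a refresh, and vice versa.
      # Hypothetical model: keep refresh and calibration commands from issuing
      # back to back by enforcing a hold-off window after each.
      class CommandSpacer:
          def __init__(self, holdoff_after_refresh: int = 4, holdoff_after_calibration: int = 4):
              # Minimum time that must pass before the *other* command may issue.
              self.holdoff = {"refresh": holdoff_after_calibration,
                              "calibration": holdoff_after_refresh}
              self.last_issued = {"refresh": None, "calibration": None}

          def can_issue(self, cmd: str, now: int) -> bool:
              other = "calibration" if cmd == "refresh" else "refresh"
              last_other = self.last_issued[other]
              return last_other is None or now - last_other >= self.holdoff[cmd]

          def issue(self, cmd: str, now: int) -> bool:
              if self.can_issue(cmd, now):
                  self.last_issued[cmd] = now
                  return True
              return False                      # suppressed: too close to the other command

      spacer = CommandSpacer()
      print(spacer.issue("refresh", now=10))        # True
      print(spacer.issue("calibration", now=12))    # False, still inside the hold-off window
      print(spacer.issue("calibration", now=15))    # True, window has elapsed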
  • Patent number: 10089239
    Abstract: Provided are methods, systems, and apparatus for managing and controlling memory caches, in particular, system level caches outside of those closest to the CPU. The processes and representative hardware structures that implement the processes are designed to allow for detailed control over the behavior of such system level caches. Caching policies are developed based on policy identifiers, where a policy identifier corresponds to a collection of parameters that control the behavior of a set of cache management structures. For a given cache, one policy identifier is stored in each line of the cache.
    Type: Grant
    Filed: May 26, 2016
    Date of Patent: October 2, 2018
    Assignee: Google LLC
    Inventors: Allan D. Knies, Shinye Shiu, Chih-Chung Chang, Vyacheslav Vladimirovich Malyugin, Santhosh Rao
  • Patent number: 10084847
    Abstract: Methods and systems for generating and reusing dynamic web content involve, for example, automatically generating client-side code on a server at run time, and automatically downloading the client-side code to the client side at run time. The client-side code is executed on the client side to become a widget with dynamic behavior attributes displayed as a component of a web page on a display screen of a client-side computing device. Dynamic behavior of the client-side code may be triggered via an event handler mechanism wherein properties of the client-side code are dynamically changed without affecting any other content on the web page. The widget may be redisplayed on a subsequent occasion with a change in the widget without regenerating the client-side code.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: September 25, 2018
    Assignee: CITICORP CREDIT SERVICES, INC. (USA)
    Inventors: France Law-How-Hung, Ramadurai V. Ram
  • Patent number: 9996457
    Abstract: Systems and methods are disclosed for efficient buffering for a system having non-volatile memory (“NVM”). In some embodiments, a control circuitry of a system can use heuristics to determine whether to perform buffering of one or more write commands received from a file system. In other embodiments, the control circuitry can minimize read energy and buffering overhead by efficiently re-ordering write commands in a queue along page-aligned boundaries of a buffer. In further embodiments, the control circuitry can optimally combine write commands from a buffer with write commands from a queue. After combining the commands, the control circuitry can dispatch the commands in a single transaction.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: June 12, 2018
    Assignee: APPLE INC.
    Inventors: Daniel J. Post, Nir Jacob Wakrat
  • Patent number: 9996401
    Abstract: A task processing method and virtual machine are disclosed. The method includes selecting an idle resource for a task; creating a global variable snapshot for a global variable; executing the task in private memory space in the selected idle resource; after the execution of the task is complete, acquiring a new global variable snapshot corresponding to the global variable, and acquiring an updated global variable according to a local global variable snapshot and the new global variable snapshot; and determining whether a synchronization variable of a to-be-executed task in a task synchronization waiting queue includes the current updated global variable, and if the synchronization variable of the to-be-executed task in the task synchronization waiting queue includes the current updated global variable, putting the task into a task execution waiting queue.
    Type: Grant
    Filed: June 12, 2015
    Date of Patent: June 12, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Lin Gu, Zhiqiang Ma, Zhonghua Sheng, Liufei Wen
  • Patent number: 9965395
    Abstract: The level one memory controller maintains a local copy of the cacheability bit of each memory attribute register. The level two memory controller is the initiator of all configuration read/write requests from the CPU. Whenever a configuration write is made to a memory attribute register, the level one memory controller updates its local copy of the memory attribute register.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: May 8, 2018
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Raguram Damodaran, Joseph Raymond Michael Zbiciak, Naveen Bhoria
  • Patent number: 9946537
    Abstract: Embodiments of the present invention provide an approach for integrated development environment (IDE)-based repository searching (e.g., for library elements such as classes and/or functions) in a networked computing environment. In a typical embodiment, a first program code file is received from a first integrated development environment (IDE). The first program file may be associated with a set of attributes as stored in an annotation, header, or the like. Regardless, the first program file may be parsed and indexed into a repository based on the set of attributes. A search request may then be received from a second IDE. Based on the search request and the set of attributes, a matching program code file may then be identified as stored in the repository. Once identified, the matching program code file may be transmitted/communicated to the second IDE to fulfill the search request.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: April 17, 2018
    Assignee: International Business Machines Corporation
    Inventors: Swaminathan Balasubramanian, Rick A. Hamilton, II, Brian M. O'Connell, Keith R. Walker
  • Patent number: 9904476
    Abstract: Techniques are described for a memory device. In various embodiments, a scheduler/controller is configured to manage data as it is read from or written to a memory. A memory is partitioned into a group of sub-blocks, a parity block is associated with the sub-blocks, and the sub-blocks are accessed to read data as needed. A pending write buffer is added to a group of memory sub-blocks. Such a buffer may be sized to be equal to the group of memory sub-blocks. The pending write buffer handles collisions for write accesses to the same block.
    Type: Grant
    Filed: August 27, 2010
    Date of Patent: February 27, 2018
    Assignee: Cisco Technology, Inc.
    Inventors: Wei-Jen Huang, Chih-Tsung Huang, Sachin Agarwal, Sha Ma
  • Patent number: 9906579
    Abstract: Methods and systems for generating and reusing dynamic web content involve, for example, automatically generating client-side code on a server at run time, and automatically downloading the client-side code to the client side at run time. The client-side code is executed on the client side to become a widget with dynamic behavior attributes displayed as a component of a web page on a display screen of a client-side computing device. Dynamic behavior of the client-side code may be triggered via an event handler mechanism wherein properties of the client-side code are dynamically changed without affecting any other content on the web page. The widget may be redisplayed on a subsequent occasion with a change in the widget without regenerating the client-side code.
    Type: Grant
    Filed: November 22, 2015
    Date of Patent: February 27, 2018
    Assignee: Citicorp Credit Services, Inc. (USA)
    Inventors: France Law-How-Hung, Ramadurai V. Ram
  • Patent number: 9866529
    Abstract: The systems and methods of the present solution are directed to providing Entity Tag persistency by a device intermediary to a client and a plurality of servers. An intermediary device between a client and one or more back-end servers can receive an entity requested by the client from an origin server that provides the requested content. The intermediary device can encode the back-end server information onto an ETag of the entity, cache the entity with the encoded ETag and serve the entity with the encoded ETag to the client. In this way, when the client attempts to validate the entity by sending a request including the encoded ETag to the intermediary device, the intermediary device decodes the encoded ETag to extract the identity of the backend server and sends the request to validate the entity to the identified server that originally sent the entity that included the requested content.
    Type: Grant
    Filed: April 4, 2014
    Date of Patent: January 9, 2018
    Assignee: Citrix Systems, Inc.
    Inventors: Krishna Khanal, Ashwin Jagadish, Saravana Annamalaisami
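    Illustrative sketch (the encoding format is an assumption, not Citrix's): the idea of folding back-end server identity into the ETag and recovering it on revalidation can be modeled like this.
      # Hypothetical model: encode the origin server's identity into the ETag
      # served to the client, and decode it when the client revalidates.
      import base64

      def encode_etag(origin_etag: str, server_id: str) -> str:
          token = base64.urlsafe_b64encode(f"{server_id}|{origin_etag}".encode()).decode()
          return f'"{token}"'

      def decode_etag(encoded: str):
          raw = base64.urlsafe_b64decode(encoded.strip('"').encode()).decode()
          server_id, origin_etag = raw.split("|", 1)
          return server_id, origin_etag

      wire_etag = encode_etag('"abc123"', "backend-2")
      print(wire_etag)
      print(decode_etag(wire_etag))   # the If-None-Match can now be routed back to backend-2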
  • Patent number: 9772948
    Abstract: A new segment of data is copied to a volatile, primary cache based on a host data read access request. The primary cache mirrors a first portion of a non-volatile main storage. A criterion is determined for movement of data from the primary cache to a non-volatile, secondary cache that mirrors a second portion of the main storage. The criterion gives higher priority to segments having addresses not yet selected for reading by the host. In response to the new segment of data being copied to the primary cache, a selected segment of data that satisfies the criterion is copied from the primary cache to the secondary cache.
    Type: Grant
    Filed: July 6, 2012
    Date of Patent: September 26, 2017
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: James David Sawin, Luke W. Friendshuh, Sumanth Jannyavula Venkata, Ryan James Goss, Mark Allen Gaertner
  • Patent number: 9767026
    Abstract: In one embodiment, a conflict detection logic is configured to receive a plurality of memory requests from an arbiter of a coherent fabric of a system on a chip (SoC). The conflict detection logic includes snoop filter logic to downgrade a first snooped memory request for a first address to an unsnooped memory request when an indicator associated with the first address indicates that the coherent fabric has control of the first address. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: September 19, 2017
    Assignee: Intel Corporation
    Inventors: Jose S. Niell, Daniel F. Cutter, James D. Allen, Deepak Limaye, Shadi T. Khasawneh
  • Patent number: 9747212
    Abstract: Execution of a store instruction to modify an instruction at a memory location identified by a memory address is requested. A cache controller stores the memory address and the modified data in an associative memory coupled to a data cache and an instruction cache. In addition, the modified data is stored in a second level cache without invalidating the memory location associated with the instruction cache.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: August 29, 2017
    Assignee: International Business Machines Corporation
    Inventors: Wen-Tzer Thomas Chen, Jr., Robert H. Bell, Jr., Bradly G. Frey
  • Patent number: 9715461
    Abstract: According to an embodiment, a cache memory control circuit includes: a hit determination section; a refill processing section; a search section configured to determine a refill candidate way by searching for the way candidate for a refill process from a plurality of ways based on an LRU algorithm when the hit determination section detects a cache miss; a binary tree information section configured to store binary tree information for the LRU algorithm; a conflict detection section; and a control section. The control section updates the binary tree information in the binary tree information section by using way information of the way where the refill process is being executed when the conflict detection section determines that the way where the refill process is being executed and the refill candidate way match each other.
    Type: Grant
    Filed: September 4, 2014
    Date of Patent: July 25, 2017
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Toru Sano
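    Illustrative sketch (a 4-way tree pseudo-LRU, with the bit layout and retry loop as assumptions): when the selected refill candidate collides with a way whose refill is still in flight, the tree is updated with that way's information and a new candidate is chosen, roughly as below.
      # Hypothetical model: binary-tree pseudo-LRU victim selection that avoids
      # a way whose refill is already in progress.
      class TreePLRU4:
          def __init__(self):
              self.bits = [0, 0, 0]    # bits[0] = root, bits[1] = left pair, bits[2] = right pair

          def victim(self) -> int:
              if self.bits[0] == 0:
                  return 0 if self.bits[1] == 0 else 1
              return 2 if self.bits[2] == 0 else 3

          def touch(self, way: int):
              # Point the tree away from the way that was just used or refilled.
              if way in (0, 1):
                  self.bits[0] = 1
                  self.bits[1] = 1 if way == 0 else 0
              else:
                  self.bits[0] = 0
                  self.bits[2] = 1 if way == 2 else 0

      def pick_refill_way(plru: TreePLRU4, in_flight: set) -> int:
          assert len(in_flight) < 4, "at least one way must be free"
          way = plru.victim()
          while way in in_flight:      # conflict with an ongoing refill
              plru.touch(way)          # update the tree with the busy way's information
              way = plru.victim()
          return way

      plru = TreePLRU4()
      print(pick_refill_way(plru, in_flight={0}))   # skips way 0 and picks another way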
  • Patent number: 9665297
    Abstract: A processor core is supported by an upper level cache and a lower level cache that receives, from an interconnect fabric, a write injection request requesting injection of a partial cache line of data into a target cache line identified by a target real address. In response to receipt of the write injection request, a determination is made that the upper level cache is a highest point of coherency for the target real address. In response to the determination, the upper level cache and lower level cache collaborate to transfer the target cache line from the upper level cache to the lower level cache. The lower level cache updates the target cache line by merging the partial cache line of data into the target cache line and storing the updated target cache line in the lower level cache.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: May 30, 2017
    Assignee: International Business Machines Corporation
    Inventors: Luis E. De La Torre, Bernard C. Drerup, Sanjeev Ghai, Guy L. Guthrie, Alexander M. Taft, Derek E. Williams
  • Patent number: 9639336
    Abstract: One embodiment of the present invention sets forth a technique for reducing the number of assembly instructions included in a computer program. The technique involves receiving a directed acyclic graph (DAG) that includes a plurality of nodes, where each node includes an assembly instruction of the computer program, hierarchically parsing the plurality of nodes to identify at least two assembly instructions that are vectorizable and can be replaced by a single vectorized assembly instruction, and replacing the at least two assembly instructions with the single vectorized assembly instruction.
    Type: Grant
    Filed: October 25, 2012
    Date of Patent: May 2, 2017
    Assignee: NVIDIA Corporation
    Inventors: Vinod Grover, Manjunath Kudlur, Michael Murphy
  • Patent number: 9547360
    Abstract: A main memory system includes a main memory device including a first memory device implemented with a volatile memory and a second memory device implemented with a non-volatile memory. The main memory system is configured such that, when entering a sleep mode, the main memory device reads a portion of the data stored in the first memory device and stores the read data in the second memory device, and, after the portion of data is read, the first memory device and the second memory device are powered off.
    Type: Grant
    Filed: November 27, 2013
    Date of Patent: January 17, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Young-Jin Park, Won-Seok Lee, Il-Guy Jung
  • Patent number: 9513837
    Abstract: Techniques for allocation of storage volumes are described. Response times of a primary storage may be monitored to determine if the primary storage is input/output limited. A performance assist storage volume may be allocated and data replicated between the primary storage and the performance assist storage volume. Input/output requests may be distributed between the primary storage and the performance assist storage volume.
    Type: Grant
    Filed: October 12, 2011
    Date of Patent: December 6, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Aaron L Jenkins, Paul Miller, Chiung-Sheng Wu
  • Patent number: 9507647
    Abstract: In a multiprocessor system, a conflict checking mechanism is implemented in the L2 cache memory. Different versions of speculative writes are maintained in different ways of the cache. A record of speculative writes is maintained in the cache directory. Conflict checking occurs as part of directory lookup. Speculative versions that do not conflict are aggregated into an aggregated version in a different way of the cache. Speculative memory access requests do not go to main memory.
    Type: Grant
    Filed: January 18, 2011
    Date of Patent: November 29, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Matthias A. Blumrich, Luis H. Ceze, Dong Chen, Alan Gara, Philip Heidelberger, Martin Ohmacht, Burkhard Steinmacher-Burow, Xiaotong Zhuang
  • Patent number: 9448960
    Abstract: A novel readdressing circuit is provided for supporting data communications over a data line and a clock line between at least one master device and multiple slave devices. For example, the master device and the multiple slave devices may be configured to communicate over an I2C bus including the data line and the clock line. The readdressing circuit has a data input node for receiving a data signal transferred over the data line and including an address word produced by the master device, and a data output node coupled to the multiple slave devices. The readdressing circuit also includes an address generator and an address transmit detection circuit. The address generator is configured for storing a multi-bit fixed offset value. The address generator is responsive to the address word at the data input node for generating multiple unique addresses for the multiple slave devices.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: September 20, 2016
    Assignee: LINEAR TECHNOLOGY CORPORATION
    Inventor: Daniel James Eddleman
  • Patent number: 9286148
    Abstract: In a data processing system, a switch of the data processing system receives a request to push a message referenced by an instruction of a sending thread to a receiving thread. In response to receiving the request, the switch determines whether the sending thread is authorized to push the message to the receiving thread by attempting to access an entry of a data structure of the switch utilizing a key derived from at least one identifier of the sending thread. In response to access to the entry being successful, content of the entry is utilized to determine an address of a mailbox of the receiving thread, and the switch pushes the message to the mailbox of the receiving thread. In response to access to the entry not being successful, the switch refrains from pushing the message to the mailbox of the receiving thread.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: March 15, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Bernard C. Drerup, John D. Irish, Charles F. Marino, William J. Starke
  • Patent number: 9251063
    Abstract: A storage system including a storage device which includes media for storing data from a host computer, a medium controller for controlling the media, a plurality of channel controllers for connecting to the host computer through a channel and a cache memory for temporarily storing data from the host computer, wherein the media have a restriction on a number of writing times. The storage device includes a bus for directly transferring data from the medium controller to the channel controller.
    Type: Grant
    Filed: December 6, 2013
    Date of Patent: February 2, 2016
    Assignee: Hitachi, Ltd.
    Inventors: Shuji Nakamura, Kazuhisa Fujimoto, Akira Fujibayashi
  • Patent number: 9235517
    Abstract: A method, system and memory controller for implementing dynamic enabling and disabling of cache based upon workload in a computer system. Predefined sets of information are monitored while the cache is enabled to identify a change in workload, and the cache is selectively disabled responsive to a first identified predefined workload. While the cache is disabled, predefined information is monitored to identify a second predefined workload, and the cache is selectively enabled responsive to the identified second predefined workload.
    Type: Grant
    Filed: August 12, 2013
    Date of Patent: January 12, 2016
    Assignee: GLOBALFOUNDRIES Inc.
    Inventors: Clark A. Anderson, Adrian C. Gerhard, David Navarro
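    Illustrative sketch (metrics and thresholds are assumptions; while the cache is disabled, the reuse estimate would have to come from separately monitored information such as a shadow directory): the enable/disable decision can be modeled as hysteresis on an observed reuse rate.
      # Hypothetical model: disable the cache for low-reuse workloads, re-enable
      # it once reuse rises again (hysteresis avoids flapping).
      class WorkloadCacheSwitch:
          def __init__(self, disable_below: float = 0.10, enable_above: float = 0.30):
              self.enabled = True
              self.disable_below = disable_below
              self.enable_above = enable_above

          def observe_window(self, reuse_hits: int, accesses: int) -> bool:
              reuse = reuse_hits / accesses if accesses else 0.0
              if self.enabled and reuse < self.disable_below:
                  self.enabled = False     # first workload: caching no longer pays off
              elif not self.enabled and reuse > self.enable_above:
                  self.enabled = True      # second workload: reuse is back
              return self.enabled

      switch = WorkloadCacheSwitch()
      print(switch.observe_window(reuse_hits=2, accesses=100))    # False -> cache disabled
      print(switch.observe_window(reuse_hits=40, accesses=100))   # True  -> cache re-enabled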
  • Patent number: 9183145
    Abstract: Described embodiments provide a method of coherently storing data in a network processor having a plurality of processing modules and a shared memory. A control processor sends an atomic update request to a configuration controller. The atomic update request corresponds to data stored in the shared memory, the data also stored in a local pipeline cache corresponding to a client processing module. The configuration controller sends the atomic update request to the client processing modules. Each client processing module determines presence of an active access operation of a cache line in the local cache corresponding to the data of the atomic update request. If the active access operation of the cache line is absent, the client processing module writes the cache line from the local cache to shared memory, clears a valid indicator corresponding to the cache line and updates the data corresponding to the atomic update request.
    Type: Grant
    Filed: July 27, 2011
    Date of Patent: November 10, 2015
    Assignee: Intel Corporation
    Inventors: David P. Sonnier, David A. Brown, Charles Edward Peet, Jr.
  • Patent number: 9170638
    Abstract: A method and apparatus are described for reducing power consumption in a processor. A micro-operation is selected for execution, and a destination physical register tag of the selected micro-operation is compared to a plurality of source physical register tags of micro-operations dependent upon the selected micro-operation. If there is a match between the destination physical register tag and one of the source physical register tags, a corresponding physical register file (PRF) read operation is disabled. The comparison may be performed by a wakeup content-addressable memory (CAM) of a scheduler. The wakeup CAM may send a read control signal to the PRF to disable the read operation. Disabling the corresponding PRF read operation may include shutting off power in the PRF and related logic.
    Type: Grant
    Filed: December 16, 2010
    Date of Patent: October 27, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ganesh Venkataramanan, Emil Talpes
  • Patent number: 9164907
    Abstract: An information processing apparatus included in a parallel computer system has a memory that holds data and a processor including a cache memory that holds a part of the data held on the memory and a processor core that performs arithmetic operations using the data held on the memory or the cache memory. Moreover, the information processing apparatus has a communication device that determines whether data received from a different information processing apparatus is data that the processor core waits for. When the communication device determines that the received data is data that the processor core waits for, the communication device stores the received data on the cache memory. When the communication device determines that the received data is data that the processor core does not wait for, the communication device stores the received data on the memory.
    Type: Grant
    Filed: October 7, 2013
    Date of Patent: October 20, 2015
    Assignee: FUJITSU LIMITED
    Inventors: Yuichiro Ajima, Tomohiro Inoue, Shinya Hiramoto
  • Patent number: 9135171
    Abstract: Page data of a virtual machine is represented for efficient save and restore operations. One form of representation applies to each page with an easily identifiable pattern. The page is described, saved, and restored in terms of metadata reflective of the pattern rather than a complete page of data reflecting the pattern. During a save or restore operation, however, the metadata of the page is represented, but not the page data. Another form of representation applies to each page sharing a canonical instance of a complex pattern that is instantiated in memory during execution, and explicitly saved and restored. Each page sharing the canonical page is saved and restored as a metadata reference, without the need to actually save redundant copies of the page data.
    Type: Grant
    Filed: July 13, 2010
    Date of Patent: September 15, 2015
    Assignee: VMware, Inc.
    Inventors: Yury Baskakov, Alexander Thomas Garthwaite, Jesse Pool, Carl A. Waldspurger, Rajesh Venkatasubramanian, Ishan Banerjee
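    Illustrative sketch (page size and record format are assumptions): the first form of representation in the abstract, describing a pattern-filled page by metadata instead of saving the full page, can be modeled as below.
      # Hypothetical model: save pages made of a short repeating pattern as
      # metadata only, and rebuild the page data on restore.
      PAGE_SIZE = 4096

      def describe_page(page: bytes):
          for width in (1, 2, 4, 8):
              unit = page[:width]
              if unit * (PAGE_SIZE // width) == page:
                  return ("pattern", unit)      # metadata only, no page data saved
          return ("data", page)                 # no simple pattern: save the full page

      def restore_page(record) -> bytes:
          kind, payload = record
          return payload * (PAGE_SIZE // len(payload)) if kind == "pattern" else payload

      zero_page = bytes(PAGE_SIZE)
      record = describe_page(zero_page)
      print(record[0], len(record[1]))          # 'pattern' 1: one byte stands in for the page
      assert restore_page(record) == zero_page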
  • Patent number: 9104564
    Abstract: A computer implemented method for early data delivery prior to error detection completion in a memory system includes receiving a frame of a multi-frame data block at a memory control unit interface. A controller writes the frame to a buffer control block in a memory controller nest domain. The frame is read from the buffer control block by a cache subsystem interface in a system domain prior to completion of error detection of the multi-frame data block. Error detection is performed on the frame by an error detector in the memory controller nest domain. Based on detecting an error in the frame, an intercept signal is sent from the memory controller nest domain to a correction pipeline in the system domain. The intercept signal indicates that error correction is needed prior to writing data in the frame to a cache subsystem.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: August 11, 2015
    Assignee: International Business Machines Corporation
    Inventors: Glenn D. Gilda, Mark R. Hodges, Vesselina K. Papazova, Patrick J. Meaney
  • Patent number: 9104623
    Abstract: A storage system according to certain embodiments includes a client-side repository (CSR). The CSR may communicate with a client at a higher data transfer rate than the rate used for communication between the client and secondary storage. During copy operations, for instance, some or all of the data being backed up or otherwise copied to secondary storage is stored in the CSR. During restore operations, copies of the data stored in the CSR are accessed from the CSR instead of from secondary storage, improving performance. Remaining data blocks not stored in the CSR can be restored from secondary storage.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: August 11, 2015
    Assignee: Commvault Systems, Inc.
    Inventors: Manoj Kumar Vijayan Retnamma, Deepak Raghunath Attarde, Hetalkumar N. Joshi
  • Patent number: 9092330
    Abstract: Embodiments relate to early data delivery prior to error detection completion in a memory system. One aspect is a system that includes a cache subsystem interface with a correction pipeline in a system domain. The system includes a memory control unit interface in a memory controller nest domain and a buffer control block providing an asynchronous boundary layer between the system domain and the memory controller nest domain. A controller is configured to receive a frame of a multi-frame data block and write the frame to the buffer control block. The frame is read by the cache subsystem interface prior to completion of error detection of the multi-frame data block. Error detection is performed on the frame in the memory controller nest domain. Based on detecting an error in the frame, an intercept signal is sent from the memory controller nest domain to the correction pipeline in the system domain.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: July 28, 2015
    Assignee: International Business Machines Corporation
    Inventors: Glenn D. Gilda, Mark R. Hodges, Vesselina K. Papazova, Patrick J. Meaney
  • Patent number: 9075719
    Abstract: A storage system is migrated without stopping service provision by a host computer. To this end, in the migration-source storage system, data in the cache memory is destaged, and data subsequently received from the host computer is written directly to a logical unit, bypassing the cache memory. Meanwhile, the migration-destination storage system communicates with the migration-source storage system to set the setting information of the logical unit being migrated into a logical unit management table and to set the writing mode for the cache memory to a cache-bypass mode. After that, the migration-source storage system blocks the path to the host computer. The migration-destination storage system receives a report of the path block from the migration-source storage system and then opens a path between itself and the host computer.
    Type: Grant
    Filed: February 10, 2012
    Date of Patent: July 7, 2015
    Assignee: Hitachi, Ltd.
    Inventors: Mika Teranishi, Hiroji Shibuya, Shunji Murayama, Toshio Kimura, Kazushige Nagamatsu
  • Patent number: 9075729
    Abstract: An embodiment of the present invention is a storage system including a plurality of non-volatile storage devices for storing user data, and a controller for controlling data transfer between the plurality of non-volatile storage devices and a host. The controller includes a processor core circuit, a processor cache, and a primary storage device including a cache area for temporarily storing user data. The processor core circuit ascertains the contents of a command received from the host and ascertains the storage device that retains the data to be transferred within the storage system in operations responsive to the command. The processor core circuit determines whether to transfer the data via the processor cache in the storage system, based on the type of the command and the ascertained retention storage device.
    Type: Grant
    Filed: May 16, 2012
    Date of Patent: July 7, 2015
    Assignee: Hitachi, Ltd.
    Inventors: Naoya Okada, Masanori Takada, Hiroshi Hirayama
  • Patent number: 9075894
    Abstract: A caching device is configured to determine whether an object received or currently stored at the caching device should be (or continue to be) cached at the caching device, even if the object is otherwise cacheable. If so, the object is cached (or retained) at the caching device, otherwise, it is not. The determination as to whether or not the object should be cached or, if already cached, retained at the caching device may be made on the basis of a worthiness determination which evaluates the object on the basis of one or more parameters or attributes of the object, which worthiness may be one part of an overall value determination for the object.
    Type: Grant
    Filed: October 31, 2011
    Date of Patent: July 7, 2015
    Assignee: Blue Coat Systems, Inc.
    Inventors: Kevin Porter, Eric Maki, Marcin Lukasz Lizon, Marsha Groves
  • Patent number: 9058283
    Abstract: A first cache arrangement including an input configured to receive a memory request from a second cache arrangement; a first cache memory for storing data; an output configured to provide a response to the memory request for the second cache arrangement; and a first cache controller; the first cache controller configured such that for the response to the memory request output by the output, the cache memory includes no allocation for data associated with the memory request.
    Type: Grant
    Filed: July 27, 2012
    Date of Patent: June 16, 2015
    Assignee: STMICROELECTRONICS (RESEARCH & DEVELOPMENT) LIMITED
    Inventors: Stuart Ryan, Andrew Michael Jones
  • Publication number: 20150149732
    Abstract: The present disclosure relates to systems and methods for software-based management of the insertion of protected data-blocks into the memory cache mechanism of a computerized device. In particular, the disclosure relates to preventing protected data blocks from being altered and evicted from the CPU cache, coupled with buffered software execution. The technique is based upon identifying at least one conflicting data-block having a memory mapping indication to a designated memory cache-line and preventing the conflicting data-block from being cached. Functional characteristics of a vendor's software product, such as gaming or video, may be partially encrypted to allow protected yet functional operation and to prevent hacking and malicious use by non-licensed users.
    Type: Application
    Filed: November 24, 2013
    Publication date: May 28, 2015
    Inventors: Michael Kiperberg, Amit Resh, Nezer Zaidenberg
  • Patent number: 9043558
    Abstract: Embodiments of the current invention permit a user to allocate cache memory to main memory more efficiently. The processor or a user allocates the cache memory and associates the cache memory with the main memory location, but suppresses or bypasses reading the main memory data into the cache memory. Some embodiments of the present invention permit the user to specify how many cache lines are allocated at a given time. Further, embodiments of the present invention may initialize the cache memory to a specified pattern. The cache memory may be zeroed or set to some desired pattern, such as all ones. Alternatively, a user may determine the initialization pattern through the processor.
    Type: Grant
    Filed: October 17, 2014
    Date of Patent: May 26, 2015
    Assignee: EMULEX CORPORATION
    Inventors: Steven Gerard LeMire, Vuong Cao Nguyen
  • Publication number: 20150134916
    Abstract: A cache filter is described. More specifically, some implementations include techniques for classification of memory requests including calculating a probability that one or more memory regions are associated with a particular memory request, selecting one or more regions of the memory to receive memory requests based on the probability associated with the one or more regions, receiving one or more memory requests, determining that at least one of the memory requests is associated with one of the one or more selected regions of the memory, and providing the at least one memory request to the memory.
    Type: Application
    Filed: November 12, 2013
    Publication date: May 14, 2015
    Applicant: NVIDIA Corporation
    Inventors: Sudnya Padalikar, Gregory Fredrick Diamos
  • Patent number: 9020490
    Abstract: A method and caching server for enabling caching of a portion of a media file in a User Equipment (UE) in a mobile telecommunications network. The caching server selects the media file and determines a size of the portion to be cached in the UE. The size may be determined depending on radio network conditions for the UE and/or characteristics of the media file. The caching server sends an instruction to the UE to cache the determined size of the portion of the media file in the UE.
    Type: Grant
    Filed: December 5, 2012
    Date of Patent: April 28, 2015
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Andras Valkó, Catalin Meirosu, Zoltán Turányi
  • Patent number: 9009439
    Abstract: Data records of a data set can be stored in a plurality of main part fragments retained in on-disk storage. A size of the data set can be compared to an available size of main system memory. All of the plurality of main part fragments can be fully loaded into the main system memory when the available size of the main system memory is larger than the size of the data set. Alternatively, when the available size of the main system memory is smaller than the size of the data set, one or more of the main part fragments can be paged into the main system memory on demand in response to a data access request that can be satisfied by providing access to a subset of the main part fragments, or access can be provided directly to the on-disk main part fragments when the data access request involves random access for projection in the data set.
    Type: Grant
    Filed: March 12, 2013
    Date of Patent: April 14, 2015
    Assignee: SAP SE
    Inventors: Ivan Schreter, Dirk Thomsen, Colin Florendo, Blaine French
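    Illustrative sketch (function and mode names are assumptions): the decision the abstract walks through, full load versus on-demand paging versus direct on-disk access for random projection, reduces to a small rule.
      # Hypothetical model of the access-mode decision based on data set size,
      # available main memory, and the kind of access requested.
      def choose_access_mode(dataset_bytes: int, free_memory_bytes: int,
                             random_projection: bool) -> str:
          if dataset_bytes <= free_memory_bytes:
              return "load-all-fragments"           # everything fits: load fully into memory
          if random_projection:
              return "access-on-disk-fragments"     # random projection: read fragments directly on disk
          return "page-fragments-on-demand"         # otherwise page in only the fragments needed

      print(choose_access_mode(8 << 30, 16 << 30, random_projection=False))
      print(choose_access_mode(64 << 30, 16 << 30, random_projection=True))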
  • Patent number: 8996817
    Abstract: A memory access system may be used to relay data between an electronic device and external memory. The memory access system may include write buffers which may receive and write information from the electronic device to the external memory. The memory access system may also include read buffers which may gather data from the external memory and send it to a main processing component of the electronic device for processing. The memory access system may be configured so that the main processing component of the electronic device may gather data from the write buffers of the memory access system when a condition is satisfied.
    Type: Grant
    Filed: July 12, 2012
    Date of Patent: March 31, 2015
    Assignee: Harman International Industries, Inc.
    Inventor: Kirk I. Bushen
  • Patent number: 8996818
    Abstract: Some embodiments include a computing device with a control circuit that handles memory requests. The control circuit checks one or more conditions to determine when a memory request should be bypassed to a main memory instead of sending the memory request to a cache memory. When the memory request should be bypassed to a main memory, the control circuit sends the memory request to the main memory. Otherwise, the control circuit sends the memory request to the cache memory.
    Type: Grant
    Filed: December 9, 2012
    Date of Patent: March 31, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Jaewoong Sim, Gabriel H. Loh
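    Illustrative sketch (the particular bypass conditions are assumptions chosen for illustration, not the claimed set): the control circuit's decision can be modeled as a predicate over the request and cache state that routes the request either to main memory or to the cache.
      # Hypothetical model: bypass the cache and go straight to main memory when
      # any bypass condition holds for the request.
      def route_request(is_streaming: bool, predicted_hit_rate: float,
                        cache_queue_depth: int) -> str:
          bypass = (is_streaming                 # streaming data pollutes the cache
                    or predicted_hit_rate < 0.2  # predicted miss: caching buys little
                    or cache_queue_depth > 32)   # cache is congested
          return "main-memory" if bypass else "cache"

      print(route_request(is_streaming=True, predicted_hit_rate=0.9, cache_queue_depth=4))
      print(route_request(is_streaming=False, predicted_hit_rate=0.8, cache_queue_depth=4))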
  • Patent number: 8996816
    Abstract: A method and apparatus for selectively bypassing a cache in a processor of a computing device are disclosed. A mechanism is described that provides visibility into transactions from the core to a cache interface (e.g., an L3 cache interface) in a trace controller buffer (TCB) for debugging purposes, by causing selected transactions, which would otherwise be satisfied by the cache, to bypass the cache and be presented to the memory system, where they may be logged in the TCB. In an embodiment of the invention, there is provided a method for providing processing core request visibility comprising bypassing a higher level cache in response to a processing core request, capturing the processing core request in a TCB, providing a mask to filter the processing core request, and returning a transaction response to a requesting processing core.
    Type: Grant
    Filed: November 8, 2010
    Date of Patent: March 31, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Greggory D. Donley, Benjamin Tsien, Vydhyanathan Kalyanasundharam, Patrick N. Conway, William A. Hughes
  • Patent number: 8990509
    Abstract: Embodiments herein relate to selecting an accelerated path based on a number of write requests and a sequential trend. One of an accelerated path and a cache path is selected between a host and a storage device based on at least one of a number of write requests and a sequential trend. The cache path connects the host to the storage device via a cache. The number of write requests is based on a total number of random and sequential write requests from a set of outstanding requests from the host to the storage device. The sequential trend is based on a percentage of sequential read and sequential write requests from the set of outstanding requests.
    Type: Grant
    Filed: September 24, 2012
    Date of Patent: March 24, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Weimin Pan
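    Illustrative sketch (threshold values are assumptions; the inputs mirror the abstract's write count and sequential trend): the path choice can be modeled from the set of outstanding requests.
      # Hypothetical model: pick the accelerated path for mostly sequential
      # traffic with little write pressure, otherwise go through the cache.
      def select_path(outstanding) -> str:
          writes = sum(1 for r in outstanding if r["op"] == "write")
          sequential = sum(1 for r in outstanding if r["sequential"])
          seq_trend = sequential / len(outstanding) if outstanding else 0.0
          if writes <= 2 and seq_trend >= 0.75:
              return "accelerated-path"
          return "cache-path"

      requests = [{"op": "read",  "sequential": True},
                  {"op": "read",  "sequential": True},
                  {"op": "write", "sequential": True},
                  {"op": "read",  "sequential": False}]
      print(select_path(requests))               # accelerated-path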
  • Patent number: 8972645
    Abstract: Embodiments herein relate to sending a request to a storage device based on a moving average. A threshold is determined based on a storage device type and a bandwidth of a cache bus connecting a cache to a controller. The moving average of throughput is measured between the storage device and a host. The request of the host to access the storage device is sent directly to the storage device, if the moving average is equal to the threshold.
    Type: Grant
    Filed: September 19, 2012
    Date of Patent: March 3, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Weimin Pan, Mark Lyndon Oelke
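    Illustrative sketch (the threshold table and smoothing factor are assumptions): the behavior described, sending requests directly to the storage device once a moving average of throughput reaches a threshold derived from device type and cache-bus bandwidth, can be modeled with an exponential moving average.
      # Hypothetical model: route requests directly to the device when the
      # moving-average throughput reaches the configured threshold.
      THRESHOLD_MBPS = {("ssd", 6000): 550.0, ("hdd", 6000): 150.0}

      class PathSelector:
          def __init__(self, device_type: str, cache_bus_mbps: int, alpha: float = 0.5):
              self.threshold = THRESHOLD_MBPS[(device_type, cache_bus_mbps)]
              self.alpha = alpha        # smoothing factor for the moving average
              self.average = 0.0

          def update(self, observed_mbps: float) -> str:
              self.average = self.alpha * observed_mbps + (1 - self.alpha) * self.average
              return "direct-to-device" if self.average >= self.threshold else "via-cache"

      selector = PathSelector("ssd", cache_bus_mbps=6000)
      for sample in (400, 600, 700, 800):
          print(sample, "->", selector.update(sample))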
  • Patent number: 8972661
    Abstract: The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
    Type: Grant
    Filed: October 31, 2011
    Date of Patent: March 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Ioannis Koltsidas, Roman A. Pletka
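    Illustrative sketch (step size and the hit-tracking detail are assumptions): the admission control described above, rejecting candidates below a heat threshold and drifting the threshold by comparing hits on recently inserted data against hits on recently evicted data, can be modeled as follows.
      # Hypothetical model: adaptive heat-threshold admission for a secondary
      # data storage cache.
      class HeatAdmission:
          def __init__(self, threshold: float = 4.0, step: float = 1.0):
              self.threshold = threshold
              self.step = step

          def admit(self, heat: float) -> bool:
              return heat >= self.threshold      # reject candidates colder than the threshold

          def adjust(self, recent_insert_hits: float, recent_evict_hits: float):
              if recent_insert_hits > recent_evict_hits:
                  self.threshold = max(0.0, self.threshold - self.step)   # insertions are paying off: admit more
              elif recent_insert_hits < recent_evict_hits:
                  self.threshold += self.step                             # evicted data was hotter: admit less

      control = HeatAdmission()
      print(control.admit(3.0))                      # False: below the threshold
      control.adjust(recent_insert_hits=10, recent_evict_hits=2)
      print(control.threshold, control.admit(3.0))   # threshold lowered, now admitted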
  • Patent number: 8972662
    Abstract: The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
    Type: Grant
    Filed: April 26, 2012
    Date of Patent: March 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Ioannis Koltsidas, Roman A. Pletka