User Data Cache And Instruction Data Cache Patents (Class 711/123)
  • Patent number: 11307999
    Abstract: The data cache of a processor is segregated by execution mode, eliminating the danger of certain malware by no longer sharing the resource. Kernel-mode software can adjust the relative size of the two portions of the data cache, to dynamically accommodate the data-cache needs of varying workloads.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: April 19, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Paul T. Robinson
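The mode-partitioned data cache described above can be modeled in a few lines. This is a minimal sketch of the idea only, assuming a way-partitioned design; the class and parameter names (`SplitDataCache`, way counts) are hypothetical and not taken from the patent:

```python
# Illustrative model: cache ways are split between user and kernel execution
# modes so the two modes never share lines, and kernel-mode software can
# adjust the relative sizes of the two portions at run time.

class SplitDataCache:
    def __init__(self, total_ways=8, kernel_ways=4):
        self.total_ways = total_ways
        self.kernel_ways = kernel_ways   # ways reserved for kernel mode

    def ways_for(self, mode):
        """Return how many ways the given execution mode may use."""
        if mode == "kernel":
            return self.kernel_ways
        return self.total_ways - self.kernel_ways

    def resize(self, kernel_ways):
        """Kernel-mode software adjusts the partition for the workload."""
        if not 0 < kernel_ways < self.total_ways:
            raise ValueError("each mode must keep at least one way")
        self.kernel_ways = kernel_ways

cache = SplitDataCache(total_ways=8, kernel_ways=4)
cache.resize(2)            # shrink the kernel share for a user-heavy workload
assert cache.ways_for("user") == 6
```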
  • Patent number: 11144330
    Abstract: An algorithm program loading method and a related apparatus are provided. The method includes: determining basic storage capacity of a second storage resource; obtaining an algorithm program, determining whether the algorithm capacity of the algorithm program is greater than the basic storage capacity, and if the algorithm capacity of the algorithm program is greater than the basic storage capacity, segmenting the algorithm program by taking the basic storage capacity as a unit to obtain algorithm subprograms; controlling a direct memory access module to load a master control program of a neural network processor to a first storage resource and executing the master control program; and controlling the direct memory access module to load the first algorithm subprogram in the algorithm subprograms to the second storage resource, confirming that the loading of the first algorithm subprogram is completed, executing the first algorithm subprogram, and loading in parallel a second algorithm subprogram.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: October 12, 2021
    Assignee: Shenzhen Intellifusion Technologies Co., Ltd.
    Inventor: Qingxin Cao
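The segmentation and pipelined-loading steps in this abstract can be sketched as follows. This is an illustration under stated assumptions (byte-addressed program images, a hypothetical basic storage capacity), not the patented apparatus; on real hardware the DMA load of subprogram i+1 overlaps with the execution of subprogram i:

```python
def segment_program(program, basic_capacity):
    """If the program exceeds the basic storage capacity of the second
    storage resource, split it into capacity-sized subprograms
    (the last one may be shorter)."""
    if len(program) <= basic_capacity:
        return [program]
    return [program[i:i + basic_capacity]
            for i in range(0, len(program), basic_capacity)]

def run_pipelined(subprograms, load, execute):
    """Execute subprogram i while the DMA module loads subprogram i+1
    (modeled here sequentially for clarity)."""
    if not subprograms:
        return
    load(subprograms[0])
    for i, sub in enumerate(subprograms):
        if i + 1 < len(subprograms):
            load(subprograms[i + 1])   # overlaps with execute() on hardware
        execute(sub)

subs = segment_program(b"\x00" * 10, basic_capacity=4)
assert [len(s) for s in subs] == [4, 4, 2]
```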
  • Patent number: 11138014
    Abstract: A branch predictor provides a predicted branch instruction outcome for a current block of at least one instruction. The branch predictor comprises branch prediction tables to store branch prediction entries providing branch prediction information; lookup circuitry to perform, based on indexing information associated with the current block, a table lookup in a looked up subset of the branch prediction tables; and prediction generating circuitry to generate the predicted branch instruction outcome for the current block based on the branch prediction information in the branch prediction entries looked up in the looked up subset of branch prediction tables. The looked up subset of branch prediction tables is selected based on lookup filtering information obtained for the current block. Lookups to tables other than the looked up subset are suppressed.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: October 5, 2021
    Assignee: Arm Limited
    Inventors: Yasuo Ishii, Houdhaifa Bouzguarrou, Thibaut Elie Lanois, Guillaume Bolbenes
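The table-filtering idea above can be sketched with a bitmask selecting which prediction tables are probed. This is a simplified model, not Arm's implementation; the mask encoding and the "last selected hit wins" combination rule are assumptions standing in for the real lookup-filtering information and prediction-combining logic:

```python
def filtered_lookup(tables, index, lookup_mask):
    """Probe only the tables selected by lookup_mask (one bit per table);
    lookups to the other tables are suppressed, saving tag accesses.
    Returns the prediction from the last (longest-history) selected
    table that hits, else None."""
    prediction = None
    for i, table in enumerate(tables):
        if not (lookup_mask >> i) & 1:
            continue   # suppressed: this table is never accessed
        if index in table:
            prediction = table[index]
    return prediction

# Three tables, indexed by block address; True = branch predicted taken.
tables = [{0x10: True}, {}, {0x10: False}]
assert filtered_lookup(tables, 0x10, lookup_mask=0b001) is True
assert filtered_lookup(tables, 0x10, lookup_mask=0b111) is False
```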
  • Patent number: 11106586
    Abstract: Systems and methods for rebuilding an index for a flash cache are provided. The index is rebuilt by reading headers of containers stored in the cache and inserting information from the headers into the index. The index remains enabled while being rebuilt, so that lookup operations can be performed even when the index is incomplete. New containers can be inserted into used or unused regions of the cache while the index is being rebuilt.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: August 31, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Philip N. Shilane, Grant R. Wallace
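A toy model of the rebuild-while-enabled behavior: container headers are scanned one at a time, and lookups against the partially rebuilt index already work for whatever has been inserted. The container layout and names here are hypothetical, not EMC's on-disk format:

```python
class FlashCacheIndex:
    """Fingerprint -> container index, rebuilt from container headers while
    staying enabled for lookups throughout the rebuild."""
    def __init__(self):
        self.index = {}

    def rebuild_step(self, container_id, header_fingerprints):
        # One container's header is read and its entries inserted.
        for fp in header_fingerprints:
            self.index[fp] = container_id

    def lookup(self, fp):
        # During rebuild a miss may simply mean "not inserted yet"; the
        # caller treats it as an ordinary cache miss either way.
        return self.index.get(fp)

idx = FlashCacheIndex()
idx.rebuild_step(container_id=7, header_fingerprints=["fp1", "fp2"])
assert idx.lookup("fp1") == 7        # usable while rebuild is incomplete
assert idx.lookup("fp9") is None     # unrebuilt entry looks like a miss
```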
  • Patent number: 11030041
    Abstract: The present invention provides a decoding method of a flash memory controller, wherein the decoding method includes the steps of: reading first data from a flash memory module; decoding the first data, and recording at least one specific address of the flash memory module according to decoding results of the first data, wherein said at least one specific address corresponds to a bit having high reliability errors (HRE) of the first data; reading second data from the flash memory module; and decoding the second data according to said at least one specific address.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: June 8, 2021
    Assignee: Silicon Motion, Inc.
    Inventor: Tsung-Chieh Yang
  • Patent number: 10977175
    Abstract: A system and method of handling access demands in a virtual cache comprising, by a processing system, checking if a virtual cache access demand missed because of a synonym tagged in the virtual cache; in response to the virtual cache access demand missing because of a synonym tagged in the virtual cache, updating the virtual address tag in the virtual cache to a new virtual address tag; searching for additional synonyms tagged in the virtual cache; and in response to finding additional synonyms tagged in the virtual cache, updating the virtual address tag of the additional synonyms to the new virtual address tag.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: April 13, 2021
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Bryan Lloyd
  • Patent number: 10789175
    Abstract: A computing system comprises one or more cores. Each core comprises a processor and a switch, with each processor coupled to a communication network among the cores. Also disclosed are techniques for implementing an adaptive last-level allocation policy in a last-level cache in a multicore system: receiving one or more new blocks for allocation in the cache; accessing a selected profile from plural profiles that define allocation actions, according to a least-recently-used type of allocation and based on a cache action, a state bit, and a traffic pattern type for the new blocks of data; and handling the new block according to the selected profile for a selected least recently used (LRU) position in the cache.
    Type: Grant
    Filed: June 1, 2017
    Date of Patent: September 29, 2020
    Assignee: Mellanox Technologies Ltd.
    Inventors: Gilad Tal, Gil Moran, Miriam Menes, Gil Kopilov, Shlomo Raikin
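Profile-driven allocation of the kind described above can be illustrated with a table that maps a (cache action, traffic pattern) pair to an insertion position in the LRU stack. The profile keys and positions below are hypothetical illustrations of the mechanism, not Mellanox's actual profiles:

```python
# Position 0 is the most-recently-used end of the stack; larger positions
# are closer to eviction. Unknown combinations fall back to the LRU end.
PROFILE = {
    ("allocate", "streaming"): 3,   # streaming data parked near the LRU end
    ("allocate", "reuse"): 0,       # reusable data inserted at MRU
}

def allocate_block(lru_stack, block_id, action, traffic):
    """Insert a new block at the LRU position the selected profile names."""
    pos = PROFILE.get((action, traffic), len(lru_stack))
    lru_stack.insert(min(pos, len(lru_stack)), block_id)
    return lru_stack

stack = ["a", "b", "c"]
allocate_block(stack, "x", "allocate", "reuse")
assert stack == ["x", "a", "b", "c"]
allocate_block(stack, "y", "allocate", "streaming")
assert stack == ["x", "a", "b", "y", "c"]
```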
  • Patent number: 10776118
    Abstract: A computing system comprising a central processing unit (CPU), a memory processor and a memory device comprising a data array and an index array. The computing system is configured to store data lines comprising data elements in the data array and to store index lines comprising a plurality of memory indices in the index array. The memory indices indicate memory positions of data elements in the data array with respect to a start address of the data array. There is further provided a related computer implemented method and a related computer program product.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: September 15, 2020
    Assignee: International Business Machines Corporation
    Inventors: Heiner Giefers, Raphael Polig, Jan Van Lunteren
  • Patent number: 10705987
    Abstract: A control circuit for controlling memory prefetch requests to system level cache (SLC). The control circuit includes a circuit identifying memory access requests received at the system level cache (SLC), where each of the memory access requests includes an address (ANEXT) of memory to be accessed. Another circuit associates a tracker with each of the memory access streams. A further circuit performs tracking for the memory access streams by: when the status is tracking and the address (ANEXT) points to an interval between the current address (ACURR) and the last prefetched address (ALAST), issuing a prefetch request to the SLC; and when the status is tracking, and distance (ADIST) between the current address (ACURR) and the last prefetched address (ALAST) is greater than a specified maximum prefetch for the associated tracker, waiting for further requests to control a prefetch process.
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: July 7, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Arkadi Avrukin, Seungyoon Song, Tariq Afzal, Yongjae Hong, Michael Frank, Thomas Zou, Hoshik Kim, Jungsook Lee
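One decision step of the per-stream tracker described in this abstract can be sketched as below. The address arithmetic is simplified (flat integer addresses) and the return labels are illustrative, but the two conditions mirror the abstract: prefetch when the new request lands between ACURR and ALAST, wait when the prefetcher has run too far ahead:

```python
def tracker_step(a_next, a_curr, a_last, max_prefetch):
    """One decision of a stream tracker.
    a_next: address of the incoming request (ANEXT)
    a_curr: current demand address (ACURR)
    a_last: last prefetched address (ALAST)
    max_prefetch: maximum allowed prefetch distance for this tracker"""
    if (a_last - a_curr) > max_prefetch:
        return "wait"        # distance ADIST exceeds the tracker's maximum
    if a_curr < a_next <= a_last:
        return "prefetch"    # demand is advancing into the prefetched window
    return "no-op"           # request outside the tracked interval

assert tracker_step(a_next=5, a_curr=0, a_last=8, max_prefetch=16) == "prefetch"
assert tracker_step(a_next=5, a_curr=0, a_last=100, max_prefetch=16) == "wait"
```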
  • Patent number: 10691621
    Abstract: The data cache of a processor is segregated by execution mode, eliminating the danger of certain malware by no longer sharing the resource. Kernel-mode software can adjust the relative size of the two portions of the data cache, to dynamically accommodate the data-cache needs of varying workloads.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: June 23, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Paul T. Robinson
  • Patent number: 10671426
    Abstract: Data processing apparatus comprises one or more interconnected processing elements; each processing element being configured to execute processing instructions of program tasks; each processing element being configured to save context data relating to a program task following execution of that program task by that processing element; and to load context data, previously saved by that processing element or another of the processing elements, at resumption of execution of a program task; each processing element having respective associated format definition data to define one or more sets of data items for inclusion in the context data; the apparatus comprising format selection circuitry to communicate the format definition data of each of the processing elements with others of the processing elements and to determine, in response to the format definition data for each of the processing elements, a common set of data items for inclusion in the context data.
    Type: Grant
    Filed: November 28, 2016
    Date of Patent: June 2, 2020
    Assignee: ARM Limited
    Inventors: Curtis Glenn Dunham, Jonathan Curtis Beard, Roxana Rusitoru
  • Patent number: 10621038
    Abstract: The present invention provides a decoding method of a flash memory controller, wherein the decoding method includes the steps of: reading first data from a flash memory module; decoding the first data, and recording at least one specific address of the flash memory module according to decoding results of the first data, wherein said at least one specific address corresponds to a bit having high reliability errors (HRE) of the first data; reading second data from the flash memory module; and decoding the second data according to said at least one specific address.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: April 14, 2020
    Assignee: Silicon Motion, Inc.
    Inventor: Tsung-Chieh Yang
  • Patent number: 10579376
    Abstract: A processor includes a performance monitor that logs reservation losses, and additionally logs reasons for the reservation losses. By logging reasons for the reservation losses, the performance monitor provides data that can be used to determine whether the reservation losses were due to valid programming, such as two threads competing for the same lock, or whether the reservation losses were due to bad programming. When the reservation losses are due to bad programming, the information can be used to improve the programming to obtain better performance.
    Type: Grant
    Filed: July 15, 2016
    Date of Patent: March 3, 2020
    Assignee: International Business Machines Corporation
    Inventors: Shakti Kapoor, John A. Schumann, Karen E. Yokum
  • Patent number: 10565122
    Abstract: The lookup of accesses (including snoops) to cache tag ways is serialized to perform one (or fewer than all) tag-way access per clock (or even slower). Thus, for an N-way set-associative cache, instead of performing the lookup/comparison on the N tag ways in parallel, the lookups are performed one tag way at a time. Way prediction is utilized to select the order in which the N ways are probed, including which tag way is looked in first. This helps to reduce the average number of cycles and lookups required.
    Type: Grant
    Filed: May 30, 2017
    Date of Patent: February 18, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Patrick P. Lai, Robert Allen Shearer
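The serialized, prediction-ordered lookup can be modeled directly: probe one tag way per cycle in the predicted order and count how many cycles the hit takes. A good prediction finds the matching way in one cycle; a poor one degrades toward N cycles. This is a sketch of the idea, not Microsoft's circuit:

```python
def serialized_lookup(way_tags, tag, predicted_order):
    """Probe one tag way per cycle in predicted_order instead of comparing
    all N tags in parallel. Returns (way, cycles) on a hit, or
    (None, N) on a miss."""
    for cycle, w in enumerate(predicted_order, start=1):
        if way_tags[w] == tag:
            return w, cycle
    return None, len(predicted_order)

ways = ["a", "b", "c", "d"]
# Way prediction puts way 2 first, so the hit costs a single lookup.
assert serialized_lookup(ways, "c", [2, 0, 1, 3]) == (2, 1)
# Without a useful prediction the same hit takes three serialized lookups.
assert serialized_lookup(ways, "c", [0, 1, 2, 3]) == (2, 3)
```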
  • Patent number: 10387162
    Abstract: Aspects of the invention include a computer-implemented method for executing one or more instructions by a processing unit. The method includes fetching, by an instruction fetch unit, a first instruction from an instruction cache. The method further includes associating, by an effective address table logic, an entry in an effective address table (EAT) with the first instruction. The method further includes fetching, by the instruction fetch unit, a second instruction from the instruction cache, wherein the first instruction occurs before a branch has been taken and the second instruction occurs after the branch has been taken. The method further includes associating at least a portion of the entry in the EAT associated with the first instruction in response to the second instruction utilizing a cache line utilized by the first instruction and processing the first instruction and the second instruction through a processor pipeline utilizing the entry of the EAT.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: August 20, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Richard J. Eickemeyer, Balaram Sinharoy
  • Patent number: 10353816
    Abstract: A system includes a non-volatile memory to store a page cache that contains pages of data allocated by an operating system, the pages in the page cache being persistent across a power cycle of the system. The page cache is located in a specified region of the non-volatile memory and is to store the pages of data without tagging a memory region.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: July 16, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Christian Perone, Diego Rahn Medaglia, Joao Claudio Ambrosi, James M Mann, Craig Walrath
  • Patent number: 10255184
    Abstract: Disclosed aspects relate to a computer system having a plurality of processor chips and a plurality of memory buffer chips, and to transferring data in the computer system. One or more of the processor chips is communicatively coupled to at least one memory module which is assigned to the processor chip. One or more of the processor chips includes a cache and is communicatively coupled to one or more of the memory buffer chips via a memory-buffer-chip-specific bidirectional point-to-point communication connection. At least one of the memory buffer chips includes a coherence directory and is configured to be exclusively in charge of implementing directory-based coherence over the caches of the processor chips for at least one pre-defined address-based subset of memory lines stored in at least one of the memory modules assigned to a processor chip.
    Type: Grant
    Filed: November 7, 2016
    Date of Patent: April 9, 2019
    Assignee: International Business Machines Corporation
    Inventor: Burkhard Steinmacher-Burow
  • Patent number: 10248555
    Abstract: Methods and apparatus for managing an effective address table (EAT) in a multi-slice processor including receiving, from an instruction sequence unit, a next-to-complete instruction tag (ITAG); obtaining, from the EAT, a first ITAG from a tail-plus-one EAT row, wherein the EAT comprises a tail EAT row that precedes the tail-plus-one EAT row; determining, based on a comparison of the next-to-complete ITAG and the first ITAG, that the tail EAT row has completed; and retiring the tail EAT row based on the determination.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: April 2, 2019
    Assignee: International Business Machines Corporation
    Inventors: Akash V. Giri, David S. Levitan, Mehul Patel, Albert J. Van Norstrand, Jr.
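The tail-row retirement test in this abstract can be sketched as a single comparison. The model below assumes ITAGs are monotonically increasing allocation tags (wraparound is ignored) and that each EAT row records the first ITAG it covers; both are simplifying assumptions, not details from the patent:

```python
def try_retire_tail(eat_rows, next_to_complete_itag):
    """Retire the tail EAT row once the next-to-complete ITAG has reached
    the first ITAG of the tail-plus-one row, i.e. every instruction
    tracked by the tail row has completed."""
    if len(eat_rows) < 2:
        return eat_rows
    tail_plus_one_first_itag = eat_rows[1]["first_itag"]
    if next_to_complete_itag >= tail_plus_one_first_itag:
        return eat_rows[1:]   # tail row fully completed: retire it
    return eat_rows

rows = [{"first_itag": 0}, {"first_itag": 8}]
assert try_retire_tail(rows, 8) == [{"first_itag": 8}]   # tail retired
assert try_retire_tail(rows, 5) == rows                  # still in flight
```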
  • Patent number: 10241905
    Abstract: Methods and apparatus for managing an effective address table (EAT) in a multi-slice processor including receiving, from an instruction sequence unit, a next-to-complete instruction tag (ITAG); obtaining, from the EAT, a first ITAG from a tail-plus-one EAT row, wherein the EAT comprises a tail EAT row that precedes the tail-plus-one EAT row; determining, based on a comparison of the next-to-complete ITAG and the first ITAG, that the tail EAT row has completed; and retiring the tail EAT row based on the determination.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: March 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Akash V. Giri, David S. Levitan, Mehul Patel, Albert J. Van Norstrand, Jr.
  • Patent number: 10230971
    Abstract: A video encoder (70) for coding moving pictures comprising a buffer (16c) with a plurality of memory areas capable of storing frames composed of top fields and bottom fields, a motion estimation unit (19) operable to code input pictures field by field, performing motion estimation and motion compensation by referring, field by field, to the picture data stored in a memory area, a motion compensation unit (16d), a subtractor (11), a transformation unit (13) and a quantization unit (14), a memory management unit (71) operable to manage, frame by frame, a plurality of memory areas, and an inverse quantization unit (16a) and inverse discrete cosine transform unit (16b) operable to decode picture data in coded fields and store the picture data of the decoded field in any of the plurality of memory areas under the management of the memory management unit (71).
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: March 12, 2019
    Assignee: GODO KAISHA IP BRIDGE 1
    Inventors: Martin Schlockermann, Bernhard Schuur, Shinya Kadono
  • Patent number: 10120814
    Abstract: An apparatus and method are described for managing TLB coherence. For example, one embodiment of a processor comprises: one or more cores to execute instructions and process data; one or more translation lookaside buffers (TLBs) each comprising a plurality of entries to cache virtual-to-physical address translations usable by the set of one or more cores when executing the instructions; one or more epoch counters each programmed with a specified epoch value; and TLB validation logic to validate a specified set of TLB entries at intervals specified by the epoch value.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: November 6, 2018
    Assignee: Intel Corporation
    Inventors: Kshitij A. Doshi, Christopher J. Hughes
  • Patent number: 10120800
    Abstract: A cache memory that selectively enables and disables speculative reads from system memory is disclosed. The cache memory may include a plurality of partitions, and a plurality of registers. Each register may be configured to stored data indicative of a source of returned data for previous requests directed to a corresponding partition. Circuitry may be configured to receive a request for data to a given partition. The circuitry may be further configured to read contents of a register corresponding to the given partition, and initiate a speculative read dependent upon the contents of the register.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: November 6, 2018
    Assignee: Oracle International Corporation
    Inventors: Ramaswamy Sivaramakrishnan, Serena Leung, David Smentek
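The per-partition enable/disable decision above amounts to a history-based policy: speculate on a system-memory read only when the recorded sources of previously returned data suggest the speculation will pay off. The history encoding and threshold below are hypothetical policy knobs, not Oracle's register format:

```python
def speculative_read_enabled(history, threshold=0.5):
    """Enable a speculative system-memory read for a partition when most
    recent requests to it were ultimately served from memory rather than
    from cache. 'history' models the partition's source register."""
    if not history:
        return True   # no data yet: speculate by default
    from_memory = sum(1 for src in history if src == "memory")
    return from_memory / len(history) >= threshold

assert speculative_read_enabled(["memory", "memory", "cache"])       # 2/3
assert not speculative_read_enabled(["cache", "cache", "memory"])    # 1/3
```

When speculation is disabled, the request waits for the cache lookup to resolve, trading latency on misses for less wasted memory bandwidth on hits.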
  • Patent number: 10095417
    Abstract: A method for reading data from persistent storage. The method includes receiving a client read request for data from a client. The client read request includes a logical address. The method further includes determining a physical address corresponding to the logical address, determining that the physical address is directed to an open block in the persistent storage and determining that the physical address is directed to a last closed word line of the open block. The method further includes, based on these determinations, obtaining at least one read threshold value for the reading from last closed word lines, issuing a control module read request comprising the at least one read threshold value to a storage module that includes the open block, and obtaining the data from the open block using the at least one read threshold value.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: October 9, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Seungjune Jeon, Haleh Tabrizi, Alan Hanson, Andrew Cullen, Justin Ha, Michael Rijo, Samuel Hudson
  • Patent number: 9990131
    Abstract: In an example, a circuit to manage memory between first and second microprocessors, each of which is coupled to a control circuit, includes: first and second memory circuits; and a switch circuit coupled to the first and second memory circuits and to the memory interfaces of the first and second microprocessors, the switch circuit having a mode signal as input. The switch circuit is configured to selectively operate in one of a first mode or a second mode based on the mode signal such that, in the first mode, the switch circuit couples the first memory circuit to the memory interface of the first microprocessor and the second memory circuit to the memory interface of the second microprocessor and, in the second mode, the switch circuit selectively couples the first or second memory circuits to the memory interface of either the first or second microprocessor.
    Type: Grant
    Filed: September 22, 2014
    Date of Patent: June 5, 2018
    Assignee: XILINX, INC.
    Inventors: Ygal Arbel, Sagheer Ahmad, James J. Murray, Nishit Patel, Ahmad R. Ansari
  • Patent number: 9967358
    Abstract: A computer-implemented method for collaborative caching of files during a collaboration session includes receiving a request from a first electronic device for a first file. The method determines whether the first file is stored in one or more caches, wherein the one or more caches are associated with one or more electronic devices. Responsive to determining the first file is stored in a cache of a second electronic device, the method determines whether the first file stored in the cache of the second electronic device meets a set of guidelines. Responsive to determining the first file stored in the cache of the second electronic device meets the set of guidelines, the method sends the first file from the cache of the second electronic device via an internal network to the first electronic device.
    Type: Grant
    Filed: March 26, 2015
    Date of Patent: May 8, 2018
    Assignee: International Business Machines Corporation
    Inventors: Joel Duquene, Morris S. Johnson, Jr., Henri F. Meli, Adrienne Y. Miller
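The peer-cache lookup in this method can be sketched as a scan over the session participants' caches with a guideline check before serving. The cache layout and the `meets_guidelines` predicate are illustrative stand-ins for the patent's device caches and guideline set:

```python
def fetch_from_peers(file_id, requester, device_caches, meets_guidelines):
    """Serve the request from a peer device's cache over the internal
    network when a cached copy exists and passes the guideline check;
    otherwise the caller falls back to the origin server."""
    for device, cache in device_caches.items():
        if device == requester:
            continue                       # never serve from the requester
        if file_id in cache and meets_guidelines(cache[file_id]):
            return device, cache[file_id]
    return None, None   # no suitable peer copy found

caches = {"laptop": {}, "desktop": {"deck.pdf": b"v2"}}
assert fetch_from_peers("deck.pdf", "laptop", caches, lambda f: True) == \
    ("desktop", b"v2")
assert fetch_from_peers("deck.pdf", "laptop", caches, lambda f: False) == \
    (None, None)
```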
  • Patent number: 9942561
    Abstract: A video encoder (70) for coding moving pictures comprising a buffer (16c) with a plurality of memory areas capable of storing frames composed of top fields and bottom fields, a motion estimation unit (19) operable to code input pictures field by field, performing motion estimation and motion compensation by referring, field by field, to the picture data stored in a memory area, a motion compensation unit (16d), a subtractor (11), a transformation unit (13) and a quantization unit (14), a memory management unit (71) operable to manage, frame by frame, a plurality of memory areas, and an inverse quantization unit (16a) and inverse discrete cosine transform unit (16b) operable to decode picture data in coded fields and store the picture data of the decoded field in any of the plurality of memory areas under the management of the memory management unit (71).
    Type: Grant
    Filed: March 15, 2016
    Date of Patent: April 10, 2018
    Assignee: GODO KAISHA IP BRIDGE 1
    Inventors: Martin Schlockermann, Bernhard Schuur, Shinya Kadono
  • Patent number: 9936210
    Abstract: A video encoder (70) for coding moving pictures comprising a buffer (16c) with a plurality of memory areas capable of storing frames composed of top fields and bottom fields, a motion estimation unit (19) operable to code input pictures field by field, performing motion estimation and motion compensation by referring, field by field, to the picture data stored in a memory area, a motion compensation unit (16d), a subtractor (11), a transformation unit (13) and a quantization unit (14), a memory management unit (71) operable to manage, frame by frame, a plurality of memory areas, and an inverse quantization unit (16a) and inverse discrete cosine transform unit (16b) operable to decode picture data in coded fields and store the picture data of the decoded field in any of the plurality of memory areas under the management of the memory management unit (71).
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: April 3, 2018
    Assignee: GODO KAISHA IP BRIDGE 1
    Inventors: Martin Schlockermann, Bernhard Schuur, Shinya Kadono
  • Patent number: 9921962
    Abstract: Maintaining cache coherency using conditional intervention among multiple master devices is disclosed. In one aspect, a conditional intervention circuit is configured to receive intervention responses from multiple snooping master devices. To select a snooping master device to provide intervention data, the conditional intervention circuit determines how many snooping master devices have a cache line granule size the same as or larger than a requesting master device. If one snooping master device has a same or larger cache line granule size, that snooping master device is selected. If more than one snooping master device has a same or larger cache line granule size, a snooping master device is selected based on an alternate criteria. The intervention responses provided by the unselected snooping master devices are canceled by the conditional intervention circuit, and intervention data from the selected snooping master device is provided to the requesting master device.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: March 20, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Kun Xu, Thuong Quang Truong, Jaya Prakash Subramaniam Ganasan, Hien Minh Le, Cesar Aaron Ramirez
  • Patent number: 9906806
    Abstract: A video encoder (70) for coding moving pictures comprising a buffer (16c) with a plurality of memory areas capable of storing frames composed of top fields and bottom fields, a motion estimation unit (19) operable to code input pictures field by field, performing motion estimation and motion compensation by referring, field by field, to the picture data stored in a memory area, a motion compensation unit (16d), a subtractor (11), a transformation unit (13) and a quantization unit (14), a memory management unit (71) operable to manage, frame by frame, a plurality of memory areas, and an inverse quantization unit (16a) and inverse discrete cosine transform unit (16b) operable to decode picture data in coded fields and store the picture data of the decoded field in any of the plurality of memory areas under the management of the memory management unit (71).
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: February 27, 2018
    Assignee: GODO KAISHA IP BRIDGE 1
    Inventors: Martin Schlockermann, Bernhard Schuur, Shinya Kadono
  • Patent number: 9898206
    Abstract: A memory access processing method and apparatus, and a system. The method includes receiving a memory access request sent by a processor, combining multiple memory access requests received within a preset time period to form a new memory access request, where the new memory access request includes a code bit vector corresponding to memory addresses. A first code bit identifier is configured for the code bits that are in the code bit vector and corresponding to the memory addresses accessed by the multiple memory access requests. The method further includes sending the new memory access request to a memory controller, so that the memory controller executes a memory access operation on a memory address corresponding to the first code bit identifier. The method effectively improves memory bandwidth utilization.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: February 20, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Dongrui Fan, Fenglong Song, Da Wang, Xiaochun Ye
  • Patent number: 9710276
    Abstract: In a normal, non-loop mode a uOp buffer receives and stores for dispatch the uOps generated by a decode stage based on a received instruction sequence. In response to detecting a loop in the instruction sequence, the uOp buffer is placed into a loop mode whereby, after the uOps associated with the loop have been stored at the uOp buffer, storage of further uOps at the buffer is suspended. To execute the loop, the uOp buffer repeatedly dispatches the uOps associated with the loop's instructions until the end condition of the loop is met and the uOp buffer exits the loop mode.
    Type: Grant
    Filed: November 9, 2012
    Date of Patent: July 18, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David N. Suggs, Luke Yen, Steven Beigelmacher
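The loop-mode behavior of the uOp buffer can be modeled compactly: once loop mode is entered, storage of new uOps is suspended and the captured loop body is redispatched each iteration without redecoding. This is a toy model of the mechanism, with hypothetical names, not AMD's microarchitecture:

```python
class UopBuffer:
    """In normal mode the buffer stores decoded uOps for dispatch; in loop
    mode it stops accepting new uOps and repeatedly dispatches the
    captured loop body until the loop's end condition is met."""
    def __init__(self):
        self.uops = []
        self.loop_mode = False

    def store(self, uop):
        if not self.loop_mode:       # storage is suspended in loop mode
            self.uops.append(uop)

    def run_loop(self, iterations):
        self.loop_mode = True
        dispatched = []
        for _ in range(iterations):          # until the end condition
            dispatched.extend(self.uops)     # redispatch without redecoding
        self.loop_mode = False               # exit loop mode
        return dispatched

buf = UopBuffer()
for u in ("load", "add", "branch"):
    buf.store(u)
assert buf.run_loop(2) == ["load", "add", "branch"] * 2
```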
  • Patent number: 9678884
    Abstract: A method, computer program product, and computing system for receiving an indication of a cold cache event within a storage system. The storage system includes a multi-tiered data array including at least a faster data tier and a slower data tier. A data list that identifies at least a portion of the data included within the faster data tier of the multi-tiered data array is obtained from the multi-tiered data array. At least a portion of the data identified within the data list is requested from the multi-tiered data array, thus defining the requested data. The requested data is received from the multi-tiered data array.
    Type: Grant
    Filed: April 2, 2015
    Date of Patent: June 13, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Philip Derbeko, Arieh Don, Alex Veprinsky, Marik Marshak
  • Patent number: 9569356
    Abstract: A method for referencing and updating objects in a shared resource environment. A reference counter is incremented for every use of an object subtype in a session and decremented for every release of an object subtype in a session. A session counter is incremented upon the first instance of fetching an object type into a session cache and decremented upon having no instances of the object type in use in the session. When both the reference counter and the session counter are zero, the object type may be removed from the cache. When the object type needs to be updated, it is cloned into a local cache, and changes are made on the local copy. The global cache is then locked to all other users, the original object type is detached, and the cloned object type is swapped into the global cache, after which the global cache is unlocked.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: February 14, 2017
    Assignee: EMC Corporation
    Inventors: Shu-Shang Sam Wei, Shuaib Hasan Khwaja, Pankaj Pradhan
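The two-counter eviction rule above is straightforward to model: an object type becomes evictable from the shared cache only when both its reference counter and its session counter have returned to zero. The class and method names below are illustrative, not Documentum/EMC API names:

```python
class CachedType:
    """Toy model of the two counters guarding eviction of an object type
    from a shared (global) cache."""
    def __init__(self):
        self.ref_count = 0       # one per in-session use of a subtype
        self.session_count = 0   # one per session holding the type

    def acquire(self):           self.ref_count += 1
    def release(self):           self.ref_count -= 1
    def enter_session(self):     self.session_count += 1
    def leave_session(self):     self.session_count -= 1

    def evictable(self):
        # Removable only when no session and no use reference remains.
        return self.ref_count == 0 and self.session_count == 0

t = CachedType()
t.enter_session()
t.acquire()
assert not t.evictable()     # still referenced by an active session
t.release()
t.leave_session()
assert t.evictable()         # both counters back to zero
```

Updates follow a copy-on-write pattern: the type is cloned into a local cache, modified there, and swapped into the global cache under a short-lived lock, so readers never see a half-updated type.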
  • Patent number: 9569212
    Abstract: A processor includes an allocator with logic assigning alias hardware resources to instructions within an atomic region of instructions. The atomic region includes reordered instructions. The processor also includes a dispatcher with logic to dispatch instructions from the atomic region of instructions for execution. Furthermore, the processor includes a memory execution unit with logic to populate the memory execution unit with the instructions from the atomic region of instructions including reordered instructions, receive snoop requests and determine whether the snoop request matches memory address data of elements within the memory execution unit, and prevent reassignment of alias hardware resources for any load instructions that are eligible to match the snoop requests.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: February 14, 2017
    Assignee: Intel Corporation
    Inventors: John H. Kelm, Denis M. Khartikov, Naveen Neelakantam
  • Patent number: 9565214
    Abstract: Technologies for securing an electronic device include trapping an attempt to access a secured system resource of the electronic device, determining a module associated with the attempt, determining a subsection of the module associated with the attempt, the subsection including a memory location associated with the attempt, accessing a security rule to determine whether to allow the attempted access based on the determination of the module and the determination of the subsection, and handling the attempt based on the security rule. The module includes a plurality of distinct subsections.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: February 7, 2017
    Assignee: McAfee, Inc.
    Inventors: Aditya Kapoor, Jonathan L. Edwards, Craig Schmugar, Vladimir Konobeev, Michael Hughes
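The subsection-granular rule lookup in the abstract above can be sketched as resolving an attempted access to a (module, subsection) pair and consulting a rule keyed on both. This is a simplified illustration; the data shapes and default-deny policy are assumptions, not the patented method.

```python
def handle_access(attempt_addr, module_map, security_rules):
    """Sketch: find the module and subsection whose address range
    contains the attempt, then apply the matching security rule.
    Rule granularity is the subsection, not just the module."""
    for module, subsections in module_map.items():
        for subsection, (lo, hi) in subsections.items():
            if lo <= attempt_addr < hi:
                return security_rules.get((module, subsection), "deny")
    return "deny"  # unknown memory location: default-deny (an assumption)
```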
  • Patent number: 9483275
    Abstract: A method and system uses exceptions for code specialization in a system that supports transactions. The method and system includes inserting one or more branchless instructions into a sequence of computer instructions. The branchless instructions include one or more instructions that are executable if a commonly occurring condition is satisfied and include one or more instructions that are configured to raise an exception if the commonly occurring condition is not satisfied.
    Type: Grant
    Filed: December 16, 2011
    Date of Patent: November 1, 2016
    Assignee: Intel Corporation
    Inventors: Arvind Krishnaswamy, Daniel M Lavery
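The pattern in the abstract above, specializing code for the common case and raising an exception instead of testing the uncommon case with a branch, can be loosely illustrated in Python, where an out-of-range access raises rather than being guarded. The function name and scenario are hypothetical.

```python
def specialized_sum(values, n):
    """Sketch: the fast path assumes the common case (n within bounds)
    and carries no explicit bounds-check branch; the uncommon case
    surfaces as an exception and is recovered in a handler, analogous
    to aborting and falling back when the speculation fails."""
    try:
        total = 0
        for i in range(n):
            total += values[i]  # raises IndexError in the uncommon case
        return total
    except IndexError:
        # Uncommon case: recover on the general (unspecialized) path.
        return sum(values)
```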
  • Patent number: 9405690
    Abstract: A processor may include a cache configured to store instructions and memory data for the processor. The cache may store instructions in which a relative address, such as for a branch instruction, has been calculated, such that the instruction stored in the cache is modified from how the instruction is stored in main memory. The cache may include additional information in the tag to identify an instruction entry versus a memory data entry. When receiving a cache request, the cache may look at a type tag in addition to an address tag to determine if the request is a hit or a miss based upon the request being for an instruction from an instruction fetch unit or for memory data from a memory management unit. A cache entry may be invalidated and evicted if the address matches but the data type does not match.
    Type: Grant
    Filed: August 7, 2013
    Date of Patent: August 2, 2016
    Assignee: Oracle International Corporation
    Inventor: Mark A Luttrell
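The type-tagged lookup described above, hit only when both the address tag and the type tag match, with eviction on an address match whose type differs, can be sketched as follows. The class and its representation are illustrative assumptions.

```python
class TypedCache:
    """Sketch of a cache whose tags carry a type (instruction vs. data)
    alongside the address, as the abstract describes."""
    def __init__(self):
        self.entries = {}  # address tag -> (entry_type, payload)

    def lookup(self, addr, req_type):
        entry = self.entries.get(addr)
        if entry is None:
            return None            # miss: no address match
        entry_type, payload = entry
        if entry_type != req_type:
            # Address matches but type does not: invalidate and evict.
            del self.entries[addr]
            return None            # treated as a miss
        return payload

    def fill(self, addr, entry_type, payload):
        self.entries[addr] = (entry_type, payload)
```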
  • Patent number: 9275223
    Abstract: Technologies for securing an electronic device include trapping an attempt to access a secured system resource of the electronic device, determining a module associated with the attempt, determining a subsection of the module associated with the attempt, the subsection including a memory location associated with the attempt, accessing a security rule to determine whether to allow the attempted access based on the determination of the module and the determination of the subsection, and handling the attempt based on the security rule. The module includes a plurality of distinct subsections.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: March 1, 2016
    Assignee: McAfee, Inc.
    Inventors: Aditya Kapoor, Jonathan L. Edwards, Craig Schmugar, Vladimir Konobeev, Michael Hughes
  • Patent number: 9058117
    Abstract: Described in detail herein are systems and methods for single instancing blocks of data in a data storage system. For example, the data storage system may include multiple computing devices (e.g., client computing devices) that store primary data. The data storage system may also include a secondary storage computing device, a single instance database, and one or more storage devices that store copies of the primary data (e.g., secondary copies, tertiary copies, etc.). The secondary storage computing device receives blocks of data from the computing devices and accesses the single instance database to determine whether the blocks of data are unique (meaning that no instances of the blocks of data are stored on the storage devices). If a block of data is unique, the single instance database stores it on a storage device. If not, the secondary storage computing device can avoid storing the block of data on the storage devices.
    Type: Grant
    Filed: October 9, 2013
    Date of Patent: June 16, 2015
    Assignee: CommVault Systems, Inc.
    Inventors: Deepak Raghunath Attarde, Rajiv Kottomtharayil, Manoj Kumar Vijayan
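The single-instancing flow above, consult a digest database to decide whether a block is unique before writing it, can be sketched with a content hash. The class name, the use of SHA-256, and the in-memory stand-ins for the database and storage devices are all assumptions.

```python
import hashlib

class SingleInstanceStore:
    """Sketch: a digest database decides whether an incoming block is
    unique; duplicates are never written to the storage devices."""
    def __init__(self):
        self.digest_db = set()  # stands in for the single instance database
        self.storage = []       # stands in for the storage devices

    def store_block(self, block: bytes) -> bool:
        digest = hashlib.sha256(block).hexdigest()
        if digest in self.digest_db:
            return False        # duplicate: avoid storing it again
        self.digest_db.add(digest)
        self.storage.append(block)
        return True
```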
  • Publication number: 20150149724
    Abstract: An arithmetic processing device includes arithmetic cores, wherein each arithmetic core comprises: an instruction controller configured to request processing corresponding to an instruction; a memory configured to store lock information indicating that a locking target address is locked, the locking target address, and priority information of the instruction; and a cache controller configured to, when storing data of a first address in a cache memory to execute a first instruction including locking of the first address from the instruction controller, suppress updating of the memory if the lock information is stored in the memory and a priority of the priority information is higher than a first priority of the first instruction.
    Type: Application
    Filed: October 27, 2014
    Publication date: May 28, 2015
    Applicant: FUJITSU LIMITED
    Inventors: Kenji FUJISAWA, Yuji SHIRAHIGE
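The suppression rule above, do not update the stored lock information when the existing holder's priority is higher than the requesting instruction's, can be sketched as follows. The table representation and integer priorities are assumptions.

```python
class LockTable:
    """Sketch of the priority rule in the abstract: a new lock request
    is suppressed if the stored lock information carries a higher
    priority than the requesting instruction."""
    def __init__(self):
        self.locks = {}  # locking target address -> stored priority

    def try_lock(self, addr, priority):
        existing = self.locks.get(addr)
        if existing is not None and existing > priority:
            return False  # update suppressed: higher-priority holder
        self.locks[addr] = priority
        return True
```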
  • Publication number: 20150149723
    Abstract: A method is provided for facilitating operation of a processor core coupled to a first memory containing executable instructions, a second memory faster than the first memory and a third memory faster than the second memory. The method includes examining instructions being filled from the second memory to the third memory, extracting instruction information containing at least branch information; creating a plurality of tracks based on the extracted instruction information; filling at least one or more instructions that may possibly be executed by the processor core based on one or more tracks from a plurality of instruction tracks from the first memory to the second memory; filling at least one or more instructions based on one or more tracks from the plurality of tracks from the second memory to the third memory before the processor core executes the instructions, such that the processor core fetches the instructions from the third memory.
    Type: Application
    Filed: June 25, 2013
    Publication date: May 28, 2015
    Inventor: Chenghao Kenneth Lin
  • Publication number: 20150127911
    Abstract: Cache lines of a data cache may be assigned to a specific page type or color. In addition, the computing system may monitor when a cache line assigned to the specific page color is allocated in the cache. As each cache line assigned to a particular page color is allocated, the computing system may compare a respective index associated with each of the cache lines to determine maximum and minimum indices for that page color. These indices define a block of the cache that stores the data assigned to the page color. Thus, when the data of a page color is evicted from the cache, instead of searching the entire cache to locate the cache lines, the computing system uses the maximum and minimum indices as upper and lower bounds to reduce the portion of the cache that is searched.
    Type: Application
    Filed: November 1, 2013
    Publication date: May 7, 2015
    Applicant: CISCO TECHNOLOGY, INC.
    Inventor: Donald Edward STEISS
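The bound-tracking scheme above, record the minimum and maximum cache index seen for each page color so eviction only scans that slice of the cache, can be sketched as follows. The class and method names are illustrative.

```python
class ColorIndexTracker:
    """Sketch: as each line of a page color is allocated, fold its index
    into per-color (min, max) bounds; eviction searches only that range
    instead of the entire cache."""
    def __init__(self):
        self.bounds = {}  # color -> (min_index, max_index)

    def allocate(self, color, index):
        lo, hi = self.bounds.get(color, (index, index))
        self.bounds[color] = (min(lo, index), max(hi, index))

    def eviction_range(self, color):
        # Lower and upper bounds on the portion of the cache to search.
        return self.bounds.get(color)
```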
  • Patent number: 9015272
    Abstract: Between a CPU and a communication module, a write buffer, a write control section, a read buffer and a read control section are provided. The CPU directly accesses the write buffer and the read buffer. By periodically outputting a communication request, the read control section reads data, which the communication module received from other nodes, and transfers the data to the read buffer. The write control section transfers to the communication module the data written in the write buffer as transmission data. In addition, a bypass access control section and an access sequence control section are provided. The bypass access control section controls direct data read and data write between the CPU and the communication module. The access sequence control section controls the sequence of accesses of the control sections to the communication module.
    Type: Grant
    Filed: February 23, 2012
    Date of Patent: April 21, 2015
    Assignee: DENSO CORPORATION
    Inventors: Hirofumi Yamamoto, Yuki Horii, Takashi Abe, Shinichirou Taguchi
  • Patent number: 9015722
    Abstract: A method of determining a thread from a plurality of threads to execute a task in a multi-processor computer system. The plurality of threads is grouped into at least one subset associated with a cache memory of the computer system. The task has a type determined by a set of instructions. The method obtains an execution history of the subset of the plurality of threads and determines a weighting for each of the set of instructions and the set of data, the weightings depending on the type of the task. A suitability of the subset of the threads to execute the task, based on the execution history and the determined weightings, is then determined. Subject to the determined suitability of the subset of threads, the method determines a thread from the subset of threads to execute the task using content of the cache memory associated with the subset of threads.
    Type: Grant
    Filed: August 17, 2012
    Date of Patent: April 21, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Ekaterina Stefanov, David Robert James Monaghan, Paul William Morrison
  • Publication number: 20150100733
    Abstract: A computer system and method is disclosed for efficient cache memory organization. One embodiment of the disclosed system includes dividing the tag memory into physically separated memory arrays, with the entries of each array referencing cache lines in such a way that no two cache lines which are consecutively aligned in data cache memory reside in the same array. In another embodiment, the entries of the two memory arrays reference consecutively aligned cache lines in an alternating manner.
    Type: Application
    Filed: October 2, 2014
    Publication date: April 9, 2015
    Inventors: Carlos Basto, Karthik Thucanakkenpalayam Sundararajan
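The alternating placement above, consecutively aligned cache lines never share a tag array, reduces in the two-array case to assigning line *i* to array *i* mod 2. The helper below is an assumed sketch of that mapping, not the patented circuit.

```python
def tag_array_for_line(line_index, num_arrays=2):
    """Sketch: consecutively aligned cache lines land in different tag
    arrays, so lookups for adjacent lines can proceed in parallel and
    no two consecutive lines reside in the same array."""
    return line_index % num_arrays
```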
  • Patent number: 8994740
    Abstract: A cache line allocation method, wherein the cache is coupled to a graphic processing unit and the cache comprises a plurality of cache lines, each cache line storing one of a plurality of instructions, the method comprising the steps of: putting the plurality of instructions in whole cache lines; locking the whole cache lines if an instruction size is less than a cache size; locking a first number of cache lines when the instruction size is larger than the cache size and a difference between the instruction size and the cache size is less than or equal to a threshold; and locking a second number of cache lines when the instruction size is larger than the cache size and a difference between the instruction size and the cache size is larger than the threshold; wherein the first number is greater than the second number.
    Type: Grant
    Filed: April 16, 2012
    Date of Patent: March 31, 2015
    Assignee: VIA Technologies, Inc.
    Inventors: Bingxu Gao, Xian Chen
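The three-way locking decision in the claim above can be sketched as a small policy function. The concrete counts (with the first number greater than the second) are left implementation-defined by the claim, so they are parameters here; the handling of the exact-fit boundary is an assumption.

```python
def lines_to_lock(instruction_size, cache_size, cache_line_count,
                  threshold, first_number, second_number):
    """Sketch of the claimed policy: lock everything when the
    instructions fit, more lines on a small overflow, fewer on a
    large one (first_number > second_number)."""
    if instruction_size <= cache_size:
        return cache_line_count      # instructions fit: lock the whole cache
    if instruction_size - cache_size <= threshold:
        return first_number          # small overflow: lock more lines
    return second_number             # large overflow: lock fewer lines
```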
  • Publication number: 20150089140
    Abstract: In a write by peer-reference, a storage device client writes a data block to a target storage device in the storage system by sending a write request to the target storage device, the write request specifying information used to obtain the data block from a source storage device in the storage system. The target storage device sends a read request to the source storage device for the data block. The source storage device sends the data block to the target storage device, which then writes the data block to the target storage device. The data block is thus written to the target storage device without the storage device client transmitting the data block itself to the target storage device.
    Type: Application
    Filed: September 18, 2014
    Publication date: March 26, 2015
    Inventors: Vijay Sridharan, Alexander Tsukerman, Jia Shi, Kothanda Umamageswaran
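The peer-reference flow above, where the client sends only a reference and the target pulls the block from the source itself, can be sketched as follows. The class and its API are hypothetical stand-ins for the storage devices.

```python
class StorageDevice:
    """Sketch of write by peer-reference: the target device reads the
    block from the source device directly, so the client never
    transmits the data itself."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}  # block id -> data

    def read(self, block_id):
        return self.blocks[block_id]

    def write_by_reference(self, block_id, source):
        # Target issues its own read request to the source device,
        # then writes the returned block locally.
        self.blocks[block_id] = source.read(block_id)
```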
  • Publication number: 20150081975
    Abstract: Method, process, and apparatus to efficiently store, read, and/or process syllables of word data. A portion of a data word, which includes multiple syllables, may be read by a computer processor. The processor may read a first syllable of the data word from a first memory. The processor may read a second syllable of the data word from a second portion of memory. The second syllable may include bits which are less critical than the bits of the first syllable. The second memory may be distinct from the first memory based on one or more physical attributes.
    Type: Application
    Filed: November 26, 2014
    Publication date: March 19, 2015
    Inventors: Lutz Naethke, Axel Borkowski, Bert Bretschneider, Kyriakos A. Stavrou, Rainer Theuer
  • Patent number: 8977816
    Abstract: A cache and disk management method is provided. In the cache and disk management method, a command to delete all valid data stored in a cache, or specific data corresponding to a part of the valid data may be transmitted to a plurality of member disks. That is, all of the valid data or the specific data may exist in the cache only, and may be deleted from the plurality of member disks. Accordingly, the plurality of member disks may secure more space, an internal copy overhead may be reduced, and more particularly, solid state disks may achieve better performance.
    Type: Grant
    Filed: December 23, 2009
    Date of Patent: March 10, 2015
    Assignee: OCZ Storage Solutions Inc.
    Inventor: Soo Gil Jeong
  • Patent number: 8977815
    Abstract: A processing pipeline 6, 8, 10, 12 is provided with a main query stage 20 and a fetch stage 22. A buffer 24 stores program instructions which have missed within a cache memory 14. Query generation circuitry within the main query stage 20 and within a buffer query stage 26 serve to concurrently generate a main query request and a buffer query request sent to the cache memory 14. The cache memory returns a main query response and a buffer query response. Arbitration circuitry 28 controls multiplexers 30, 32 and 34 to direct the program instruction at the main query stage 20, and the program instruction stored within the buffer 24 and the buffer query stage 26 to pass either to the fetch stage 22 or to the buffer 24. The multiplexer 30 can also select a new instruction to be passed to the main query stage 20.
    Type: Grant
    Filed: November 29, 2010
    Date of Patent: March 10, 2015
    Assignee: ARM Limited
    Inventors: Frode Heggelund, Rune Holm, Andreas Due Engh-Halstvedt, Edvard Feilding