Addressing Cache Memories Patents (Class 711/3)
-
Patent number: 10073981
Abstract: A number of transmissions of secure data communicated between a secure trusted device and an unsecure untrusted device in a DBMS is controlled. The data is communicated for database transaction processing in the secure trusted device. The number of transmissions may be controlled by receiving, from the untrusted device, an encrypted key value of a key and a representation of an index of a B-tree structure, decrypting, at the trusted device, the key and one or more encrypted index values, and initiating a transmission of a pointer value that identifies a lookup position in the index for the key. The index comprises secure, encrypted index values. Other optimizations for secure processing are also described, including controlling available computation resources on a secure trusted device in a DBMS and controlling transmissions of secure data that is communicated between a secure trusted device and an unsecure untrusted device in a DBMS.
Type: Grant
Filed: October 9, 2015
Date of Patent: September 11, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arvind Arasu, Kenneth Eguro, Manas Rajendra Joglekar, Raghav Kaushik, Donald Kossmann, Ravishankar Ramamurthy
-
Patent number: 9984003
Abstract: A mapping processing method and apparatus for a cache address, where the method includes acquiring a physical address corresponding to an access address sent by a processing core, where the physical address includes a physical page number (PPN) and a page offset, and mapping the physical address to a Cache address, where the Cache address includes a Cache set index 1, a Cache tag, a Cache set index 2, and a Cache block offset in sequence, where the Cache set index 1 with a high-order bit and the Cache set index 2 with a low-order bit together form a Cache set index, and the Cache set index 1 falls within a range of the PPN. Some bits of the PPN of a huge page are mapped to a set index of the Cache so that the bits can be colored by an operating system.
Type: Grant
Filed: September 6, 2016
Date of Patent: May 29, 2018
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Zehan Cui, Licheng Chen, Mingyu Chen
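For illustration, a minimal C sketch of the field split this abstract describes, using hypothetical widths (64-byte blocks, a 10-bit set index split into 4 high + 6 low bits, a 20-bit tag, 4 KB base pages); the actual widths depend on the cache and page geometry claimed in the patent:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical geometry. Field order (MSB to LSB) follows the abstract:
 * set index 1 | tag | set index 2 | block offset. */
#define BLOCK_OFFSET_BITS 6
#define INDEX2_BITS       6    /* low-order set-index bits, inside the page offset */
#define TAG_BITS          20
#define INDEX1_BITS       4    /* high-order set-index bits, inside the PPN */

static uint32_t bits(uint64_t v, unsigned lsb, unsigned width) {
    return (uint32_t)((v >> lsb) & ((1ull << width) - 1));
}

int main(void) {
    uint64_t paddr = 0x3FAB1C2C0ull;                     /* example physical address */

    uint32_t blk_off = bits(paddr, 0, BLOCK_OFFSET_BITS);
    uint32_t index2  = bits(paddr, BLOCK_OFFSET_BITS, INDEX2_BITS);
    uint32_t tag     = bits(paddr, BLOCK_OFFSET_BITS + INDEX2_BITS, TAG_BITS);
    uint32_t index1  = bits(paddr, BLOCK_OFFSET_BITS + INDEX2_BITS + TAG_BITS,
                            INDEX1_BITS);                /* taken from the PPN */

    /* index1 (page-colorable by the OS, since it lies in the PPN) and index2
     * together form the full set index. */
    uint32_t set = (index1 << INDEX2_BITS) | index2;

    printf("tag=0x%x set=%u block offset=%u\n", tag, set, blk_off);
    return 0;
}
```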
-
Patent number: 9934159
Abstract: What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated is first obtained and an initial origin address of a translation table of the hierarchy of translation tables is obtained. Based on the obtained initial origin, a segment table entry is obtained. The segment table entry is configured to contain a format control and access validity fields. If the format control and access validity fields are enabled, the segment table entry further contains an access control field, a fetch protection field, and a segment-frame absolute address. Store operations are permitted only if the access control field matches a program access key provided by any one of a Program Status Word or an operand of a program instruction being emulated. Fetch operations are permitted if the program access key associated with the virtual address is equal to the segment access control field or the fetch protection field is not enabled.
Type: Grant
Filed: May 20, 2016
Date of Patent: April 3, 2018
Assignee: International Business Machines Corporation
Inventors: Dan F. Greiner, Charles W. Gainey, Jr., Lisa C. Heller, Damian L. Osisek, Erwin Pfeffer, Timothy J. Slegel, Charles F. Webb
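A minimal sketch of the two permission rules stated in the abstract, with hypothetical field and function names (the architecture's actual key width and entry encoding are not modeled):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical view of the relevant segment-table-entry fields. */
struct segment_entry {
    uint8_t access_control;   /* segment access-control field (key)  */
    bool    fetch_protection; /* fetch-protection field              */
};

/* Store is permitted only when the program access key matches the field. */
static bool store_permitted(const struct segment_entry *e, uint8_t program_key) {
    return e->access_control == program_key;
}

/* Fetch is permitted when the keys match or fetch protection is not enabled. */
static bool fetch_permitted(const struct segment_entry *e, uint8_t program_key) {
    return e->access_control == program_key || !e->fetch_protection;
}

int main(void) {
    struct segment_entry e = { .access_control = 0x8, .fetch_protection = true };
    printf("store ok: %d, fetch ok: %d\n",
           store_permitted(&e, 0x6), fetch_permitted(&e, 0x8));
    return 0;
}
```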
-
Patent number: 9898415
Abstract: A prefetch unit generates a prefetch address in response to an address associated with a memory read request received from the first or second cache. The prefetch unit includes a prefetch buffer that is arranged to store the prefetch address in an address buffer of a selected slot of the prefetch buffer, where each slot of the prefetch unit includes a buffer for storing a prefetch address, and two sub-slots. Each sub-slot includes a data buffer for storing data that is prefetched using the prefetch address stored in the slot, and one of the two sub-slots of the slot is selected in response to a portion of the generated prefetch address. Subsequent hits on the prefetcher result in returning prefetched data to the requestor in response to a subsequent memory read request received after the initial received memory read request.
Type: Grant
Filed: September 15, 2011
Date of Patent: February 20, 2018
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Kai Chirca, Joseph R M Zbiciak, Matthew D Pierson
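A rough data-structure sketch of a slot that holds one prefetch address and two sub-slots, where a single address bit (chosen arbitrarily here) selects the sub-slot; the slot count, line size, and selecting bit are all assumptions:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NUM_SLOTS  8
#define LINE_BYTES 64   /* hypothetical size of one prefetched data buffer */

/* Each slot holds one prefetch address and two sub-slots of prefetched data. */
struct sub_slot { uint8_t data[LINE_BYTES]; int valid; };
struct slot     { uint64_t prefetch_addr; int addr_valid; struct sub_slot sub[2]; };

static struct slot prefetch_buffer[NUM_SLOTS];

/* One low-order bit of the generated prefetch address picks the sub-slot. */
static struct sub_slot *select_sub_slot(struct slot *s, uint64_t prefetch_addr) {
    return &s->sub[(prefetch_addr >> 6) & 1];
}

int main(void) {
    struct slot *s = &prefetch_buffer[3];
    s->prefetch_addr = 0x1040;
    s->addr_valid = 1;
    struct sub_slot *ss = select_sub_slot(s, s->prefetch_addr);
    memset(ss->data, 0xAB, sizeof ss->data);   /* stand-in for the prefetched line */
    ss->valid = 1;
    printf("sub-slot %ld filled\n", (long)(ss - s->sub));
    return 0;
}
```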
-
Patent number: 9874498
Abstract: An in-vitro diagnostic apparatus includes a loading unit which receives a test medium including a test object; a first clock including first time information that is set as a standard clock time and used to determine whether an expiration date of the test medium has passed; a second clock including second time information that can be set as an arbitrary time; a sensor which acquires the expiration date of the test medium; a controller which determines whether the expiration date of the test medium has passed, based on the first time information; and an analyzer which analyzes the test object based on the second time information when it is determined that the expiration date of the test medium has not yet passed.
Type: Grant
Filed: August 12, 2015
Date of Patent: January 23, 2018
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Im-ho Shin, Eun-jeong Jang, Chung-ung Kim
-
Patent number: 9853831
Abstract: A control module for use within a control network, the control module comprising: interface circuitry for enabling communication with an external device; communication means configured to communicate with the external device over the control network by communication with the interface circuitry; coupling means configured to mechanically couple the control module to an adjacent control module and provide a data connection between the communication means and the adjacent module; and an electrical isolation in the data connection between the communication means and the coupling means.
Type: Grant
Filed: February 5, 2014
Date of Patent: December 26, 2017
Assignee: NIDEC CONTROL TECHNIQUES LIMITED
Inventors: Richard Mark Wain, Bryce Trevor Beeston, Luke Duane Orehawa, James Robert Douglas Kirkwood
-
Patent number: 9813235
Abstract: Technology is generally described for improving resistance to cache timing attacks made on block cipher encryption implementations. In some examples, the technology can include identifying one or more tunable parameters of the block cipher encryption algorithm; creating multiple encryption algorithm implementations by varying one or more of the parameter values; causing a computing system to encrypt data using the implementations; measuring average execution times at the computing system for the implementations; subjecting the implementations to a cache timing attack; measuring average execution times at the computing system for the implementations subjected to the cache timing attack; computing a time difference between the average execution times for the implementations when not subjected and when subjected to a cache timing attack; selecting an implementation having a lower time difference; and using the selected implementation for a subsequent encryption operation.
Type: Grant
Filed: April 25, 2013
Date of Patent: November 7, 2017
Assignee: INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR
Inventors: Debdeep Mukhopadhyay, Chester Dominic Rebeiro
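A toy illustration of the selection step: given measured average times for several implementations with and without an attack running (the numbers below are invented), pick the one whose timing shifts least, since a small shift leaks less to the attacker:

```c
#include <stdio.h>
#include <math.h>

/* Hypothetical measurements (microseconds) for three implementations that
 * differ only in the tunable parameter values of the cipher. */
struct impl_timing {
    const char *name;
    double avg_time_normal;    /* average execution time, no attack running    */
    double avg_time_attacked;  /* average execution time under a timing attack */
};

int main(void) {
    struct impl_timing impls[] = {
        { "impl-A", 41.2, 55.9 },
        { "impl-B", 43.7, 44.1 },
        { "impl-C", 40.8, 61.3 },
    };
    int best = 0;
    double best_diff = INFINITY;

    /* Select the implementation with the lowest time difference. */
    for (int i = 0; i < 3; i++) {
        double diff = fabs(impls[i].avg_time_attacked - impls[i].avg_time_normal);
        if (diff < best_diff) { best_diff = diff; best = i; }
    }
    printf("selected %s (time difference %.1f us)\n", impls[best].name, best_diff);
    return 0;
}
```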
-
Patent number: 9793918
Abstract: A digital data storage and retrieval system. The system has a first memory for storing a plurality of data quantities, and each data quantity, in the plurality of data quantities, consists of a first number of bits. The system also has a second memory for storing a plurality of compressed data quantities, and each compressed data quantity, in the plurality of compressed data quantities, consists of a second number of bits that is less than the first number of bits. The system also has circuitry for reading data quantities from the first memory and circuitry for writing compressed data quantities, corresponding to respective read data quantities, to non-sequential addresses in the second memory. The system also may include circuitry for reading compressed data quantities from the second memory, and circuitry for writing decompressed data quantities, corresponding to respective read compressed data quantities, to non-sequential addresses in the first memory.
Type: Grant
Filed: July 31, 2015
Date of Patent: October 17, 2017
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Sundarrajan Rangachari, Desmond Pravin Martin Fernandes, Rakesh Channabasappa Yaraduyathinahalli
-
Patent number: 9772949
Abstract: Aspects of the present disclosure involve a level two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. In particular, any data that is scheduled to be evicted or otherwise removed from a level-one cache is stored in the level-two cache with corresponding metadata in a manner that is quickly retrievable.
Type: Grant
Filed: October 18, 2012
Date of Patent: September 26, 2017
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Mark Maybee, Mark J. Musante, Victor Latushkin
-
Patent number: 9753883
Abstract: A Network Interface Device (NID) of a web hosting server implements multiple virtual NIDs. A virtual NID is configured by configuration information in an appropriate one of a set of smaller blocks in a high-speed memory on the NID. There is a smaller block for each virtual NID. A virtual machine on the host can configure its virtual NID by writing configuration information into a larger block in PCIe address space. Circuitry on the NID detects that the PCIe write is into address space occupied by the larger blocks. If the write is into this space, then address translation circuitry converts the PCIe address into a smaller address that maps to the appropriate one of the smaller blocks associated with the virtual NID to be configured. If the PCIe write is detected not to be an access of a larger block, then the NID does not perform the address translation.
Type: Grant
Filed: February 4, 2014
Date of Patent: September 5, 2017
Assignee: Netronome Systems, Inc.
Inventors: Gavin J. Stark, Rolf Neugebauer
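A simplified sketch of the address check and translation, with invented base address and block sizes (a 64 KB PCIe block per virtual NID, a 1 KB configuration block on the NID); the fold into the small block is a placeholder for the NID's real register layout:

```c
#include <stdint.h>
#include <stdio.h>

#define VNID_WINDOW_BASE  0x08000000ull
#define NUM_VNIDS         64
#define LARGE_BLOCK_SIZE  0x10000ull   /* 64 KB PCIe block per virtual NID  */
#define SMALL_BLOCK_SIZE  0x400ull     /* 1 KB config block per virtual NID */

/* Returns 1 and fills *nid_addr when the PCIe write targets a virtual-NID
 * block; otherwise returns 0 and no translation is performed. */
static int translate(uint64_t pcie_addr, uint32_t *nid_addr) {
    uint64_t end = VNID_WINDOW_BASE + (uint64_t)NUM_VNIDS * LARGE_BLOCK_SIZE;
    if (pcie_addr < VNID_WINDOW_BASE || pcie_addr >= end)
        return 0;                                  /* not a vNID config access */
    uint64_t off   = pcie_addr - VNID_WINDOW_BASE;
    uint64_t vnid  = off / LARGE_BLOCK_SIZE;       /* which virtual NID         */
    uint64_t inner = off % LARGE_BLOCK_SIZE;
    /* Placeholder fold into the smaller block; the real mapping depends on
     * the NID's register layout. */
    *nid_addr = (uint32_t)(vnid * SMALL_BLOCK_SIZE + (inner % SMALL_BLOCK_SIZE));
    return 1;
}

int main(void) {
    uint32_t a;
    if (translate(0x08020010ull, &a))
        printf("maps to NID memory address 0x%x\n", a);
    return 0;
}
```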
-
Patent number: 9733981
Abstract: A data processing system includes a processor core and a hardware module (an ordering scope manager). The processor core performs tasks on data packets. The ordering scope manager stores a first value in a first storage location. The first value indicates that exclusive execution of a first task in a first ordering scope is enabled. In response to a relinquish indicator being received, the ordering scope manager stores a second value in the first storage location. The second value indicates that the exclusive execution of the first task in the first ordering scope is disabled.
Type: Grant
Filed: June 10, 2014
Date of Patent: August 15, 2017
Assignee: NXP USA, Inc.
Inventors: Tommi M. Jokinen, Michael Kardonik, David B. Kramer, Peter W. Newton, John F. Pillar, Kun Xu
-
Patent number: 9686152
Abstract: Techniques to track resource usage statistics per transaction across multiple layers of protocols and across multiple threads, processes and/or devices are disclosed. In one embodiment, for example, a technique may comprise assigning an activity context to a request at the beginning of a first stage, where the activity context has an initial set of properties. Values may be assigned to the properties in the initial set during the first stage. The value of a property may be stored on a data store local to the first stage. The activity context may be transferred to a second stage when the request begins the second stage. The transferred activity context may include a property from the initial set of properties. The stored values may be analyzed to determine a resource usage statistic. Other embodiments are described and claimed.
Type: Grant
Filed: January 27, 2012
Date of Patent: June 20, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Raghu Kolluru, David Nunez Tejerina, Siddhartha Mathur, James Kleewein, Adrian Hamza, Ozan Ozhan
-
Patent number: 9632856
Abstract: A memory controller includes a controller input/output circuit configured to output a first command to read first data, and output a second command to read an error corrected portion of the first data. A memory device includes: an error detector, a data storage circuit and an error correction circuit. The error detector is configured to detect a number of error bits in data read from a memory cell in response to a first command. The data storage circuit is configured to store the read data if the detected number of error bits is greater than or equal to a first threshold value. The error correction circuit is configured to correct the stored data.
Type: Grant
Filed: January 11, 2016
Date of Patent: April 25, 2017
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hoi-ju Chung, Su-a Kim, Mu-jin Seo, Hak-soo Yu, Jae-youn Youn, Hyo-jin Choi
-
Patent number: 9632801
Abstract: Conversion of an array of structures (AOS) to a structure of arrays (SOA) improves the efficiency of transfer from the AOS to the SOA. A similar technique can be used to convert efficiently from an SOA to an AOS. The controller performing the conversion computes a partition size as the highest common factor between the structure size of structures in the AOS and the number of banks in a first memory device, and transfers data based on the partition size, rather than on the structure size. The controller can read a partition size number of elements from multiple different structures to ensure that full data transfer bandwidth is used for each transfer.
Type: Grant
Filed: April 9, 2014
Date of Patent: April 25, 2017
Assignee: Intel Corporation
Inventors: Supratim Pal, Murali Sundaresan
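The partition-size computation is just a greatest common divisor (highest common factor); a small sketch with invented numbers:

```c
#include <stdio.h>

/* Greatest common divisor (highest common factor). */
static unsigned hcf(unsigned a, unsigned b) {
    while (b) { unsigned t = a % b; a = b; b = t; }
    return a;
}

int main(void) {
    /* Hypothetical numbers: structures of 12 elements, a memory with 8 banks. */
    unsigned struct_size = 12;
    unsigned num_banks   = 8;
    unsigned partition   = hcf(struct_size, num_banks);   /* 4 */

    /* Transfers are issued partition-sized pieces at a time, so consecutive
     * pieces drawn from different structures keep all banks busy instead of
     * letting the (bank-unfriendly) structure size dictate the transfer width. */
    printf("partition size = %u elements per transfer\n", partition);
    return 0;
}
```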
-
Patent number: 9588826
Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
Type: Grant
Filed: December 10, 2014
Date of Patent: March 7, 2017
Assignee: Intel Corporation
Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
-
Patent number: 9563536
Abstract: Without using a high-level programming language source code, a set of sync points is identified in an initial binary code. The initial binary code is executed at a first system. A value of the user data is captured from a user space of a memory as a baseline of the user data. A set of comparative sync points is identified in a second binary code. During an execution of the second binary code, a second value of the user data from a second user space of a second memory is found to fail in matching the baseline of the user data. An instruction before the comparative sync point in the second binary code is identified as a location of a faulty operation due to the failing.
Type: Grant
Filed: October 19, 2015
Date of Patent: February 7, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Steven Cooper, Reid T. Copeland, Toshihiko Koju, Roger H. E. Pett, Trong Truong
-
Patent number: 9542409
Abstract: An apparatus and a method for maintaining a file system is described. An allocation module receives a request from a kernel module to allocate a block of the file system to a file. The allocation module examines another block of the file system to determine whether that block contains the same data as the block. The allocation module also determines an external reference count of the other block containing the same data. The other block is then allocated to the file and the external reference count is updated accordingly.
Type: Grant
Filed: November 26, 2008
Date of Patent: January 10, 2017
Assignee: Red Hat, Inc.
Inventor: James Paul Schneider
-
Patent number: 9542708
Abstract: An event server adapted to receive events from an input stream and produce an output event stream. The event server uses a processor using code in an event processing language to process the events. The event server obtains input events from, and/or produces output events to, a cache.
Type: Grant
Filed: December 10, 2008
Date of Patent: January 10, 2017
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Andrew Piper, Alexandre de Castro Alves, Seth White
-
Patent number: 9501351
Abstract: A data storage device includes a storage memory device; a signal generation block suitable for generating control signals to be provided to the storage memory device; and an error correction code (ECC) block suitable for ECC-encoding data to be stored in the storage memory device, wherein the ECC block operates before the signal generation block.
Type: Grant
Filed: September 29, 2014
Date of Patent: November 22, 2016
Assignee: SK Hynix Inc.
Inventors: Jae Woo Kim, Joong Hyun An, Kwang Hyun Kim
-
Patent number: 9430198
Abstract: A data processing method and apparatus, which relate to the computer field and are capable of effectively improving scalability of a database system. The data processing method includes: receiving source code of an external routine, where the source code of the external routine is compiled by using an advanced programming language; compiling the source code to obtain intermediate code, where the intermediate code is a byte stream identifiable to a virtual machine on any operating platform; converting, according to an instruction set on the operating platform, the intermediate code into machine code capable of running on the operating platform; and storing the machine code to a database. The data processing method and apparatus provided by the embodiments of the present invention are used to process data.
Type: Grant
Filed: June 29, 2015
Date of Patent: August 30, 2016
Assignee: Huawei Technologies Co., Ltd.
Inventors: Dongwang Sun, Jijun Wen, Chuanting Wang
-
Patent number: 9430277
Abstract: A method is described for scheduling in an intelligent manner a plurality of threads on a processor having a plurality of cores and a shared last level cache (LLC). In the method, a first and second scenario having a corresponding first and second combination of threads are identified. The cache occupancies of each of the threads for each of the scenarios are predicted. The predicted cache occupancies are a representation of the amount of the LLC that each of the threads would occupy when running with the other threads on the processor according to the particular scenario. One of the scenarios is identified that results in the least objectionable impacts on all threads, the least objectionable impacts taking into account the impact resulting from the predicted cache occupancies. Finally, a scheduling decision is made according to the one of the scenarios that results in the least objectionable impacts.
Type: Grant
Filed: March 29, 2013
Date of Patent: August 30, 2016
Assignee: VMware, Inc.
Inventors: Puneet Zaroo, Richard West, Carl A. Waldspurger, Xiao Zhang
-
Patent number: 9396135
Abstract: A physical cache memory that is divided into one or more virtual segments using multiple circuits to decode addresses is provided. An address mapping and an address decoder are selected for each virtual segment. The address mapping comprises two or more address bits as set indexes for the virtual segment and the selected address bits are different for each virtual segment. A cache address decoder is provided for each virtual segment to enhance execution performance of programs or to protect against a side channel attack. Each physical cache address decoder comprises an address mask register to extract the selected address bits to locate objects in the virtual segment. The foregoing can be implemented as a method or apparatus for protecting against a side channel attack.
Type: Grant
Filed: April 27, 2012
Date of Patent: July 19, 2016
Assignee: University of North Texas
Inventor: Krishna M. Kavi
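A software approximation of a per-segment address decoder with an address mask register: the mask picks which address bits form the set index, so different virtual segments can index the cache with different bits (the masks below are arbitrary):

```c
#include <stdint.h>
#include <stdio.h>

/* Gather the address bits selected by 'mask' (lowest selected bit first)
 * into a compact set index, as a per-segment address decoder might. */
static uint32_t extract_index(uint64_t addr, uint64_t mask) {
    uint32_t index = 0;
    int out = 0;
    for (int bit = 0; bit < 64; bit++) {
        if (mask & (1ull << bit)) {
            index |= (uint32_t)((addr >> bit) & 1ull) << out;
            out++;
        }
    }
    return index;
}

int main(void) {
    /* Two hypothetical virtual segments using different index bits, so the
     * same address maps to different sets in different segments. */
    uint64_t mask_seg0 = 0x00000FC0ull;  /* bits 6..11  */
    uint64_t mask_seg1 = 0x0003F000ull;  /* bits 12..17 */
    uint64_t addr = 0x12345678ull;

    printf("segment 0 set index: %u\n", extract_index(addr, mask_seg0));
    printf("segment 1 set index: %u\n", extract_index(addr, mask_seg1));
    return 0;
}
```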
-
Patent number: 9389908
Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs), multiple reference values, and multiple mask values from memory. A selecting circuit within the TM uses a starting bit position and a mask size to select a portion of the IV. The portion of the IV is a lookup key value (LKV). The LKV is masked by each mask value thereby generating multiple masked values. Each masked value is compared to a reference value thereby generating multiple comparison values. A lookup table generates a selector value based upon the comparison values. A result value is selected based on the selector value. The selected result value is then communicated to the processor via the bus.
Type: Grant
Filed: December 31, 2014
Date of Patent: July 12, 2016
Assignee: Netronome Systems, Inc.
Inventor: Gavin J. Stark
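A software approximation of the lookup: the hardware drives a lookup table from all of the comparison values, which the sketch below simplifies to a first-match scan; the word layout and the fallback result are assumptions:

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_ENTRIES 4

/* One memory word as described: result values, reference values, mask values. */
struct lookup_word {
    uint32_t result[NUM_ENTRIES];
    uint32_t reference[NUM_ENTRIES];
    uint32_t mask[NUM_ENTRIES];
    uint32_t default_result;     /* hypothetical fallback when nothing matches */
};

/* Select the lookup key from the input value, mask it with each mask value,
 * compare against the reference values, and return the matching result. */
static uint32_t lookup(const struct lookup_word *w, uint32_t iv,
                       unsigned start_bit, unsigned key_size) {
    uint32_t lkv = (iv >> start_bit) & ((1u << key_size) - 1);
    for (int i = 0; i < NUM_ENTRIES; i++) {
        if ((lkv & w->mask[i]) == w->reference[i])
            return w->result[i];
    }
    return w->default_result;
}

int main(void) {
    struct lookup_word w = {
        .result    = { 100, 200, 300, 400 },
        .reference = { 0x1, 0x2, 0x30, 0x40 },
        .mask      = { 0xF, 0xF, 0xF0, 0xF0 },
        .default_result = 0,
    };
    printf("result = %u\n", lookup(&w, 0x00003210u, 4, 8));
    return 0;
}
```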
-
Patent number: 9383943
Abstract: An alignment data structure is used to map a logical data block start address to a physical data block start address dynamically, to service a client data access request. A separate alignment data structure can be provided for each data object managed by the storage system. Each such alignment data structure can be stored in, or referenced by a pointer in, the inode of the corresponding data object. A consequence of the mapping is that certain physical storage medium regions are not mapped to any logical data blocks. These unmapped regions may be visible only to the file system layer and layers that reside between the file system layer and the mass storage subsystem. They can be used, if desired, to store system information, i.e., information that is not visible to any storage client.
Type: Grant
Filed: October 1, 2013
Date of Patent: July 5, 2016
Assignee: NETAPP, INC.
Inventors: Shravan Gaonkar, Rahul Narayan Iyer, Deepak Kenchammana-hosekote
-
Patent number: 9378128
Abstract: What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated is first obtained and an initial origin address of a translation table of the hierarchy of translation tables is obtained. Based on the obtained initial origin, a segment table entry is obtained. The segment table entry is configured to contain a format control and access validity fields. If the format control and access validity fields are enabled, the segment table entry further contains an access control field, a fetch protection field, and a segment-frame absolute address. Store operations are permitted only if the access control field matches a program access key provided by any one of a Program Status Word or an operand of a program instruction being emulated. Fetch operations are permitted if the program access key associated with the virtual address is equal to the segment access control field or the fetch protection field is not enabled.
Type: Grant
Filed: February 27, 2015
Date of Patent: June 28, 2016
Assignee: International Business Machines Corporation
Inventors: Dan F. Greiner, Charles W. Gainey, Jr., Lisa C. Heller, Damian L. Osisek, Erwin Pfeffer, Timothy J. Slegel, Charles F. Webb
-
Patent number: 9348778
Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address, a starting bit position, and a mask size. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs) and multiple key values from memory. Each key value indicates a single RV to be output by the TM. A selecting circuit within the TM uses the starting bit position and mask size to select a portion of the IV. The portion of the IV is a key selector value. A key value is selected based upon the key selector value. A RV is selected based upon the key value. The key value is selected by a key selection circuit. The RV is selected by a result value selection circuit.
Type: Grant
Filed: February 25, 2015
Date of Patent: May 24, 2016
Assignee: Netronome Systems, Inc.
Inventor: Gavin J. Stark
-
Patent number: 9311004
Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs), multiple reference values, and multiple prefix values from memory. A selecting circuit within the TM uses a starting bit position and a mask size to select a portion of the IV. The portion of the IV is a lookup key value (LKV). Mask values are generated based on the prefix values. The LKV is masked by each mask value thereby generating multiple masked values that are compared to the reference values. Based on the comparison a lookup table generates a selector value that is used to select a result value. The selected result value is then communicated to the processor via the bus.
Type: Grant
Filed: December 31, 2014
Date of Patent: April 12, 2016
Assignee: Netronome Systems, Inc.
Inventor: Gavin J. Stark
-
Patent number: 9298643
Abstract: This invention optimizes DMA writes to directly addressable level two memory that is cached in level one when the line is valid and dirty. When the level two controller detects that a line is valid and dirty in level one, the level two memory need not update its copy of the data. Level one memory will replace the level two copy with a victim writeback at a future time. Thus the level two memory need not store a copy of the write. This limits the number of DMA writes to level two directly addressable memory and thus improves performance and minimizes dynamic power. This also frees the level two memory for other master/requestors.
Type: Grant
Filed: June 2, 2015
Date of Patent: March 29, 2016
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Jonathan (Son) Hung Tran, Raguram Damodaran, Abhijeet Ashok Chachad, Joseph Raymond Michael Zbiciak
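A sketch of the decision the level-two controller makes on an incoming DMA write; the forwarding of the new data to the level-one copy is an assumption based on the victim-writeback behavior the abstract describes, and the directory query is a stub:

```c
#include <stdbool.h>
#include <stdio.h>

struct l1_line_state { bool valid; bool dirty; };

/* Hypothetical query of the level-one state tracked by the L2 controller. */
static struct l1_line_state l1_snoop(unsigned long addr) {
    (void)addr;
    return (struct l1_line_state){ .valid = true, .dirty = true };
}

/* DMA write arriving at directly addressable L2 memory: if L1 holds the line
 * valid and dirty, skip the L2 update -- the eventual L1 victim writeback will
 * overwrite that copy anyway (the new data is assumed to be merged into L1). */
static void dma_write(unsigned long addr, const void *data, unsigned len) {
    struct l1_line_state st = l1_snoop(addr);
    if (st.valid && st.dirty) {
        printf("skip L2 update for %u bytes at %#lx (L1 line valid and dirty)\n",
               len, addr);
        return;
    }
    printf("write %u bytes at %#lx into L2 memory\n", len, addr);
    (void)data;
}

int main(void) {
    char buf[64] = {0};
    dma_write(0x800000, buf, sizeof buf);
    return 0;
}
```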
-
Patent number: 9275003
Abstract: A network interface controller atomic operation unit and a network interface control method comprising, in an atomic operation unit of a network interface controller, using a write-through cache and employing a rate-limiting functional unit.
Type: Grant
Filed: October 1, 2008
Date of Patent: March 1, 2016
Assignee: Sandia Corporation
Inventors: Karl Scott Hemmert, Keith D. Underwood, Michael J. Levenhagen
-
Patent number: 9250913
Abstract: Embodiments relate to collision-based alternate hashing. An aspect includes receiving an incoming instruction address. Another aspect includes determining whether an entry for the incoming instruction address exists in a history table based on a hash of the incoming instruction address. Another aspect includes, based on determining that the entry for the incoming instruction address exists in the history table, determining whether the incoming instruction address matches an address tag in the determined entry. Another aspect includes, based on determining that the incoming instruction address does not match the address tag in the determined entry, determining whether a collision exists for the incoming instruction address. Another aspect includes, based on determining that the collision exists for the incoming instruction address, activating alternate hashing for the incoming instruction address using an alternate hash buffer.
Type: Grant
Filed: June 15, 2012
Date of Patent: February 2, 2016
Assignee: International Business Machines Corporation
Inventors: Khary J. Alexander, Ilia Averbouch, Ariel J. Birnbaum, Jonathan T. Hsieh, Chung-Lung K. Shum
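A simplified model of the behavior: a tag mismatch at a history-table slot is treated as a collision and switches that slot to an alternate hash buffer (the hash functions, table size, and per-slot flag are invented for illustration):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define TABLE_SIZE 256   /* hypothetical history-table size */

struct entry { bool valid; uint64_t tag; };

static struct entry history[TABLE_SIZE];
static struct entry alt_buffer[TABLE_SIZE];
static bool use_alt[TABLE_SIZE];            /* collision observed at this slot */

static unsigned primary_hash(uint64_t addr)   { return (unsigned)(addr >> 2) % TABLE_SIZE; }
static unsigned alternate_hash(uint64_t addr) { return (unsigned)((addr >> 2) ^ (addr >> 10)) % TABLE_SIZE; }

/* Look up an incoming instruction address, switching a slot to alternate
 * hashing once a tag mismatch (collision) is observed there. */
static struct entry *lookup(uint64_t addr) {
    unsigned h = primary_hash(addr);
    struct entry *e = &history[h];
    if (e->valid && e->tag != addr)          /* entry exists, address tag differs */
        use_alt[h] = true;                   /* collision: activate alternate hashing */
    if (use_alt[h])
        e = &alt_buffer[alternate_hash(addr)];
    if (!e->valid) { e->valid = true; e->tag = addr; }
    return e;
}

int main(void) {
    lookup(0x1000);
    lookup(0x1000 + 4 * TABLE_SIZE);   /* same primary slot, different tag */
    printf("slot %u uses alternate hashing: %d\n",
           primary_hash(0x1000), use_alt[primary_hash(0x1000)]);
    return 0;
}
```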
-
Patent number: 9235512
Abstract: A system, method, and computer program product are provided for GPU demand paging. In operation, input data is addressed in terms of a virtual address space. Additionally, the input data is organized into one or more pages of data. Further, the input data organized as the one or more pages of data is at least temporarily stored in a physical cache. In addition, access to the input data in the physical cache is facilitated.
Type: Grant
Filed: January 9, 2014
Date of Patent: January 12, 2016
Assignee: NVIDIA Corporation
Inventors: Andreas Dietrich, David K. McAllister, Heiko Friedrich, Konstantin Anatolievich Vostryakov, Steven Parker, James Lawrence Bigler, Russell Keith Morley
-
Patent number: 9208088
Abstract: A shared virtual memory management apparatus for ensuring cache coherence. When two or more cores request write permission to the same virtual memory page, the shared virtual memory management apparatus allocates a physical memory page for the cores to change data in the allocated physical memory page. Thereafter, changed data is updated in an original physical memory page, and accordingly it is feasible to achieve data coherence in a multi-core hardware environment that does not provide cache coherence.
Type: Grant
Filed: January 3, 2013
Date of Patent: December 8, 2015
Assignee: Seoul National University R&DB Foundation
Inventors: Jaejin Lee, Junghyun Kim
-
Patent number: 9201608
Abstract: Systems, methods, and devices for dynamically mapping and remapping memory when a portion of memory is activated or deactivated are provided. In accordance with an embodiment, an electronic device may include several memory banks, one or more processors, and a memory controller. The memory banks may store data in hardware memory locations and may be independently deactivated. The processors may request the data using physical memory addresses, and the memory controller may translate the physical addresses to hardware memory locations. The memory controller may use a first memory mapping function when a first number of memory banks is active and a second memory mapping function when a second number is active. When one of the memory banks is to be deactivated, the memory controller may copy data from only the memory bank that is to be deactivated to the active remainder of memory banks.
Type: Grant
Filed: July 14, 2014
Date of Patent: December 1, 2015
Assignee: Apple Inc.
Inventors: Ian C. Hendry, Rajabali Koduri, Jeffry E. Gonion
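A sketch of the idea of two mapping functions, chosen here so that only data in the bank being deactivated has to move; the concrete functions and the spare-space placement are invented:

```c
#include <stdint.h>
#include <stdio.h>

struct hw_loc { unsigned bank; uint64_t offset; };

/* First mapping function: interleave across all 4 banks. */
static struct hw_loc map_all4(uint64_t phys) {
    return (struct hw_loc){ .bank = (unsigned)(phys % 4), .offset = phys / 4 };
}

/* Second mapping function (bank 3 deactivated): identical for data that was
 * not in bank 3, and relocates bank-3 data into spare space spread over
 * banks 0-2, so only the deactivated bank's data needs to be copied. */
static struct hw_loc map_bank3_off(uint64_t phys, uint64_t spare_base) {
    if (phys % 4 != 3)
        return map_all4(phys);
    uint64_t idx = phys / 4;                 /* which bank-3 location this was */
    return (struct hw_loc){ .bank   = (unsigned)(idx % 3),
                            .offset = spare_base + idx / 3 };
}

int main(void) {
    uint64_t phys = 1003;                    /* maps to bank 3 under map_all4 */
    struct hw_loc before = map_all4(phys);
    struct hw_loc after  = map_bank3_off(phys, 0x100000);
    printf("phys %llu: bank %u -> bank %u\n",
           (unsigned long long)phys, before.bank, after.bank);
    return 0;
}
```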
-
Patent number: 9189408
Abstract: A system and method of offline annotation of future access are disclosed. According to one embodiment, a request is received at a storage system to read a portion of a file stored in the storage system. In response to the request, chunks of the file are cached in a cache memory of the storage system. In response to a request for cache space reclamation, the system then determines future requests to the file based in part on a next access auxiliary table (NAAT) associated with the file, which was created prior to receiving the request to read and stored in a persistent storage location of the storage system. Based on the determination, the system evicts from the cache memory at least one chunk of a read unit (RU) whose next access is a furthest among the cached chunks.
Type: Grant
Filed: August 31, 2012
Date of Patent: November 17, 2015
Assignee: EMC Corporation
Inventors: Frederick Douglis, Windsor W. Hsu, Xing Lin
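The eviction choice is essentially Belady's rule driven by the precomputed table; a minimal sketch of just that choice (the chunk layout and the NAAT lookup itself are not modeled):

```c
#include <stdio.h>
#include <limits.h>

#define NUM_CACHED 4

/* One cached chunk, plus its next access position taken from the file's
 * next-access auxiliary table (NAAT), built offline. */
struct cached_chunk {
    int  chunk_id;
    long next_access;   /* position of the next expected access; LONG_MAX = never */
};

/* Evict the chunk whose next access is furthest in the future. */
static int pick_victim(const struct cached_chunk *c, int n) {
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (c[i].next_access > c[victim].next_access)
            victim = i;
    return victim;
}

int main(void) {
    struct cached_chunk cached[NUM_CACHED] = {
        { 10, 120 }, { 11, 90 }, { 12, LONG_MAX }, { 13, 400 },
    };
    int v = pick_victim(cached, NUM_CACHED);
    printf("evict chunk %d (next access %ld)\n",
           cached[v].chunk_id, cached[v].next_access);
    return 0;
}
```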
-
Patent number: 9164904
Abstract: A method of accessing remote memory comprising receiving a request for access to a page from a computing device, adding an address of the accessed page to a recent list memory on the remote memory, associating a recent list group identifier to a number of addresses of accessed pages, transferring the requested page to the computing device with the recent list group identifier and temporarily maintaining a copy of the transferred page on the remote memory.
Type: Grant
Filed: August 28, 2012
Date of Patent: October 20, 2015
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Kevin T. Lim, Naveen Muralimanohar, Norman Paul Jouppi, Robert Schreiber
-
Patent number: 9146869
Abstract: A method and apparatus for state encoding of cache lines is described. Some embodiments of the method and apparatus support probing, in response to a first probe of a cache line in a first cache, a copy of the cache line in a second cache when the cache line is stale and the cache line is associated with a copy of the cache line stored in the second cache that can bypass notification of the first cache in response to modifying the copy of the cache line.
Type: Grant
Filed: December 5, 2012
Date of Patent: September 29, 2015
Assignee: Advanced Micro Devices, Inc.
Inventor: Robert Krick
-
Patent number: 9141544
Abstract: In a particular embodiment, a method of managing a cache memory includes, responsive to a cache size change command, changing a mode of operation of the cache memory to a write through/no allocate mode. The method also includes processing instructions associated with the cache memory while executing a cache clean operation when the mode of operation of the cache memory is the write through/no allocate mode. The method further includes, after completion of the cache clean operation, changing a size of the cache memory and changing the mode of operation of the cache to a mode other than the write through/no allocate mode.
Type: Grant
Filed: October 19, 2012
Date of Patent: September 22, 2015
Assignee: QUALCOMM Incorporated
Inventors: Manojkumar Pyla, Lucian Codrescu
-
Patent number: 9128848
Abstract: A system comprises a storage device, a cache coupled to the storage device and a metadata structure, coupled to the storage device and the cache, having metadata corresponding to each data location in the cache to control data promoted to the cache from the storage device.
Type: Grant
Filed: February 25, 2013
Date of Patent: September 8, 2015
Assignee: Intel Corporation
Inventor: Rayan Zachariassen
-
Patent number: 9110832
Abstract: The present disclosure includes methods, devices, and systems for object oriented memory in solid state devices. One embodiment of a method for object oriented memory in solid state devices includes accessing a defined set of data as a single object in an atomic operation manner, where the accessing is from a source other than a host. The embodiment also includes storing the defined set of data as the single object in a number of solid state memory blocks as formatted by a control component of a solid state device that includes the number of solid state memory blocks.
Type: Grant
Filed: May 5, 2014
Date of Patent: August 18, 2015
Assignee: Micron Technology, Inc.
Inventors: Peter Feeley, Neal A. Galbo, James Cooke, Victor Y. Tsai, Robert N. Leibowitz, William H. Radke
-
Patent number: 9110816
Abstract: Methods for accessing, storing and replacing data in a cache memory are provided, wherein a plurality of index bits and a plurality of tag bits at the cache memory are received. The plurality of index bits are processed to determine whether a matching index exists in the cache memory and the plurality of tag bits are processed to determine whether a matching tag exists in the cache memory, and a data line is retrieved from the cache memory if both a matching tag and a matching index exist in the cache memory. A random line in the cache memory can be replaced with a data line from a main memory, or evicted without replacement, based on the combination of index and tag misses, security contexts and protection bits. User-defined and/or vendor-defined replacement procedures can be utilized to replace data lines in the cache memory.
Type: Grant
Filed: September 27, 2013
Date of Patent: August 18, 2015
Assignee: Teleputers, LLC
Inventors: Ruby B. Lee, Zhenghong Wang
-
Patent number: 9098284
Abstract: A method and apparatus for disabling ways of a cache memory in response to history based usage patterns is herein described. Way predicting logic keeps track of cache accesses to the ways and determines whether accesses to some ways are to be disabled to save power, based upon way power signals having a logical state representing a predicted miss to the way. One or more counters associated with the ways count accesses, wherein a power signal is set to the logical state representing a predicted miss when one of said one or more counters reaches a saturation value. Control logic adjusts said one or more counters associated with the ways according to the accesses.
Type: Grant
Filed: December 2, 2014
Date of Patent: August 4, 2015
Assignee: Intel Corporation
Inventors: Martin Licht, Jonathan Combs, Andrew Huang
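A rough model of counter-driven way disabling; the abstract only states that a way's power signal flips to "predicted miss" when its counter saturates, so the exact counting and reset policy below is an assumption:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_WAYS   8
#define SATURATION 15   /* hypothetical saturation value */

static unsigned counter[NUM_WAYS];
static bool way_disabled[NUM_WAYS];   /* power signal: predicted miss for this way */

/* Track which way serviced each access; ways that go unused for long streaks
 * saturate their counters and are powered down (policy details assumed). */
static void record_access(int hit_way) {
    for (int w = 0; w < NUM_WAYS; w++) {
        if (w == hit_way) {
            counter[w] = 0;                    /* recently useful: keep it on */
            way_disabled[w] = false;
        } else if (!way_disabled[w] && ++counter[w] >= SATURATION) {
            way_disabled[w] = true;            /* predicted miss: disable way */
        }
    }
}

int main(void) {
    for (int i = 0; i < 100; i++)
        record_access(i % 2);                  /* only ways 0 and 1 ever hit  */
    for (int w = 0; w < NUM_WAYS; w++)
        printf("way %d disabled: %d\n", w, way_disabled[w]);
    return 0;
}
```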
-
Patent number: 9092343
Abstract: A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.
Type: Grant
Filed: September 21, 2009
Date of Patent: July 28, 2015
Assignee: ARM Finance Overseas Limited
Inventors: Meng-Bing Yu, Era K. Nangia, Michael Ni, Vidya Rajagopalan
-
Patent number: 9081711
Abstract: An embodiment provides a virtual address cache memory including: a TLB virtual page memory configured to, when a rewrite to a TLB occurs, rewrite entry data; a data memory configured to hold cache data using a virtual page tag or a page offset as a cache index; a cache state memory configured to hold a cache state for the cache data stored in the data memory, in association with the cache index; a first physical address memory configured to, when the rewrite to the TLB occurs, rewrite a held physical address; and a second physical address memory configured to, when the cache data is written to the data memory after the occurrence of the rewrite to the TLB, rewrite a held physical address.
Type: Grant
Filed: November 26, 2013
Date of Patent: July 14, 2015
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Kenta Yasufuku, Shigeaki Iwasa, Yasuhiko Kurosawa, Hiroo Hayashi, Seiji Maeda, Mitsuo Saito
-
Patent number: 9081661
Abstract: According to one embodiment, a memory management device includes a history management unit, an address translation table, an address management unit, and a data management unit. The history management unit manages an access history for data stored in a nonvolatile semiconductor memory. The address translation table includes a translation table of a logical address and a physical address corresponding to the data. The address management unit specifies, based on the access history, second data to be accessed after access to first data being stored in the nonvolatile semiconductor memory, and registers a second physical address corresponding to the second data in the address translation table in association with a first logical address corresponding to the first data. The data management unit reads out the second data from the nonvolatile semiconductor memory to a buffer.
Type: Grant
Filed: September 16, 2011
Date of Patent: July 14, 2015
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Tsutomu Unesaki, Yoshiyuki Endo
-
Patent number: 9075744
Abstract: This invention optimizes DMA writes to directly addressable level two memory that is cached in level one when the line is valid and dirty. When the level two controller detects that a line is valid and dirty in level one, the level two memory need not update its copy of the data. Level one memory will replace the level two copy with a victim writeback at a future time. Thus the level two memory need not store a copy of the write. This limits the number of DMA writes to level two directly addressable memory and thus improves performance and minimizes dynamic power. This also frees the level two memory for other master/requestors.
Type: Grant
Filed: September 26, 2011
Date of Patent: July 7, 2015
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Jonathan (Son) Hung Tran, Raguram Damodaran, Abhijeet Ashok Chachad, Joseph Raymond Michael Zbiciak
-
Patent number: 9069545
Abstract: Systems and methods are disclosed that allow atomic updates to global data to be at least partially eliminated to reduce synchronization overhead in parallel computing. A compiler analyzes the data to be processed to selectively permit unsynchronized data transfer for at least one type of data. A programmer may provide a hint to expressly identify the type of data that are candidates for unsynchronized data transfer. In one embodiment, the synchronization overhead is reducible by generating an application program that selectively substitutes codes for unsynchronized data transfer for a subset of codes for synchronized data transfer. In another embodiment, the synchronization overhead is reducible by employing a combination of software and hardware by using relaxation data registers and decoders that collectively convert a subset of commands for synchronized data transfer into commands for unsynchronized data transfer.
Type: Grant
Filed: July 18, 2011
Date of Patent: June 30, 2015
Assignee: International Business Machines Corporation
Inventors: Lakshminarayanan Renganarayana, Vijayalakshmi Srinivasan
-
Patent number: 9053033
Abstract: A method, computer program product, and computing system for defining a first assigned cache portion within a cache system, wherein the first assigned cache portion is associated with a first machine. At least one additional assigned cache portion within the cache system is defined. The at least one additional assigned cache portion is associated with at least one additional machine. Content received by the first machine is written to the first assigned cache portion. After the occurrence of a reclassifying event, the first assigned cache portion is reclassified as a public cache portion that is added to an initial cache portion within the cache system. The public cache portion is associated with the first machine and the at least one additional machine.
Type: Grant
Filed: December 30, 2011
Date of Patent: June 9, 2015
Assignee: EMC Corporation
Inventors: Philip Derbeko, Anat Eyal, Roy E. Clark
-
Patent number: 9021401
Abstract: A method comprises creating a first node, determining whether an indicator associated with a head node is present, and designating the first node as a head node, defining and associating a head node identifier with the first node, defining a link from the first node to the first node, and creating and saving an indicator associated with the head node, responsive to determining that the indicator associated with a head node is not present.
Type: Grant
Filed: September 21, 2009
Date of Patent: April 28, 2015
Assignee: International Business Machines Corporation
Inventors: Anthony M. Cocuzza, Shayne Grant, Pu Liu
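A small sketch of the head-node path described, using a plain C struct and a file-scope pointer standing in for the saved indicator (all names are hypothetical):

```c
#include <stdlib.h>
#include <stdio.h>

/* Hypothetical node of a circular list whose head carries an identifier. */
struct node {
    int          head_id;   /* head node identifier (0 = not a head) */
    struct node *link;
};

static struct node *head_indicator = NULL;   /* saved indicator for the head node */

/* Create a first node; if no head indicator is present, designate this node
 * as the head: associate an identifier, link it to itself, and save the
 * indicator. */
static struct node *create_first_node(int id) {
    struct node *n = malloc(sizeof *n);
    if (!n) return NULL;
    n->head_id = 0;
    n->link    = NULL;
    if (head_indicator == NULL) {       /* indicator for a head node not present   */
        n->head_id     = id;            /* designate as head, associate identifier */
        n->link        = n;             /* define a link from the node to itself   */
        head_indicator = n;             /* create and save the head indicator      */
    }
    return n;
}

int main(void) {
    struct node *h = create_first_node(42);
    printf("head id %d links to itself: %d\n", h->head_id, h->link == h);
    free(h);
    return 0;
}
```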
-
Publication number: 20150113199
Abstract: In one embodiment of the present invention, a method includes switching between a first address space and a second address space, determining if the second address space exists in a list of address spaces; and maintaining entries of the first address space in a translation buffer after the switching. In such manner, overhead associated with such a context switch may be reduced.
Type: Application
Filed: December 22, 2014
Publication date: April 23, 2015
Applicant: Intel Corporation
Inventors: Jason W. Brandt, Sanjoy K. Mondal, Richard A. Uhlig, Gilbert Neiger, Robert T. George
-
Publication number: 20150113200
Abstract: In one embodiment of the present invention, a method includes switching between a first address space and a second address space, determining if the second address space exists in a list of address spaces; and maintaining entries of the first address space in a translation buffer after the switching. In such manner, overhead associated with such a context switch may be reduced.
Type: Application
Filed: December 22, 2014
Publication date: April 23, 2015
Applicant: Intel Corporation
Inventors: Jason W. Brandt, Sanjoy K. Mondal, Richard A. Uhlig, Gilbert Neiger, Robert T. George