Predicting, Look-ahead Patents (Class 711/204)
-
Patent number: 8898427
Abstract: Embodiments relate to target buffer address region tracking. An aspect includes receiving a restart address, and comparing, by a processing circuit, the restart address to a first stored address and to a second stored address. The processing circuit determines which of the first and second stored addresses is identified as a same range and a different range to form a predicted target address range defining an address region associated with an entry in the target buffer. Based on determining that the restart address matches the first stored address, the first stored address is identified as the same range and the second stored address is identified as the different range. Based on determining that the restart address matches the second stored address, the first stored address is identified as the different range and the second stored address is identified as the same range.
Type: Grant
Filed: November 25, 2013
Date of Patent: November 25, 2014
Assignee: International Business Machines Corporation
Inventors: James J. Bonanno, Brian R. Prasky, Aaron Tsai
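The same-range/different-range decision this abstract describes reduces to a two-way address compare. A minimal Python sketch of that decision (function and variable names are invented for illustration, not taken from the patent):

```python
def classify_ranges(restart_addr, first_stored, second_stored):
    # Compare the restart address against the two stored addresses and
    # decide which one anchors the "same" range and which the "different"
    # range for the predicted target address region.
    if restart_addr == first_stored:
        return first_stored, second_stored   # first = same, second = different
    if restart_addr == second_stored:
        return second_stored, first_stored   # second = same, first = different
    return None                              # neither stored address matched
```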
-
Patent number: 8898426
Abstract: Embodiments relate to target buffer address region tracking. An aspect includes receiving a restart address, and comparing, by a processing circuit, the restart address to a first stored address and to a second stored address. The processing circuit determines which of the first and second stored addresses is identified as a same range and a different range to form a predicted target address range defining an address region associated with an entry in the target buffer. Based on determining that the restart address matches the first stored address, the first stored address is identified as the same range and the second stored address is identified as the different range. Based on determining that the restart address matches the second stored address, the first stored address is identified as the different range and the second stored address is identified as the same range.
Type: Grant
Filed: June 11, 2012
Date of Patent: November 25, 2014
Assignee: International Business Machines Corporation
Inventors: James J. Bonanno, Brian R. Prasky, Aaron Tsai
-
Patent number: 8880844
Abstract: A chip multiprocessor includes a plurality of cores each having a translation lookaside buffer (TLB) and a prefetch buffer (PB). Each core is configured to determine a TLB miss on the core's TLB for a virtual page address and determine whether or not there is a PB hit on a PB entry in the PB for the virtual page address. If it is determined that there is a PB hit, the PB entry is added to the TLB. If it is determined that there is not a PB hit, the virtual page address is used to perform a page walk to determine a translation entry, the translation entry is added to the TLB and the translation entry is prefetched to each other one of the plurality of cores.
Type: Grant
Filed: March 12, 2010
Date of Patent: November 4, 2014
Assignee: Trustees of Princeton University
Inventors: Abhishek Bhattacharjee, Margaret Martonosi
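The PB-hit/PB-miss flow in this abstract can be modeled compactly: a hit promotes the buffered entry into the TLB; a miss walks the page table and pushes the resulting translation to every peer core's prefetch buffer. A sketch under those assumptions (all class and method names invented):

```python
class Core:
    def __init__(self, peers=()):
        self.tlb = {}           # virtual page -> translation entry
        self.pb = {}            # prefetch buffer: virtual page -> entry
        self.peers = list(peers)

    def translate(self, vpage, page_walk):
        if vpage in self.tlb:
            return self.tlb[vpage]
        if vpage in self.pb:                 # PB hit: add the PB entry to the TLB
            self.tlb[vpage] = self.pb.pop(vpage)
            return self.tlb[vpage]
        entry = page_walk(vpage)             # PB miss: full page walk
        self.tlb[vpage] = entry
        for peer in self.peers:              # prefetch the entry to each other core
            peer.pb[vpage] = entry
        return entry
```

A TLB miss on one core thus pre-warms the other cores' buffers, which helps workloads where threads touch the same pages.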
-
Patent number: 8874840
Abstract: In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount, may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Other features and aspects may be realized, depending upon the particular application.
Type: Grant
Filed: March 6, 2013
Date of Patent: October 28, 2014
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Nedlaya Y Francisco, Binny S. Gill, Lokesh M. Gupta, Suguang Li
-
Patent number: 8856452
Abstract: A method and apparatus for prefetching data from memory for a multicore data processor. A prefetcher issues a plurality of requests to prefetch data from a memory device to a memory cache. Consecutive cache misses are recorded in response to at least two of the plurality of requests. A time between the cache misses is determined and a timing of a further request to prefetch data from the memory device to the memory cache is altered as a function of the determined time between the two cache misses.
Type: Grant
Filed: May 31, 2011
Date of Patent: October 7, 2014
Assignee: Illinois Institute of Technology
Inventors: Xian-He Sun, Yong Chen, Huaiyu Zhu
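One way to read the timing rule above: the gap between two consecutive misses predicts when the next miss will land, and the prefetch is issued with some lead time ahead of it. A hypothetical sketch of such a scheduler (the exact rule here is invented, not the patented one):

```python
def schedule_prefetch(miss_times, lead):
    # Use the gap between the last two consecutive cache misses to predict
    # when the next miss would occur, then schedule the prefetch `lead`
    # time units ahead of that point (never before the last miss itself).
    t_prev, t_last = miss_times[-2], miss_times[-1]
    gap = t_last - t_prev
    return max(t_last, t_last + gap - lead)
```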
-
Patent number: 8850123
Abstract: An apparatus generally having a processor, a cache and a circuit is disclosed. The processor may be configured to generate (i) a plurality of access addresses and (ii) a plurality of program counter values corresponding to the access addresses. The cache may be configured to present in response to the access addresses (i) a plurality of data words and (ii) a plurality of address information corresponding to the data words. The circuit may be configured to record a plurality of events in a file in response to a plurality of cache misses. A first of the events in the file due to a first of the cache misses generally includes (i) a first of the program counter values, (ii) a first of the address information and (iii) a first time to prefetch a first of the data word from a memory to the cache.
Type: Grant
Filed: October 19, 2010
Date of Patent: September 30, 2014
Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
Inventors: Leonid Dubrovin, Alexander Rabinovitch, Dmitry Podvalny
-
Publication number: 20140258674
Abstract: A system on chip (SoC) includes a central processing unit (CPU), an intellectual property (IP) block, and a memory management unit (MMU). The CPU is configured to set a prefetch direction corresponding to a working set of data. The IP block is configured to process the working set of data. The MMU is configured to prefetch a next page table entry from a page table based on the prefetch direction during address translation between a virtual address of the working set of data and a physical address.
Type: Application
Filed: March 11, 2014
Publication date: September 11, 2014
Inventors: Kwan Ho Kim, Seok Min Kim
-
Patent number: 8832415
Abstract: A multiprocessor system includes nodes. Each node includes a data path that includes a core, a TLB, and a first level cache implementing disambiguation. The system also includes at least one second level cache and a main memory. For thread memory access requests, the core uses an address associated with an instruction format of the core. The first level cache uses an address format related to the size of the main memory plus an offset corresponding to hardware thread meta data. The second level cache uses a physical main memory address plus software thread meta data to store the memory access request. The second level cache accesses the main memory using the physical address with neither the offset nor the thread meta data after resolving speculation. In short, this system includes mapping of a virtual address to different physical addresses for value disambiguation for different threads.
Type: Grant
Filed: January 4, 2011
Date of Patent: September 9, 2014
Assignee: International Business Machines Corporation
Inventors: Alan Gara, Martin Ohmacht
-
Patent number: 8819390
Abstract: Patterns of access and/or behavior can be analyzed and persisted for use in pre-fetching data from a physical storage device. In at least some embodiments, data can be aggregated across volumes, instances, users, applications, or other such entities, and that data can be analyzed to attempt to determine patterns for any of those entities. The patterns and/or analysis can be persisted such that the information is not lost in the event of a reboot or other such occurrence. Further, aspects such as load and availability across the network can be analyzed to determine where to send and/or store data that is pre-fetched from disk or other such storage in order to reduce latency while preventing bottlenecks or other such issues with resource availability.
Type: Grant
Filed: March 11, 2013
Date of Patent: August 26, 2014
Assignee: Amazon Technologies, Inc.
Inventors: Swaminathan Sivasubramanian, Bradley Eugene Marshall, Tate Andrew Certain, Nicholas J. Maniscalco
-
Patent number: 8806135
Abstract: A load/store unit with an outstanding load miss buffer and a load miss result buffer is configured to read data from a memory system having a level one cache. Missed load instructions are stored in the outstanding load miss buffer. The load/store unit retrieves data for multiple dependent missed load instructions using a single cache access and stores the data in the load miss result buffer. When missed load instructions are reissued from the outstanding load miss buffer, data for the missed load instructions are read from the load miss result buffer rather than the level one cache. Because the data is stored in the load miss result buffer, other instructions that may change the data in level one cache do not cause data hazards with the missed load instructions.
Type: Grant
Filed: September 30, 2011
Date of Patent: August 12, 2014
Assignee: Applied Micro Circuits Corporation
Inventors: Matthew W. Ashcraft, John Gregory Favor, David A. Kruckemyer
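The two buffers in this abstract can be modeled as a pair of maps: the outstanding-load-miss buffer collects dependent loads per line, and one fill snapshots the line's data into the result buffer for all of them. A simplified sketch (class and field names invented):

```python
class LoadMissBuffers:
    # Model of an outstanding-load-miss buffer (OLMB) plus a load-miss
    # result buffer (LMRB): one cache access satisfies every dependent load.
    def __init__(self):
        self.olmb = {}   # line address -> list of pending load ids
        self.lmrb = {}   # line address -> data snapshot

    def record_miss(self, line, load_id):
        self.olmb.setdefault(line, []).append(load_id)

    def fill(self, line, data):
        # Single cache access: capture the data once for all dependent loads.
        self.lmrb[line] = data
        return self.olmb.pop(line, [])

    def reissue(self, line):
        # Reissued loads read the LMRB snapshot, not the L1 cache, so later
        # stores to the line cannot create a hazard for them.
        return self.lmrb[line]
```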
-
Publication number: 20140195771
Abstract: In a particular embodiment, a method of anticipatorily loading a page of memory is provided. The method may include, during execution of first program code using a first page of memory, collecting data for at least one attribute of the first page of memory, including collecting data about at least one next page of memory that interacts with the first page of memory for a historical topology attribute of the first page of memory. The method may also include, during execution of second program code using the first page of memory, determining a second page of memory to anticipatorily load based on the historical topology attribute of the first page of memory.
Type: Application
Filed: January 4, 2013
Publication date: July 10, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Shawn A. Adderly, Paul A Niekrewicz, Aydin Suren, Sebastian T. Ventrone
-
Patent number: 8775741
Abstract: A storage control system includes a prefetch controller that identifies memory regions for prefetching according to temporal memory access patterns. The memory access patterns identify a number of sequential memory accesses within different time ranges and a highest number of memory accesses to the different memory regions within a predetermined time period.
Type: Grant
Filed: January 8, 2010
Date of Patent: July 8, 2014
Assignee: Violin Memory Inc.
Inventor: Erik de la Iglesia
-
Patent number: 8775740
Abstract: The present disclosure describes a system and method for high performance, power efficient store buffer forwarding. Some illustrative embodiments may include a system, comprising: a processor coupled to an address bus; a cache memory that couples to the address bus and comprises cache data (the cache memory divided into a plurality of ways); and a store buffer that couples to the address bus, and comprises store buffer data, a store buffer way and a store buffer index. The processor selects the store buffer data for use by a data load operation if a selected way of the plurality of ways matches the store buffer way, and if at least part of the bus address matches the store buffer index.
Type: Grant
Filed: August 30, 2005
Date of Patent: July 8, 2014
Assignee: Texas Instruments Incorporated
Inventor: Muralidharan S. Chinnakonda
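The forwarding condition in this abstract is a conjunction of two matches: selected cache way against the buffered store's way, and address index against the buffered store's index. A tiny sketch of that predicate (field names invented for illustration):

```python
def forward_load(selected_way, addr_index, store_buffer):
    # Forward the buffered store data to the load only when the cache way
    # chosen for the load matches the recorded store-buffer way AND the
    # address index matches the store-buffer index.
    if (selected_way == store_buffer["way"]
            and addr_index == store_buffer["index"]):
        return store_buffer["data"]
    return None   # no forwarding: the load takes the normal cache path
```

Checking the way bits instead of a full address compare is what makes the scheme power-efficient: the wide comparison is avoided on the common path.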
-
Patent number: 8769184
Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
Type: Grant
Filed: January 29, 2013
Date of Patent: July 1, 2014
Assignee: VMware, Inc.
Inventors: Qasim Ali, Ravisprasad Mummidi, Vivek Pandey, Kiran Tati
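The prioritization step after the first phase amounts to ranking candidate large pages by their set-access-bit counts and mapping only the hottest ones. A minimal sketch of that ranking (the budget parameter and names are invented; the patent's actual policy is more involved):

```python
def pick_large_pages(access_bits_per_table, budget):
    # Count the set access bits in each candidate's L1 page table and map
    # large only the highest-scoring candidates, up to `budget` pages.
    counts = {page: sum(bits) for page, bits in access_bits_per_table.items()}
    ranked = sorted(counts, key=counts.get, reverse=True)
    return ranked[:budget]
```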
-
Patent number: 8738608
Abstract: A database access model and storage structure that efficiently support concurrent OLTP and OLAP activity independently of the data model or schema used, are described. The storage structure and access model presented avoid the need to design schemas for particular workloads or query patterns and avoid the need to design or implement indexing to support specific queries. Indeed, the access model presented is independent of the database model used and can equally support relational, object and hierarchical models amongst others.
Type: Grant
Filed: April 6, 2011
Date of Patent: May 27, 2014
Assignee: Justone Database, Inc.
Inventor: Duncan G. Pauly
-
Patent number: 8738889
Abstract: Embodiments of an invention for generating multiple address space identifiers per virtual machine to switch between protected micro-contexts are disclosed. In one embodiment, a method includes receiving an instruction requiring an address translation; initiating, in response to receiving the instruction, a page walk from a page table pointed to by the contents of a page table pointer storage location; finding, during the page walk, a transition entry; storing the address translation and one of a plurality of address source identifiers in a translation lookaside buffer, the one of the plurality of address source identifiers based on one of a plurality of virtual partition identifiers, at least two of the plurality of virtual partition identifiers associated with one of a plurality of virtual machines; and re-initiating the page walk.
Type: Grant
Filed: October 12, 2012
Date of Patent: May 27, 2014
Assignee: Intel Corporation
Inventors: Uday Savagaonkar, Madhavan Parthasarathy, Ravi Sahita, David Durham
-
Patent number: 8725987
Abstract: Systems and methods are disclosed for pre-fetching data into a cache memory system. These systems and methods comprise retrieving a portion of data from a system memory and storing a copy of the retrieved portion of data in a cache memory. These systems and methods further comprise monitoring data that has been placed into pre-fetch memory.
Type: Grant
Filed: September 19, 2008
Date of Patent: May 13, 2014
Assignee: STMicroelectronics (Research & Development) Limited
Inventors: Andrew Michael Jones, Stuart Ryan
-
Patent number: 8725984
Abstract: In computing environments that use virtual addresses (or other indirectly usable addresses) to access memory, the virtual addresses are translated to absolute addresses (or other directly usable addresses) prior to accessing memory. To facilitate memory access, however, address translation is omitted in certain circumstances, including when the data to be accessed is within the same unit of memory as the instruction accessing the data. In this case, the absolute address of the data is derived from the absolute address of the instruction, thus avoiding address translation for the data. Further, in some circumstances, access checking for the data is also omitted.
Type: Grant
Filed: September 6, 2012
Date of Patent: May 13, 2014
Assignee: International Business Machines Corporation
Inventors: Viktor S. Gyuris, Ali Sheikh, Kirk A. Stewart
-
Patent number: 8719593
Abstract: A secure processing device may include an external memory storing encrypted data, and a processor cooperating with the external memory. The processor is configured to generate address requests for the encrypted data in the external memory, cache keystreams based upon an encryption key, and generate decrypted plaintext based upon the cached keystreams and the encrypted data requested from the external memory. For example, the processor may be further configured to predict a future address request, and the future address request may be associated with a cached keystream.
Type: Grant
Filed: May 20, 2009
Date of Patent: May 6, 2014
Assignee: Harris Corporation
Inventors: Christopher David Mackey, Michael Thomas Kurdziel
-
Patent number: 8719548
Abstract: A method (and structure) of mapping a memory addressing of a multiprocessing system when it is emulated using a virtual memory addressing of another multiprocessing system includes accessing a local lookaside table (LLT) on a target processor with a target virtual memory address. Whether there is a "miss" in the LLT is determined and, with the miss determined in the LLT, a lock for a global page table is obtained.
Type: Grant
Filed: April 13, 2011
Date of Patent: May 6, 2014
Assignee: International Business Machines Corporation
Inventors: Erik Richter Altman, Ravi Nair, John Kevin O'Brien, Kathryn Mary O'Brien, Peter Howland Oden, Daniel Arthur Prener, Sumeda Wasudeo Sathaye
-
Publication number: 20140115294
Abstract: According to one embodiment, a method for operating a memory device includes receiving a first request from a requestor, wherein the first request includes accessing data at a first memory location in a memory bank, opening a first page in the memory bank, wherein opening the first page includes loading a row including the first memory location into a buffer, the row being loaded from a row location in the memory bank and transmitting the data from the first memory location to the requestor. The method also includes determining, by a memory controller, whether to close the first page following execution of the first request based on information relating to a likelihood that a subsequent request will access the first page.
Type: Application
Filed: October 19, 2012
Publication date: April 24, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bruce M. Fleischer, Hans M. Jacobson, Ravi Nair
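The open/close decision described here is a classic DRAM row-buffer policy: keep the page open only if history suggests the next request will hit it. A hypothetical sketch of such a predictor (the threshold rule is invented; the publication's actual likelihood information is not specified in the abstract):

```python
def should_close_page(recent_same_page_hits, threshold=0.5):
    # recent_same_page_hits: 1/0 history of whether past follow-up requests
    # hit the currently open page. Close the page after this access unless
    # the observed same-page hit rate clears the threshold.
    if not recent_same_page_hits:
        return True          # no history: default to a closed-page policy
    hit_rate = sum(recent_same_page_hits) / len(recent_same_page_hits)
    return hit_rate < threshold
```

Keeping a row open saves an activate on a hit but costs a precharge-plus-activate on a conflict, which is why the decision is worth predicting per page.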
-
Patent number: 8635427
Abstract: An object of the present invention is to improve the usage efficiency of a storage extent in a storage system using the Allocation on Use (AOU) technique. A controller in the storage system allocates a storage extent in an actual volume to an extent in a virtual volume accessed by a host computer, detects any decrease in necessity for maintaining that allocation, and cancels the allocation of the storage extent in the actual volume to the extent in the virtual volume based on the detection result.
Type: Grant
Filed: May 21, 2012
Date of Patent: January 21, 2014
Assignee: Hitachi, Ltd.
Inventors: Kentaro Kakui, Kyosuke Achiwa
-
Patent number: 8621179
Abstract: A method and system for simulating in software a digital computer system by performing virtual to physical translations of simulated instructions is disclosed. The number of virtual to physical translations using hash lookups is reduced by analyzing sequences of the instructions for determining with high probability whether the memory accesses made by the instructions perform the same virtual to physical translation in order to reduce the number of necessary hash lookups to enable faster simulation performance.
Type: Grant
Filed: June 18, 2004
Date of Patent: December 31, 2013
Assignee: Intel Corporation
Inventors: Bengt Werner, Fredrik Larsson
-
Publication number: 20130332699
Abstract: Embodiments relate to target buffer address region tracking. An aspect includes receiving a restart address, and comparing, by a processing circuit, the restart address to a first stored address and to a second stored address. The processing circuit determines which of the first and second stored addresses is identified as a same range and a different range to form a predicted target address range defining an address region associated with an entry in the target buffer. Based on determining that the restart address matches the first stored address, the first stored address is identified as the same range and the second stored address is identified as the different range. Based on determining that the restart address matches the second stored address, the first stored address is identified as the different range and the second stored address is identified as the same range.
Type: Application
Filed: June 11, 2012
Publication date: December 12, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: James J. Bonanno, Brian R. Prasky, Aaron Tsai
-
Patent number: 8607005
Abstract: An apparatus, system, and method are disclosed for determining prefetch data. A start module communicates a start of a target software process to a storage device. A learning module learns data blocks accessed for the target software process. In one embodiment, a prefetch module prefetches the learned data blocks in response to the start of the target software process. An end module communicates the end of the target software process to the storage device. In one embodiment, the prefetch module terminates prefetching data blocks and the learning module terminates learning the data blocks accessed for the target software process in response to the end module's communication of the end of the target software process.
Type: Grant
Filed: February 17, 2006
Date of Patent: December 10, 2013
Assignee: International Business Machines Corporation
Inventors: Kenneth Wayne Boyd, Kenneth Fairclough Day, III, David Allan Pease, John Jay Wolfgang
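The start/learn/prefetch/end cycle in this abstract maps naturally onto a small state machine: while a process is active, accessed blocks are remembered; the next time the same process starts, those blocks are prefetched up front. A sketch under those assumptions (class and callback names invented):

```python
class ProcessPrefetcher:
    def __init__(self):
        self.learned = {}    # process name -> set of learned block ids
        self.active = None   # currently running target process, if any

    def start(self, proc, prefetch):
        # Start of the target process: prefetch everything learned so far.
        self.active = proc
        for blk in self.learned.get(proc, ()):
            prefetch(blk)

    def access(self, blk):
        # Learn blocks accessed while the target process is running.
        if self.active is not None:
            self.learned.setdefault(self.active, set()).add(blk)

    def end(self):
        # End of the target process: stop learning and prefetching.
        self.active = None
```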
-
Publication number: 20130318306
Abstract: A method and system for implementing vector prefetch with streaming access detection is contemplated in which an execution unit such as a vector execution unit, for example, executes a vector memory access instruction that references an associated vector of effective addresses. The vector of effective addresses includes a number of elements, each of which includes a memory pointer. The vector memory access instruction is executable to perform multiple independent memory access operations using at least some of the memory pointers of the vector of effective addresses. A prefetch unit, for example, may detect a memory access streaming pattern based upon the vector of effective addresses, and in response to detecting the memory access streaming pattern, the prefetch unit may calculate one or more prefetch memory addresses based upon the memory access streaming pattern. Lastly, the prefetch unit may prefetch the one or more prefetch memory addresses into a memory.
Type: Application
Filed: May 22, 2012
Publication date: November 28, 2013
Inventor: Jeffry E. Gonion
-
Patent number: 8595465
Abstract: Some of the embodiments of the present disclosure provide a method for predicting, for a first virtual address, a first descriptor based at least in part on the one or more past descriptors associated with one or more past virtual addresses; and determining, for the first virtual address, a first physical address based at least in part on the predicted first descriptor. Other embodiments are also described and claimed.
Type: Grant
Filed: September 8, 2010
Date of Patent: November 26, 2013
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventor: Moshe Raz
-
Patent number: 8566496
Abstract: A SAS expander collects data access information associated with a nexus and determines whether a data prefetch is appropriate. The SAS expander identifies potential data blocks utilizing previous data requests of the nexus. The SAS expander issues a data request to the target for the potential data blocks. The SAS expander stores the potential data blocks within a prefetch cache for future utilization within a data read.
Type: Grant
Filed: December 3, 2010
Date of Patent: October 22, 2013
Assignee: LSI Corporation
Inventors: Gabriel L. Romero, Frederick G. Smith
-
Publication number: 20130246733
Abstract: A parallel processing device includes a processing sequence management unit that reads commands, from the command corresponding to a parallel processing start bit to the command corresponding to a parallel processing completion bit, from a sequence command storage in sequence to make the sequence command storage output the commands to a first address management unit and a second address management unit. The first address management unit refers to the sequence commands read from the sequence command storage in order from the head to find the command that a first processing execution unit executes, and then instructs the first processing execution unit to execute the command. The second address management unit refers to the sequence commands read from the sequence command storage in order from the head to find the command that a second processing execution unit executes, and then instructs the second processing execution unit to execute the command.
Type: Application
Filed: December 20, 2012
Publication date: September 19, 2013
Inventor: Takayasu MOCHIDA
-
Publication number: 20130246695
Abstract: An integrated circuit device comprising at least one prefetching module for prefetching lines of data from at least one memory element. The prefetching module is configured to determine a position of a requested block of data within a respective line of data of the at least one memory element, determine a number of subsequent lines of data to prefetch, based at least partly on the determined position of the requested block of data within the respective line of data of the at least one memory element, and cause the prefetching of n successive lines of data from the at least one memory element.
Type: Application
Filed: November 22, 2010
Publication date: September 19, 2013
Applicant: Freescale Semiconductor, Inc.
Inventors: Alistair Robertson, Mark Maiolani
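The intuition behind this abstract: the closer the requested block sits to the end of its line, the more likely a sequential stream is about to spill into the following lines, so prefetch deeper. A hypothetical scaling rule (the exact mapping from position to depth is invented):

```python
def lines_to_prefetch(block_offset, line_size, max_lines=4):
    # Scale the prefetch depth with the requested block's position within
    # its line: an access near the end of the line prefetches more
    # subsequent lines than an access near the start.
    fraction = block_offset / line_size      # 0.0 = start of line
    return min(max_lines, 1 + int(fraction * max_lines))
```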
-
Patent number: 8539163
Abstract: Patterns of access and/or behavior can be analyzed and persisted for use in pre-fetching data from a physical storage device. In at least some embodiments, data can be aggregated across volumes, instances, users, applications, or other such entities, and that data can be analyzed to attempt to determine patterns for any of those entities. The patterns and/or analysis can be persisted such that the information is not lost in the event of a reboot or other such occurrence. Further, aspects such as load and availability across the network can be analyzed to determine where to send and/or store data that is pre-fetched from disk or other such storage in order to reduce latency while preventing bottlenecks or other such issues with resource availability.
Type: Grant
Filed: December 17, 2010
Date of Patent: September 17, 2013
Assignee: Amazon Technologies, Inc.
Inventors: Swaminathan Sivasubramanian, Bradley E. Marshall, Tate Andrew Certain, Nicholas J. Maniscalco
-
Publication number: 20130198485
Abstract: A data array 20 to be stored is first divided into a plurality of blocks 21. Each block 21 is further sub-divided into a set of sub-blocks 22, and a set of data for each sub-block 22 is then stored in a body data buffer 30. A header data block 23 is stored for each block 21 at a predictable memory address within a header buffer 24. Each header data block contains pointer data indicating the position within the body buffer 30 where the data for the sub-blocks for the block 21 that that header data block 23 relates to is stored, and data indicating the size of the stored data for each respective sub-block 22.
Type: Application
Filed: August 3, 2012
Publication date: August 1, 2013
Inventors: Jorn Nystad, Ola Hugosson, Oskar Flordal
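The header/body layout described here can be sketched as variable-size sub-block payloads packed into one body buffer, with a fixed-position header per block recording (offset, size) for each sub-block. A simplified model (function names and the (offset, size) tuple encoding are invented for illustration):

```python
def pack_blocks(blocks):
    # blocks: list of blocks, each a list of bytes objects (sub-block data).
    # Pack all sub-block data into one body buffer and build, per block, a
    # header of (offset, size) pairs stored at a predictable index.
    body = bytearray()
    headers = []
    for block in blocks:
        header = []
        for sub in block:
            header.append((len(body), len(sub)))   # pointer + size
            body.extend(sub)
        headers.append(header)
    return headers, bytes(body)

def read_sub_block(headers, body, block_i, sub_i):
    # One header lookup at a predictable position, then one body read.
    off, size = headers[block_i][sub_i]
    return body[off:off + size]
```

Because headers live at predictable addresses, a reader can locate any sub-block with one header fetch plus one body fetch, with no scan over the variable-size data.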
-
Patent number: 8489851
Abstract: A memory controller provided according to an aspect of the present invention includes a predictor block which predicts future read requests after converting the memory address in a prior read request received from the processor to an address space consistent with the implementation of a memory unit. According to another aspect of the present invention, the predicted requests are granted access to a memory unit only when there are no requests pending from processors and the peripherals sending access requests to the memory unit.
Type: Grant
Filed: December 11, 2008
Date of Patent: July 16, 2013
Assignee: NVIDIA Corporation
Inventors: Balajee Vamanan, Tukaram Methar, Mrudula Kanuri, Sreenivas Krishnan
-
Publication number: 20130166874
Abstract: An I/O controller, coupled to a processing unit and to a memory, includes an I/O link interface configured to receive data packets having virtual addresses; an address translation unit having an address translator to translate received virtual addresses into real addresses by translation control entries and a cache allocated to the address translator to cache a number of the translation control entries; an I/O packet processing unit for checking the data packets received at the I/O link interface and for forwarding the checked data packets to the address translation unit; and a prefetcher to forward address translation prefetch information from a data packet received to the address translation unit; the address translator configured to fetch the translation control entry for the data packet by the address translation prefetch information from the allocated cache or, if the translation control entry is not available in the allocated cache, from the memory.
Type: Application
Filed: December 5, 2012
Publication date: June 27, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: International Business Machines Corporation
-
Patent number: 8473689
Abstract: A system for prefetching memory in caching systems includes a processor that generates requests for data. A cache of a first level stores memory lines retrieved from a lower level memory in response to references to addresses generated by the processor's requests for data. A prefetch buffer is used to prefetch an adjacent memory line from the lower level memory in response to a request for data. The adjacent memory line is a memory line that is adjacent to a first memory line that is associated with an address of the request for data. An indication that a memory line associated with an address associated with the requested data has been prefetched is stored. A prefetched memory line is transferred to the cache of the first level in response to the stored indication that a memory line associated with an address associated with the requested data has been prefetched.
Type: Grant
Filed: July 27, 2010
Date of Patent: June 25, 2013
Assignee: Texas Instruments Incorporated
Inventors: Timothy D. Anderson, Kai Chirca
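The adjacent-line scheme above has three moving parts: a demand fetch, a prefetch of the neighbouring line into a side buffer, and a flag that lets a later request promote the prefetched line into the L1 cache. A simplified model (class name and the line+1 adjacency rule are illustrative assumptions):

```python
class AdjacentLinePrefetcher:
    def __init__(self, memory):
        self.memory = memory       # line address -> line data
        self.cache = {}            # first-level cache
        self.buffer = {}           # prefetch buffer
        self.prefetched = set()    # indication bits: lines sitting in the buffer

    def load(self, line):
        if line in self.prefetched:
            # The stored indication triggers a transfer into the L1 cache.
            self.cache[line] = self.buffer.pop(line)
            self.prefetched.discard(line)
            return self.cache[line]
        # Demand fetch, plus prefetch of the adjacent line into the buffer.
        self.cache[line] = self.memory[line]
        if line + 1 in self.memory:
            self.buffer[line + 1] = self.memory[line + 1]
            self.prefetched.add(line + 1)
        return self.cache[line]
```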
-
Patent number: 8473711
Abstract: A method for predicting memory access, where each data processing procedure is performed in a plurality of stages with segment processing, and the plurality of stages include at least a first stage and a second stage, includes: dividing a memory into a plurality of memory blocks, generating a predicting value of a second position information according to a correct value of a first position information at the first stage, accessing the memory blocks of the corresponding position in the memory according to the predicting value of the second position information, and identifying whether the predicting value of the second position information is correct or not for determining whether the memory is re-accessed, where the first stage occurs before the second stage in a same data processing procedure.
Type: Grant
Filed: January 6, 2009
Date of Patent: June 25, 2013
Assignee: Realtek Semiconductor Corp.
Inventors: Yu-Ming Chang, Yen-Ju Lu
-
Patent number: 8443166
Abstract: Systems and methods for tracking changes and performing backups to a storage device are provided. For virtual disks of a virtual machine, changes are tracked from outside the virtual machine in the kernel of a virtualization layer. The changes can be tracked in a lightweight fashion with a bitmap, with a finer granularity stored and tracked at intermittent intervals in persistent storage. Multiple backup applications can be allowed to accurately and efficiently backup a storage device. Each backup application can determine which block of the storage device has been updated since the last backup of a respective application. This change log is efficiently stored as a counter value for each block, where the counter is incremented when a backup is performed. The change log can be maintained with little impact on I/O by using a coarse bitmap to update the finer grained change log.
Type: Grant
Filed: June 23, 2009
Date of Patent: May 14, 2013
Assignee: VMware, Inc.
Inventors: Christian Czezatke, Krishna Yadappanavar, Andrew Tucker
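The bitmap-plus-counter scheme here can be modeled as: writes cheaply set bits in a coarse bitmap; each backup folds the bitmap into a per-block epoch counter; a backup application then asks for all blocks whose counter exceeds the epoch of its own last backup. A sketch under those assumptions (names and the epoch encoding are invented):

```python
class ChangeTracker:
    def __init__(self, nblocks):
        self.bitmap = [False] * nblocks    # cheap in-memory dirty bits
        self.change_epoch = [0] * nblocks  # per-block counter change log
        self.epoch = 0                     # incremented on each backup

    def write(self, block):
        # I/O path only flips a bit; the finer-grained log is updated later.
        self.bitmap[block] = True

    def backup(self, last_seen_epoch):
        # Fold the coarse bitmap into the counter log, then report every
        # block changed since the caller's last backup epoch.
        self.epoch += 1
        for b, dirty in enumerate(self.bitmap):
            if dirty:
                self.change_epoch[b] = self.epoch
                self.bitmap[b] = False
        return [b for b, e in enumerate(self.change_epoch) if e > last_seen_epoch]
```

Because each application only remembers its own last epoch, several backup tools can share one change log without interfering with each other.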
-
Patent number: 8433853
Abstract: A microprocessor includes a translation lookaside buffer, a request to load a page table entry into the microprocessor generated in response to a miss of a virtual address in the translation lookaside buffer, and a prefetch unit. The prefetch unit receives a physical address of a first cache line that includes the requested page table entry and responsively generates a request to prefetch into the microprocessor a second cache line that is the next physically sequential cache line to the first cache line.
Type: Grant
Filed: March 6, 2012
Date of Patent: April 30, 2013
Assignee: VIA Technologies, Inc.
Inventors: Colin Eddy, Rodney E. Hooker
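The prefetch rule in this abstract reduces to simple address arithmetic, sketched here with invented names: on a TLB miss, fetch the cache line holding the page table entry and also the next physically sequential line, since nearby virtual pages have nearby page table entries.

```python
# Hypothetical sketch of the next-sequential-line prefetch on a TLB miss.
CACHE_LINE = 64  # assumed line size in bytes

def lines_to_fetch(pte_physical_addr):
    """Return the cache-line base addresses to request on a TLB miss:
    the demand line containing the page table entry, plus the next
    physically sequential line as a prefetch."""
    first = pte_physical_addr - (pte_physical_addr % CACHE_LINE)
    return [first, first + CACHE_LINE]
```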
-
Patent number: 8412911
Abstract: A system and method for invalidating obsolete virtual/real address to physical address translations may employ translation lookaside buffers to cache translations. TLB entries may be invalidated in response to changes in the virtual memory space, and thus may need to be demapped. A non-cacheable unit (NCU) residing on a processor may be configured to receive and manage a global TLB demap request from a thread executing on a core residing on the processor. The NCU may send the request to local cores and/or to NCUs of external processors in a multiprocessor system using a hardware instruction to broadcast to all cores and/or processors or to multicast to designated cores and/or processors. The NCU may track completion of the demap operation across the cores and/or processors using one or more counters, and may send an acknowledgement to the initiator of the demap request when the global demap request has been satisfied.
Type: Grant
Filed: June 29, 2009
Date of Patent: April 2, 2013
Assignee: Oracle America, Inc.
Inventors: Gregory F. Grohoski, Paul J. Jordan, Mark A. Luttrell, Zeid Hartuon Samoail
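The completion-tracking idea can be shown in miniature (names invented; real hardware tracks this with counters rather than sets): the broadcast records one outstanding acknowledgement per core, and the initiator is notified only when the last one arrives.

```python
# Hypothetical sketch of tracking a global TLB demap across cores, with the
# initiator acknowledged once every core has completed its local demap.
class DemapTracker:
    def __init__(self, cores):
        self.cores = set(cores)
        self.pending = set()
        self.done = False

    def broadcast(self):
        """Send the demap to all cores; expect one ack per core."""
        self.pending = set(self.cores)
        self.done = False

    def ack(self, core):
        """Record one core's completion; return True when all have replied."""
        self.pending.discard(core)
        if not self.pending:
            self.done = True              # global demap satisfied: notify initiator
        return self.done
```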
-
Publication number: 20130036290
Abstract: A data array 20 to be stored is first divided into a plurality of blocks 21. Each block 21 is further sub-divided into a set of sub-blocks 22, and a set of data for each sub-block 22 is then stored in one or more body blocks 25. A header data block 23 is stored for each block 21 at a predictable memory address within a header buffer 24. Each header data block contains pointer data indicating the position within a body block 25 where the data for the sub-blocks for the block 21 that that header data block 23 relates to is stored, and data indicating the size of the stored data for each respective sub-block 22.
Type: Application
Filed: August 4, 2011
Publication date: February 7, 2013
Inventors: Jorn Nystad, Ola Hugosson
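The header/body layout described here can be sketched minimally (function names invented): each block's header records, per sub-block, an offset into the body data and that sub-block's size, so any sub-block can be located from the fixed-position header alone.

```python
# Hypothetical sketch of storing variable-size sub-blocks in a body buffer
# with a header of (offset, size) pointers, as in the publication above.
def store_block(sub_block_payloads):
    """Pack sub-block payloads into one body buffer; return (header, body)."""
    body = bytearray()
    header = []                          # (offset, size) per sub-block
    for payload in sub_block_payloads:
        header.append((len(body), len(payload)))
        body.extend(payload)
    return header, bytes(body)

def load_sub_block(header, body, index):
    """Locate one sub-block using only its header entry."""
    offset, size = header[index]
    return body[offset:offset + size]
```

Because headers live at predictable addresses, a reader can jump straight to any sub-block without scanning the variable-size body data.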
-
Patent number: 8356088
Abstract: Embodiments of the present invention provide apparatuses and methods for managing the configuration settings of one or more servers within a business. The configuration management apparatuses and methods generally relate to creating a configuration schema; creating configuration items to assign to the configuration schema; assigning the configuration schema to one or more servers in the business; capturing a snapshot of the configuration settings for a server based on the configuration schema and assigning it to the configuration schema as the reference snapshot; capturing a current snapshot of the actual configuration settings for one or more servers; and comparing the reference snapshot with the current snapshot to determine the differences in the configuration settings. Embodiments of the present invention also allow a report to be generated that displays the differences between the reference snapshot and the current snapshot, such as the new settings, changed settings, and missing settings.
Type: Grant
Filed: October 29, 2010
Date of Patent: January 15, 2013
Assignee: Bank of America Corporation
Inventors: James Charles Montagna, Martin Thomas Gajan, Anthony Keith Stone
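The reference-versus-current comparison amounts to a three-way dictionary diff, sketched here with invented names and snapshots modeled as plain key/value maps:

```python
# Hypothetical sketch of comparing a reference configuration snapshot with
# a current one, reporting new, changed, and missing settings.
def diff_snapshots(reference, current):
    new = {k: v for k, v in current.items() if k not in reference}
    missing = {k: v for k, v in reference.items() if k not in current}
    changed = {k: (reference[k], current[k])
               for k in reference.keys() & current.keys()
               if reference[k] != current[k]}
    return {"new": new, "changed": changed, "missing": missing}
```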
-
Patent number: 8307183
Abstract: A recording and/or reproducing method, a recording and/or reproducing apparatus, and an information storage medium are provided. The method of recording data to an information storage medium includes: according to a change in a method of using the information storage medium, rearranging the order of a first information structure with a variable size and a second information structure with a fixed size, both of which are included in management information of the information storage medium, so that the first information structure with the variable size can be positioned following the second information structure with the fixed size; and recording the rearranged management information on the information storage medium. According to the method and apparatus, recording management information can be found in a fixed location of a finalized information storage medium, thereby allowing the recording management information to be found easily and quickly.
Type: Grant
Filed: November 8, 2007
Date of Patent: November 6, 2012
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sung-hee Hwang, Joon-hwan Kwon
-
Patent number: 8301865
Abstract: A system and method for servicing translation lookaside buffer (TLB) misses may manage separate input and output pipelines within a memory management unit. A pending request queue (PRQ) in the input pipeline may include an instruction-related portion storing entries for instruction TLB (ITLB) misses and a data-related portion storing entries for potential or actual data TLB (DTLB) misses. A DTLB PRQ entry may be allocated to each load/store instruction selected from the pick queue. The system may select an ITLB- or DTLB-related entry for servicing dependent on prior PRQ entry selection(s). A corresponding entry may be held in a translation table entry return queue (TTERQ) in the output pipeline until a matching address translation is received from system memory. PRQ and/or TTERQ entries may be deallocated when a corresponding TLB miss is serviced. PRQ and/or TTERQ entries associated with a thread may be deallocated in response to a thread flush.
Type: Grant
Filed: June 29, 2009
Date of Patent: October 30, 2012
Assignee: Oracle America, Inc.
Inventors: Gregory F. Grohoski, Paul J. Jordan, Mark A. Luttrell, Zeid Hartuon Samoail, Robert T. Golla
-
Publication number: 20120265962
Abstract: A method for data storage includes, in a storage device that communicates with a host over a storage interface for executing a storage command in a memory of the storage device, estimating an expected data under-run between fetching data for the storage command from the memory and sending the data over the storage interface. A data size to be prefetched from the memory, in order to complete uninterrupted execution of the storage command, is calculated in the storage device based on the estimated data under-run. The storage command is executed in the memory while prefetching from the memory data of at least the calculated data size.
Type: Application
Filed: April 5, 2012
Publication date: October 18, 2012
Applicant: ANOBIT TECHNOLOGIES LTD.
Inventor: Arie Peled
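A back-of-the-envelope version of the under-run calculation can be sketched as follows (the rate model and names are invented; the publication does not specify this formula): if the interface drains data faster than the memory supplies it, the shortfall over the command's transmission time is the amount to prefetch up front.

```python
# Hypothetical sketch of sizing a prefetch from an estimated data under-run,
# using a simple constant-rate model (rates in bytes per second).
def prefetch_size(command_bytes, fetch_rate, send_rate):
    """Bytes to prefetch so transmission of the command never stalls."""
    if fetch_rate >= send_rate:
        return 0                          # memory keeps up; no under-run expected
    send_time = command_bytes / send_rate
    shortfall = (send_rate - fetch_rate) * send_time
    return int(shortfall)                 # estimated under-run to cover
```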
-
Patent number: 8291125
Abstract: Systems and methods for a mass storage device attached to a host device use speculation about the host command likely to be received next from the host device, based on a previously received command, to improve throughput of accesses to the mass storage device. Host commands are used to speculatively produce commands for data storage devices of the mass storage device, such that host commands speculated as being likely next can be started during idle time of the data storage devices, based upon the probability that the speculation will be correct some of the time, and otherwise wasted idle time will be more efficiently used. Time taken by the host device to produce successive commands to the mass storage system is monitored, and future speculatively produced commands are parameterized to complete within the observed host time to produce new commands, making more efficient use of the data storage devices.
Type: Grant
Filed: February 16, 2011
Date of Patent: October 16, 2012
Assignee: SMSC Holdings S.a.r.l.
Inventors: Gideon David Intrater, Biao Jia, Teck Huat Kerk, Qing Yun Li
-
Patent number: 8290349
Abstract: The present invention relates to a playback apparatus, a method, and a program which can appropriately perform jump playback when content transmitted through a network is played back in real time. A terminal 3 receives stream data transmitted from a server 1, buffers the stream data, and plays back the buffered data. The terminal 3 has multiple buffers to allow content data of the positions that can be specified as a jump destination during jump playback to be pre-buffered in the multiple buffers. As a result, upon receiving a request for jump playback, the terminal 3 can start playback from a jump destination without delay, since the data of the jump destination has already been buffered. The present invention is applied to, for example, television receivers.
Type: Grant
Filed: June 22, 2007
Date of Patent: October 16, 2012
Assignee: Sony Corporation
Inventor: Kei Matsubayashi
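The pre-buffering scheme can be shown with a small sketch (class name, chunk size, and the list-backed "stream" are invented): alongside the main playback buffer, extra buffers hold data for each position a jump command could target, so a jump serves data immediately instead of fetching it.

```python
# Hypothetical sketch of pre-buffering candidate jump destinations so that
# jump playback can start without waiting on the network.
class JumpPlayer:
    def __init__(self, stream, chunk=4):
        self.stream = stream              # stands in for the network source
        self.chunk = chunk
        self.buffers = {}                 # position -> pre-buffered data

    def prebuffer(self, positions):
        """Fill one buffer per candidate jump destination."""
        for pos in positions:
            self.buffers[pos] = self.stream[pos:pos + self.chunk]

    def jump(self, pos):
        if pos in self.buffers:           # already buffered: playback starts at once
            return self.buffers[pos]
        return self.stream[pos:pos + self.chunk]   # fall back to fetching
```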
-
Patent number: 8285968
Abstract: In computing environments that use virtual addresses (or other indirectly usable addresses) to access memory, the virtual addresses are translated to absolute addresses (or other directly usable addresses) prior to accessing memory. To facilitate memory access, however, address translation is omitted in certain circumstances, including when the data to be accessed is within the same unit of memory as the instruction accessing the data. In this case, the absolute address of the data is derived from the absolute address of the instruction, thus avoiding address translation for the data. Further, in some circumstances, access checking for the data is also omitted.Type: Grant
Filed: September 29, 2009
Date of Patent: October 9, 2012
Assignee: International Business Machines Corporation
Inventors: Viktor S. Gyuris, Ali Sheikh, Kirk A. Stewart
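The same-unit shortcut can be sketched concretely (function names and the page-sized "unit" are assumptions for illustration): when instruction and data fall in the same page, the data's absolute address is the instruction's absolute address plus the virtual-address offset, and the full translation is skipped.

```python
# Hypothetical sketch of deriving a data absolute address from the
# instruction's absolute address when both lie in the same memory unit.
PAGE = 4096  # assumed unit size

def data_absolute_address(insn_virt, insn_abs, data_virt, translate):
    """`translate` is the (slow) fallback virtual-to-absolute translator."""
    if insn_virt // PAGE == data_virt // PAGE:
        # Same unit: reuse the instruction's translation, offset-adjusted.
        return insn_abs + (data_virt - insn_virt)
    return translate(data_virt)           # different unit: full translation
```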
-
Publication number: 20120246440
Abstract: A logical page identity for a logical page containing data storage application data can be mapped to a physical storage page location in a storage where the data of the logical page are stored. The mapping as well as additional page data can be retained within a persistence layer accessible to the data storage application. The additional page data can include at least one of a size of the page and a next page linkage indicating a second page that follows the page in a page sequence of related pages. The retained mapping and additional page data can be retrieved from the persistence layer to initiate a page operation on the related pages, and the page operation can be executed on the related pages based on the retrieved mapping and additional page data. Related methods, systems, and articles of manufacture are also disclosed.
Type: Application
Filed: March 25, 2011
Publication date: September 27, 2012
Inventors: Dirk Thomsen, Ivan Schreter
-
Patent number: 8275598
Abstract: A computer-implemented method, system and computer program product are presented for managing an Effective-to-Real Address Table (ERAT) and a Translation Lookaside Buffer (TLB) during test verification in a simulated densely threaded Network On a Chip (NOC). The ERAT and TLB are stripped out of the computer simulation before executing a test program. When the test program experiences an inevitable ERAT-miss and/or TLB-miss, an interrupt handler walks a page table until the requisite page for re-populating the ERAT and TLB is located.
Type: Grant
Filed: March 2, 2009
Date of Patent: September 25, 2012
Assignee: International Business Machines Corporation
Inventors: Anatoli S. Andreev, Olaf K. Hendrickson, John M. Ludden, Richard D. Peterson, Elena Tsanko
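The miss-and-repopulate path is easy to see in miniature (names invented, with the page table modeled as a flat list of mappings): the translation cache starts empty, and on a miss the handler walks the table, repopulates the cache, and retries.

```python
# Hypothetical sketch of servicing a translation miss by walking a page
# table and repopulating the cache, as in the abstract above.
def lookup(tlb, page_table, virtual_page):
    """Return (frame, was_miss); repopulate `tlb` on a serviced miss."""
    if virtual_page in tlb:               # hit: use the cached translation
        return tlb[virtual_page], False
    # Miss: "walk" the page table (a flat list of (vpage, frame) rows here).
    for vpage, frame in page_table:
        if vpage == virtual_page:
            tlb[virtual_page] = frame     # repopulate the translation cache
            return frame, True
    raise KeyError("page fault: no mapping for page %d" % virtual_page)
```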
-
Patent number: RE45086
Abstract: Computer systems are typically designed with multiple levels of memory hierarchy. Prefetching has been employed to overcome the latency of fetching data or instructions from or to memory. Prefetching works well for data structures with regular memory access patterns, but less so for data structures such as trees, hash tables, and other structures in which the datum that will be used is not known a priori. A system and method is provided that increases the cache hit rates of many important data structure traversals, and thereby the potential throughput of the computer system and application in which it is employed. This is applicable to those data structure accesses in which the traversal path is dynamically determined. This is done by aggregating traversal requests and then pipelining the traversal of aggregated requests on the data structure.
Type: Grant
Filed: January 24, 2007
Date of Patent: August 19, 2014
Assignee: Paonessa Research, Limited Liability Company
Inventor: Dirk Coldewey
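Aggregated, pipelined traversal can be sketched with a batch of binary-search-tree lookups (the function name and dict-based tree are invented): rather than walking one search to completion while stalling on every node, all searches advance in lockstep, one step per round, which is what lets the memory latencies of different traversals overlap on real hardware.

```python
# Hypothetical sketch of aggregating dynamically-determined traversals and
# advancing them in a pipelined, round-robin fashion.
def batched_bst_search(root, keys):
    """Run many BST membership queries concurrently, one step each per round."""
    cursors = {k: root for k in keys}     # one traversal cursor per request
    results = {}
    while cursors:
        for key, node in list(cursors.items()):   # one step per traversal
            if node is None:
                results[key] = False      # ran off the tree: not found
                del cursors[key]
            elif key == node["key"]:
                results[key] = True
                del cursors[key]
            else:
                side = "left" if key < node["key"] else "right"
                cursors[key] = node[side]
    return results
```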