Directories and Tables (e.g., DLAT, TLB) Patents (Class 711/205)
-
Patent number: 7596655
Abstract: A flash storage comprises a flash memory including a plurality of physical memory blocks, each of the physical memory blocks comprising a plurality of memory segments and a plurality of physical sectors, and each of the physical sectors being further provided therein with at least a user data column and a logical address pointer column. When physical data is written into the user data column, logical address pointer data may be written into the logical address pointer column of the same physical sector under the control of a micro-controller. Furthermore, the logical address pointer data in the same memory segment are arranged into a backup memory segment address mapping table and then stored in one physical memory block. The backup memory segment address mapping table may be loaded directly and stored into a registered memory by the micro-controller when the system boots.
Type: Grant
Filed: March 7, 2006
Date of Patent: September 29, 2009
Assignee: Prolific Technology Inc.
Inventors: Yu-Hsien Wang, Chanson Lin, Tung-Hsien Wu, Chien-Chang Su, Gow-Jeng Lin, Ching-Chung Hsu, Kuang-Yuan Chen
-
Patent number: 7577816
Abstract: The present invention provides a method of initializing shared memory in a multinode system. The method includes building a local address space in each of a plurality of nodes and exporting the local address space from each of the plurality of nodes to a Remote Translation Table (RTT) in each of the plurality of nodes. The present invention further provides a system including a plurality of nodes, each node having one or more processors and a memory controller operatively coupled to the one or more processors, wherein the memory controller includes an RTT for holding translation information for an entire virtual memory address space for the node, and wherein the RTT is initialized upon the start of a process by building a local address space in the node and exporting the local address space from the node to an RTT in each of the plurality of other nodes.
Type: Grant
Filed: August 18, 2003
Date of Patent: August 18, 2009
Assignee: Cray Inc.
Inventors: Kitrick Sheets, Andrew B. Hastings
-
Publication number: 20090204785
Abstract: A computer. A processor pipeline alternately executes instructions coded for first and second different computer architectures or coded to implement first and second different processing conventions. A memory stores instructions for execution by the processor pipeline, the memory being divided into pages for management by a virtual memory manager, a single address space of the memory having first and second pages. A memory unit fetches instructions from the memory for execution by the pipeline, and fetches stored indicator elements associated with respective memory pages of the single address space from which the instructions are to be fetched. Each indicator element is designed to store an indication of which of two different computer architectures and/or execution conventions under which instruction data of the associated page are to be executed by the processor pipeline.
Type: Application
Filed: October 31, 2007
Publication date: August 13, 2009
Inventors: John S. Yates, Jr., David L. Reese, Korbin S. Van Dyke, T. R. Ramesh, Paul H. Hohensee
-
Publication number: 20090198950
Abstract: A processor includes a first address translation engine, a second address translation engine, and a prefetch engine. The first address translation engine is configured to determine a first memory address of a pointer associated with a data prefetch instruction. The prefetch engine is coupled to the first translation engine and is configured to fetch content, included in a first data block (e.g., a first cache line) of a memory, at the first memory address. The second address translation engine is coupled to the prefetch engine and is configured to determine a second memory address based on the content of the memory at the first memory address. The prefetch engine is also configured to fetch (e.g., from the memory or another memory) a second data block (e.g., a second cache line) that includes data at the second memory address.
Type: Application
Filed: February 1, 2008
Publication date: August 6, 2009
Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
-
Publication number: 20090187727
Abstract: Embodiments of the present invention provide a system that generates an index for a cache memory. The system starts by receiving a request to access the cache memory, wherein the request includes address information. The system then obtains non-address information associated with the request. Next, the system generates the index using the address information and the non-address information. The system then uses the index to access the cache memory.
Type: Application
Filed: January 23, 2008
Publication date: July 23, 2009
Applicant: SUN MICROSYSTEMS, INC.
Inventors: Paul Caprioli, Martin Karlsson, Shailender Chaudhry
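A toy sketch of combining address and non-address information into a cache index, assuming the non-address information is a small integer such as a thread or context id. The mixing constant, set count, and line size are arbitrary choices for illustration, not taken from the publication.

```python
def cache_index(address, non_address, sets=256, line_bytes=64):
    # Drop the byte-offset bits of the address, then mix in the
    # non-address information (e.g. a thread or context id) so that
    # both inputs contribute to the chosen cache set.
    set_bits = (address // line_bytes) % sets
    return (set_bits ^ (non_address * 0x9E37)) % sets

# Same inputs always map to the same set; different non-address
# information can steer the same address to a different set.
same = cache_index(0x1234, 1) == cache_index(0x1234, 1)
```

One motivation for such a scheme is reducing conflict misses between contexts that happen to use the same addresses.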
-
Patent number: 7558939
Abstract: A three-tiered TLB architecture in a multithreading processor that concurrently executes multiple instruction threads is provided. A macro-TLB caches address translation information for memory pages for all the threads. A micro-TLB caches the translation information for a subset of the memory pages cached in the macro-TLB. A respective nano-TLB for each of the threads caches translation information only for the respective thread. The nano-TLBs also include replacement information to indicate which entries in the nano-TLB/micro-TLB hold recently used translation information for the respective thread. Based on the replacement information, recently used information is copied to the nano-TLB if evicted from the micro-TLB.
Type: Grant
Filed: March 8, 2005
Date of Patent: July 7, 2009
Assignee: MIPS Technologies, Inc.
Inventors: Soumya Banerjee, Michael Gottlieb Jensen, Ryan C. Kinter
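A simplified software model of the nano/micro/macro lookup order described above. The dictionary representation and the fill policy are assumptions for illustration; the patent's replacement-hint machinery and capacity limits are omitted.

```python
class TieredTLB:
    # Nano-TLBs are per-thread; the micro- and macro-TLBs are shared.
    def __init__(self, num_threads):
        self.nano = {t: {} for t in range(num_threads)}
        self.micro = {}
        self.macro = {}

    def lookup(self, thread, vpn):
        # Probe the smallest, closest structure first, falling back
        # through the larger shared tiers.
        for level in (self.nano[thread], self.micro, self.macro):
            if vpn in level:
                return level[vpn]
        return None   # TLB miss: a page-table walk would follow

    def fill(self, thread, vpn, pfn):
        # On a miss, install the translation at every tier.
        self.macro[vpn] = pfn
        self.micro[vpn] = pfn
        self.nano[thread][vpn] = pfn

tlb = TieredTLB(num_threads=2)
tlb.fill(0, vpn=0x40, pfn=0x9000)
```

A translation installed by one thread remains visible to others through the shared tiers, while each thread's hottest entries stay in its private nano-TLB.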
-
Patent number: 7558911
Abstract: Processor-based systems may use more than one operating system and may have disk drives which are cached. Systems which include a write-back cache and a disk drive may develop incoherent data when operating systems are changed or when disk drives are removed. Scrambling a partition table on a disk drive and storing cache identification information may improve data coherency in a processor-based system.
Type: Grant
Filed: December 18, 2003
Date of Patent: July 7, 2009
Assignee: Intel Corporation
Inventors: John I. Garney, Robert J. Royer, Jr., Jeanna N. Matthews, Kirk D. Brannock
-
Patent number: 7552275
Abstract: In a packet switching device or system, such as a router, switch, combination router/switch, or component thereof, a method of and system for performing a table lookup operation using a lookup table index that exceeds a CAM key size is provided. Multiple CAM accesses are performed, each using a CAM key derived from a subset of the lookup table index, resulting in one or more CAM entries. One or more matching table entries are derived from the one or more CAM entries resulting from the multiple CAM accesses.
Type: Grant
Filed: April 3, 2006
Date of Patent: June 23, 2009
Assignee: Extreme Networks, Inc.
Inventor: Ram Krishnan
-
Patent number: 7552255
Abstract: In one embodiment of the present invention, a method includes invalidating an entry of a filter coupled to a pipeline resource if an update to the entry occurs during a first context; and flushing a portion of the pipeline resource corresponding to an address space including the entry.
Type: Grant
Filed: July 30, 2003
Date of Patent: June 23, 2009
Assignee: Intel Corporation
Inventors: Robert T. George, Jason W. Brandt, K. S. Venkatraman, Sangwook P. Kim
-
Patent number: 7552254
Abstract: In one embodiment of the present invention, an apparatus includes a pipeline resource having different address spaces each corresponding to a different address space identifier. Each address space may have entries that include data values associated with the address space identifier.
Type: Grant
Filed: July 30, 2003
Date of Patent: June 23, 2009
Assignee: Intel Corporation
Inventors: Robert T. George, Jason W. Brandt, Jonathan D. Combs, Peter J. Ruscito, Sanjoy K. Mondal
-
Patent number: 7543133
Abstract: A computer system having low memory access latency. In one embodiment, the computer system includes a network and one or more processing nodes connected via the network, wherein each processing node includes a plurality of processors and a shared memory connected to each of the processors. The shared memory includes a cache. Each processor includes a scalar processing unit, a vector processing unit and means for operating the scalar processing unit independently of the vector processing unit. Processors on one node can load data directly from and store data directly to shared memory on another processing node via the network.
Type: Grant
Filed: August 18, 2003
Date of Patent: June 2, 2009
Assignee: Cray Inc.
Inventor: Steven L. Scott
-
Patent number: 7543291
Abstract: A processor purging system comprising a translation lookaside buffer (TLB) having a plurality of translation pairs, at least one memory cache, and logic configured to detect whether at least one of the translation pairs corresponds to a purge signal. The logic is further configured to assert a purge detection signal indicative of whether at least one translation pair corresponds to the purge signal and to determine, based upon the purge detection signal, whether to search the memory cache for a translation pair corresponding to the purge signal.
Type: Grant
Filed: August 1, 2003
Date of Patent: June 2, 2009
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Gregg Bernard Lesartr, Douglas Shelborn Stirrett
-
Patent number: 7543131
Abstract: In an embodiment, a computer system comprises a processor; a memory management module comprising a plurality of instructions executable on the processor; a memory coupled to the processor; and an input/output memory management unit (IOMMU) coupled to the memory. The IOMMU is configured to implement address translation and memory protection for memory operations sourced by one or more input/output (I/O) devices. The memory stores a command queue during use. The memory management module is configured to write one or more control commands to the command queue, and the IOMMU is configured to read the control commands from the command queue and execute the control commands.
Type: Grant
Filed: August 11, 2006
Date of Patent: June 2, 2009
Assignee: Advanced Micro Devices, Inc.
Inventors: Mark D. Hummel, Andrew W. Lueck, Geoffrey S. Strongin, Mitchell Alsup, Michael J. Haertel
-
Patent number: 7543132
Abstract: A method and apparatus for improved performance for reloading translation look-aside buffers in multithreading, multi-core processors. TSB prediction is accomplished by hashing a plurality of data parameters and generating an index that is provided as an input to a predictor array to predict the TSB page size. In one embodiment of the invention, the predictor array comprises two-bit saturating up-down counters that are used to enhance the accuracy of the TSB prediction. The saturating up-down counters are configured to avoid making rapid changes in the TSB prediction upon detection of an error. Multiple misses occur before the prediction output is changed. The page size specified by the predictor index is searched first. Using the technique described herein, errors are minimized because the counter leads to the correct result at least half the time.
Type: Grant
Filed: June 30, 2004
Date of Patent: June 2, 2009
Assignee: Sun Microsystems, Inc.
Inventors: Greg F. Grohoski, Ashley Saulsbury, Paul J. Jordan, Manish Shah, Rabin A. Sugumar, Mark Debbage, Venkatesh Iyengar
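The two-bit saturating up-down counter behavior can be sketched as follows. The hash-based index and the two page-size labels are illustrative assumptions; the key property, as the abstract states, is that a confident counter must miss more than once before its prediction flips.

```python
class TSBPredictor:
    # Counter states 0-1 predict 'small'; states 2-3 predict 'large'.
    # Because the counters saturate, a single misprediction does not
    # flip a confident prediction; two consecutive misses are needed.
    def __init__(self, entries=64):
        self.counters = [0] * entries   # start strongly predicting 'small'

    def _index(self, *params):
        # Stand-in for hashing the data parameters into the array.
        return hash(params) % len(self.counters)

    def predict(self, *params):
        return 'large' if self.counters[self._index(*params)] >= 2 else 'small'

    def update(self, actual, *params):
        i = self._index(*params)
        if actual == 'large':
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

p = TSBPredictor()
first = p.predict(7, 1)        # initially predicts 'small'
p.update('large', 7, 1)        # one miss: prediction unchanged
second = p.predict(7, 1)
p.update('large', 7, 1)        # second miss: prediction flips
third = p.predict(7, 1)
```

This damping is why the abstract notes that "multiple misses occur before the prediction output is changed".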
-
Patent number: 7536521
Abstract: A disk drive or similar storage medium uses a semantic understanding of its associated file system to monitor file metadata and derive block liveness normally only known by the file system. Knowledge of block liveness can be used to improve the disk performance and to create a disk that provides for secure deletion without explicit instructions from the file system.
Type: Grant
Filed: September 27, 2006
Date of Patent: May 19, 2009
Assignee: Wisconsin Alumni Research Foundation
Inventors: Muthian Sivathanu, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau
-
Patent number: 7536530
Abstract: A system and method for a processor to determine a memory page management implementation used by a memory controller without necessarily having direct access to the circuits or registers of the memory controller is disclosed. In one embodiment, a matrix of counters corresponds to potential page management implementations and numbers of pages per block. The counters may be incremented or decremented depending upon whether the corresponding page management implementations and numbers of pages predict a page boundary whenever a long access latency is observed. The counter with the largest value after a period of time may correspond to the actual page management implementation and number of pages per block.
Type: Grant
Filed: December 30, 2005
Date of Patent: May 19, 2009
Assignee: Intel Corporation
Inventors: Eric A. Sprangle, Anwar Q. Rohillah
-
Patent number: 7533220
Abstract: A microprocessor coupled to a system memory has a memory subsystem with a translation look-aside buffer (TLB) for storing TLB information. The microprocessor also includes an instruction decode unit that decodes an instruction that specifies a data stream in the system memory and an abnormal TLB access policy. The microprocessor also includes a stream prefetch unit that generates a prefetch request to the memory subsystem to prefetch a cache line of the data stream from the system memory into the memory subsystem. If a virtual page address of the prefetch request causes an abnormal TLB access, the memory subsystem selectively aborts the prefetch request based on the abnormal TLB access policy specified in the instruction.
Type: Grant
Filed: August 11, 2006
Date of Patent: May 12, 2009
Assignee: MIPS Technologies, Inc.
Inventor: Keith E. Diefendorff
-
Patent number: 7533224
Abstract: A processing device, processing method, and information recording medium manage copyright and utilization control of each of fragmented data of contents stored on the recording medium. The information recording medium has contents of a utilization management object recorded thereon. Main contents having a data format which complies with a particular audio visual format, and sub-contents having another data format which does not comply with the audio visual format, are stored as recording data on the information recording medium. Configuration data of the main contents and the sub-contents are set as contents management units, and the data included in the contents management units are stored as encrypted data, encrypted with individual unit keys individually corresponding to the contents management units.
Type: Grant
Filed: November 8, 2004
Date of Patent: May 12, 2009
Assignee: Sony Corporation
Inventor: Yoshikazu Takashima
-
Patent number: 7526627
Abstract: In the present invention, memory resources are effectively utilized by virtualizing external memory resources as internal memory resources, and erroneous operations that destroy the cooperative relationship of these memory resources are prevented in advance. An external storage 2 is connected to a main storage 1, and real volumes 2A1 and 2A2 are respectively mapped into virtual volumes 1B1 and 1B2 (S1). The control server 3 respectively acquires construction information 1D and 2D for the respective storages 1 and 2, and stores and controls this construction information in a storage part 3D (S2). When the user inputs operation contents 4A from a control terminal 4 (S3), the control server 3 judges whether or not the operation contents 4A have an effect on the cooperation between the storages 1 and 2 on the basis of the operation contents 4A and the information in the storage part 3D (S4), and performs only operations that do not have an effect (S5).
Type: Grant
Filed: January 5, 2005
Date of Patent: April 28, 2009
Assignee: Hitachi, Ltd.
Inventor: Akitatsu Harada
-
Publication number: 20090106523
Abstract: Multiple pipelined Translation Look-aside Buffer (TLB) units are configured to compare a translation address with associated TLB entries. The TLB units operate in serial order, comparing the translation address with associated TLB entries until an identified one of the TLB units produces a hit. The TLB units following the TLB unit producing the hit might be disabled.
Type: Application
Filed: October 18, 2007
Publication date: April 23, 2009
Applicant: CISCO TECHNOLOGY INC.
Inventor: Donald E. Steiss
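A behavioral sketch of the serial probe order, with the list position standing in for a unit's place in the pipeline. The `probed` list models which units were enabled; everything after a hit is skipped, which is the disable opportunity the abstract mentions. Names and the dictionary representation are illustrative.

```python
def serial_tlb_lookup(tlb_units, vaddr):
    # Probe each TLB unit in pipeline order. Once a unit hits, the
    # remaining units are never consulted (modeling their disable).
    probed = []
    for i, unit in enumerate(tlb_units):
        probed.append(i)
        if vaddr in unit:
            return unit[vaddr], probed
    return None, probed   # miss in every unit

units = [{0x1000: 0xA000}, {0x2000: 0xB000}, {0x3000: 0xC000}]
hit_paddr, hit_probes = serial_tlb_lookup(units, 0x2000)
miss_paddr, miss_probes = serial_tlb_lookup(units, 0x4000)
```

Skipping the downstream units on a hit is what saves power relative to probing all units in parallel.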
-
Patent number: 7516279
Abstract: Computer implemented method, system and computer program product for prefetching data in a data processing system. A computer implemented method for prefetching data in a data processing system includes generating attribute information of prior data streams by associating attributes of each prior data stream with a storage access instruction which caused allocation of the data stream, and then recording the generated attribute information. The recorded attribute information is accessed, and a behavior of a new data stream is modified using the accessed recorded attribute information.
Type: Grant
Filed: February 28, 2006
Date of Patent: April 7, 2009
Assignee: International Business Machines Corporation
Inventors: John Barry Griswell, Jr., Francis Patrick O'Connell
-
Patent number: 7516282
Abstract: A control device for a memory is provided. The control device includes a micro-control unit (MCU), a command queue, a command sequencer, and a table. The control device is coupled to the memory and is used for controlling the memory to execute an operation. The MCU outputs a control signal according to the operation. The command sequencer sequentially stores command sets required by the execution of the operation according to the control signal, and each command set includes plural commands. The command queue sequentially stores command set contents according to the order of the corresponding command sets. The table stores a target address of the memory required by the execution of the operation.
Type: Grant
Filed: September 8, 2006
Date of Patent: April 7, 2009
Assignee: ITE Tech. Inc.
Inventors: Ming-Hsun Sung, Yu-Lin Hsieh
-
Patent number: 7516297
Abstract: Systems, methods, and devices are provided for memory management. One method embodiment includes providing an operating system capable of supporting variable page sizes. The method includes providing a virtual memory address, translating the virtual memory address to a virtual memory page, and mapping the virtual memory page to a physical memory page by using a multilevel page table whose depth and/or order corresponds to page sizes that are supported by an operating system and/or hardware.
Type: Grant
Filed: November 10, 2005
Date of Patent: April 7, 2009
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Clifford J. Mather
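One way to picture a walk whose depth corresponds to page size: a first-level entry may itself map a large page (a shallow walk), or point to a second-level table of small pages (a deeper walk). The 4 KiB / 2 MiB sizes and the dictionary representation are illustrative assumptions, not the patent's specific scheme.

```python
SMALL = 4096            # 4 KiB pages mapped at the deepest level
LARGE = 4096 * 512      # 2 MiB pages mapped by a level-1 leaf

def translate(root, vaddr):
    entry = root.get(vaddr // LARGE)
    if entry is None:
        return None                        # unmapped at level 1
    if isinstance(entry, int):
        return entry + vaddr % LARGE       # level-1 leaf: large page
    frame = entry.get((vaddr % LARGE) // SMALL)
    if frame is None:
        return None                        # unmapped at level 2
    return frame + vaddr % SMALL           # level-2 leaf: small page

root = {0: 0x10000000,                   # slot 0: one large page
        1: {3: 0x5000}}                  # slot 1: one small page
```

Large pages resolve in fewer levels, which is exactly why tying table depth to supported page sizes shortens translation for big mappings.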
-
Patent number: 7509476
Abstract: Advanced processors for executing software applications on different operating systems are presented including: a number of processor cores each configured to execute multiple threads, wherein each of the number of processor cores includes a data cache and an instruction cache; a data switch interconnect ring arrangement directly coupled with the data cache of each of the number of processor cores and configured to pass memory related information among the number of processor cores; a messaging network directly coupled with the instruction cache of each of the number of processor cores and a number of communication ports; and a memory management unit (MMU) coupled with each of the number of processor cores, the MMU having a first translation-lookaside buffer (TLB) portion, a second TLB portion, and a third TLB portion, wherein each TLB portion is operable in several modes, and wherein each TLB portion includes a number of entries.
Type: Grant
Filed: February 8, 2007
Date of Patent: March 24, 2009
Assignee: RMI Corporation
Inventors: David T. Hass, Basab Mukherjee
-
Patent number: 7506132
Abstract: A system, method, and computer readable medium for protecting content of a memory page are disclosed. The method includes determining a start of a semi-synchronous memory copy operation. A range of addresses is determined where the semi-synchronous memory copy operation is being performed. An issued instruction that removes a page table entry is detected. The method further includes determining whether the issued instruction is destined to remove a page table entry associated with at least one address in the range of addresses. In response to the issued instruction being destined to remove the page table entry, the execution of the issued instruction is stalled until the semi-synchronous memory copy operation is completed.
Type: Grant
Filed: December 22, 2005
Date of Patent: March 17, 2009
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Rama K. Govindaraju, Peter H. Hochschild, Bruce G. Mealey, Satya P. Sharma, Balaram Sinharoy
-
Patent number: 7506128
Abstract: An integrated circuit (IC) module allows volatile data generated by applications to be stored within volatile data files in the volatile memory. A file system tracks the location of all data files as residing in either volatile memory or nonvolatile memory and facilitates access to the volatile data files in volatile memory in a similar manner to accessing nonvolatile data files in nonvolatile memory. The file system exposes a set of application program interfaces (APIs) to allow applications to access the data files. The same APIs are used to access both volatile data files and nonvolatile data files. When an application requests access to a data file, the file system initially determines whether the application is authorized to gain access to the data file. If it is, the file system next determines whether the data file resides in volatile memory or nonvolatile memory. Once the memory region is identified, the file system identifies the physical location of the data file.
Type: Grant
Filed: March 31, 2005
Date of Patent: March 17, 2009
Assignee: Microsoft Corporation
Inventors: Vinay Deo, Mihai Costea, Mahesh Sharad Lotlikar, Tak Chung Lung, David Milstein, Gilad Odinak
-
Publication number: 20090070545
Abstract: A processing system includes memory management software responsive to changes in a page table to consolidate a run of contiguous page table entries into a page table entry having a larger memory page size. The memory management software determines whether the run of contiguous page table entries may be cached using the larger memory page size in an entry of a translation lookaside buffer. The translation lookaside buffer may be a MIPS-like TLB in which multiple page table entries are cached in each TLB entry.
Type: Application
Filed: September 11, 2007
Publication date: March 12, 2009
Inventor: Brian Stecher
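A sketch of the consolidation check, under the common assumption that a run qualifies for a larger page only when its physical frames are contiguous and the base frame is aligned to the larger page size. The run length, page sizes, and dictionary layout are illustrative, not the publication's field layout.

```python
def consolidate(ptes, run_length, small_page=4096):
    # ptes maps virtual page number -> physical frame address for
    # small pages. Each aligned run of `run_length` entries whose
    # frames are physically contiguous and whose base frame is
    # aligned to the larger page size collapses to one large entry.
    large = {}
    large_page = small_page * run_length
    for vpn in range(0, len(ptes), run_length):
        run = [ptes[v] for v in range(vpn, vpn + run_length)]
        base = run[0]
        contiguous = all(run[i] == base + i * small_page
                         for i in range(run_length))
        if contiguous and base % large_page == 0:
            large[vpn] = base
    return large

# Eight contiguous 4 KiB frames starting at an aligned base collapse
# into two 16 KiB entries:
small_ptes = {v: 0x8000 + v * 4096 for v in range(8)}
big = consolidate(small_ptes, run_length=4)
```

Consolidation matters for MIPS-like TLBs precisely because each TLB entry is scarce: mapping a run with one large-page entry frees entries for other translations.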
-
Patent number: 7502872
Abstract: The present invention provides a method that enables application instances to pass block mode storage requests directly to a physical I/O adapter without run-time involvement from the local operating system or hypervisor. Specifically, a mechanism for providing and using a linear block address (LBA) translation protection table (TPT) to control out of user space I/O operations is provided. In one aspect of the present invention, the LBA TPT includes an adapter protection table that has entries for each portion of a storage device. Entries include access control values which identify whether the entry is valid and what access type operations may be performed on a corresponding portion of a storage device. I/O requests may be checked against these access control values to determine if an application instance that submitted the I/O requests may access the LBAs identified in the I/O requests in the manner requested.
Type: Grant
Filed: May 23, 2005
Date of Patent: March 10, 2009
Assignee: International Business Machines Corporation
Inventors: William Todd Boyd, John Lewis Hufferd, Agustin Mena, III, Renato John Recio, Madeline Vega
-
Patent number: 7499834
Abstract: A storage area network (SAN) management application generates device allocation reports displaying foundation variables, device specific parameters, and computed, derived fields for different types of storage arrays, without burdening the allocation report with extraneous parameters, through the use of a layout indicative of the information included on the report, providing a streamlined and seamless allocation report. The SAN management application defines a layout indicative of the foundation variables, device attributes, and derived fields requested in an allocation report. The user selected layout indicates the requested allocation parameters for a report, indicative of the foundation variables, device attributes, and derived fields, and also indicates the device usage metrics for computing the derived fields from the foundation variables and device attributes.
Type: Grant
Filed: September 30, 2004
Date of Patent: March 3, 2009
Assignee: EMC Corporation
Inventors: Anuradha Shivnath, Paul J. Timmins, Christopher A. Chaulk, Serge Marokhovsky, Viren Pherwani
-
Patent number: 7500067
Abstract: The present disclosure describes systems and methods for allocating memory in a multiprocessor computer system such as a non-uniform memory access (NUMA) machine having distributed shared memory. The systems and methods include allocating memory to input-output devices (I/O devices) based at least in part on which memory resource is physically closest to a particular I/O device. Through these systems and methods memory is allocated more efficiently in a NUMA machine. For example, allocating memory to an I/O device that is on the same node as a memory resource reduces memory access time, thereby maximizing data transmission. The present disclosure further describes a system and method for improving performance in a multiprocessor computer system by utilizing a pre-programmed device affinity table. The system and method includes listing the memory resources physically closest to each I/O device and accessing the device table to determine the closest memory resource to a particular I/O device.
Type: Grant
Filed: March 29, 2006
Date of Patent: March 3, 2009
Assignee: Dell Products L.P.
Inventors: Madhusudhan Rangarajan, Vijay B. Nijhawan
-
Patent number: 7493465
Abstract: A computer system having a kernel for mapping virtual memory address space to physical memory address space. The computer system uses a method for performing an input/output operation. A physical memory buffer is registered with a subsystem, and the physical memory buffer is associated with a first virtual address, a size and a key. The physical memory buffer is dynamically associated with a second virtual address which is different from the first virtual address. As part of an application program an input/output operation is requested regarding the second virtual address. An application table is used to obtain the first virtual address, the key and the size. The first virtual address, the key and the size are supplied to the subsystem. The subsystem uses the first virtual address, the key and the size, to determine the physical memory buffer and performs an input/output operation using the physical memory buffer without intervention of the kernel.
Type: Grant
Filed: May 17, 2004
Date of Patent: February 17, 2009
Assignee: Oracle International Corporation
Inventors: Margaret S. Susairaj, Waleed Ojeil, Peter Ogilvie, Richard Frank, Ravi Thammiah
-
Publication number: 20090043985
Abstract: A data processing device employs a first translation look-aside buffer (TLB) to translate virtual addresses to physical addresses. If a virtual address to be translated is not located in the first TLB, the physical address is requested from a set of page tables. When the data processing device is in a hypervisor mode, a second TLB is accessed in response to the request to access the page tables. If the virtual address is located in the second TLB, the hypervisor page tables are bypassed and the second TLB provides a physical address or information to access another table in the set of page tables. By bypassing the hypervisor page tables, the time to translate an address in the hypervisor mode is reduced, thereby improving the efficiency of the data processing device.
Type: Application
Filed: August 6, 2007
Publication date: February 12, 2009
Applicant: ADVANCED MICRO DEVICES, INC.
Inventors: Michael Edward Tuuk, Michael Clark
-
Patent number: 7490216
Abstract: A virtual memory system implementing the invention provides concurrent access to translations for virtual addresses from multiple address spaces. One embodiment of the invention is implemented in a virtual computer system, in which a virtual machine monitor supports a virtual machine. In this embodiment, the invention provides concurrent access to translations for virtual addresses from the respective address spaces of both the virtual machine monitor and the virtual machine. Multiple page tables contain the translations for the multiple address spaces. Information about an operating state of the computer system, as well as an address space identifier, are used to determine whether, and under what circumstances, an attempted memory access is permissible. If the attempted memory access is permissible, the address space identifier is also used to determine which of the multiple page tables contains the translation for the attempted memory access.
Type: Grant
Filed: September 14, 2006
Date of Patent: February 10, 2009
Assignee: VMware, Inc.
Inventors: Xiaoxin Chen, Alberto J. Munoz
-
Patent number: 7490200
Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first cache directory to access the first cache array slice while using a second cache directory to access the second cache array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In the illustrative embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. An address tag associated with a load request is transmitted from the processor core with a designated bit that associates the address tag with only one of the cache array slices whose corresponding directory determines whether the address tag matches a currently valid cache entry.
Type: Grant
Filed: February 10, 2005
Date of Patent: February 10, 2009
Assignee: International Business Machines Corporation
Inventors: Leo James Clark, James Stephen Fields, Jr., Guy Lynn Guthrie, William John Starke
-
Patent number: 7487303
Abstract: A memory system comprises a flash memory and a controller comprising a control logic circuit and a working memory storing a flash translation layer. The memory system performs a merge operation by selectively copying a page from a first block of the flash memory to a second block of the flash memory. Where the page is valid and marked as allocated according to a file allocation table stored in the flash memory, the page is copied to the second block. However, where the page is valid and marked as deleted in the file allocation table, the page is not copied to the second block.
Type: Grant
Filed: December 29, 2005
Date of Patent: February 3, 2009
Assignee: Samsung Electronics Co., Ltd.
Inventors: Dong-Hyun Song, Chan-Ik Park, Sang-Ryul Min
-
Patent number: 7484073
Abstract: Tagged translation lookaside buffer consistency is enabled in the presence of a hypervisor of a virtual machine computing environment, in which multiple processes of multiple logical processors of guests are hosted by a virtual machine monitor or hypervisor component. The virtual machine monitor or hypervisor component maintains tagged TLB data associated with the plurality of processes on behalf of each of the plurality of logical processors, thereby ensuring consistency of the tagged TLB data across all of the plurality of processes.
Type: Grant
Filed: July 12, 2006
Date of Patent: January 27, 2009
Assignee: Microsoft Corporation
Inventors: Ernest S. Cohen, Matthew D. Hendel
-
Patent number: 7483905
Abstract: A method for accessing a database is provided. The method includes creating in a design environment a file that defines metadata. The metadata relates at least one business object and at least one query. The method also includes communicating the file to a mobile device, storing the file on the mobile device, and transforming the file into a binary structure at an initial run of a computer application running on the mobile device. The binary structure is adapted to be read by the computer application. The method also includes recording the binary structure in a memory of the mobile device. A method for providing database access for a plurality of files with a limited number of database access channels is provided. A method is also provided for accessing a database in a computing environment for a plurality of recordsets. Each of the plurality of recordsets is associated with a database access channel for fetching records of the plurality of recordsets from the database upon occurrence of a preselected event.
Type: Grant
Filed: September 25, 2003
Date of Patent: January 27, 2009
Assignee: SAP AG
Inventor: Thomas Gauweiler
-
Patent number: 7480769
Abstract: A microprocessor coupled to a system memory includes a load request signal that requests data be loaded from the system memory into the microprocessor in response to a load instruction. The load request signal includes a load virtual page address. The microprocessor also includes a prefetch request signal that requests a cache line be prefetched from the system memory into the microprocessor in response to a prefetch instruction. The prefetch request signal includes a prefetch virtual page address.
Type: Grant
Filed: August 11, 2006
Date of Patent: January 20, 2009
Assignee: MIPS Technologies, Inc.
Inventors: Keith E. Diefendorff, Thomas A. Petersen
-
Patent number: 7475219
Abstract: In one embodiment, the present invention includes a method of accessing a cache memory to determine whether requested data is present. In this embodiment, the method may include indexing a cache with a first index corresponding to a first memory region size, and indexing the cache with a second index corresponding to a second memory region size. The second index may be used if the requested data is not found using the first index.
Type: Grant
Filed: August 27, 2004
Date of Patent: January 6, 2009
Assignee: Marvell International Ltd.
Inventors: Dennis M. O'Connor, Stephen J. Strazdus
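The two-granularity lookup described above can be sketched as a probe at a small region size followed, on a miss, by a re-probe at a larger region size. The region sizes, set count, and key layout below are all illustrative assumptions:

```python
# Illustrative dual-index cache lookup: probe first with an index
# derived from a small (page-sized) region, then re-probe with an
# index derived from a larger (superpage-sized) region on a miss.

SMALL_REGION = 4096             # e.g. 4 KiB pages (assumed)
LARGE_REGION = 4 * 1024 * 1024  # e.g. 4 MiB superpages (assumed)
NUM_SETS = 256

def index_for(addr, region_size):
    # Drop the offset within the region, then fold into the set count.
    return (addr // region_size) % NUM_SETS

def lookup(cache, addr):
    """cache maps (region, set_index, tag) -> data; returns data or None."""
    for region in (SMALL_REGION, LARGE_REGION):
        idx = index_for(addr, region)
        tag = addr // region
        hit = cache.get((region, idx, tag))
        if hit is not None:
            return hit      # second index is tried only after a miss
    return None
```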
-
Patent number: 7467282
Abstract: A file system migrates a traditional volume to a virtual volume without data copying. In an embodiment, a traditional volume index node is selected for migration. The traditional volume index node is converted to a virtual volume index node. In one embodiment, the virtual volume index node provides both physical address information and virtual address information.
Type: Grant
Filed: April 5, 2005
Date of Patent: December 16, 2008
Assignee: Network Appliance, Inc.
Inventors: Sriram Rao, John Edwards, Douglas P. Doucette, Cheryl Thompson
-
Patent number: 7464198
Abstract: A method is provided for programming a DMA controller in a system on a chip. According to the method, a memory management unit translates a programming virtual address into a programming physical address according to a translation table. A first sub-block without discontinuity, beginning at the programming physical address and ending at an end address equal to the physical address immediately preceding a first discontinuity, is formed, with the first discontinuity being determined by a discontinuity module according to information supplied by the memory management unit. Some of the programming elements intended for the DMA controller are defined according to the first identified sub-block. Also provided is a system on a chip.
Type: Grant
Filed: July 22, 2005
Date of Patent: December 9, 2008
Assignee: STMicroelectronics SA
Inventors: Albert Martinez, M. William Orlando
-
Patent number: 7454590
Abstract: In one embodiment, a processor comprises a plurality of processor cores and an interconnect to which the plurality of processor cores are coupled. Each of the plurality of processor cores comprises at least one translation lookaside buffer (TLB). A first processor core is configured to broadcast a demap command on the interconnect responsive to executing a demap operation. The demap command identifies one or more translations to be invalidated in the TLBs, and remaining processor cores are configured to invalidate the translations in the respective TLBs. The remaining processor cores transmit a response to the first processor core, and the first processor core is configured to delay continued processing subsequent to the demap operation until the responses are received from each of the remaining processor cores.
Type: Grant
Filed: September 9, 2005
Date of Patent: November 18, 2008
Assignee: Sun Microsystems, Inc.
Inventors: Paul J. Jordan, Manish K. Shah, Gregory F. Grohoski
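The broadcast-demap handshake above (invalidate everywhere, then stall until every remote core acknowledges) can be sketched as a serial simulation. The class and method names are invented for illustration; a real implementation would use the hardware interconnect, not method calls:

```python
# Serial sketch of a TLB demap broadcast: the initiating core sends a
# demap naming the translations to drop; every other core invalidates
# its own TLB and acks; the initiator may not proceed until all acks
# have arrived.

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.tlb = {}          # virtual page -> physical page

    def handle_demap(self, vpages):
        for vp in vpages:
            self.tlb.pop(vp, None)     # invalidate if present
        return ("ack", self.core_id)

def demap_broadcast(initiator, cores, vpages):
    initiator.handle_demap(vpages)     # local invalidation first
    acks = [c.handle_demap(vpages) for c in cores if c is not initiator]
    # Modeling the stall: continued processing requires every response.
    assert len(acks) == len(cores) - 1
```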
-
Patent number: 7447868
Abstract: Typical embodiments of the present invention maintain the cache metadata in arrays, and use vector instructions to process the array elements in parallel. The cache metadata comprises virtual tags corresponding to main memory addresses and physical addresses corresponding to cache memory addresses. The virtual tags and physical addresses may be interleaved in a single array in the cache memory. Alternately, virtual tags and physical addresses may be maintained in corresponding separate arrays. A roving pointer may be used to identify the next block to be ejected from the cache memory.
Type: Grant
Filed: June 15, 2005
Date of Patent: November 4, 2008
Assignee: International Business Machines Corporation
Inventor: Paul E. McKenney
-
Patent number: 7447869
Abstract: A method and apparatus for fragment processing in a virtual memory system are described. Embodiments of the invention include a coprocessor comprising a virtual memory system for accessing a physical memory. Page table logic and fragment processing logic scan a page table having a fixed, relatively small page size. The page table is broken into fragments made up of pages that are contiguous in physical address space and logical address space and have similar attributes. Fragments in logical address space begin on known boundaries such that the boundary indicates both a starting address of a fragment and the size of the fragment. Corresponding fragments in physical address space can begin anywhere, thus making the process transparent to physical memory. A fragment field in a page table entry conveys both fragment size and boundary information.
Type: Grant
Filed: April 7, 2005
Date of Patent: November 4, 2008
Assignee: ATI Technologies, Inc.
Inventors: W. Fritz Kruger, Wade K Smith, Robert A. Drebin
-
Publication number: 20080270738
Abstract: Embodiments include methods, apparatus, and systems for virtual address hashing. One embodiment evenly distributes page-table entries throughout a hash table so applications do not generate a same hash index for mapping virtual addresses to physical addresses.
Type: Application
Filed: April 27, 2007
Publication date: October 30, 2008
Inventors: Thavatchai Makphaibulchoke, Linn Crosetto, Raghuram Kota
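The problem this publication targets can be shown concretely: a naive hash that just masks low virtual-page-number bits lets applications whose mappings start at the same aligned addresses collide on the same buckets, while mixing the bits first spreads entries across the table. The mixing function below is a generic multiplicative hash chosen for illustration, not the publication's actual scheme:

```python
# Illustrative contrast between a naive page-table hash (mask the low
# VPN bits) and a bit-mixing hash that distributes entries evenly.

TABLE_SIZE = 1024  # power of two, so (TABLE_SIZE - 1) is a valid mask
PAGE_SHIFT = 12    # 4 KiB pages assumed

def naive_index(vaddr):
    # Addresses differing only in high bits collide.
    return (vaddr >> PAGE_SHIFT) & (TABLE_SIZE - 1)

def mixed_index(vaddr):
    vpn = vaddr >> PAGE_SHIFT
    vpn ^= vpn >> 10                       # fold high bits into low bits
    vpn = (vpn * 0x9E3779B1) & 0xFFFFFFFF  # multiplicative mixing
    return vpn & (TABLE_SIZE - 1)
```

With the naive index, `0x1000` and `0x10001000` land in the same bucket (both have VPN low bits 1); the mixed index separates them.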
-
Publication number: 20080256282
Abstract: Methods and apparatuses to calibrate read/write memory accesses through data buses of different lengths via advanced memory buffers. One embodiment includes an advanced memory buffer (AMB) having: a plurality of ports to interface respectively with a plurality of data buses; a port to interface with a common clock bus for the plurality of data buses; and an adjustable circuit coupled with the plurality of ports to level delays on the plurality of data buses. In one embodiment, the data buses have different wire lengths between the dynamic random access memory (DRAM) chips and the advanced memory buffer (AMB).
Type: Application
Filed: April 16, 2007
Publication date: October 16, 2008
Inventors: Zhendong Guo, Larry Wu, Xiaorong Ye, Gang Shan
-
Patent number: 7434100
Abstract: Systems and methods are described for replicating virtual memory translation from a target computer on a host computer, and debugging a fault that occurred on the target computer on the host computer. The described techniques are utilized on a target computer having a processor that has halted execution. Virtual to physical address translation data from the target computer is transferred to the host computer. The host computer utilizes the virtual to physical address translation data to access data pointed to by virtual memory addresses that were used by the target computer, and then debugs a fault by accessing the data by reading the physical memory address on the host computer. After the virtual to physical memory address translation data have been acquired, they can be cached at the host computer.
Type: Grant
Filed: March 8, 2006
Date of Patent: October 7, 2008
Assignee: Microsoft Corporation
Inventors: Gregory Hogdal, John Eldridge
-
Patent number: 7426625
Abstract: A method, computer program product, and a data processing system for supporting memory addresses with holes is provided. A first physical address range allocated for system memory for an operating system run by a processor configured to support logical partitioning is virtualized to produce a first logical address range. A second physical address range allocated for system memory for the operating system is virtualized to produce a second logical address range. The first physical address range and the second physical address range are non-contiguous. The first and second physical address ranges are virtualized such that the first logical address range and the second logical address range are contiguous. A memory mapped input/output physical address range that is intermediate the first physical address range and the second physical address range is virtualized to produce a third logical address range.
Type: Grant
Filed: March 31, 2004
Date of Patent: September 16, 2008
Assignee: International Business Machines Corporation
Inventor: Van Hoa Lee
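The abstract's layout (two non-contiguous RAM ranges stitched into contiguous logical memory, with the intervening MMIO hole exposed as a third logical range) can be sketched as a table walk. All addresses and range sizes below are invented for illustration:

```python
# Sketch: two physical RAM ranges separated by an MMIO hole are
# presented as one contiguous logical range starting at 0; logical
# addresses past the RAM fall into a third range backed by the MMIO
# physical addresses. All addresses are illustrative.

PHYS_RANGES = [
    (0x0000_0000, 0x4000_0000),  # first RAM range: base, size
    (0x8000_0000, 0x2000_0000),  # second RAM range, after the hole
]
MMIO_RANGE = (0x4000_0000, 0x4000_0000)  # base, size of the MMIO hole

def logical_to_physical(laddr):
    base = 0
    for phys_start, size in PHYS_RANGES:
        if laddr < base + size:          # contiguous logical RAM
            return phys_start + (laddr - base)
        base += size
    mmio_start, mmio_size = MMIO_RANGE   # third logical range: MMIO
    offset = laddr - base
    if offset < mmio_size:
        return mmio_start + offset
    raise ValueError("logical address out of range")
```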
-
Patent number: 7412585
Abstract: Embodiments of the invention achieve data writes in an appending manner by conversion from a logical block address to a physical block address in an HDD that has only one storage device and does not have a large-scale cache memory. In one embodiment, a check is made as to whether or not the size of an address translation table in a cache memory exceeds a threshold value. If the size exceeds the threshold value, a specified number of entries are selected by the LRU method. The selected entries are added to a WRITE buffer, and the address translation table is saved on the HDD by executing WRITE. Seek time of the head at the time of WRITE is reduced, thereby improving WRITE performance. This also produces the effect of building a snapshot: while usual access to the HDD volume is allowed, it is possible to access a snapshot volume representing a past state of the HDD. Writing can additionally be disabled after data has been written to the HDD.
Type: Grant
Filed: March 31, 2005
Date of Patent: August 12, 2008
Assignee: Hitachi Global Storage Technologies Netherlands B.V.
Inventor: Tetsuya Uemura
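The flush policy described here (when the in-memory translation table grows past a threshold, move a fixed number of least-recently-used entries into a write buffer to be written out together) can be sketched with an `OrderedDict` standing in for the LRU bookkeeping. The class name, threshold, and eviction count are illustrative:

```python
# Sketch of threshold-triggered LRU flushing of an address translation
# table: entries past the threshold are batched into a write buffer so
# they can be written to the HDD together, amortizing head seeks.

from collections import OrderedDict

THRESHOLD = 4    # max cached entries before a flush (illustrative)
EVICT_COUNT = 2  # entries flushed per pass (illustrative)

class TranslationCache:
    def __init__(self):
        self.table = OrderedDict()   # logical block -> physical block
        self.write_buffer = []       # entries queued for one WRITE

    def touch(self, lba, pba):
        self.table[lba] = pba
        self.table.move_to_end(lba)              # most recently used
        if len(self.table) > THRESHOLD:
            for _ in range(EVICT_COUNT):
                old = self.table.popitem(last=False)  # pop LRU end
                self.write_buffer.append(old)
```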
-
Patent number: 7409494
Abstract: A file system layout apportions an underlying physical volume into one or more virtual volumes (vvols) of a storage system. The underlying physical volume is an aggregate comprising one or more groups of disks, such as RAID groups, of the storage system. The aggregate has its own physical volume block number (pvbn) space and maintains metadata, such as block allocation structures, within that pvbn space. Each vvol has its own virtual volume block number (vvbn) space and maintains metadata, such as block allocation structures, within that vvbn space. Notably, the block allocation structures of a vvol are sized to the vvol, and not to the underlying aggregate, to thereby allow operations that manage data served by the storage system (e.g., snapshot operations) to efficiently work over the vvols. The file system layout extends the file system layout of a conventional write anywhere file layout system implementation, yet maintains performance properties of the conventional implementation.
Type: Grant
Filed: April 30, 2004
Date of Patent: August 5, 2008
Assignee: Network Appliance, Inc.
Inventors: John K. Edwards, Blake H. Lewis, Robert M. English, Eric Hamilton, Peter F. Corbett